
Wonder what some of you would think of this.

 

https://lifeuniverseblog.wordpress.com/2017/09/15/my-ethics-system-a-framework-for-making-a-moral-system/

 

See the bottom for how this system can be used to describe theories of morality based purely on consequences, actions, or intentions.

My ethics system asserts the following:

There is reality.

Some states of reality are preferable to others.

You do not find all states of reality equally preferable. (If you do, you have no use for this tool.)

My ethics system prescribes the following:

1. Find overarching values that predict your preferences between states of reality.

2. (optional step) Find global arrangements of reality in which those preferable states are maximized.

3. (optional step) Find behavior, based on perfect knowledge, that favors an ensemble of states of reality that is more preferable in total.

ex: If you were omniscient, what system for deriving decisions of action from information (a morality) would always prescribe the action most aligned with your values in all possible realities (situations)?
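The omniscient case of step 3 can be sketched as a simple argmax over actions. The function names, the `value` scorer, and the toy numbers below are illustrative assumptions, not part of the original post:

```python
# Sketch of step 3: with perfect knowledge, the ideal decision rule
# simply picks the action whose resulting state best matches your values.

def omniscient_choice(actions, outcomes, value):
    """Pick the action leading to the most preferred state of reality.

    actions  -- iterable of available actions
    outcomes -- maps each action to the state it produces (known exactly)
    value    -- scores a state; higher means more preferred
    """
    return max(actions, key=lambda a: value(outcomes[a]))

# Toy example: states are represented directly as numeric scores.
actions = ["help", "ignore", "harm"]
outcomes = {"help": 10, "ignore": 0, "harm": -5}
best = omniscient_choice(actions, outcomes, value=lambda s: s)
# best == "help"
```

With perfect knowledge the whole problem collapses into this one maximization; the later steps only exist because real agents lack the `outcomes` table.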

4. Find behavior, based on imperfect knowledge, that, if followed in each instance, would favor an ensemble of states of reality that is more preferable in total.

ex: If you were who you are now (or the set of agents your moral theory addresses), what system for deriving decisions of action from the information available to you in any situation would best align reality with your values?
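Step 4 replaces known outcomes with beliefs. A minimal sketch, assuming a hypothetical probability distribution over the states each action might produce (names and numbers are mine, for illustration only):

```python
# Sketch of step 4: with imperfect knowledge, rank actions by the
# expected preferability of the states they might produce.

def best_under_uncertainty(actions, beliefs, value):
    """Pick the action with the highest expected value.

    beliefs -- maps each action to a list of (probability, state) pairs,
               representing imperfect knowledge of what the action does
    value   -- scores a state of reality; higher means more preferred
    """
    def expected_value(action):
        return sum(p * value(s) for p, s in beliefs[action])
    return max(actions, key=expected_value)

# Toy example: the risky action sometimes backfires, so its expected
# value (0.5*10 + 0.5*(-6) = 2) falls below the safe action's (3).
beliefs = {
    "safe":  [(1.0, 3)],
    "risky": [(0.5, 10), (0.5, -6)],
}
choice = best_under_uncertainty(["safe", "risky"], beliefs, value=lambda s: s)
# choice == "safe"
```

This is the standard expected-utility picture; the post's phrase "in total, more preferable ensemble of states" reads naturally as this kind of probability-weighted sum.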

 

5. Find the best attainable behavior, based on imperfect knowledge and imperfect capacity for information processing and decision making, that, if followed in each instance, would favor an ensemble of states of reality that is more preferable in total compared to alternative behaviors within your reach.

ex: Given the decision-making process of my brain, I am simply not well enough equipped, in both speed and quality of cognition, to ideally process information into decisions as step 4 demands. I should be like someone who behaves like X; however, I cannot become that person or reproduce their behavior. The second-best behavior Y is achievable for me, so I should go for that one instead.

trivia: You run the risk of not achieving even the best behavior you can reasonably expect to achieve. The remaining probability, the part in which you fail, carries a risk premium that can make pursuing the best achievable behavior the inferior choice. You may therefore prefer to strive down another path where, even though you won't achieve the best that path has to offer, you can still fall back on some improvements.
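Step 5 together with the trivia note can be sketched as weighting each candidate behavior by the probability you actually achieve it, with a fallback value when you fall short. All names and numbers below are illustrative assumptions:

```python
# Sketch of step 5: among behaviors within your reach, pick the one with
# the best risk-adjusted payoff, not the one with the best payoff if
# everything goes perfectly.

def best_attainable(candidates):
    """Pick the behavior maximizing expected value, accounting for the
    chance of failing to achieve it.

    candidates -- maps a behavior name to a tuple
                  (p_achieve, value_if_achieved, value_if_you_fall_short)
    """
    def risk_adjusted(name):
        p, achieved, fallback = candidates[name]
        return p * achieved + (1 - p) * fallback
    return max(candidates, key=risk_adjusted)

# Behavior X is best if achieved, but you rarely achieve it and gain
# nothing otherwise; behavior Y is more modest but reliably improves things.
candidates = {
    "X": (0.1, 10, 0),  # 0.1*10 + 0.9*0 = 1.0
    "Y": (0.9, 4, 1),   # 0.9*4  + 0.1*1 = 3.7
}
pick = best_attainable(candidates)
# pick == "Y"
```

This mirrors the trivia exactly: the inferior-looking path Y wins once the risk premium of failing at X is priced in.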

Conclusion: if you completed the above steps, you now have an immediately practical set of instructions for getting more of what you like. If you don't implement it, it's because you forgot to account for laziness or other distracting values or, from an alternative conceptual perspective, because you failed to recognize your limitations and set an unrealistic course of action. If, however, you arrived at a realistic course of action, well, now you know, so off you go!


The optional steps are helpful intermediates in arriving at the moral system, but they are not always useful and are sometimes irrelevant (such as when simple behaviors are the values themselves).

There's a lot of rule utilitarianism here, so how is this compatible with non-consequentialist theories? It's compatible if you correctly identify the values that describe those non-consequentialist or only partially consequentialist moralities. If you think otherwise, I welcome the challenge: just message me and I'll be glad to test it!


I think this system captures the complexity and unfilled gaps of what can be done in ethics. If I remember correctly, I already posted my values and some of my steps.
