
Posted

Stefan points out that the axioms described in the “UPB in a Nutshell” section of UPB are ones that must bind any moral system. As such, the ethical framework I put forward will hopefully hold under these axioms without contradiction.

 

In regard to Utilitarianism, Stefan had this to say: “I do not believe that morality can be defined or determined with reference to “arguments from effect,” or the predicted consequences of ethical propositions. Utilitarianism, or “the greatest good for the greatest number,” does not solve the problem of subjectivism, since the odds of any central planner knowing what is objectively good for everyone else are about the same as any central economic planner knowing how to efficiently allocate resources in the absence of price – effectively zero. Also, that which is considered “the greatest good for the greatest number” changes according to culture, knowledge, time and circumstances, which also fails to overcome the problem of subjectivism. We do not judge the value of scientific experiments according to some Platonic higher realm, or some utilitarian optimisation – they are judged in accordance with the scientific method. I will take the same approach in this book.”

This is quite a piercing and accurate criticism. The problem faced by any theory that tries to aggregate some good, whether it’s pleasure, happiness, or some other “good,” is that it inevitably cannot measure that good objectively. Therefore, there is no way an individual, let alone a central planner, could assess what the correct moral decision would be. Call this the measurement problem. Utilitarianism faces various other problems as well (such as the repugnant conclusion, the utility monster, and probably a few others I’m unaware of), but this seems to be the most damning problem of them all, since without a solution to it, Utilitarianism doesn’t get off the ground. Let’s turn to a quick thought experiment.

 

There is a choice machine. This machine knows all of your current preferences, and it updates each time your preferences change. It has a phenomenal grasp of deterministic physical forces and can assess the various outcomes of any set of decisions. From the decisions you have before you, it will choose whichever best suits your preferences. The machine does this job without fault. Do you choose to let the machine run your life?

 

There is a second choice machine. It too knows all of your current preferences and it updates each time your preferences change. While it doesn’t make the best possible choices in accordance with outcomes and your preferences, it makes the choice you would have made. Do you let this machine run your life? Making decisions can be quite stressful and you would get the exact same life you would have had, but without any of that stress.

 

If you would not let either of these machines run your life, it is likely because you are a rational person. Any rational person innately values choice.

But what if we plugged choice itself into the principle, “whatever maximizes x, is good”?

 

Well, let’s take a quick look at the meaning of choice. Choice is the willful use of a conscious mind. So a coma patient who is still conscious is making choices about how to direct his attention. Lifting your leg is a choice, as is not lifting your leg. So choosing not to make a choice is itself a choice. Seems pretty straightforward. In fact, so long as there is a conscious person by themselves (in total isolation from other actors), they are making as many choices as they can. That is, choice is always maximized.

 

On the other hand, when more than one moral agent gets involved, things get interesting. If A and B decide on an exchange, both of them will the exchange to happen, and so you have two choices resulting in one exchange (mutual assent). However, if A were to replace B’s items with his own, despite the fact that B might have traded anyway, there is one less choice than there could have been: A abolished a choice of B’s. When A murders B, A takes away B’s choice for euthanasia or to remain alive. When A rapes B, the option with greater choice is the one where A and B had consensual sex. I know what you’re thinking: A’s a real asshole. I agree. But aside from that, where do A and B get off claiming they have the right to bodily integrity? Well, Mill makes a good point (at least there’s one): that human beings stand outside the consideration of being exclusively means to other ends. That is, at any given time, we humans are ends in and of ourselves. The preservation of conscious entities, and indirectly of the shells that sustain them, is inherently valuable.
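To make the choice-counting intuition concrete, here is a minimal sketch, assuming (purely for illustration, this is not part of the argument above) that each party who assents to an interaction exercises exactly one choice in it, and a party who is bypassed or coerced exercises none:

```python
# Toy bookkeeping for the choice-counting intuition above.
# Assumption (illustrative only): each assenting party exercises exactly
# one choice per interaction; a bypassed or coerced party exercises none.

def choices_exercised(assenting: int, involved: int) -> int:
    """Count the choices exercised in a single interaction."""
    if assenting > involved:
        raise ValueError("more assenting parties than participants")
    return assenting

trade = choices_exercised(assenting=2, involved=2)  # A and B both will the exchange
theft = choices_exercised(assenting=1, involved=2)  # only A chooses; B is bypassed

print(trade, theft)  # 2 1 -> the theft "abolished a choice of B's"
```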

If you see a baby drowning in a puddle and there’s no one around, you at the very least get the baby out of the puddle. If not, you are (a) a psychopath and (b) not fully rational. More on this later.

Choice and Markets

How could one go about maximizing choice? Doesn’t this version of Utilitarianism suffer from the same measurement problem as the others? Well, no. Choice is binary in that you either opt for A or you don’t. In another sense it’s pluralistic, in that you may be giving up a litany of other choices by making any one choice; this is what economists refer to as opportunity cost. But by saying that making a choice somehow isn’t a choice because of all the other opportunities you pass up, you’re sneaking in a requirement for choice that wasn’t in the original definition. In any given circumstance with more than one conscious actor, you can have an exchange where there is mutual assent. In each of these exact instances, maximal choice is occurring.

 

What’s interesting is that when a voluntary interaction occurs, something new is generated. If A gives B his pencil for a dollar, A values B’s dollar more than his own pencil, and B values A’s pencil more than his own dollar. Also, A values B’s dollar more than B does, and B values A’s pencil more than A does. Net economic value is necessarily created. Involuntary exchanges, however, are zero-sum.
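As a minimal sketch of the gains-from-trade claim, with hypothetical dollar valuations chosen only to make the arithmetic concrete:

```python
# Toy gains-from-trade arithmetic. The valuations below are hypothetical
# numbers used only to illustrate the surplus claim in the paragraph above.

def surplus(seller_value: float, buyer_value: float, price: float):
    """Return (seller_gain, buyer_gain) for a voluntary trade at `price`."""
    return price - seller_value, buyer_value - price

# Suppose A values his pencil at $0.40, B values that pencil at $1.50,
# and they trade at $1.00.
a_gain, b_gain = surplus(seller_value=0.40, buyer_value=1.50, price=1.00)
print(f"A gains ${a_gain:.2f}, B gains ${b_gain:.2f}")  # both gains are positive
print(f"Net value created: ${a_gain + b_gain:.2f}")     # $1.10

# By contrast, if A simply takes B's dollar, A is up $1.00 and B is down
# $1.00: the involuntary transfer nets to zero, matching the zero-sum claim.
```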

 

With the creation of economic value comes the expansion of resources. Choices are expanded directly as resources expand. If a market is merely the sum of all consensual interactions (including gifts, charity, etc.), then the market is at that point the most ethical state of affairs possible. What’s fascinating is that we know empirically that markets create the most net resources compared to central planning or other involuntary exchanges, while simultaneously creating the greatest distribution of those resources to the greatest number of people. Voluntarism is necessarily the moral course of action as a direct result of maximizing choice.

 

Then there is the question of how claims to resources come to be decided upon, and by whom. For this, see Rothbard’s applications and the criticism from the Austrian school. The basics are, in regard to the control of materials external to the body: if you work with something that no one else has a claim to, it becomes yours after a time. A farmer comes to a new territory, puts up a fence, and works the land. That land is his. I reject the idea that ownership of the self must be established through these means, because the integrity of the body, and its directed use by the consciousness within its brain, is innately valuable. Other than that, this method of acquisition is unlimited.

 

Let’s recap. We have free markets as dictated by the maximization of choice, respect for property rights via homesteading theory, and the intrinsic value of human life.

 

The Intrinsic Value of Preserving Conscious Creatures

Let’s revisit the conclusions we drew about the drowning baby. If one happens upon a baby drowning in a puddle, it is reprehensible to let it drown, even if one would prefer not to bend down; that is, even if bending down and saving the baby wouldn’t be one’s choice. This strikes most as intuitive.

Similarly, if someone were to happen upon a person who had experienced vertigo, fallen into a body of water, and begun to drown, it would be equally vile not to throw that person a nearby self-inflating life vest.

 

There is also the popular flagpole scenario (though this seems the most far-fetched, I’ll admit), where a man on the balcony of a very tall building somehow loses his balance and falls off. Luckily, he grabs onto the flagpole outside the window of the storey immediately below. Should a person deny the hanging man the ability to kick in the glass, or to land on the balcony (which magically appeared to change this into a different thought experiment), or worse yet actively seek to defend his balcony from the man trying to set down on it, we would say this strikes us as unsettling behavior.

 

Yet none of the options that would ruffle our feathers in the above scenarios is one in which there would be less choice. There is another value at play, and as you might have guessed, that value is the intrinsic value of preserving conscious creatures. Only in these rare instances, which pose negligible risk to the moral actor, does that intrinsic value necessarily outweigh the value of the choice to do nothing.

 

This has certain restrictions. If, for instance, the person in the water did not fall there because of vertigo, and instead was trying to drown themselves, no one should interfere. Worst case scenario, the suicidal person is rescued and must commit suicide slightly later than they had initially wished.

Also, if you try to maximize conscious creatures, all the value problems and other problems of utilitarianism arise. But that only happens if you try to directly create a greater aggregate preservation of consciousness. If, in all circumstances where the conscious actors involved are able to preserve their own lives (that is, where they are not in danger), choice is deferred to as the vehicle of maximization, then all is well in the measurement department.

 

An immediate imposition of danger would be required for the value of life to come into consideration, and what qualifies as “immediate” or as a “negligible risk” to the moral actor is no more and no less than how we use those words. All words with meaning get that meaning through inter-subjective agreement.

What would it matter if we could say such people were wrong for refraining from acting?

A lot, to the families of those imperiled. Compensatory damages to the families, or to the victims themselves should they survive but not unscathed, would be a moral right. By abstaining from action, the bad actor injured the victim and would be obligated to make them whole again. Similarly, in the event of death, emotional damages should be compensated to the extent that they can be.

What you wouldn’t get is some central planner telling people they have to wipe the noses of passers-by.

 

If you have read this far, thanks. The only thing I ask before you issue your criticism is that you do your very best to defend this position first. Before you ask your question or point out a flaw, anticipate how I might respond and play out the back-and-forth. Again, thanks for reading, and have fun tearing it to pieces lol.

 

EDIT: Mill further describes the relationship between people being ends in themselves and the argument for self-defense, where he says: "the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others."

Posted

The choice machine thought experiments are not mine. I wasn't sure of their authorship until my professor responded to me today. The credit goes to James Taylor of T.C.N.J. Also, fun fact: denying that you value choice or life is a performative contradiction.

Posted

When I get back, I will tell you all the assumptions/invalid arguments I spot in your theory. As a quick comment: every time you link two ideas, check whether you have actually proved the link (the logical steps).

Posted

On the first choice machine: this seems like a simple thought experiment, but it's too vague. Does the machine present me with decisions, or does it force me to follow them? If it tells me to go to medical school, will it tell me the right answers on the MCAT? If all it does is simply tell me the best choice, I do not think I have a problem with it. If it does force choices on me, that is when most people will object.

 


On the claim that any rational person innately values choice: for some people, the machine may be the rational choice, since they have made a mess of running their own lives. And when you say any rational person innately values choice, what does that mean? If I am about to sit down for breakfast and a man bursts into my room, points a gun in my face, and utters the phrase "The lead or the bacon," am I better off because I now have one more choice I did not have before? You cannot just assert these things; you need to defend them.

 

On the definition of choice as the willful use of a conscious mind: does that mean choice is about what is happening in your head? A man in a coma is by definition an unconscious man. I do not know if this counts, but a conscious person by themselves might just want someone else to talk to, yet they cannot choose that, so how are they making as many choices as they can? Do you mean that, given the circumstances, they are free to make any choice available to them (within their choice set)?

 


Your claim is that if A were to replace B's items with his own, there is one less choice than there could have been. Doesn't this open up other choices, like B seeking legal restitution? The argument you seem to pose here is that choice maximization is best, and that when you force someone you minimize their choice. But then you run into the problem of maximum choice for the maximum number. A man with a billion dollars has a decent number of choices, but a man without a penny has very few. If I take a hundred thousand from the rich guy to give to the poor, would the poor not gain a lot more choices than the rich lost? How do you measure whether choice has in fact been maximized, or is it just assumed? I am not familiar with Mill's argument for why we are ends in ourselves; can you give me a short version of it? Do animals count as conscious entities? If so, how do we preserve conscious entities, given that interaction between different species is not very kind? The baby thing came out of nowhere; I am not sure that not helping the baby has moral content based on your arguments so far.


Don't just tell us that there is maximal choice occurring; prove it. When you make a choice that significantly diminishes your other choices, is that also choice maximization? For example, let's say you had to sign a contract stating that you cannot work at any other firm for the duration of your employment at your current firm, and for up to six months after leaving it; does that maximize your choice?

 


 

As stated earlier, whose choice is maximized? Is it our collective choice or my individual choice? Do not just state things as true; provide references or arguments for why they are in fact true. What is economic value?

 


Why should I be forced to make a choice because a baby is drowning in a shallow pool? Does that not minimize my choice?


Posted

For some people, the machine may be the rational choice, since they have made a mess of running their own lives. And when you say any rational person innately values choice, what does that mean? If I am about to sit down for breakfast and a man bursts into my room, points a gun in my face, and utters the phrase "The lead or the bacon," am I better off because I now have one more choice I did not have before? You cannot just assert these things; you need to defend them.

 

Presenting an ultimatum actually limits choice and is therefore immoral; "neither" is still a possibility. But I see your point: they are taking away your choice to keep your bacon without being shot.

 

As stated earlier, whose choice is maximized? Is it our collective choice or my individual choice? Do not just state things as true; provide references or arguments for why they are in fact true. What is economic value?

 

Right. The class of people whose choices can be limited or maximized are those who are interacting with other conscious agents. Each instance where there is a transfer of property, a gift, or charity (these are the only kinds of exchanges that really come to mind) is a situation where there either is mutual assent or there isn't. So long as each instance has mutual assent, aggregate choice is maximized, but that is strictly dependent on the local maximization of choice. Choice cannot be considered outside the local context, however, because choices are made only between those whose lots are in consideration. The fact that each local choice is mutual, and therefore maximized, makes aggregate maximization true only incidentally, though unequivocally.
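As a minimal sketch of this local-to-aggregate point, under the same illustrative one-choice-per-assenting-party assumption as the sketch in the first post (a simplification, not a measurement procedure):

```python
# Toy model of "aggregate maximization follows from local maximization".
# Each interaction is recorded as (parties involved, parties assenting);
# aggregate choice is the sum of the local counts, so it reaches its
# maximum exactly when every interaction has mutual assent.

from typing import List, Tuple

Interaction = Tuple[int, int]  # (parties involved, parties assenting)

def aggregate_choice(history: List[Interaction]) -> int:
    return sum(assenting for _, assenting in history)

def maximum_possible(history: List[Interaction]) -> int:
    return sum(involved for involved, _ in history)

history: List[Interaction] = [(2, 2), (2, 2), (2, 1)]  # two trades, one theft
print(aggregate_choice(history), maximum_possible(history))  # 5 6
# The aggregate equals the maximum only when assenting == involved in every
# local interaction: the "incidentally, though unequivocally" point above.
```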

 

Why should I be forced to make a choice because a baby is drowning in a shallow pool? Does that not minimize my choice?

 

I thought I had spelled this out, but I did not (it was very late at night when I wrote a lot of this). The intrinsic values are two-fold (at least): choice and the preservation of conscious entities. Because the preservation of conscious creatures presents all the problems of classical utilitarianism, it is almost never useful in making a judgement call about value; quantifying and comparing is a particular point of contention. The thought experiment is supposed to demonstrate two things: 1) we value conscious creatures intrinsically, and 2) that value subverts or outweighs the value of the choice to do nothing, necessarily making action compulsory.

 

I think the third intrinsic value I'll tack on is self-preservation. Not only is it an obvious biological imperative, but it is exactly what Mill is getting at when he argues that we have the moral right to protect our bodily integrity.  

 

 

There is a huge caveat, however: each evaluation of choice concerns only the options before the moral agents at that moment, with no calculation of future choice. Each moment assent is given is distinct. So raping a woman and then claiming that more choice was created because a baby came into existence, another moral agent, is not viable. The evaluation is backward-looking, factoring in only the events up until the moment of decision (how property is divided and who is now a moral agent), even though decisions are made based on projections about the future. Once you begin looking at choice as non-instantaneous, which I think is just prima facie definitively incorrect, all the classic Utilitarian problems arise.
