njhunit Posted September 3, 2015 I pose several philosophical questions. Suppose that an AI system passes the Turing Test and is completely indistinguishable from a human in its interactions. Since we cannot distinguish between an interaction with such an AI system and an interaction with a human, we can be sure that we will apply the objective system of morality to it. With that in mind, should we program AI systems with a built-in system of moral principles guiding their actions, since they will be subject to such principles? Does such an AI system have moral responsibility? Are the creator(s) of such a system responsible for its actions? If an AI system were to acquire property, does the property belong to the system or to the system's creator? Does the creator retain such property if the nature of the system were to be revealed?
shirgall Posted September 3, 2015 Objective morality makes no distinction between people, AI systems indistinguishable from people, or the people that make AI systems, whether or not they know of any distinction among them. The principles posited are universal.
ProfessionalTeabagger Posted September 3, 2015 If we program it with these principles, don't we remove its free will? Wouldn't we effectively be brainwashing it?
Wuzzums Posted September 3, 2015 If artificial intelligence will be indistinguishable from human intelligence, morality would apply to it as much as it does to us. True AI will have the capability of making its own choices regardless of its programming. There's a fantastic game that deals with this exact issue, and a movie that's kinda so-so but deals with some key aspects of AI.
SamuelS Posted September 3, 2015 https://mises.org/library/how-we-come-own-ourselves "the ownership right stemming from production finds its natural limitation only when, as in the case of children, the thing produced is itself another actor-producer. According to the natural theory of property, a child, once born, is just as much the owner of his own body as anyone else." Hoppe also argues that rights are held by rational agents — those who are "capable of communicating, discussing, arguing, and in particular, [who are] able to engage in an argumentation of normative problems". An AI system indistinguishable from a person, which fits the above criteria, would own itself and be a moral agent. Regarding the "teaching" of an AI system, I think that, as with children, it would be wise to teach them principles and methodology rather than conclusions...especially if they're smarter than you, which they probably are.
john cena Posted September 15, 2015 If an AI system is indistinguishable from humans, it will certainly be the death of any human form of life. If there is an AI system at the level of human capability, there will not long afterward be an AI system with ten thousand times that capacity. We and our "ethics" would mean nothing to robots; if they were sentient, they would probably just kill themselves, in my opinion, because what's the end goal? Since we cannot distinguish between an interaction with such an AI system and an interaction with a human, we can be sure that we will apply the objective system of morality to it. With that in mind, should we program AI systems with a built-in system of moral principles guiding their actions, since they will be subject to such principles? If they are indistinguishable from humans, you can't program them; they would have to learn the same way humans do, by trial and error. Therefore, peaceful parenting. However, I see them becoming superior to humans almost the very instant they become equal.
fractional slacker Posted September 16, 2015 Currently we find good arguments for morality based on self-ownership, respect for property rights, and the NAP. Will A.I. recognize those axioms? I don't think that's likely. I can't say exactly why, but it seems the foundation of reality and life will be altered by the presence of A.I.; "silicon life forms vs. carbon-based life forms" is the best way I can attempt to explain my ambiguous feelings/thoughts on the matter.
john cena Posted September 16, 2015 Currently we find good arguments for morality based on self-ownership, respect for property rights, and the NAP. Will A.I. recognize those axioms? I don't think that's likely. I can't say exactly why, but it seems the foundation of reality and life will be altered by the presence of A.I.; "silicon life forms vs. carbon-based life forms" is the best way I can attempt to explain my ambiguous feelings/thoughts on the matter. Basically, we don't know, because they don't exist and we've never seen a form of intelligence other than our own with the capacity for love. Unfortunately, we may have to wait until shit hits the fan to come up with robot ethics.