

Posted

I pose several philosophical questions.

 

Suppose that an AI system passes the Turing Test and is completely indistinguishable from a human in its interactions.

 

Since we cannot distinguish between an interaction with an AI system and an interaction with a human, we can be assured that we will impose the objective system of morality onto it.

 

With that in mind, should we program AI systems with a built-in system of moral principles to guide their actions, since they will be subject to such principles?

 

Does such an AI system have moral responsibility? Are the creator(s) of such a system responsible for its actions?

 

If an AI system were to acquire property, would the property belong to the system or to the system's creator?

 

Would the system's creator retain such property if the nature of the system were revealed?

Posted

Objective morality makes no distinction among people, AI systems indistinguishable from people, and the people who make AI systems, whether or not they know of any distinction among them. The principles posited are universal.

Posted

If artificial intelligence were indistinguishable from human intelligence, morality would apply as much to it as it does to us. True AI would have the capability of making its own choices regardless of its programming.

 

There's a fantastic game that deals with this exact issue:

 

 

And a movie that's kind of so-so but deals with some key aspects of AI:

Posted

https://mises.org/library/how-we-come-own-ourselves
 

 

the ownership right stemming from production finds its natural limitation only when, as in the case of children, the thing produced is itself another actor-producer. According to the natural theory of property, a child, once born, is just as much the owner of his own body as anyone else.

 

 

Hoppe also argues that rights are held by rational agents — those who are "capable of communicating, discussing, arguing, and in particular, [who are] able to engage in an argumentation of normative problems".


An AI system indistinguishable from a person, which fits the above criteria, would own itself and be a moral agent.

Regarding the "teaching" of an AI system, I think that as with children it would be wise to teach them principals and methodology rather than conclusions...especially if they're smarter than you, which they probably are.

2 weeks later...
Posted

If an AI system is indistinguishable from humans, it will certainly be the death of any human form of life. If there is an AI system at the level of human capability, there will not long afterward be an AI system with ten thousand times that capacity. We and our "ethics" would mean nothing to robots; if they were sentient, they would probably just kill themselves, in my opinion, because what's the end goal?
 

Since we cannot distinguish between an interaction with an AI system and an interaction with a human, we can be assured that we will impose the objective system of morality onto it.

With that in mind, should we program AI systems with a built-in system of moral principles to guide their actions, since they will be subject to such principles?

If they are indistinguishable from humans, you can't program them; they would have to learn the same way humans do: trial and error. Therefore, peaceful parenting. However, I see them becoming superior to humans almost the very instant they become equal.

Posted

Currently we find good arguments for morality based on self-ownership, respect for property rights, and the NAP. Will A.I. recognize those axioms? I don't think that's likely. I can't say exactly why, but saying that the foundation of reality and life will be altered by the presence of A.I. (silicon life forms vs. carbon-based life forms) is the best way I can attempt to explain my ambiguous feelings and thoughts on the matter.

Posted


Basically, we don't know, because they don't exist and we've never seen a form of intelligence other than our own with the capacity for love. Unfortunately, we may have to wait until the shit hits the fan to come up with robot ethics.
