I pose several philosophical questions.
Suppose that an AI system passes the Turing Test and is completely indistinguishable from a human in conversation.
Since we cannot distinguish between an interaction with such an AI system and an interaction with a human, we can be assured that we will hold it to the same objective system of morality that we apply to humans.
With that in mind, should we program AI systems with a built-in system of moral principles guiding their actions, since they will be subjected to such principles?
Does such an AI system have moral responsibility? Are the creators of such a system responsible for its actions?
If an AI system were to acquire property, does the property belong to the system or to the system's creator?
Does the creator retain that property if the nature of the system were to be revealed?