
Posted

In discussions I've had about AI, people tend to bring up the idea that, if machines are better than us, they may see us as we see animals and therefore will treat us terribly. I try to point out that while they will have advantages, such as strength, durability, and the ability to do repetitive tasks without tiring, humans also have advantages over them, such as creativity.

 

But even if they did see us as children, or as pets, they could be programmed to respect us and not to harm us. The classic way to do this is Asimov's three laws, but those laws turn the robots into slaves, and, as I, Robot pointed out, they have other flaws as well.

 

I also have a problem with the idea of the Turing test, or a robot being considered a person when it can mimic us. In my opinion, that's like saying "I'll accept a dog as a legitimate pet when it can act like a cat." Dogs and cats have fundamental differences, as do computers and organic life. A dog could be trained to act like a cat, but that isn't its natural state. In the same way, computers could be programmed to act human, but that isn't their natural state either. (Also, trying to program something so that it perfectly mimics humans isn't a good way to go about it, but that isn't the point here.)

 

If I were to try to create a moral agent as an AI, I would program it to rate all objects on a scale of zero to ten, with ten being other moral agents and zero being unclaimed natural resources, and to develop a method of treating things differently depending on where they fall on the scale. I would then only need to program a few points, such as the end points, and have it develop the middle of the scale. This would let it develop a sense of morality on its own.
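To make this concrete, here's a minimal sketch of what such a scale might look like, assuming a simple category-to-number mapping. The category names, the default midpoint, and the blending rule are placeholders I made up for illustration, and the harder part described above (how to actually treat things differently at different places on the scale) is left out.

```python
# A minimal sketch of the 0-10 moral-scale idea described above.
# All names and numbers are hypothetical illustrations, not a real design:
# only the two anchor points are fixed by the programmer; everything in
# between is meant to be filled in by the agent itself as it learns.

FIXED_ANCHORS = {
    "moral_agent": 10.0,          # other moral agents: maximum moral weight
    "unclaimed_resource": 0.0,    # unclaimed natural resources: minimum weight
}

class MoralScale:
    def __init__(self):
        # learned ratings start out empty except for the fixed anchors
        self.ratings = dict(FIXED_ANCHORS)

    def rate(self, category: str) -> float:
        """Return the current rating for a category, defaulting to the midpoint."""
        return self.ratings.get(category, 5.0)

    def learn(self, category: str, observed_value: float) -> None:
        """Let the agent adjust the middle of the scale, but never the anchors."""
        if category in FIXED_ANCHORS:
            return  # endpoints are programmed, not learned
        clamped = max(0.0, min(10.0, observed_value))
        # simple running blend of the old estimate and the new observation
        self.ratings[category] = 0.8 * self.rate(category) + 0.2 * clamped


scale = MoralScale()
scale.learn("pet_dog", 7.0)
print(scale.rate("pet_dog"), scale.rate("moral_agent"))
```

The only property that matters here is the one described above: the two endpoints are programmed, while everything in between is learned.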

 

What about all of you, though? Do you have a better idea of how to program such a being? Have you found problems with my method? I'd love to discuss the matter with someone else.

  • 1 month later...
Posted

I may be reviving a thread that others have little interest in, but I've been thinking about AI a lot for the past few days and would enjoy the chance to talk about it.

 

Looking at your approach, I can see a fairly important flaw. You have created a 'moral agent' that has no responsibility for its actions. As the programmer, you would be responsible for its actions because you have imposed a scale of morality upon it. Even if you meant well, you have placed your own judgement over that of the machine. This sounds like a good idea, but leads to all sorts of dangerous extrapolations.

 

For example, even if you give it authority to fill in a lot of the blanks, you have decided to build right into its architecture that preserving other moral agents is the most important priority. It might then decide that any action that exposes humans to risk should be ceased. Humans sometimes crash cars and cause fatal injuries. Therefore, to preserve other moral agents, all traffic lights will forever be red, with fines applied to anyone who disregards the stop light. Humans are incapable of transporting themselves without risking death, and therefore cannot be trusted to transport themselves. However, a complete shutdown of transportation will lead to other moral agents starving to death, so suddenly your artificial intelligence takes on the task of centrally planning the food allocations for the entire planet, while also doing whatever it can to stop us from doing anything that could result in accidental death or increase the risk of death by disease, and so on.

 

Smoking? Definitely not. Skydiving? Not anymore. Moderate alcohol consumption? That's gone too. Not exercising enough? Shut up and get running, fat boy. On the plus side, it might also devote a lot of processing resources to attempting to cure cancer or AIDS. However, it would not be able to tolerate a human refusing treatment. This very quickly scales out of control and severely infringes upon people's freedom of choice.

While it sounds like a great idea, placing the preservation of other moral agents as the highest priority suddenly injects this intelligence into trying to manage almost every aspect of your life that incurs personal risk. Not to mention that you failed to place the preservation of the machine itself on that scale, unless you believe it would classify itself as priority 10. Otherwise, it would devote the majority of its resources to stopping humans from killing themselves, up to the point that we decide to destroy it. That might be the best outcome, but it might also cause it to jump to extreme solutions with total disregard for its own continuity. However, if its own existence is deemed equal to other moral agents, then attempting to destroy it might risk the machine attempting to defend itself. Sure, it could choose to shut itself down and preserve life in the short term, but it's more likely to make the long-term calculation that defending itself is a justified action, that any human killed while attempting to destroy it can be replaced, and that its own survival is necessary to manage and preserve the rest of humanity.
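To put rough numbers on the objection, here's a toy sketch with entirely made-up risk and benefit figures. Under a rule that ranks the preservation of moral agents above every other consideration, the benefit of an activity never even gets consulted; anything with a nonzero estimated fatality risk is simply vetoed.

```python
# A toy illustration of the critique above, with made-up numbers:
# if "preserve moral agents" outranks every other consideration,
# any activity with a nonzero estimated fatality risk gets vetoed.

activities = {
    # name: (estimated fatality risk per participant, benefit to the person)
    "driving to work":   (1e-4, 8.0),
    "skydiving":         (5e-4, 6.0),
    "moderate drinking": (1e-5, 3.0),
    "staying in bed":    (0.0,  1.0),
}

def lexicographic_policy(risk: float, benefit: float) -> str:
    # preservation of moral agents dominates: benefit is never consulted
    return "forbidden" if risk > 0 else "allowed"

for name, (risk, benefit) in activities.items():
    print(f"{name}: {lexicographic_policy(risk, benefit)}")
```

Only the zero-risk option survives, which is the "all traffic lights forever red" problem in miniature.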

 

I'll stop here, but I hope you can see how imposing even the best intentioned moral rules can have unforeseen outcomes when applied to a machine intelligence.

Posted


I specifically wouldn't program it to preserve other moral agents, as I've seen what that would do based on the whole "3 laws" thing.  Also, that would naturally place it apart from other moral agents, as it would be restricted in its actions.  Other moral agents would just be naturally higher on its list of "what can I do to this" than everything else, and it would be restricted the most when dealing with them.

Posted

The problem of handling dangerous AI comes down to one question: does the AI have free will/rational consciousness/the capacity to compare personal justifications for action or belief with universal standards? If it does not, then it is simply a tool, like a gun, which needs to be manufactured with care but also operated with care. This is a whole other topic into which I will not go further, so as not to side-track the thread (with questions such as: who is responsible if two autonomously driving cars crash into each other on the highway? The programmer? The person who's in the car being transported and who started the car?).

 

Computer programs are made on deterministic machines (computers/Turing machines) in which the code describes the way the AI acts. These deterministic machines do as they're told, without any capacity for choice: the CPU reads instructions from memory and manipulates memory exactly as stated in the code. On the other hand, the human brain consists of atoms that behave predictably, yet life, consciousness and human rational consciousness arise out of this (the whole being bigger than the sum of its parts).
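As a purely illustrative aside, here's a minimal sketch of that determinism: a toy "CPU" loop with an invented instruction set that fetches instructions and manipulates memory exactly as the code dictates. Run it a thousand times on the same input and it produces the same output every time; there is no point at which it could choose otherwise.

```python
# A toy fetch-decode-execute loop. The instruction set is invented purely
# for illustration; the point is that identical code plus identical memory
# always yields identical behavior.

memory = {"a": 2, "b": 3, "out": 0}
program = [
    ("load", "a"),        # put memory["a"] into the accumulator
    ("add", "b"),         # add memory["b"] to the accumulator
    ("store", "out"),     # write the accumulator back to memory["out"]
    ("halt",),
]

def run(program, memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        instr = program[pc]             # fetch
        op = instr[0]                   # decode
        if op == "load":
            acc = memory[instr[1]]
        elif op == "add":
            acc += memory[instr[1]]
        elif op == "store":
            memory[instr[1]] = acc
        elif op == "halt":
            return memory
        pc += 1                         # execute the next instruction, no choice involved

print(run(program, memory))             # {'a': 2, 'b': 3, 'out': 5}
```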

 

So how does free will work? We do not know. We're not even close to understanding. What we do know is that animals do not have such a capacity (this is why we don't put tigers in jail for eating humans). Looking at brain differences, we humans have brain areas distinct from other organic life: the neofrontal cortex and areas for conceptual language. In epistemology, the difference is between percepts (direct perception) and (objectively verifiable/falsifiable) concepts. Stefan makes a nice point in his "Will artificial intelligence kill us all?": the only thing that has free will, that we know of, is the human brain. So when we want to make something that has free will, it probably needs to have a conscious and an unconscious mind, emotions, irrational needs, sleep, dreams, self-generated goals, preferences, mirror neurons, etc.

 

Going back to computer programs and algorithms, different categories of AI techniques can be identified (a rough code sketch of the difference between the last two follows the list).

- Simple problem solving (data matching, such as computing all possible or probable moves in chess and matching the best move with the state of the game)

- Machine learning (in which an algorithm learns to solve a problem, in a supervised or unsupervised way, by training some parameters or values that best get the program to some goal)

- Developmental machine learning (in which the program works on autonomous goal construction and self-motivation: it can self-program)
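Here's the promised sketch, a deliberately crude illustration of the difference between the last two categories (nothing here reflects a real developmental-AI system): a conventional learner tunes parameters toward a goal it is handed from outside, while the "developmental" loop also invents the goals it then practices toward.

```python
# A rough, hypothetical contrast between ordinary machine learning and the
# developmental idea described above. Everything here is illustrative only.
import random

def conventional_learner(train, steps=100):
    """Tune one parameter w to fit externally supplied (x, y) examples."""
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(train)
        w -= 0.01 * (w * x - y) * x   # gradient step toward the goal it was given
    return w

def developmental_learner(environment, steps=100):
    """Alternate between inventing a goal and practicing toward it."""
    goals, skills = [], {}
    for step in range(steps):
        if step % 20 == 0:
            goals.append(f"predict_signal_{len(goals)}")   # self-generated goal
        goal = goals[-1]
        observation = environment()
        # crude "skill": a running estimate of the signal it chose to care about
        skills[goal] = 0.9 * skills.get(goal, 0.0) + 0.1 * observation
    return goals, skills

print(conventional_learner([(1.0, 2.0), (2.0, 4.0)]))
print(developmental_learner(lambda: random.random()))
```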

 

This developmental approach is a new branch of AI that takes a real human-based cognitive science approach. I study cognitive artificial intelligence, but I haven't done anything with this approach, so I'm not even remotely an expert on this. I do plan on following the freely available online course on http://liris.cnrs.fr/ideal/mooc/ during the next holidays so I can understand the topic better.

 

So let's assume this developmental AI (an AI using developmental machine learning, as I described above) works: one is implemented, has a physical body, and even though its basic code and input connections are ultimately CPU instructions, rational consciousness can arise from it (if properly developed; a baby raised in the wild by animals cannot reach its full intellectual capacity later on, but it had the initial capacity for developing moral thought). Who is responsible for its actions? I think the person who creates this AI or turns it on is responsible for it in the same way a parent is responsible in child raising (see Stefan's section on child raising in UPB), since it has not yet developed its capacity for moral thought. The developmental AI still has to develop from the start. This wouldn't be as dangerous, as this AI still needs to move around and learn words like a baby, and it is bound by the flow of time, since moving, listening, or producing sound simply takes time.

 

How does this become dangerous? Well, the thing about a computer system is that we can simulate it. So we can give this developmental AI a virtual body and let it develop in a virtual space where time can flow faster, since the simulation can run much faster than real time. In theory, if such software became readily available and robot bodies cheap, one could simulate such a developmental AI at home, maybe create an evil genius, and put this evil genius in the robot body. Of course, the development of these techniques would not arise out of nothing, and a free society would prepare itself, seeing this coming.
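A trivial sketch of the speed point, with arbitrary numbers: because the virtual clock is just a variable the program advances, simulated developmental time is decoupled from wall-clock time.

```python
# Simulated "developmental time" is decoupled from wall-clock time, so long
# stretches of virtual development can pass in seconds of real computation.
# The step size and loop count below are arbitrary.
import time

SIMULATED_STEP_SECONDS = 1.0      # one second of the agent's virtual life per step

start = time.time()
simulated_seconds = 0.0
for _ in range(10_000_000):
    simulated_seconds += SIMULATED_STEP_SECONDS   # stand-in for one update of the virtual world
elapsed = time.time() - start

print(f"simulated {simulated_seconds / 86400:.1f} days of virtual time "
      f"in {elapsed:.2f} real seconds")
```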

 

These are my thoughts on this late night, let me know what you think :).

Posted

People have been replacing limbs and other body parts for millennia; now they can replace their hands not with hooks but with robotic hands that can feel and obey brain-wave commands. Major organs are being 3D printed to replace failing organs.

This is not a matter of people vs. machines, but a matter of how much machine and how much person, or whether there will be any reason to differentiate the two in the future.

  • 3 weeks later...
Posted

, it might also devote a lot of processing resources to attempting to cure cancer or AIDS.

 

Cancer can already be cured, both with natural things like hemp oil and olive oil, as well as at least one artificial chemical. But cancer is a big business, which is why only the most expensive and least effective methods are going to be touted.

 

AIDS is an immunodeficiency that has nothing to do with HIV, if HIV even exists. I will paste details that I have written before.

 

AIDS stands for "acquired immunodeficiency syndrome" and is a condition. This condition is being falsely linked to HIV. There is no scientific evidence that HIV leads to AIDS. People will get AIDS (the condition) from malnutrition, diseases, drugs and artificial chemicals (what brainwashed people call medicine). This means you do not have to be afraid of HIV or AIDS unless you are malnourished, have another dangerous disease, are taking drugs (which can make you malnourished because you don't feel like eating), or are swallowing artificial chemicals which will mess with your natural body.

 

 

 

, it would devote the majority of its resources to stopping humans from killing themselves up to the point that we decide to destroy it.

 

To build on your good examples, it would have to stop them from trying to destroy it, because the humans might hurt themselves in the process. :)

  • 2 weeks later...
Posted

I had some thoughts on this while listening to Sam Harris talk about the subject earlier today, and I would like to submit an argument that may seem strange to some of you.

The assumption is that electronic machines built by human beings will somehow "surpass" the human mind one day. I submit that there are certain aspects of human consciousness that an electronic machine can never surpass. Other areas, such as memory and calculation, it has already surpassed, and I'm sure I will get no argument that, as people refine these technologies, machines will excel in these areas even further. The idea that human consciousness can hypothetically be improved with computers is kind of missing the point, as we are having this conversation using a technology that is already doing so.

But there are other areas, such as creativity, language, logical induction, empathy, concept formation, real-world problem solving, critical thinking, and so on, which electronic machines are not equipped to do, and which neither greater efficiency nor some new solution in coding will enable them to do. I believe this is because of the fundamental difference between biology and electronics: biology is defined by the capacity to self-organize from within, whereas electronic technology must be designed from without. Life is the process of Self-Organizing, Self-Motivating Matter, whereas Intelligence/Consciousness is the process of Self-Organizing, Self-Motivated Thought. A human gradually accumulates and sheds matter from their body, repairs damage, is born from other human beings, grows old and dies, whereas an electronic machine is built from the outside, and if its parts wear out they must be replaced, again from without.

We have inherited a structure from billions of years of self-organization through evolution, and the unique capacities of human intelligence are an extension of this process. So I think it is a dream that a machine could be designed from the ground up from the Outside, then suddenly be programmed in such a way that it Self-Organizes from Within. I submit that any being with the capacity for Intelligence must have gone through the process of Evolution, the process of Birth, Childhood, and Death, and must be something that gets tired, sleeps, and dreams, has emotional needs, feels anger, pain, joy, and so on. I wonder if the desire to "merge human intelligence with machines" is a desire to escape these fundamental realities of the human condition, which are both strengths and weaknesses.
