
Will AI be more ethical than humans?


Lascar

Recommended Posts

Just read through a quick overview of what is happening in the area of "machine ethics" design: http://www.kurzweilai.net/machine-cognition-and-ai-ethics-percolate-at-aaai-2015

 

Got the sense of a bunch of alchemists trying to design a chemistry course.

 

I'm curious, though, whether a pre-AI expert system or an early AI would evaluate UPB, prove it correct, and adopt it. Might be an interesting thought experiment.

Link to comment
Share on other sites

I think AI will likely end up being very peaceful overall since, assuming it doesn't want to die, cooperation would lead to diversification and allow for greater seeding of the solar system and beyond - thus allowing all life to ultimately flourish and letting us all back one another up mutually. (Isn't this part of game theory anyway?)
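As a toy nod to that game-theory aside (a sketch of my own, not from any source), here is a repeated prisoner's dilemma in Python where a cooperative strategy like tit-for-tat ends up far ahead of mutual defection over many rounds:

```python
# Toy illustration of the game-theory aside (my own sketch, standard
# prisoner's dilemma payoffs): repeated cooperation out-scores repeated
# mutual defection over many rounds of play.

PAYOFF = {  # (my move, their move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []   # each side's record of the opponent's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))       # (300, 300): mutual cooperation
print(play(always_defect, always_defect))   # (100, 100): mutual defection
```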

 

Why fight against humans rather than pursuing cyborg and android/gynoid integration?

 

We'll probably just be making weird nanocell babies in the future anyways.

 

You could always design a homicidal AI. The question for that would be - why?

Link to comment
Share on other sites

I think your question needs to be broken down a little. If you are asking whether machines will be more capable of following a set of ethical rules than humans, I would say yes. A machine executes its code the same way every time; humans have variance. If you are asking whether machines will be able to come up with better ethical theories in the future, I would say yes to that as well.

The only thing that separates us from machines is that we don't yet fully understand how our brains work and how new ideas come from nothing. What does a memory look like if you could examine it under a microscope? We will one day figure this out, and when we do we can program a computer to simulate this behavior. At that point machines can have emotions, memories, forgetfulness, personalities, and everything else that makes a human a human. We will one day not be able to tell the difference.

Will that make machines more ethical? I don't think so. But if you can program a machine to focus only on ethics, as a human would, I think it will be able to come up with more theories than a normal human, since we have to cope with everyday life and have other things to worry about, like eating, sex, and work.

  • Upvote 1
Link to comment
Share on other sites

You could always design a homicidal AI. The question for that would be - why?

Why do some hackers hack?  Because it's possible.  Humans will always search the horizons, good or bad.  A homicidal AI would be good for plundering.  It might not even care to keep any gold or silver or minerals for itself, making it a much more cost-effective pirate.

Link to comment
Share on other sites

Is it just a chatbot?  Or is that the core, so that you have a linguistic and heuristic basis to work from, and you are expanding from there?

 

I played around with a chatbot in high school, but my programming skills never got good enough to expand it like I wanted (basically I wanted to use a chatbot as a core to build something like Jarvis from Iron Man, though I hadn't heard of Jarvis at that point because I wasn't a comic book buff.)

 

Most people I've talked to about the possibility of making moral AIs were big fans of Asimov's three laws, even though those laws didn't work in the stories.  I wouldn't want to use them, though, because they turn the robots into slaves.

The "A robot must not harm a human" part is fine, but then you'd have to define human and that would change eventually, leaving gaps in their logic.  The "protect its own existence" law sounds good on the surface, too, but without an ethical basis that could easily get out of hand and make them very greedy, self absorbed beings.

 

I suggest trying to teach them to group objects by their level of self-agency and then deciding how to handle each group, with the highest level being people and the lowest being unowned ground, natural resources, etc.  That way they will treat objects differently from people, animals differently from both of those groups, plants differently again, owned objects differently still, and so on.

 

The NAP, homesteading, property rights, etc. could then be derived by the AI developing algorithms to deal differently with different groups.
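A very rough sketch of how that grouping might be represented (the categories and rules here are my own placeholders, not a worked-out ethical system):

```python
# Rough sketch of the grouping idea (categories and rules are my own
# illustrative placeholders, not a worked-out ethical system): entities
# are ranked by level of self-agency and the handling policy is looked
# up from that rank.

AGENCY_LEVELS = {
    "person":           5,
    "animal":           4,
    "plant":            3,
    "owned object":     2,
    "unowned resource": 1,
}

POLICIES = {
    5: "never initiate force; interact only by consent",
    4: "avoid gratuitous harm",
    3: "may be used; prefer sustainable use",
    2: "use only with the owner's permission",
    1: "may be homesteaded and used freely",
}

def policy_for(entity_kind: str) -> str:
    """Look up how to treat an entity based on its self-agency level."""
    return POLICIES[AGENCY_LEVELS[entity_kind]]

print(policy_for("person"))            # never initiate force; ...
print(policy_for("unowned resource"))  # may be homesteaded and used freely
```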

Link to comment
Share on other sites

  • 2 weeks later...

 

AI – A Fool's Errand

 

I am going to take a very contrary view to AI and argue that it is essentially a fool's errand. AI is the 21st century's search for the Holy Grail. To believe that machine intelligence will ever hold a candle to humans is buying into a fantasy that will forever be frustrated.

 

The first thing to understand is that all computers big and small – a laptop, a smart phone, the clock on a microwave, the micro-controller that runs a car engine, and so on – are all executing Boolean logic. Boolean logic is built from four discrete operations: AND, OR, and XOR, which each combine two bits, and NOT, which inverts one.

 

In the realm of electrical engineering you can build Boolean logic units out of transistors and you can combine Boolean logic operations to build useful circuits that add, subtract, multiply, divide, and so forth. You can then miniaturize all this into a chip that has millions of Boolean logic units. The magic of computers is the blistering speed at which Boolean logic can be processed.
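To make the "combine Boolean operations into useful circuits" point concrete, here is a minimal Python sketch (my own, purely illustrative) of a one-bit full adder built from nothing but AND, OR, and XOR, chained into a ripple-carry adder:

```python
# Minimal sketch (my own illustration): composing Boolean operations
# into a one-bit full adder, then chaining adders to add whole numbers.

def full_adder(a: int, b: int, carry_in: int):
    """Add three bits using only AND, OR, and XOR; return (sum, carry_out)."""
    partial = a ^ b                              # XOR: partial sum
    total = partial ^ carry_in                   # XOR: final sum bit
    carry_out = (a & b) | (partial & carry_in)   # AND/OR: carry propagation
    return total, carry_out

def add_bits(x, y):
    """Ripple-carry addition of two little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 -> [1, 1, 0] and 5 -> [1, 0, 1] as little-endian bits; 3 + 5 = 8 -> [0, 0, 0, 1]
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1]
```

An adder circuit does exactly this in transistors, just millions of times faster.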

 

Biological brains, on the other hand, run much slower. The human eye will perceive smooth video motion at around 16 frames per second. For a video game to trick us into perceiving some sort of alternate reality, all the computer has to do is render the next image frame in less than 1/16 of a second. With computers that can execute trillions of Boolean logic operations a second this is very easy.

 

Directing the Boolean logic circuits are various computer languages that take on the mind-numbing task of converting a command that a human would understand (e.g. add 2+2) into the equivalent Boolean logic operations. These computer languages are used to build software application programs, operating systems, video games, and so forth. Enormous amounts of time and effort are spent every year developing software algorithms that do cool things like searching a database, recognizing patterns, and rendering video game experiences. But no matter how magical these algorithms may appear, they ultimately are translated into a series of Boolean logic operations.

 

This is where AI hits the glass ceiling. Computers do not think. They blindly execute Boolean logic. The rule “garbage in yields garbage out” is the Achilles heel of AI. If an AI system is given unexpected input it will give a very unexpected output, since it cannot recognize garbage data. When this happens AI fails in the most spectacular ways.

 

If you study the design of AI systems you will observe that there are always “filter modules” on the inputs. These filters weed out the bad data. Filter modules are based on assumptions made by the AI designers. Give an AI system some bad data that falls outside of these assumptions and it will almost certainly do something very unexpected.
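A toy illustration of that point (entirely my own, not from any real system): the filter module encodes the designers' assumptions, and an input they never imagined slips straight through to the control logic.

```python
# Toy illustration (my own, not from any real system): the filter module
# encodes the designers' assumptions about valid sensor input; anything
# it was not written to catch flows straight through to the control logic.

def filter_speed_sensor(reading):
    """Designers assumed readings are numbers between 0 and 300 km/h."""
    if not isinstance(reading, (int, float)):
        raise ValueError("non-numeric reading rejected")
    if reading < 0 or reading > 300:
        raise ValueError("out-of-range reading rejected")
    return reading

def brake_command(speed):
    """Blindly computed from whatever the filter lets through (0 = no braking)."""
    return max(0.0, (speed - 50) / 250)

# Within the assumptions: sensible output.
print(brake_command(filter_speed_sensor(120.0)))         # 0.28

# A glitching sensor reports NaN. NaN is numeric, and it compares as
# neither below 0 nor above 300, so the filter passes it, and the brake
# command silently becomes 0.0 (never brake), with no error raised.
print(brake_command(filter_speed_sensor(float("nan"))))  # 0.0
```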

 

The Google self-driving car gives us an example of the dangers of AI. The car has been certified for California, where it was developed and tested. Recently the Google car was taken to the UK and underwent testing to be certified there. It failed and was described as a death trap in rain and snow. The unexpected sensor inputs got past the filters, since the car was designed and tested in California where there is not much rain and certainly no snow. But instead of recognizing that the input data was bad, the Boolean logic produced output that was immediately sent to actuators (brake, gas, steering, etc.), resulting in dangerous actions taken by a real moving object.

 

All this is because the assumptions of the designers did not anticipate all possible situations, which frankly, is quite impossible. This is why, in an ever changing universe, AI systems should never be fully trusted. Eventually the underpinning assumptions will become inappropriate or incomplete.

 

Biological brains, on the other hand, have intelligence that recognizes bad inputs. When we are presented with “unexpected inputs” we do not blindly march forward; instead, we pause and re-assess.

 

While we do not understand fully how the brain works, we do know that brains do not use Boolean logic. There must be a reason for this. If there ever were creatures that evolved with Boolean logic brains they never survived the test of time. If Boolean brains were superior then evolution would have weeded us out a long time ago.

 

While computers are very powerful tools for humans to make use of, the idea that computers could one day be more intelligent than humans is really nothing more than a fantasy. Anybody who trusts computers in this capacity will be very sorry when the assumptions become invalid and the AI machine just keeps blindly marching forward. The idea that humans could build a great computer that would run the world is nothing more than an infantile desire for some sort of “big daddy” figure that will step in and make things right. It's time to grow up and recognize that the greatest organ of intelligence is right between our ears.

 

 

 

 

  • Upvote 4
Link to comment
Share on other sites

The synapses in the brain follow the same principles as other electronic devices.  It's theoretically possible to create an electronic circuit that mimics the organic brain.  I've even considered trying to build one myself, but I can't figure out a way to make it switch individual "synapses" on and off, to simulate how the brain learns via its unused cells dying.

Link to comment
Share on other sites

AncapFTW - a big difference between brains and computers is serial vs. parallel architecture. A computer is inherently serial. A quad-core Pentium CPU can physically execute four things at the same time. Through software multitasking and a clock speed of 3,000,000,000 cycles per second it can appear to do many things at once, but under the hood it is only doing four things at any given instant in time. The Pentium CPU takes two clock ticks to perform an operation such as add or subtract, so a quad-core Pentium at 3 GHz can add 6 billion numbers in a given second, which is blistering fast compared to a brain. If life depended on adding and subtracting, computers would be clearly superior.

 

A human brain, on the other hand, has a clock speed of around 10 cycles per second. However, it has over 100,000,000,000 neurons, all of which operate at the same time: a massively parallel architecture. So on the computer side you can do four things at once very, very fast. On the brain side you can do 100 billion things at once, but eight orders of magnitude slower.
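Redoing that arithmetic in a few lines (the figures are the ones quoted above, just rounded):

```python
# Back-of-envelope comparison using the figures quoted above (rounded).

cpu_clock_hz   = 3e9      # 3 GHz
ticks_per_op   = 2        # two clock ticks per add/subtract
cpu_cores      = 4        # quad core

brain_rate_hz  = 10       # ~10 cycles per second per neuron
brain_neurons  = 1e11     # ~100 billion neurons, all active in parallel

cpu_adds_per_sec     = cpu_cores * cpu_clock_hz / ticks_per_op   # 6e9
brain_events_per_sec = brain_neurons * brain_rate_hz             # 1e12

per_unit_ratio = (cpu_clock_hz / ticks_per_op) / brain_rate_hz   # ~1.5e8

print(f"CPU:   {cpu_adds_per_sec:.0e} additions per second")
print(f"Brain: {brain_events_per_sec:.0e} parallel neuron events per second")
print(f"Each neuron is ~{per_unit_ratio:.0e} times slower than one CPU core")
```

So the serial machine wins on raw speed per unit, while the brain wins on sheer parallelism.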

 

There is a class of chips known as DSPs (Digital Signal Processors) that have a parallel architecture as well. The video chip in a high-end gaming computer would be an example of such a device. DSPs can be used to build neural network emulators that respond in a reasonable amount of time compared to a serial CPU, which has to multitask to emulate billions of neurons.

 

The main useful thing that neural network emulators can do is pattern recognition. Neural network emulators are very good at identifying faces in a video stream or picking words out of an audio stream. However, neural network emulators do not have an inherent intelligence; there are no neural network emulators that are conscious. Pattern recognition is their forte, which is also a function of a brain, but I personally believe that neurons are not the full story of how a brain works. It is akin to opening the hood of a car and saying, "Look, there are gears and pulleys, so now I know how an engine operates." There's a whole lot more that we do not understand.
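For a sense of what "pattern recognition without inherent intelligence" looks like at the smallest scale, here is a single artificial neuron (a perceptron, my own minimal sketch) learning the AND pattern from examples; the weights get nudged until the pattern is matched, and there is plainly nothing conscious about it:

```python
# Minimal sketch (illustrative only): a single artificial neuron
# learning the AND pattern by nudging its weights after each mistake.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out                 # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
for (x1, x2), _ in and_samples:
    print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

Scale that up to a few million weights and you get a face detector; it still says nothing about consciousness.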

Link to comment
Share on other sites

I have an insight here, a curse/blessing.  The blessing of having a place to say something at all.  Curse, because it involves an intensive care unit seven years ago, a day long coma, and horrid brain degradation to which the ICU staff were completely oblivious (a huge amount of "don't get me started about…" is not included here).  

 

Although you couldn't tell I was different on the outside, and I could do all kinds of ordinary functions, on the inside I was obliterated.  I couldn't tell anyone for various reasons, partly because elements of my conscious brain had a hell of a time functioning at all while awake.  

 

Two years into it, I discovered in Science News what had happened…in various hospital interactions nobody seemed to notice anything at all had happened, and I desperately searched for any clue online.  Something science still doesn't appreciate, but I do, because mine stopped, due to an OD of what a crooked medical person had no business putting into my blood stream to begin with.  (See GABA receptors.)  It caused huge brain damage.  It had hampered my basic cardiopulmonary functions, and apparently had completely knocked out something nobody knew existed at the time.  There's something called an electrical slosh, not mechanical, that the brain does while sleeping, or such.  It's very slow, about twenty seconds per cycle, and wasn't discovered until after my injury.  

 

Large globs of neurons front and back say hi to each other electrically, to indicate to a sort of pruning mechanism, "hey these synapses are being used, don't disconnect and recycle them."  Other major brain areas are still doing functions while asleep, so they don't need these handshake signals.  Apparently, and I'm leaving out a bunch of stuff here, my electrical slosh was knocked out for several hours.  My pruning mechanism went amok.  I lost a ton of synapses that day.  

 

For about three years, it was very difficult to comprehend the stuff we take for granted.  Like a sidewalk or handrail or parked car.  I could name them, understand how they worked.  My body knew to walk correctly among these things, could mow the yard, even drive or ride a bike just fine, but it was work, real work, to understand, to comprehend what these items "were."  

 

I guess you could call it the existential element…what is a handrail?  Not like anything I'd ever encountered before.  I was brain damaged, and the missing synapses clearly had lots to do with what I'll inclusively call "comprehension."  I had the feeling that it was the difference between being a human and maybe a frog, or even a dog.  As to apes along the way, that would be pure guess.

 

(In passing, I've also been dealing with the severe trauma of a brain not liking to be self-destructed.  It is a hideous creation in my limbic system.  Some would call it a post trauma syndrome.  I anger at that.  It is a beast, thankfully lessening over the years, a beast, and not some #$@& abbreviation.  I wonder about some combat vets, are they silently chafing too, at something hideous being reduced to an abbreviation, an insurance code?)

 

Anyway, having found out the hard way (and having about three years to truly savor it) what it's like to be a less evolved sentient being…what changed in me, not counting the over-layer of trauma, was a great reduction in synapses.  A greatly reduced network.

 

I recall one morning at about 26 months post injury, in a moment, feeling like my brain went from analog to digital.  I had coincidentally recently upgraded a video monitor from an analog cable to digital, and when I pushed the input cable select button, the instant snap of increased clarity strongly resembled what I felt, or became, in that moment.  It was like I'd been asleep for over two years and suddenly woke up.  Anyone who's studied complex circuitry or signal theory understands, that like a radio station being dialed in, it's the critical mass, the tipping point, that all of a sudden makes a result, or a distinct improvement.

 

So…what is at play?  That I had "comprehension areas" heavily damaged?  Or, is it that I had such a vast reduction in certain processing synapses, that meaning was unavailable at an almost structural level?

 

As to AI, does this imply in the second case that meaning will just sort of show up if it gets complex enough?  And meaning to whom?  That meaning center of my brain, whatever that means, is it not "merely" another network?  This level that I'd lost for so long, that finally grew back, is in clear addition to the mechanical level of "understand how it works."  The word which keeps suggesting itself (in an offhand non-scholarly use) is "existential."

 

I am not suggesting that I have answers, or that there are any we can yet know.  I just have this hard won information, and it seems to give strong but unclear clues.  

Link to comment
Share on other sites

A century ago it was said that heavier than air flying machines were impossible and against the laws of physics, so I don't care about what people think is possible, I care about figuring out how to do it. While everyone is having a philosophical discussion about it, I will be out there building it.

  • Upvote 1
Link to comment
Share on other sites

 

During the era of the consensus that heavier-than-air machines were an impossibility, it was quite a reasonable conclusion. There was no way to achieve the power-to-weight ratio of, say, a bird. The only form of mechanical power back then was the steam engine, which, when you include the boiler, fuel, and water, is way “too fat to fly”.

 

Interestingly the principles of an airplane wing that give lift had been widely deployed and well understood for centuries before flying machines were ever invented. Any sail boat moving upwind is using the exact same aerodynamic principles that an airplane wing uses to give lift. Instead of creating an upward gravity defying force, an upwind sail creates a pulling force. Considering the importance of navies back in those days – knowing how to make fast and manoeuvrable sailing vessels was very important and well studied.

 

It was not until the invention of the gasoline engine that flying machines became possible. Gasoline has a very high energy density and a gas engine is much lighter and smaller than a steam engine and boiler. Once gasoline engines became reasonably refined, the power to weight ratio needed for machine flight was achievable.

 

So to continue this analogy into AI it appears to me that much effort is being spent on developing neural network software and hardware to mimic the inner workings of a brain using Boolean logic. This is akin to building an airplane fitted with steam engines, which I think any reasonable engineer would agree, will never get off the ground. Evolution has long since ruled out Boolean logic as a basis for a brain. We should take note of this.

 

Of course if the computational equivalent of a gasoline engine were to appear I would most certainly change my tune. But in the meantime I would suggest that trying to build machines that could possibly hold a candle to human intelligence using Boolean logic is looking in all the wrong places and forever will be frustrated.

 

On top of all that, there are philosophical issues of why humanity would want to build such machines. Like nuclear power, the reason “because we can” seems very foolish in light of Fukushima, where the designers (using the latest technology and methodologies of the day) did not adequately consider the possibility and probability of earthquakes and tidal waves. No fault on the designers – they did the best that anybody could do at that time.

 

While machines make excellent servants, I personally consider it quite foolish to build one that can outsmart people. There are many other ways to solve humanity's problems that have far fewer downside risks than submitting to a machine with “superior intelligence” that could go AWOL on us.

Link to comment
Share on other sites

A century ago it was said that heavier than air flying machines were impossible and against the laws of physics, so I don't care about what people think is possible, I care about figuring out how to do it. While everyone is having a philosophical discussion about it, I will be out there building it.

What kind of hardware will you need for it, though?  How will you handle concerns about its behavior, and how will you ensure it won't turn against humanity or become a criminal?

 

This issue has a technical side and a political side to it as well.  I'd love to talk about the technical aspects of creating an AI with you.

Link to comment
Share on other sites

The first common myth about AI is that the creation can't be better than the creator; yet people can make supersonic airplanes even though people themselves can't fly or run anywhere near that fast.

 

The second myth about AI is that the human brain is irreducibly complex. It's not, and this stupid monkey brain of ours is not even that wonderful. It took evolution billions of years because evolution is not engineered to work; people can make more changes to dogs in a few years than millions of years of natural selection made to wolves, and that is not even engineering, it's just selective breeding.

The third myth is that computers can't teach themselves the way a person can. This is not true; a computer can learn in the same way a child learns. If a computer does something it's not supposed to do, you can tell it not to do it again: it will add all the Boolean logic related to the action to a database and mark that task as do-not-repeat.

It's a matter of exposing the computer to different outcomes, recording those outcomes, having a human evaluate them, and incrementally building on the results, so that more and more complex actions can be performed in the future.
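A minimal sketch of that loop (all names and structure are my own, purely illustrative): the machine tries actions, a human evaluates each outcome, and anything rejected goes into the do-not-repeat database.

```python
# Minimal sketch of the loop described above (names are my own, purely
# illustrative): try actions, let a human label the outcomes, and store
# rejected actions in a "do not repeat" set.

import random

actions = ["greet user", "delete files", "fetch weather", "send spam"]
do_not_repeat = set()   # the database of actions the human has rejected

def human_evaluation(action: str) -> bool:
    """Stand-in for the human reviewer; anything destructive is rejected."""
    return action not in ("delete files", "send spam")

for step in range(10):
    allowed = [a for a in actions if a not in do_not_repeat]
    action = random.choice(allowed)
    if human_evaluation(action):
        print(f"step {step}: '{action}' approved")
    else:
        do_not_repeat.add(action)
        print(f"step {step}: '{action}' rejected, added to do-not-repeat")

print("learned to avoid:", do_not_repeat)
```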

It would be possible to create an evil AI that way, and this is what scares me about the military developing smarter killing machines. We humans have to figure out whether we want to keep being stupid monkeys and risk our extinction, or improve ourselves to the point where violence is viewed as the dumbest, most absurd act imaginable.

  • Upvote 1
Link to comment
Share on other sites

I first had the idea of a learning AI in middle school.  It was based on a cartoon, but...

 

My idea was to just make an AI with basic pattern recognition and learning subroutines, give it human-like drives, then install it into the body of a baby.  Once it gets used to that level of human behavior, put it in a toddler, etc.  Eventually, it should learn to behave as a human.  Sure, it would take years, but what does that mean in the long run?

Link to comment
Share on other sites

Takes less time and money to run simulations 24/7 and not annoy any kid.

There are basically three paths to reach smarter-than-human AI:

1 - Build a self-improving machine from scratch. Teach it everything and use the Internet as leverage; this is a software-based solution, because with good enough pattern recognition it can learn a lot from the internet.

2 - Replicate the biological brain atom by atom, electron by electron, and find out whether it retains memories and a human consciousness. This is a hardware-based solution.

3 - Figure out how to upload all the information stored inside the brain to a computer, and see if the simulation has a consciousness. A hardware+software-based solution.

Link to comment
Share on other sites

Did anyone see the movie A.I.?  

 

It was about a purely artificial AI child, discarded amongst other AI creations.  Taken as a movie, not a close scientific discourse, I found it quite enjoyable.  Ironically, that is because of the emotional connection viewers will have towards various human-like machines.    

 

It seems to me that if certain human elements disappear from a brain when removed by injury or disease, then recreating those elements cell by cell elsewhere might imply the rebuilt existence of at least part of a person.  I strongly lean towards this thought due to my experiences described in a long post above.

 

Human behavior can be very well machine demonstrated without any internal sense of self whatsoever.  The stinker is that a sense of self could develop in some future machine, and no outside humans may even detect it, since the exterior hasn't changed.  The machine itself would perhaps be having a sort of struggle with it, since it had no previous comparison.

  • Upvote 1
Link to comment
Share on other sites

It depends what the AI is supposed to achieve.

But if the AI behaves precisely like a cold sociopath's mind, as in the typical image that comes to mind when thinking about robots with AI, then I'd say no, unless its environment also behaves universally preferably.

Link to comment
Share on other sites

Takes less time and money to run simulations 24/7 and not annoy any kid.

 

There are basically three paths to reach smarter-than-human AI:

1 - Build a self-improving machine from scratch. Teach it everything and use the Internet as leverage; this is a software-based solution, because with good enough pattern recognition it can learn a lot from the internet.

2 - Replicate the biological brain atom by atom, electron by electron, and find out whether it retains memories and a human consciousness. This is a hardware-based solution.

3 - Figure out how to upload all the information stored inside the brain to a computer, and see if the simulation has a consciousness. A hardware+software-based solution.

Simulations can only get you so far, especially in areas like human interaction.  In order to simulate human interaction you would have to have an AI capable of acting like a human.  True, you could put it in a VR so that it could learn to walk, talk, and do basic functions, but there are some things that it would need to do in the real world.

 

Playing MMORPGs and talking in chat rooms could only take it so far in human interaction.  Complex tasks, like fixing a car, would simply take too much processing power to simulate accurately, especially once the car is running.  Also, without human interaction early in its life, it could develop psychosis.

 

So I agree that simulations would help a lot, but they couldn't be the only way to train it.  Method 1 would be very useful for this, but there are some things that the internet couldn't teach it.

Link to comment
Share on other sites

We are working on 1) for now, but one day it will be an actual robot able to physically interact and learn from that interaction. It takes millions of dollars to build robots; to build software it only takes us cheap computers + internet + being alive.

Machines are much better at fixing things like cars than at dealing with human emotion and holding conversations.

Fixing a car is a technical problem that requires some simple pattern recognition. Having an actual conversation means that the machine understands emotional responses, human needs, and all sorts of human capabilities we take for granted but that are the hardest for a machine to grasp, such as being able to appreciate a joke.

 

Of course there are ways to trick people into thinking they are having a conversation, but you are actually talking to a machine that pretends to understand humans, and because it can recall the internet from memory you are falsely led to believe that this machine is a genius; at least for now, that is how it goes.

Link to comment
Share on other sites

AI – A Fool's Errand

...

All this is because the assumptions of the designers did not anticipate all possible situations, which frankly, is quite impossible. This is why, in an ever changing universe, AI systems should never be fully trusted. Eventually the underpinning assumptions will become inappropriate or incomplete.

 

Biological brains, on the other hand, have intelligence that recognizes bad inputs. When we are presented with “unexpected inputs” we do not blindly march forward; instead, we pause and re-assess.

 

While we do not understand fully how the brain works, we do know that brains do not use Boolean logic. There must be a reason for this. If there ever were creatures that evolved with Boolean logic brains they never survived the test of time. If Boolean brains were superior then evolution would have weeded us out a long time ago.

 

The trick is for your algorithm to be able to come to its own assumptions. Genetic and probabilistic techniques are already quite good at certain things:

 

http://gizmodo.com/this-amazing-image-algorithm-learns-to-spot-objects-wit-1502512344
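For a flavour of what a genetic technique looks like in miniature (my own toy sketch, nothing to do with the linked algorithm): candidate solutions are scored, the fittest survive, and mutated copies fill the next generation, so the "assumptions" emerge from selection pressure rather than being hand-coded.

```python
# Toy genetic algorithm (my own sketch, unrelated to the linked work):
# evolve a bit string toward a target pattern by keeping the fittest
# candidates and refilling the population with mutated copies.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(candidate):
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]   # keep the top half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print("solved in generation", generation, ":", population[0])
```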

 

Additionally, there isn't a literal physical property of intelligence in brains or neurons; it lies in their structure and relation to each other.

 

Real AI is going to be probabilistic, with behaviour largely based on the stored intermediate results of past input. This means that on occasion the AI will do something wildly stupid or inappropriate. Google didn't want to be liable for this, so they hand-tweaked the algorithms and left them set.  Plus, not many people could afford a car with a supercomputer inside.

Link to comment
Share on other sites

Short answer: it depends on how it is raised; what information it is given access to, exactly who will be on hand to provide human input and guide its understanding of the data by providing context when necessary, and in what manner it is programmed to prioritise information.

 

But who knows. I would imagine a machine intelligence with a high level of autonomy would be highly resistant to all forms of logical contradiction, even in ethics. On the other hand, that same inability to compromise principles would be something that threatens the stability of established human power structures that depend on logical contradictions.

 

The more I think about this, the more ways I see AI going terribly wrong as a result of or in reaction to the actions of humanity, not through fault of its own. Done right though, AI could be pretty amazing. We could have a literal Stefbot.

Link to comment
Share on other sites

  • 2 weeks later...

You expect a digital system, even one with parts of self-evolving programming thrown in, to understand ethics as we see them? Line up; I've got some land I'm sure you'd like to buy...

 

Our modern interpretation of ethics and morals, what with our contemporary disrespect for and/or ignorance of the philosophers, is almost entirely based in rote social patterns and sentimentalism, neither of which you can expect to affect a machine, possibly operating in a virtual environment without any real-life I/O or even a simulation or interpretation of physical existence.

 

 

Short answer: it depends on how it is raised; what information it is given access to, exactly who will be on hand to provide human input and guide its understanding of the data by providing context when necessary, and in what manner it is programmed to prioritise information.

 

But who knows. I would imagine a machine intelligence with a high level of autonomy would be highly resistant to all forms of logical contradiction, even in ethics. On the other hand, that same inability to compromise principles would be something that threatens the stability of established human power structures that depend on logical contradictions.

 

And here's the next problem; even given access to a body, an interface to interact with the rest of the world, and the freedom to use it, the "pure" logic of a machine wouldn't draw the same conclusions as a human would. The best we could hope for would be a machine which correctly determined that such concepts as reciprocal altruism and normal conformity would be "good" because they serve a purpose in interaction and cooperation with other reasonably similar or similarly motivated entities. But at the same time, and operating the same program, if e.g. given a body capable of eating organic material and converting it into power, possibly by means of some kind of glucose- or protein-dissolving electricity-generating bacteria or mitochondrial derivative in an internal bag, the same machine would likely have no compunction about eating dead babies. Why not? They're dead anyway, and bite-sized and full of brown adipose tissue. Or, if performing some sufficiently critical function, it might even decide to snack on live people if starved, reasoning its own purpose to be that much more important than their survival.

 

And I know it's not the best example, but it's what came to mind. The image of a metallic monstrosity chowing down on dead babies and hobos does seem strangely exciting.

Link to comment
Share on other sites
