
Artificial vs Natural Intelligence



It can't. In order to have human-level AI you would need to build a humanoid robot, give it a sex, a desire for reproduction, a digestive system, all of a human's senses, desires, etc. Basically you would need to build an artificial human that, practically speaking, would be indistinguishable from a normal human. The AI issue was figured out in the '70s. The pundits who keep talking about AI have never done even a cursory analysis of the subject. Elon Musk worries that AI will try to take over the world, but this is scientifically inaccurate: we know from neurology that a person who has suffered a brain injury affecting their reward system (i.e., who is unable to form desires) is tantamount to a vegetable. If we don't program in a need to take over the world, it won't have the desire to take over the world; and to say that it will develop a need to take over the world is like saying a clock can spontaneously assemble itself from a junkyard. Conquering things is a human trait.

Computer science is not separate from biology. Computer science (transistors, networks, programs, algorithms, etc.) is an artificial imitation of biology. Computer science has had 60+ years to develop; biology has had 4 billion. At the very best, computer science can be on par with biology. We can never exceed it, because we humans are the ones creating the machines, and humans are biological. Saying a human can break free of the bonds of biology is like saying a rock can break free of the law of gravity.

A few movies I can think of that have gotten AI right are Ex Machina, Prometheus, and the original Ghost in the Shell. The GitS example is interesting because in the movie the AI spontaneously manifested itself, and the first thing it did in order to remain sentient was to put limits on itself (such as not replicating or copying itself); it then searched for a human-like body in order to fully become an autonomous entity.


It can. In order to have human-level AI you build a humanoid robot, give it a sex, a desire for reproduction, a digestive system, all of a human's senses, desires, etc. Basically you build an artificial human that, practically speaking, will be indistinguishable from a normal human. Then remove the code that emulates sleep. You now have an AI which works 24 hours a day, exceeding the abilities of a human-level AI.


Intelligence is the ability to process information, find patterns, and come up with theories based on data. The sheer speed of artificial expert AI makes it better suited to those tasks than humans. A general AI will have the same advantages.


On 08/10/2017 at 4:46 AM, ofd said:

Intelligence is the ability to process information, find patterns, and come up with theories based on data. The sheer speed of artificial expert AI makes it better suited to those tasks than humans. A general AI will have the same advantages.

Would it be able to create new theories? If so, how?


On 08/10/2017 at 4:08 AM, kenstauffer said:

It can. In order to have human-level AI you build a humanoid robot, give it a sex, a desire for reproduction, a digestive system, all of a human's senses, desires, etc. Basically you build an artificial human that, practically speaking, will be indistinguishable from a normal human. Then remove the code that emulates sleep. You now have an AI which works 24 hours a day, exceeding the abilities of a human-level AI.

So an imitation is more intelligent simply because it doesn't need sleep? I must disagree. The phrase "human-level AI" contradicts your hypothesis and implies a limit at human level. I can see how an AI would have superior intellect, but not intelligence.


On 08/10/2017 at 0:54 AM, Wuzzums said:

It can't. In order to have human-level AI you would need to build a humanoid robot, give it a sex, a desire for reproduction, a digestive system, all of a human's senses, desires, etc. Basically you would need to build an artificial human that, practically speaking, would be indistinguishable from a normal human. The AI issue was figured out in the '70s. The pundits who keep talking about AI have never done even a cursory analysis of the subject. Elon Musk worries that AI will try to take over the world, but this is scientifically inaccurate: we know from neurology that a person who has suffered a brain injury affecting their reward system (i.e., who is unable to form desires) is tantamount to a vegetable. If we don't program in a need to take over the world, it won't have the desire to take over the world; and to say that it will develop a need to take over the world is like saying a clock can spontaneously assemble itself from a junkyard. Conquering things is a human trait.

Computer science is not separate from biology. Computer science (transistors, networks, programs, algorithms, etc.) is an artificial imitation of biology. Computer science has had 60+ years to develop; biology has had 4 billion. At the very best, computer science can be on par with biology. We can never exceed it, because we humans are the ones creating the machines, and humans are biological. Saying a human can break free of the bonds of biology is like saying a rock can break free of the law of gravity.

A few movies I can think of that have gotten AI right are Ex Machina, Prometheus, and the original Ghost in the Shell. The GitS example is interesting because in the movie the AI spontaneously manifested itself, and the first thing it did in order to remain sentient was to put limits on itself (such as not replicating or copying itself); it then searched for a human-like body in order to fully become an autonomous entity.

How is an algorithm an imitation of biology, or what are algorithms analogous to in biology?


On 8/10/2017 at 2:08 PM, kenstauffer said:

Then remove the code that emulates sleep.

We don't even know why animals sleep in the first place. We can't remove something we have no idea how to build.

 

On 8/10/2017 at 2:08 PM, kenstauffer said:

You now have an AI which works 24 hours a day, exceeding the abilities of a human-level AI.

This is what I don't get: if it's 100% human, albeit artificial, how can it exceed human intelligence?

If you say it's going to process information faster, you would be wrong. It won't. You can't build an artificial human brain that processes information faster than the fastest human. The brain itself is structured around having different processing speeds; some parts of it being slow is a feature, not a setback. What if your brain were 2x faster? Do you think you would be able to move? The second impulse would arrive at your muscles well before your muscles had a chance to react to the first. How would your eyes work, or detect motion in the first place? You would see the world as a series of slideshows. Same thing with speaking. Read "Thinking, Fast and Slow" for more.

On 8/10/2017 at 2:46 PM, ofd said:

Intelligence is the ability to process information, find patterns, and come up with theories based on data. The sheer speed of artificial expert AI makes it better suited to those tasks than humans. A general AI will have the same advantages.

I have heard Sam Harris make the argument that an AI can live something like 80 years in 1 human second. No... NO... this is antiscientific. There are limits to reality, colloquially called the LAWS OF PHYSICS. There's an upper limit to how fast things can move, called the speed of light. We can never, ever surpass it. And even if we get computers just to come close to it, we're entering the realm of relativity, which in practice means the faster you go, the faster time around you moves. Meaning 1 computer second would equal 80 human years (a rough generalization, but I hope you get the point).

Computers haven't gotten faster for decades because transistors have already been perfected. Our PCs run faster because of (a) parallel processing and (b) more resilient materials that can run at full speed without burning up.

So returning to your point: if you want faster AI you can either have more AIs in different bodies, making AI no faster than human society, or more AIs in the same body, essentially giving the AI a mental illness.

Again, this stuff has been figured out for literally decades. After he retired, Feynman used to hold lectures on this very topic: the limits of processing power and the tricks we can find to overcome them. He was one of the first to suggest a parallel-processing mechanism, and to suggest changing our whole infrastructure to accommodate a ternary system so as to find a use for quantum computers.


1 minute ago, ProhibitionKilz said:

How is an algorithm an imitation of biology, or what are algorithms analogous to in biology?

Feedback loops; in pseudocode: IF (X) THEN (X+1); in biology: the hormone system.

You can basically reduce all programs to some if/then. If malware detected, then quarantine and delete. If foreign organism detected, then quarantine and kill.
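
A minimal sketch of that kind of if/then feedback loop, loosely modeled on hormonal negative feedback (the numbers and names are made up for illustration):

    level = 10.0                 # circulating hormone level
    SETPOINT = 5.0               # the body's target
    for step in range(20):
        if level > SETPOINT:     # IF (X) ...
            level -= 1.0         # ... THEN secrete less
        else:
            level += 0.5         # ... ELSE secrete more
    print(level)                 # oscillates close to the setpoint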

Or look at the very core of computers, the transistors. They can be either on or off. Nerve cells or muscle cells are the same: they can be either off or on.

Whatever we think we might have figured out through our human ingenuity, biology has figured it out first. Like quantum theory: https://arstechnica.com/science/2011/12/more-evidence-found-for-quantum-physics-in-photosynthesis/


1. THOUGHT EXPERIMENT

If it is possible, it can be created.

Say we have an artificial body with parts that do not go bad, in a clean-room environment. Say we start a computer program; it's a very simple one. Rendered as runnable Python rather than pseudocode (don't actually run it: it executes arbitrary generated code and never halts), it looks something like this:

    import itertools, string

    for length in itertools.count(1):                 # REPEAT with +1 to Combination
        for combo in itertools.product(string.printable, repeat=length):
            try: exec("".join(combo))                 # RUN the candidate program
            except Exception: pass                    # invalid code, try the next one

So what happens is it runs "a", then "b", then "c"... and eventually "djsdfkjd kr43jr3r9enf", and so on through every combination of characters. After some crazy amount of time, the code could be so complex, with so much logic to RUN and to do, that its abilities would be indistinguishable from an actual human life.

Can a human type this specific code? No. Could it technically be possible somehow? Yes, though in my thought experiment it would take far too long to be useful in any way.

 

2. MIND EMULATION
More importantly, what about mind uploading? They already do it with mice and the like: they record the brain patterns and then run them on a computer. Is this not artificial life? When they do it to humans, would it not be artificial intelligence?


7 hours ago, smarterthanone said:

2. MIND EMULATION
More importantly, what about mind uploading? They already do it with mice and the like: they record the brain patterns and then run them on a computer. Is this not artificial life? When they do it to humans, would it not be artificial intelligence?

This could be achievable, of course, but not in the near future. We still haven't figured out how the brain works in order to simulate it. This is the whole plot of the Ghost in the Shell anime. If you take parts little by little from a human and replace them with artificial ones, at some point you'll have a fully artificial conscious person, a machine with a soul. And if that's possible, will the reverse also be possible: to create a machine that mimics humans little by little and eventually develops a consciousness?

What do you mean by artificial life and artificial intelligence? We have had artificial intelligence for something verging on a century. Intelligence (defined as the ability to predict the future accurately) and consciousness don't go hand in hand. Life and consciousness don't go hand in hand either.

There was a start-up with a non-invasive brain-wave scanner that you would place on your head (like headphones), and it could read off brainwaves with a pretty good degree of accuracy. The idea was to use it as a computer peripheral: instead of using a keyboard shortcut, you map a macro to some specific brainwave and trigger the macro just by thinking about it. Here's the TED talk.


Quote

Would it be able to create new theories? If so, how?

In theory, coming up with new theories is a trivial task. In most cases, hypotheses about empirical phenomena come from abductive reasoning. That can be simulated well with a Bayesian, self-correcting approach.
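
A minimal sketch of that Bayesian, self-correcting update; the hypotheses and likelihoods are invented purely for illustration:

    priors = {"H1": 0.5, "H2": 0.5}          # two competing hypotheses
    likelihood = {"H1": 0.9, "H2": 0.2}      # P(observation | hypothesis)

    def update(priors):
        posterior = {h: p * likelihood[h] for h, p in priors.items()}
        total = sum(posterior.values())
        return {h: p / total for h, p in posterior.items()}

    print(update(priors))    # H1 rises to ~0.82: the 'theory' the data favors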

 

Quote

I have heard Sam Harris make the argument that an AI can live something like 80 years in 1 human second.

I listened to that podcast too. I remember the argument was that AIs can theoretically process in one second the information a human gathers in 80 years. This is true to some extent for expert AI systems today: the Go AI can study hundreds of games in a few seconds, while it takes humans days or months to do so.

Here is a good article that gives you the specs of a human brain: http://www.slate.com/articles/health_and_science/explainer/2012/04/north_korea_s_2_mb_of_knowledge_taunt_how_many_megabytes_does_the_human_brain_hold_.html

In short, you have 100 billion neurons, running at a kilohertz, with about 2.5 petabytes of storage. Technology is still a long way from that, even when you compare those numbers with graphics cards that have a similar architecture (massively parallel computation): https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1070/
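
Back-of-envelope from those numbers (a loose comparison, not a rigorous one; the GTX 1070 figure is roughly its single-precision rating):

    neurons = 100e9              # neurons in the brain, per the article
    rate_hz = 1e3                # roughly a kilohertz each
    brain_events = neurons * rate_hz          # ~1e14 neuron-events per second
    gpu_flops = 6.5e12           # a GTX 1070 is on the order of 6.5 TFLOPS
    print(brain_events / gpu_flops)           # the brain keeps a ~15x raw event rate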


On 11-8-2017 at 11:11 PM, ProhibitionKilz said:

Would it be able to create new theories? If so, how?

(paraphrasing an example from The Master Algorithm)

Imagine you're trying to cure cancer. You would need a way to interrupt the workings of cancer cells while leaving the other cells unharmed. In order to do that you need an accurate model of the cell's workings, precise enough to identify weak points unique to cancer cells. Also, cancer cells are caused by many different mutations, meaning you'll need to be able to individually identify the cancer within each patient.

In theory you could accomplish this by creating a Markov model (let's just say it's a spinoff of Bayesian models) which explores each possible state and calculates the probabilities of different state transitions. You of course need to manually note down each state, or specify observations in such a way that states can be inferred. You'll also need an algorithm to manipulate the states and add possible logical statements (hypotheses), which the Markov model will then check for likelihood. You then have an algorithm which can model the workings of that cell.

In the real world, though, we can't even see all the states and workings of a cell to begin with, making the state space incomplete. A Markov model can't handle that, so you'll need a way to reduce the noise (or a better observer).
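
A toy version of such a Markov model; the states and transition probabilities here are invented for illustration:

    import random

    transitions = {                     # P(next state | current state)
        "healthy":   {"healthy": 0.95, "mutated": 0.05},
        "mutated":   {"mutated": 0.80, "healthy": 0.15, "cancerous": 0.05},
        "cancerous": {"cancerous": 1.0},
    }
    state = "healthy"
    for _ in range(100):                # simulate one cell over time
        nxt = transitions[state]
        state = random.choices(list(nxt), weights=list(nxt.values()))[0]
    print(state)                        # where this cell ended up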

On 10-8-2017 at 9:54 AM, Wuzzums said:

Elon Musk worries that AI will try to take over the world, but this is scientifically inaccurate: we know from neurology that a person who has suffered a brain injury affecting their reward system (i.e., who is unable to form desires) is tantamount to a vegetable. If we don't program in a need to take over the world, it won't have the desire to take over the world; and to say that it will develop a need to take over the world is like saying a clock can spontaneously assemble itself from a junkyard. Conquering things is a human trait.

Something doesn't need the desire to take over the world in order to actually take over the world. A goal state could have taking over the world as a requirement. Besides, most goal states (such as a taken-over world) are too complex to formulate manually. You'll need to let the AI observe a goal state, infer the conditions behind it, and then let it do its thing.

Furthermore, the main point is not that Skynet happens, but that once an AI exists which can outsmart most of mankind, we wouldn't be able to stop it once it set out to do whatever we wanted it to do. A faulty goal state could then cause serious damage.

On 10-8-2017 at 9:54 AM, Wuzzums said:

The AI issue was figured out in the '70s.

Which papers?


8 hours ago, Kikker said:

Which papers?

https://en.wikipedia.org/wiki/Genetic_algorithm#History

8 hours ago, Kikker said:

Something doesn't need the desire to take over the world in order to actually take over the world. A goal state could have taking over the world as a requirement. Besides, most goal states (such as a taken-over world) are too complex to formulate manually. You'll need to let the AI observe a goal state, infer the conditions behind it, and then let it do its thing.

Furthermore, the main point is not that Skynet happens, but that once an AI exists which can outsmart most of mankind, we wouldn't be able to stop it once it set out to do whatever we wanted it to do. A faulty goal state could then cause serious damage.

Taking over the world is a goal state in and of itself (your words), so it cannot NOT be a goal state by being part of a larger goal state.

Again, I'll repeat myself. In order to get a computer to do something, either (a) you program it to do it, or (b) it has a desire to do it. If you don't program it, it won't happen. If it doesn't have a desire for it, it won't happen. And if it has a desire for it, then the AI is tantamount to a human; lots of humans have a desire to rule the world and none manage it, so it won't be able to take over the world either.

Furthermore, lots of humans have a desire to stop those who seek to take over the world, so it's only logical to assume that a human-level AI will eventually express the same desire.


13 hours ago, Wuzzums said:

https://en.wikipedia.org/wiki/Genetic_algorithm#History

The latest breakthroughs (convolutional neural networks, AlphaGo, the Netflix algorithm) don't make use of a genetic algorithm. There have even been serious doubts about the efficiency of genetic algorithms in their original form. I don't know what you think they figured out in those papers, but genetic algorithms are generally used to generate connection structures, while backpropagation is used to optimize the parameters. To draw an analogy to humans: most of our intelligence isn't encoded in our DNA; rather, we learn it throughout our lives through a process we haven't figured out. You don't have to recreate the genetic process in order to recreate that learning ability.
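
For reference, here is a genetic algorithm in miniature, with selection, crossover, and mutation over a population (the toy fitness target is arbitrary):

    import random

    def fitness(x): return -abs(x - 42)                  # closer to 42 is fitter
    pop = [random.uniform(0, 100) for _ in range(20)]    # random initial population
    for generation in range(100):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                               # selection: the fittest survive
        children = [(random.choice(parents) + random.choice(parents)) / 2  # crossover
                    + random.gauss(0, 1)                 # mutation
                    for _ in range(10)]
        pop = parents + children
    print(round(max(pop, key=fitness), 1))               # converges near 42.0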

13 hours ago, Wuzzums said:

Taking over the world is a goal state in and of itself (your words), so it cannot NOT be a goal state by being part of a larger goal state.

I don't know what semantic game you're playing, but anything can be a goal state and anything can be a sub-goal state. If that wasn't clear from my using world domination as both a goal state and a sub-goal state, I apologize. World domination could be the easiest path to accomplish the goal "continue humanity's existence", for example.

13 hours ago, Wuzzums said:

Again, I'll repeat myself. In order to get a computer to do something, either (a) you program it to do it, or (b) it has a desire to do it. If you don't program it, it won't happen. If it doesn't have a desire for it, it won't happen. And if it has a desire for it, then the AI is tantamount to a human; lots of humans have a desire to rule the world and none manage it, so it won't be able to take over the world either.

Furthermore, lots of humans have a desire to stop those who seek to take over the world, so it's only logical to assume that a human-level AI will eventually express the same desire.

Your (a) and (b) distinction is a bit weird, since the difference isn't very clear. The structure in which a desire can develop is most likely programmed; otherwise the structure is written by a program which is in turn programmed (etc.). I don't understand how the two are mutually exclusive. Here is an explanation of how a cooperative AI might be developed.

When talking about human-level intelligence, I assumed it meant the same learning ability and the same skill ability. That doesn't imply the same moral system or the same desires. To put it differently: a bear's ability to defeat a human in close combat doesn't imply human-level intellect regarding close combat. An AI's ability to take over the world doesn't imply human-level intellect regarding anything except its ability to take over the world.

 

4 hours ago, smarterthanone said:

I am going to call bull on the AI that made up its own language that even people couldn't understand. They couldn't understand it because it was gibberish nonsense and the AI failed. They pushed it for publicity, for a story. The end.

I am a PHP developer.

Wrong post?


21 hours ago, Kikker said:

I don't know what semantic game you're playing, but anything can be a goal state and anything can be a sub-goal state. If that wasn't clear from my using world domination as both a goal state and a sub-goal state, I apologize. World domination could be the easiest path to accomplish the goal "continue humanity's existence", for example.

I used your definition of goal state. You said that because X can be both a goal state and a sub-goal state, X can be contained within a goal state. This is tautologically wrong because of BASIC MATH. X cannot be both X and X-1.

21 hours ago, Kikker said:

most of our intelligence isn't encoded in our DNA; rather, we learn it throughout our lives through a process we haven't figured out. You don't have to recreate the genetic process in order to recreate that learning ability.

Yes it is and yes we do.

To even imply that human intelligence is not biological, and implicitly strongly tied to DNA and evolution, is absurd. Name one creature on this planet that has a larger brain and more powerful cognitive capacities than are ascribed to it in that species' DNA.

21 hours ago, Kikker said:

To put it differently: a bear's ability to defeat a human in close combat doesn't imply human-level intellect regarding close combat. An AI's ability to take over the world doesn't imply human-level intellect regarding anything except its ability to take over the world.

Humans are apex predators. Bears compared to humans are innocuous. The very fact that you had to take away the traits that make a human an apex predator should ring a bell in your head that your analogy doesn't hold water. You basically said "a rock is far more dangerous than a bullet because it's heavier", or "a car is deadlier than an H-bomb because a car has more mass".

21 hours ago, Kikker said:

When talking about human-level intelligence, I assumed it meant the same learning ability and the same skill ability. That doesn't imply the same moral system or the same desires.

Yes it does. I'll put it into a syllogism to make it as clear as possible:

 

Humans want to take over the world

Humans have human intelligence

AI wants to take over the world.

Therefore it is implied AI has human intelligence.

Human intelligence is strictly related to human biology.

Human biology implies desires among other things.

Therefore it is implied that an AI that wants to take over the world has human intelligence and desires.

 

Think about this for at least one second: do you know of any desire in yourself, or any action you partake in, that doesn't have an evolutionary biological motive or foundation behind it?

 


52 minutes ago, Wuzzums said:

I used your definition of goal state. You said that because X can be both a goal state and a sub-goal state, X can be contained within a goal state. This is tautologically wrong because of BASIC MATH. X cannot be both X and X-1.

I admit the phrasing was sloppy. But if you keep insisting I meant X = X and X = X+1, then you're assuming from the start that I'm an idiot.

Just to be clear, what I meant was: you could have a goalstate(x), but you could also have a goalstate(y) which has a subgoal(x). Since you generally don't know about subgoal(x), it's potentially dangerous.
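
A hypothetical sketch of what I mean, with a hand-written decomposition table standing in for whatever the planner would actually infer (all names are illustrative):

    def subgoals(goal):
        table = {                                      # invented decomposition only
            "goalstate_y": ["secure_resources"],
            "secure_resources": ["take_over_world"],   # the unnoticed subgoal(x)
        }
        return table.get(goal, [])

    def expand(goal):                                  # flatten a goal into its plan
        plan = [goal]
        for sub in subgoals(goal):
            plan += expand(sub)
        return plan

    print(expand("goalstate_y"))   # 'take_over_world' shows up without being asked for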

52 minutes ago, Wuzzums said:

Yes it is and yes we do.

To even imply that human intelligence is not biological, and implicitly strongly tied to DNA and evolution, is absurd. Name one creature on this planet that has a larger brain and more powerful cognitive capacities than are ascribed to it in that species' DNA.

I agree that the brain structure is essentially* encoded in DNA. But when you learn about physics, your DNA doesn't change with every bit of knowledge you gain; your structure has enabled you to learn it. So AI researchers don't have to recreate the process that produced a human structure over 4 billion years; you could just reverse-engineer the governing principles of the current brain structure in order to get a human-level AI.

 

* It's mathematically impossible to encode every neuron in DNA, so it's probably a certain growing pattern which has different outcomes as environmental factors change, most importantly during pregnancy.

52 minutes ago, Wuzzums said:

Humans are apex predators. Bears compared to humans are innocuous. The very fact that you had to take away the traits that make a human an apex predator should ring a bell in your head that your analogy doesn't hold water. You basically said "a rock is far more dangerous than a bullet because it's heavier", or "a car is deadlier than an H-bomb because a car has more mass".

I didn't say that. To adjust your analogies:

"A rock being heavier than a bullet doesn't imply bullet-level dangerousness."

"A car having more mass than the H-bomb doesn't imply it has H-bomb-level deadliness."

The very fact that you had to change the sentence structure completely to disprove an analogy should ring a bell in your head.

52 minutes ago, Wuzzums said:

Yes it does. I'll put it into a syllogism to make it as clear as possible:

 

Humans want to take over the world

Humans have human intelligence

AI wants to take over the world.

Therefore it is implied AI has human intelligence.

Human intelligence is strictly related to human biology.

Human biology implies desires among other things.

Therefore it is implied that an AI that wants to take over the world has human intelligence and desires.

 

Think about this for at least one second: do you know of any desire in yourself, or any action you partake in, that doesn't have an evolutionary biological motive or foundation behind it?

 

I'm not sure what your syllogism clarifies, as it can be used to argue that any property of a human is the same for an AI. I assume you don't want to argue that.

 

Humans want to take over the world

Humans have legs

AI wants to take over the world.

Therefore it is implied AI has legs.

Human legs are strictly related to human biology.

Human biology implies desires among other things.

Therefore it is implied that an AI that wants to take over the world has legs.

 

To be specific: the claim that the desire to take over the world requires human intelligence isn't believable. You can make an AI observe a (simulated) situation which resembles a world taken over and turn it into a goal state. And like I said before, a taken-over world could be a hidden sub-goal of a goal state which we had programmed in.

And no, I do not know of any action of mine that doesn't have an evolutionary foundation behind it. I'm not sure what an evolutionary biological motive is, if it's different from an evolutionary foundation.


On 8/19/2017 at 1:07 PM, Kikker said:

Just to be clear, what I meant was: you could have a goalstate(x), but you could also have a goalstate(y) which has a subgoal(x). Since you generally don't know about subgoal(x), it's potentially dangerous.

This is the third time you're doing it. You keep defining X as a goal state AND a sub-goal state. Make up your mind.

On 8/19/2017 at 1:07 PM, Kikker said:

I agree that the brain structure is essentially* encoded in DNA. But when you learn about physics, your DNA doesn't change with every bit of knowledge you gain; your structure has enabled you to learn it. So AI researchers don't have to recreate the process that produced a human structure over 4 billion years; you could just reverse-engineer the governing principles of the current brain structure in order to get a human-level AI.

I never said you have to recreate 4 billion years of evolution, though that would be a completely valid method of creating AI (and the storyline behind a great video game called "The Talos Principle"). Reverse engineering is a completely valid method of creating AI; we have reverse-engineered loads of biology already (flight, medicine, eyeglasses, and so on). BTW, I don't know why you're bringing up DNA. I never said we need DNA to create AI; I'm 100% certain of this.

If you thought my bringing up genetic algorithms had anything to do with DNA, then I wasn't clear enough. Genetic algorithms and how the brain works at a very base level are almost identical. I don't have any exact proof of this, but these two ideas (the genetic algorithm and how the brain works) were arrived at independently. I don't think it's trivial that two theories from two different fields have such striking similarities.

On 8/19/2017 at 1:07 PM, Kikker said:

I'm not sure what your syllogism clarifies, as it can be used to argue that any property of a human is the same for an AI. I assume you don't want to argue that.

 

Humans want to take over the world

Humans have legs

AI wants to take over the world.

Therefore it is implied AI has legs.

Human legs are strictly related to human biology.

Human biology implies desires among other things.

Therefore it is implied that an AI that wants to take over the world has legs.

YES! That's exactly what I wanted to say. Once you attribute one human-esque characteristic to AI, you can attribute, with the same certainty, ANY other human-esque characteristic to AI. If it thinks like a duck, it thinks like a duck AND has other duck-like characteristics.


I'll put the other arguments on hold for a post or two.

On 20-8-2017 at 9:03 PM, Wuzzums said:

Genetic algorithms and how the brain works at a very base level are almost identical. I don't have any exact proof of this, but these two ideas (the genetic algorithm and how the brain works) were arrived at independently. I don't think it's trivial that two theories from two different fields have such striking similarities.

Of all the things you could leave unexplained, you choose this core idea you hold... What is this base level you're talking about? How are they almost identical? What exactly is the theory of how the brain works?


3 hours ago, Kikker said:

I'll put the other arguments on hold for a post or two.

Of all the things you could leave unexplained, you choose this core idea you hold... What is this base level you're talking about? How are they almost identical? What exactly is the theory of how the brain works?

When you want to catch a ball, the brain runs a subroutine. If the subroutine does not lead to catching the ball, the brain tries another subroutine. This is repeated until the ball is finally caught. This is identical to genetic algorithms.

The difference between a computer and the brain is, as I previously mentioned, speed. Each time a computer subroutine (so to speak) runs, it runs at the exact same speed as any other subroutine, and if a subroutine correctly produces the intended goal, the other "wrong" subroutines are scratched. In the brain there's no such thing. Each time you want to catch a ball, ALL subroutines are played at once: all the neurons related to ball-catching depolarize at once. However, only the subroutine that can actually catch the ball gets played out, because it's faster than the others. Its signal arrives at the motor centers FIRST, activates them, and depolarizes the neurons leading to whatever motor actions are needed to catch the ball. A depolarized neuron is unresponsive to any other signal it might receive; it's kinda like a discharged battery. So when the other "wrong" subroutines arrive, it's as if they get lost, because the motor neuron still hasn't repolarized enough to send any more signals forth.

Why is there a speed difference? Myelinization. Myelin, roughly speaking, makes the neuron's signal travel faster. What causes myelinization? Repeated depolarization. When we actively and consciously repeat an action long enough, it becomes like a reflex, because all the neurons running from the brain to the muscles will be strongly myelinated and thus faster. This is why the brain has 2 states: the conscious and the subconscious. The subconscious is way, way faster because it's made up of all the fast (myelinated) neurons. The conscious is slow because it runs like a computer: since the connections are newly formed, each subroutine runs as slow as any other subroutine we try out. It's like the difference between designing a car and building a car. Designing it can take months if not years and is a trial-and-error process; building it takes weeks because it's a streamlined process.
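
Here's a toy rendering of what I'm describing; the latencies and the reinforcement rule are made-up numbers, not neurology:

    # three competing 'subroutines', by time (seconds) to reach the motor centers
    latencies = {"routine_a": 1.00, "routine_b": 1.20, "routine_c": 0.90}
    for attempt in range(10):
        winner = min(latencies, key=latencies.get)   # fastest signal arrives first and fires
        latencies[winner] *= 0.95                    # repetition 'myelinates' the winning path
    print(latencies)    # routine_c keeps winning and keeps getting faster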

People with Alzheimer's disease who have had a strong education and a constant intellectual life seem to get a milder form of the disease than those who didn't. The more you use the brain, the more myelin it accumulates; the more myelin it has, the more it can afford to lose over time. In Alzheimer's especially, but in old age in general, people have a difficult time forming new memories or learning new things because the myelinization process isn't what it used to be. The habits and memories of these people from long ago are still intact because they were formed during a time when myelinization was normal.

This is why I mentioned that speed is key. People who say AI will be able to process information thousands if not millions of times faster than a human don't really know what they're talking about.

I'm gonna equate some things here for simplicity's sake, but keep in mind these are not perfect analogies.

A transistor can switch about 600 billion times per second.

The subconscious runs at a similar speed of about 400 billion. (Again, take this with a grain of salt; personally I think this number is way too high, but there are some studies suggesting it might be correct.)

This means that, at the current moment, state-of-the-art AI would be 50% faster than the average human.

The speed of light is the upper limit of speed: 300 million m/s. However, this is not the practical upper limit of how fast something can go, because once we approach the speed of light, weird relativistic stuff starts happening, like time dilation. So in order to avoid relativistic effects, the upper limit needs to be lowered. One tenth of the speed of light is a very general rule of thumb: above it you get relativistic effects, below it you don't (or don't get relativistic effects that matter). So the upper speed limit for ANYTHING would be 30 million m/s.

A transistor is a millionth of a meter across, so at 600 billion switches per second its signal travels at 600,000 m/s, at which speed it's 50% smarter than a human.

The upper limit is 30 million m/s, meaning a transistor that could operate at 30 million m/s would be 50 times faster; that is, 50 times faster than an AI that is already 50% faster/smarter than the average human.

All of this puts the upper limit of AI at a maximum of 75x smarter/faster than an average human, WITH THE CAVEATS that the 400 billion figure for the subconscious is correct AND that we want to avoid relativistic speeds.

Another caveat would be that the 75x figure is for the AI's subconscious. Because, as I explained, speed is key in a brain, creating an AI implies creating a brain-like structure, which in turn implies the need for different processing speeds. Given this, a computer will never, ever be able to consciously process information faster than a human's subconscious, but a computer's subconscious could process information (or react to it, or whatever) 75x faster than an average human's subconscious.
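
Spelling out my arithmetic (these are my premises and rough numbers, not established figures):

    transistor_size = 1e-6                            # meters, per the analogy
    switch_rate = 600e9                               # switches per second
    signal_speed = transistor_size * switch_rate      # 600,000 m/s
    cap = 3e8 / 10                                    # 'no relativity' cap: a tenth of c
    headroom = cap / signal_speed                     # 50x
    print(headroom * 1.5)                             # 75.0, the claimed ceiling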

 

I get really angry when people like Sam Harris start spouting gibberish like "AI can process information MILLIONS of times faster than humans". It's plainly idiotic. 75 times faster is with the relativistic cap in place. Take that off and you have a maximum processing speed of 750 times faster (if my previous reasoning is correct, of course). A million times faster means exceeding the speed of light 2000 times over, PLUS. I mean, just imagine the electricity bill for such a machine. What would it cost? Everything in the universe, forever, times 3?


On 22-8-2017 at 3:16 PM, Wuzzums said:

When you want to catch a ball, the brain runs a subroutine. If the subroutine does not lead to catching the ball, the brain tries another subroutine. This is repeated until the ball is finally caught. This is identical to genetic algorithms.

The difference between a computer and the brain is, as I previously mentioned, speed. Each time a computer subroutine (so to speak) runs, it runs at the exact same speed as any other subroutine, and if a subroutine correctly produces the intended goal, the other "wrong" subroutines are scratched. In the brain there's no such thing. Each time you want to catch a ball, ALL subroutines are played at once: all the neurons related to ball-catching depolarize at once. However, only the subroutine that can actually catch the ball gets played out, because it's faster than the others. Its signal arrives at the motor centers FIRST, activates them, and depolarizes the neurons leading to whatever motor actions are needed to catch the ball. A depolarized neuron is unresponsive to any other signal it might receive; it's kinda like a discharged battery. So when the other "wrong" subroutines arrive, it's as if they get lost, because the motor neuron still hasn't repolarized enough to send any more signals forth.

Why is there a speed difference? Myelinization. Myelin, roughly speaking, makes the neuron's signal travel faster. What causes myelinization? Repeated depolarization. When we actively and consciously repeat an action long enough, it becomes like a reflex, because all the neurons running from the brain to the muscles will be strongly myelinated and thus faster. This is why the brain has 2 states: the conscious and the subconscious. The subconscious is way, way faster because it's made up of all the fast (myelinated) neurons. The conscious is slow because it runs like a computer: since the connections are newly formed, each subroutine runs as slow as any other subroutine we try out. It's like the difference between designing a car and building a car. Designing it can take months if not years and is a trial-and-error process; building it takes weeks because it's a streamlined process.

I don't understand how you see this process as a genetic algorithm... There is no combining of genetic traits between subroutines to make new ones, no killing of unfit subroutines, and you don't even try to explain how random mutations come into being, or why the fastest subroutine is also evaluated as the best one for a particular task.

Before trying to fit a square peg into a round hole, please just read up on neural networks and backpropagation, which more closely resemble your own description of how the brain works.
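
For reference, a bare-bones sketch of that kind of self-correction: one neuron learning y = 2x by gradient descent (the data and learning rate are made up):

    w, b, rate = 0.0, 0.0, 0.1
    data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
    for epoch in range(200):
        for x, y in data:
            error = (w * x + b) - y       # forward pass, then the error signal
            w -= rate * error * x         # propagate the error back through the weight
            b -= rate * error
    print(round(w, 2), round(b, 2))       # converges to ~2.0 and ~0.0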

On 22-8-2017 at 3:16 PM, Wuzzums said:

People with Alzheimer's disease who have had a strong education and a constant intellectual life seem to get a milder form of the disease than those who didn't. The more you use the brain, the more myelin it accumulates; the more myelin it has, the more it can afford to lose over time. In Alzheimer's especially, but in old age in general, people have a difficult time forming new memories or learning new things because the myelinization process isn't what it used to be. The habits and memories of these people from long ago are still intact because they were formed during a time when myelinization was normal.

This is why I mentioned that speed is key. People who say AI will be able to process information thousands if not millions of times faster than a human don't really know what they're talking about.

I'm gonna equate some things here for simplicity's sake, but keep in mind these are not perfect analogies.

A transistor can switch about 600 billion times per second.

The subconscious runs at a similar speed of about 400 billion. (Again, take this with a grain of salt; personally I think this number is way too high, but there are some studies suggesting it might be correct.)

This means that, at the current moment, state-of-the-art AI would be 50% faster than the average human.

The speed of light is the upper limit of speed: 300 million m/s. However, this is not the practical upper limit of how fast something can go, because once we approach the speed of light, weird relativistic stuff starts happening, like time dilation. So in order to avoid relativistic effects, the upper limit needs to be lowered. One tenth of the speed of light is a very general rule of thumb: above it you get relativistic effects, below it you don't (or don't get relativistic effects that matter). So the upper speed limit for ANYTHING would be 30 million m/s.

A transistor is a millionth of a meter across, so at 600 billion switches per second its signal travels at 600,000 m/s, at which speed it's 50% smarter than a human.

The upper limit is 30 million m/s, meaning a transistor that could operate at 30 million m/s would be 50 times faster; that is, 50 times faster than an AI that is already 50% faster/smarter than the average human.

All of this puts the upper limit of AI at a maximum of 75x smarter/faster than an average human, WITH THE CAVEATS that the 400 billion figure for the subconscious is correct AND that we want to avoid relativistic speeds.

Another caveat would be that the 75x figure is for the AI's subconscious. Because, as I explained, speed is key in a brain, creating an AI implies creating a brain-like structure, which in turn implies the need for different processing speeds. Given this, a computer will never, ever be able to consciously process information faster than a human's subconscious, but a computer's subconscious could process information (or react to it, or whatever) 75x faster than an average human's subconscious.

I get really angry when people like Sam Harris start spouting gibberish like "AI can process information MILLIONS of times faster than humans". It's plainly idiotic. 75 times faster is with the relativistic cap in place. Take that off and you have a maximum processing speed of 750 times faster (if my previous reasoning is correct, of course). A million times faster means exceeding the speed of light 2000 times over, PLUS. I mean, just imagine the electricity bill for such a machine. What would it cost? Everything in the universe, forever, times 3?

I don't understand where you get your 400 billion from. Even if it's possible, the observations I googled (I don't have the time to spend a workweek pinning down precisely what scientists agree on) all come down to 20-1000 switches per second on average across the brain (link your article if you don't agree). That's not even near 400 billion. More importantly, a transistor can't hold even remotely the amount of information needed to represent a neuron. You would need at least one transistor for the neuron itself and one for every connection it has (a thousand on average) to other neurons, and even that is wildly inaccurate. Your estimate needs some serious adjustment.

 

 


On 8/23/2017 at 10:25 PM, Kikker said:

Before trying to fit a square peg into a round hole, please just read up on neural networks and backpropagation, which more closely resemble your own description of how the brain works.

It makes no difference whatsoever to my arguments whether I had said "neural networks" instead of "genetic algorithms". The two are not mutually exclusive or contradictory. You asked for an example, I gave you an example, and then you started talking about how you didn't like the example, not because it was wrong but because it wasn't the one you wanted.

On 8/23/2017 at 10:25 PM, Kikker said:

I don't understand how you see this process as a genetic algorithm... There is no combining of genetic traits between subroutines to make new ones, no killing of unfit subroutines, and you don't even try to explain how random mutations come into being, or why the fastest subroutine is also evaluated as the best one for a particular task.

Are you talking about AI here or the brain? If it's the latter, I addressed ALL THOSE POINTS. And what's the deal with this whole "the best one to do a particular thing" BS? Was I arguing about how the brain figures out how to do something, or about how the brain figures out how to do something THE BEST POSSIBLE WAY?

On 8/23/2017 at 10:25 PM, Kikker said:

I don't understand where you get your 400 billion from. Even if it's possible, the observations I googled (I don't have the time to spend a workweek pinning down precisely what scientists agree on) all come down to 20-1000 switches per second on average across the brain (link your article if you don't agree). That's not even near 400 billion. More importantly, a transistor can't hold even remotely the amount of information needed to represent a neuron. You would need at least one transistor for the neuron itself and one for every connection it has (a thousand on average) to other neurons, and even that is wildly inaccurate. Your estimate needs some serious adjustment.

I gotta say I'm pretty annoyed that I spent the time answering you, and you retort by repeating MY EXACT POINTS back to me in different words as if you had just added something to the conversation.

I specifically said not to dwell too much on the 400 billion figure because I have doubts about its validity.

I specifically said the analogy is very rough. Argue about the numbers all you want; the conclusion remains the same.

I NEVER said 1 neuron = 1 transistor. The WHOLE argument was about SPEED, not about how many transistors equate to how many neurons. Please show me the phrasing that led you to believe I said such a thing. Prove to me you're not just having hallucinations about what I said. Prove to me you're not just talking to yourself at this point.

 

BTW, please don't answer anything. Those were all rhetorical questions. I know you're not even attempting to understand what I've said, because none of your points contradict a word of it.

 


59 minutes ago, Wuzzums said:

It makes no difference whatsoever to my arguments whether I had said "neural networks" instead of "genetic algorithms". The two are not mutually exclusive or contradictory. You asked for an example, I gave you an example, and then you started talking about how you didn't like the example, not because it was wrong but because it wasn't the one you wanted.

Backpropagation introduces the concept of self-correction instead of environmental correction. It is contradictory to genetic algorithms: something can't be a genetic algorithm and a neural network at the same time, unless you have a population of self-correcting routines encapsulated in a genetic algorithm.

1 hour ago, Wuzzums said:

Are you talking about AI here or the brain? If it's the latter, I addressed ALL THOSE POINTS. And what's the deal with this whole "the best one to do a particular thing" BS? Was I arguing about how the brain figures out how to do something, or about how the brain figures out how to do something THE BEST POSSIBLE WAY?

You misunderstand. The fastest subroutine to finish isn't necessarily a subroutine which actually does something. You only explain how an already-working subroutine gets faster, not how you get a working subroutine in the first place, or how it rules out unnecessary actions.

1 hour ago, Wuzzums said:

I gotta say I'm pretty annoyed that I spent the time answering you, and you retort by repeating MY EXACT POINTS back to me in different words as if you had just added something to the conversation.

I specifically said not to dwell too much on the 400 billion figure because I have doubts about its validity.

I specifically said the analogy is very rough. Argue about the numbers all you want; the conclusion remains the same.

I NEVER said 1 neuron = 1 transistor. The WHOLE argument was about SPEED, not about how many transistors equate to how many neurons. Please show me the phrasing that led you to believe I said such a thing. Prove to me you're not just having hallucinations about what I said. Prove to me you're not just talking to yourself at this point.

If your estimate is so rough that it can be off by dozens of orders of magnitude, you should just argue that the actual number is unknown.

I assumed you were comparing neurons to transistors from here:

On 22-8-2017 at 3:16 PM, Wuzzums said:

A transistor can switch about 600 billion times per second.

The subconscious runs at a similar speed of about 400 billion. (Again, take this with a grain of salt; personally I think this number is way too high, but there are some studies suggesting it might be correct.)

Or do I need to explain that each layer of transistors could halve the switches per second of the whole thing?

1 hour ago, Wuzzums said:

BTW, please don't answer anything. Those were all rhetorical questions. I know you're not even attempting to understand what I've said, because none of your points contradict a word of it.

You're a good sport, you know: first slander my arguments, then demand that I not refute you.

 


11 hours ago, Kikker said:

You misunderstand. The fastest subroutine to finish isn't necessarily a subroutine which actually does something. You only explain how an already-working subroutine gets faster, not how you get a working subroutine in the first place, or how it rules out unnecessary actions.

You talk as if you'd never learned how to catch a ball in your life, or learned anything in your life: trial and error. A half moment of thought would have given you the answer, which is why I'm 100% certain you're talking to yourself.

I have proof:

11 hours ago, Kikker said:

Backpropagation introduces the concept of self-correction instead of environmental correction. It is contradictory to genetic algorithms: something can't be a genetic algorithm and a neural network at the same time, unless you have a population of self-correcting routines encapsulated in a genetic algorithm.

I repeatedly said this is not about neural networks or genetic algorithms (which are not contradictory to each other, seeing how you CAN have BOTH, like YOU pointed out, Einstein), yet you keep trying to prove I'm wrong on some point I never made.

Bye.


Quote

In order to have human-level AI you would need to build a humanoid robot, give it a sex, a desire for reproduction, a digestive system, all of a human's senses, desires, etc. Basically you would need to build an artificial human that, practically speaking, would be indistinguishable from a normal human.

So if I want to study Newtonian mechanics I need to put on a wig, never have sex again, become a homosexual, and do all my studying under some apple tree? What do you recommend for studying relativity? Growing a Jewfro, never wearing socks, mistreating my wife, and becoming a socialist?

