aFireInside Posted August 14, 2014

http://en.wikipedia.org/wiki/Technological_singularity

I personally don't think so. It seems to me that most people, and most scientists, don't understand ethics or how important childhood is to a human's development. If that's true, how could they replicate a human brain? I may be incorrect, but don't some scientists say that faster computing will eventually lead to a singularity? How is a faster processor supposed to lead to a computer being self-aware? Even if a processor were billions of times faster, wouldn't it still need some kind of software to actually be self-aware?
Songbirdo Posted August 14, 2014

An example of what faster processing power can accomplish: to specify the pose of a rigid body in space you need six degrees of freedom, three for position (x, y, z) and three for orientation. I took a class on robotics and several more on kinematics. To locate the end point of a limb you need to calculate the effect each joint has on that end point, and every joint usually adds a degree of freedom. Robotic arms can have any number of degrees of freedom, but any more than six is considered redundant and is avoided unless the extra joint is absolutely required. Your arm has seven: three in the shoulder, one at the elbow, and three in the wrist.

If you know each joint's position (linear or angular) you can calculate the end point, and vice versa. These equations form a matrix, and there are usually a lot of trig functions involved. The reason I'm bringing this up is that a topic we briefly touched on was the Jacobian matrix, which is the derivative (calculus) of the above kinematics to relate joint motion to the motion of the end of the arm. So there are even MORE trig functions: "The time derivative of the kinematics equations yields the Jacobian of the robot, which relates the joint rates to the linear and angular velocity of the end-effector."

Computing these Jacobian matrices in real time for complex, many-jointed systems is still computationally demanding. Your brain does this effortlessly.
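To make the post above concrete, here is a minimal sketch of forward kinematics and the Jacobian for a hypothetical two-link planar arm (link lengths and angles are invented for illustration; a real seven-degree-of-freedom arm would have a much larger matrix):

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position (x, y) of a 2-link planar arm."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """Partial derivatives of (x, y) with respect to the joint angles.
    Each entry is a sum of trig terms, as the post describes."""
    s1, s12 = np.sin(theta1), np.sin(theta1 + theta2)
    c1, c12 = np.cos(theta1), np.cos(theta1 + theta2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

# Joint rates map to end-effector velocity: v = J(theta) @ theta_dot
theta = np.array([0.3, 0.5])
theta_dot = np.array([0.1, -0.2])
v = jacobian(*theta) @ theta_dot
```

Even in this toy case each Jacobian entry mixes several trig evaluations; with more joints the matrix grows and must be recomputed at every control step, which is where the processing cost comes from.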
aFireInside Posted August 14, 2014 Author

Yes, but isn't being human more than just moving joints? An animal can also move joints. The theory posted in the link is quite specific.
Kevin Beal Posted August 14, 2014

If by technological singularity you mean artificial intelligence, this is not possible using the computers we use today. And it's not only because people don't understand human psychology well enough; there are technological limitations. People mistakenly believe that this is an issue of better programming or faster processing, but that's untrue. No amount of processing power can make the symbol manipulation that computers do (1s and 0s == this message board) equal actual intelligence. The distinction between simulated intelligence and actual intelligence is more important than you'd think.

Consider the following thought experiment (Searle's Chinese room): You are locked in a room with no windows. All there is is a stack of cards with Chinese symbols on them, a desk with a giant manual on it (written in English), and a slit in the wall. Occasionally new cards with Chinese symbols slide into the room through that slit. The manual has a reference for every card you have or could get, and shows a corresponding card that you should push back out through the slit in the wall. You don't know it, but outside the room a fluent Chinese speaker is writing questions on these cards and pushing them through the slot. The manual is so good that the person outside actually believes they are carrying on a real conversation with someone in the room who understands the meaning of what they are writing and is writing thoughtful answers.

This is essentially how computers work. The manual is the program, the cards are 1s and 0s (or any other symbols computers could manipulate), and you are the central processing unit. The computer is inescapably dumb. It doesn't matter how fast you look up and slip back out those cards; you still can't read Chinese. A computer does not know the meaning of anything it does.
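The Chinese room can be sketched as a toy program: pure lookup, no understanding. The rulebook entries below are invented for illustration, standing in for the "manual":

```python
# A toy "Chinese room": the program maps incoming cards to outgoing
# cards by pure lookup. The entries are made up for illustration;
# no understanding of the symbols is involved anywhere.
rulebook = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你会说中文吗": "会",    # "Do you speak Chinese?" -> "Yes"
}

def room(card: str) -> str:
    """Return the card the manual says to push back out through the slit."""
    return rulebook.get(card, "不明白")  # default card: "I don't understand"
```

However large the rulebook grows, and however fast `room` runs, the mapping is syntax all the way down, which is exactly the point of the thought experiment.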
Programs can only ever provide a syntax, never a semantics, from the computer's point of view. All the meaning we get from using computers is beyond the computer: it was put there by another user, a developer, or some dynamic accident.

The assumption that people in machine learning and artificial intelligence often make is that human understanding, beliefs, desires, perceptions, consciousness as a whole, is superfluous. The thought is that brains are computers, neurons are the memory for computation, and some combination of past experiences, instinct, and logical operations makes up the computer program of the brain. This is useful as an analogy, but the mind appears to be another animal entirely, from what I've seen researching neurobiology as a layman.

If we actually accept the premise that our subjective experience of meaning, semantics, and consciousness is superfluous, and it's all really just underlying neurological processes, we run into some insurmountable logical problems. This position, by the way, is called "epiphenomenalism," and I think it's bunk. If all of our conscious experience is simulated by the brain, the way a computer program simulates a weather pattern, then the output of this process (e.g. me eating a sandwich) doesn't require consciousness at all. On this view, the causal "computation" makes all of the beliefs and desires arbitrary data that do no work of their own and are not required to complete the process. I could just be programmed to think that I am conscious (a contradiction, because thinking is consciousness, but let's roll with it). In other words, there is no meaning, just a simulation of meaning, at the computer level and at the human level; we are just deluded in thinking we're different. This is how many people see consciousness.
Our conscious experience of the world, however, clearly links a causal chain of events from the ontologically subjective phenomena of consciousness (beliefs, desires, and perceptions) to our actions. My belief that doors open causes me to reach for the knob and turn. My perception that a ball is whizzing past my head causes me to duck out of the way. My desire for sex causes me to imagine that lady over there without any clothes on and think to myself "yum."

But if the epiphenomenalists are right, it is only an illusion that my consciousness causes anything; rather, it has to be neurons and sensors and the wetware of the brain that cause all of these things. And obviously it involves those things, but here's where the problem is: if there is no meaning, we could never know it, or think it, or believe it, or conclude it, or suggest it. All of that would be an illusion. The proposition "humans only simulate meaning" could not be true or false or comprehensible or conceivable. And if it is any of those things, the proposition refutes itself, since entertaining it presupposes the reality of meaning.

As far as anyone knows, consciousness is an entirely biological phenomenon originating in the brain. This does not limit it to the structure of the brain, however. Nobody knows how consciousness works yet, but it appears to be a state the brain is in. Just as H2O molecules do not splash or feel wet, neurons do not think or feel or desire. Liquidity is a state that H2O molecules are in, not simply an aggregation of molecules; solidity is a state that molecules arranged in a lattice create. Something about brains causes consciousness, in which beliefs and desires and perceptions can subjectively cause things to happen in the world. It's completely amazing. Machine learning definitely has value, though.
Google, by applying machine learning to the usage patterns of their servers in an anticipation/reaction program, has been able to save tons of electricity by shutting down servers when they are not going to be needed. It's amazing what machines can do in that area.
Songbirdo Posted August 14, 2014 Posted August 14, 2014 Of course we are more than just some joints. I was just giving you an example of where we lack the processing power to perform a "simple" function that even animal brains can.