
swimmingcat
Everything posted by swimmingcat
-
During the era of the consensus that heavier-than-air machines were an impossibility, it was quite a reasonable conclusion. There was no way to achieve the power-to-weight ratio of, say, a bird. The only form of mechanical power back then was the steam engine, which, once you include the boiler, fuel, and water, is far “too fat to fly”.

Interestingly, the principles that give an airplane wing its lift had been widely deployed and well understood for centuries before flying machines were ever invented. Any sailboat moving upwind is using the exact same aerodynamic principles that an airplane wing uses to generate lift. Instead of creating an upward, gravity-defying force, an upwind sail creates a pulling force. Considering the importance of navies in those days, knowing how to make fast and manoeuvrable sailing vessels was very important and well studied.

It was not until the invention of the gasoline engine that flying machines became possible. Gasoline has a very high energy density, and a gasoline engine is much lighter and smaller than a steam engine and boiler. Once gasoline engines became reasonably refined, the power-to-weight ratio needed for machine flight was achievable.

So, to continue this analogy into AI: it appears to me that much effort is being spent on developing neural network software and hardware to mimic the inner workings of a brain using Boolean logic. This is akin to building an airplane fitted with steam engines, which, I think any reasonable engineer would agree, will never get off the ground. Evolution has long since ruled out Boolean logic as a basis for a brain. We should take note of this. Of course, if the computational equivalent of a gasoline engine were to appear, I would most certainly change my tune. But in the meantime I would suggest that trying to build machines that could possibly hold a candle to human intelligence using Boolean logic is looking in all the wrong places and will forever be frustrated.

On top of all that, there are philosophical issues around why humanity would want to build such machines. Like nuclear power, the reason “because we can” seems very foolish in light of Fukushima, where the designers (using the latest technology and methodologies of the day) did not adequately consider the possibility and probability of earthquakes and tidal waves. No fault on the designers; they did the best that anybody could do at the time. While machines make excellent servants, I personally consider it quite foolish to build one that can outsmart people. There are many other ways to solve humanity's problems that have far fewer downside risks than submitting to a machine with “superior intelligence” that could go AWOL on us.
-
AncapFTW - a big difference between brains and computers is serial vs. parallel architecture. A computer is inherently serial. A quad-core Pentium CPU can physically execute only four things at the same time. Through software multitasking and a clock speed of 3,000,000,000 cycles per second it can appear to do many things at once, but under the hood it is only doing four things at any given instant in time. The Pentium CPU takes two clock ticks to perform an operation such as add or subtract, so a quad-core Pentium at 3 GHz can add about 6 billion numbers in a given second, which is blistering fast compared to a brain. If life depended on adding and subtracting, computers would be clearly superior.

A human brain, on the other hand, has a clock speed of around 10 cycles per second. However, it has over 100,000,000,000 neurons, all of which operate at the same time. It has a massively parallel architecture. So on the computer side you can do four things at once very, very fast. On the brain side you can do 100 billion things at once, but eight orders of magnitude slower (the rough arithmetic is sketched below).

There is a class of chips known as DSPs (Digital Signal Processors) that have a parallel architecture as well. The video chip in a high-end gaming computer would be an example of such a device. DSPs can be used to build neural network emulators that respond in a reasonable amount of time compared to a serial CPU that has to multitask to emulate billions of neurons. The main useful thing that neural network emulators can do is pattern recognition. They are very good at identifying faces in a video stream or picking words out of an audio stream. However, neural network emulators do not have an inherent intelligence. There are no neural network emulators that are conscious. Pattern recognition is their forte, which is also a function of a brain, but I personally believe that neurons are not the full story of how a brain works. It is akin to opening the hood of a car and saying: look, there are gears and pulleys, so now I know how an engine operates. There's a whole lot more that we do not understand.
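To make the serial-vs-parallel comparison concrete, here is a minimal back-of-envelope sketch in Python using only the figures quoted in the post (4 cores at 3 GHz, two ticks per add, versus roughly 100 billion neurons at about 10 Hz). These are the post's own rough estimates, not measured benchmarks, and the "operations per second" for a brain is a deliberately crude stand-in.

import math

# Serial side: the post's quad-core Pentium figures.
CPU_CORES = 4
CPU_CLOCK_HZ = 3_000_000_000   # 3 GHz
TICKS_PER_ADD = 2              # two clock ticks per add/subtract

# Parallel side: the post's rough brain figures.
BRAIN_NEURONS = 100_000_000_000  # ~100 billion neurons, all active at once
BRAIN_CLOCK_HZ = 10              # ~10 "cycles" per second

# Adds per second across all four cores.
cpu_adds_per_sec = CPU_CORES * CPU_CLOCK_HZ / TICKS_PER_ADD

# Crude "operations" per second if every neuron fires at the brain's clock rate.
brain_ops_per_sec = BRAIN_NEURONS * BRAIN_CLOCK_HZ

print(f"CPU adds per second:    {cpu_adds_per_sec:.1e}")   # ~6.0e+09
print(f"Brain 'ops' per second: {brain_ops_per_sec:.1e}")  # ~1.0e+12
print(f"Clock-speed gap: ~{math.log10(CPU_CLOCK_HZ / BRAIN_CLOCK_HZ):.1f} orders of magnitude")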
-
AI – A Fool's Errand

I am going to take a very contrary view of AI and argue that it is essentially a fool's errand. AI is the 21st century's search for the Holy Grail. To believe that machine intelligence will ever hold a candle to humans is buying into a fantasy that will forever be frustrated.

The first thing to understand is that all computers, big and small (a laptop, a smart phone, the clock on a microwave, the micro-controller that runs a car engine, and so on), are all executing Boolean logic. Boolean logic is comprised of four discrete binary operations: AND, OR, XOR, and NOT. In the realm of electrical engineering you can build Boolean logic units out of transistors, and you can combine Boolean logic operations to build useful circuits that add, subtract, multiply, divide, and so forth (a small sketch of this is shown below). You can then miniaturize all this into a chip that has millions of Boolean logic units. The magic of computers is the blistering speed at which Boolean logic can be processed.

Biological brains, on the other hand, run much slower. The human eye will perceive smooth video motion at 16 frames a second. For a video game to trick us into perceiving some sort of alternate reality, all the computer has to do is render the next image frame in less than 1/16 of a second. With computers that can execute trillions of Boolean logic operations a second, this is very easy.

Directing the Boolean logic circuits are various computer languages that take on the mind-numbing task of converting a command that a human would understand (e.g. add 2+2) into the equivalent Boolean logic operations. These computer languages are used to build software application programs, operating systems, video games, and so forth. Enormous amounts of time and effort are spent every year developing software algorithms that do cool things like searching a database, recognizing patterns, and rendering video game experiences. But no matter how magical these algorithms may appear, they ultimately are translated into a series of Boolean logic operations.

This is where AI hits the glass ceiling. Computers do not think. They blindly execute Boolean logic. The rule “garbage in yields garbage out” is the Achilles' heel of AI. If an AI system is given unexpected input it will give a very unexpected output, as AI cannot recognize garbage data. When this happens, AI fails in most spectacular ways. If you study the design of AI systems you will observe that there are always “filter modules” on the inputs. These filters weed out the bad data. Filter modules are based on assumptions made by the AI designers. Give an AI system some bad data that is outside of those assumptions and the AI system will most certainly do something very unexpected.

The Google self-driving car gives us an example of the dangers of AI. The car has been certified for California, where it was developed and tested. Recently the Google car was taken to the UK and underwent testing to be certified. It failed and was described as a death trap in rain and snow. The unexpected sensor inputs got past the filters, since the car was designed and tested in California, where there is not much rain and certainly no snow. But instead of recognizing that the input data was bad, the Boolean logic produced output that was immediately sent to actuators (brake, gas, steering, etc.), resulting in dangerous actions taken by a real moving object. All this is because the assumptions of the designers did not anticipate all possible situations, which, frankly, is quite impossible.
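As a small illustration of the claim above that AND, OR, XOR, and NOT can be combined into circuits that add, here is a minimal sketch in Python of a ripple-carry adder built only from those operations. The function names and bit-list representation are mine, purely for illustration; real hardware does this with transistor gates rather than Python calls.

# The four Boolean operations the post names, on single bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b
def NOT(a):    return 1 - a   # not needed for the adder; shown for completeness

def full_adder(a, b, carry_in):
    """Add three single bits; return (sum_bit, carry_out) using only AND/OR/XOR."""
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# "Add 2+2" expressed purely as Boolean operations: 2 is [0, 1, 0] (LSB first).
print(add_bits([0, 1, 0], [0, 1, 0]))  # [0, 0, 1, 0] -> binary 0100 = 4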
This is why, in an ever-changing universe, AI systems should never be fully trusted. Eventually the underpinning assumptions will become inappropriate or incomplete. Biological brains, on the other hand, have intelligence that recognizes bad inputs. When we are presented with “unexpected inputs” we do not blindly march forward. No, instead we pause and re-assess. While we do not fully understand how the brain works, we do know that brains do not use Boolean logic. There must be a reason for this. If there ever were creatures that evolved with Boolean logic brains, they never survived the test of time. If Boolean brains were superior, then evolution would have weeded us out a long time ago.

While computers are very powerful tools for humans to make use of, the idea that computers could one day be more intelligent than humans is really nothing more than a fantasy. Anybody who trusts computers in this capacity will be very sorry when the assumptions become invalid and the AI machine just keeps blindly marching forward. The idea that humans could build a great computer that would run the world is nothing more than an infantile desire for some sort of “big daddy” figure that will step in and make things right. It's time to grow up and recognize that the greatest organ of intelligence is right between our ears.