
Posted

But how do you know the falling book does not understand, and do you hold humans and computers to the same standard of judgement by dropping them to the floor?  If there is some methodology behind your claims, I would like to hear about it.  Otherwise, all I know is that you are making assertions and categorizations about what qualifies as understanding.

 

By way of automated deduction, a computer can use logical connections to establish new equations, similar to an algebra student trying to solve an equation.  While I can certainly accept this has nothing to do with free will, it seems to overlap with the kind of mathematical understanding people have.  I am sure you can augment these computational associations with sensors to provide "subjective experience", but it sounds to me as if you've already decided nothing a computer can do will qualify as true understanding unless there is a human brain attached.
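
To make the automated-deduction point concrete, here is a minimal sketch in Python using the sympy library (my own illustration; nothing in this thread specifies a particular system). The program derives a new equation from a given one by mechanically applying algebraic rules, much as an algebra student would:

    # Sketch only: deriving a new equation by mechanical rule application.
    from sympy import symbols, Eq, solve

    x = symbols('x')
    given = Eq(2 * x + 3, 11)   # the premise: 2x + 3 = 11

    # solve() applies logical/algebraic transformations step by step,
    # with no understanding required of the machine.
    print(solve(given, x))      # [4], i.e. the derived conclusion x = 4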

 

Lol, a book has no cognitive apparatus, no senses to perceive, no intelligence to reason, but I guess it at least has the capacity to store information. All of those things are required for comprehension and the ability to assign meaning. A computer can also do all of these things (besides real intelligence), but only through human programming. Nothing computers do can qualify as understanding, because at the moment they are not capable of doing anything without human instruction. You can program a computer to perform functions that mimic the process of comprehension for human beings, but for obvious reasons that is not the same thing.

Posted

Lol, a book has no cognitive apparatus, no senses to perceive, no intelligence to reason, but I guess it at least has the capacity to store information. All of those things are required for comprehension and the ability to assign meaning. A computer can also do all of these things (besides real intelligence), but only through human programming. Nothing computers do can qualify as understanding, because at the moment they are not capable of doing anything without human instruction. You can program a computer to perform functions that mimic the process of comprehension for human beings, but for obvious reasons that is not the same thing.

Right.

 

The computer can only simulate things.

 

That is not to say that artificial intelligence cannot exist; it's just that, as far as anybody can tell, it's the wetware in our brains that provides the necessary environment for the state of consciousness to arise. And this is fairly obvious, since we don't argue with a computer or talk about it having preferences (except in a comical homunculus sense: "my computer is acting up today"). If the computer were to display some scary message like "I can see you. I'm going to get you!", we wouldn't get a restraining order against the computer; we'd think somebody was communicating with us through the computer.

 

Additionally, if we were to say that a computer understands things because it reacts to and expresses things in a sophisticated manner, then we would have to look at human understanding the same way: as blind input leading to blind output. That would make consciousness an illusion, something sitting on top of, and entirely subject to, these blind processes, which poses a lot of problems. It makes debate nonsensical for the reasons Stef so brilliantly points out, but there are other logical problems as well:

 

The computational theory of mind holds (roughly) that the problems of consciousness, meaning, and free will can be answered in terms of computation, reduced to simpler processes, rather than as a state of the brain. This is like saying that H2O molecules generate liquidity, rather than liquidity existing as a state the H2O molecules are in.

 

John Searle is my favorite philosopher after Stef and he's got a lot to say about consciousness and the theory of mind that AFAIK has never been refuted. A relevant and interesting paper is here. Here are some quotations:

 

Cognitive science starts with the fact that humans have cognition and rocks don't. Cognitive scientists do not need to prove this any more than physicists need to prove that the external world exists, or astronomers need to solve Hume's problem of induction before explaining why the sun rises in the East. It is a mistake to look for some test, such as the [Turing test], the [improved Turing test], etc., which will "solve the other minds problem" because there is no such problem internal to Cognitive Science in the first place.

 

I, on the contrary, think that we should not try to improve on the [Turing test] or get some improved versions of computationalism. [The Turing test] was hopeless to start with because it confused sameness of external behavior with sameness of internal processes. No one thinks that because an electric engine produces the same power output as a gas engine, that they must have the same internal states. Why should it be any different with brains and computers? The real message to be found in the Chinese Room is that this whole line of investigation is misconceived in principle and I want now to say why.

 

Where the ontology -- as opposed to the epistemology -- of the mind is concerned, behavior is irrelevant. The Chinese Room Argument demonstrated this point. To think that cognitive science is somehow essentially concerned with intelligent behavior is like thinking that physics is essentially a science of meter readings (Chomsky's example). So we can forget about the [Turing test], [improved Turing test], etc. External behavior is one epistemic device among others. Nothing more.
Posted

Those are great quotes, and I never understood the Turing test, for exactly the reason Searle points out. Who cares if a computer can be programmed to be pretty good at responding like a human? It would be far more interesting if you could actually teach a computer and have conversations with it, knowing that it would respond intelligently. Faking intelligence is just boring and pointless, except as a joke to confuse computer-illiterate people.

Posted

This is fascinating but it makes no sense.  To take cognition as a "starting" point seems entirely like religious faith, just as accepting the soul as a given.  I do not claim an ordinary computer program can have consciousness.  Quite the opposite.  But I do not accept the idea that behavior is not a decisive factor.  This Chinese room fraud basically says that no matter what tasks a machine does, it will always be fake, and that only the human brain defines real cognition.  This is like saying gravity only applies to things that you choose.  It seems far from being an objective standard.

 

Instead, I claim that human brains engage in behaviors that exhibit free will, and that anything exhibiting the same behavior also has free will.  That is an objective standard, one you can test for without assuming a bunch of crazy stuff about what constitutes a legitimate "room".  I believe computer programs lack free will, not because they are inhuman or "only simulations", but because they never actually exhibit the kind of free behavior that humans exhibit.  We seem to agree computers cannot presently have free will.  But why adhere to the Searle way of going about it?  It seems to do nothing but give the determinists exactly what they want: proof that free will is just a fabricated assumption rather than a real result of observing and measuring the universe.

Posted

This Chinese room fraud basically says that no matter what tasks a machine does, it will always be fake, and that only the human brain defines real cognition.  This is like saying gravity only applies to things that you choose.

Please elaborate. How is it like that? Demonstrate the point you are making by use of examples and / or logic.

Posted

I believe computer programs lack free will, not because they are inhuman or "only simulations", but because they never actually exhibit the kind of free behavior that humans exhibit.  We seem to agree computers cannot presently have free will.  But why adhere to the Searle way of going about it?  It seems to do nothing but give the determinists exactly what they want: proof that free will is just a fabricated assumption rather than a real result of observing and measuring the universe.

 

What standard of behavior can you use to prove that a computer is exhibiting free will rather than sufficiently complex programming? I don't think you can, because any exception you find can be corrected through programming, by a being with actual free will, to mimic whatever behavior you would expect. The way you know a computer isn't actually thinking is that it requires human input to do anything (by this I mean that even an automated system had to be programmed at some point, and since a machine can be programmed to program another machine, that can't be the standard either). Imagine trying to prove that a computer is creative based on the unique art that it produces. How would you know whether it has a real imagination or just a good random generator programmed by a human being?
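
To illustrate the random-generator point, here is a minimal sketch in Python (a hypothetical example of my own, not anything from the thread): a seeded pseudorandom generator produces "art" that looks novel, yet anyone with the program and the seed gets exactly the same output every time.

    # Sketch only: "unique art" from a fully deterministic generator.
    import random

    def generate_art(seed, size=5):
        rng = random.Random(seed)   # output fully determined by the seed
        palette = ".:*#@"
        return "\n".join(
            "".join(rng.choice(palette) for _ in range(size))
            for _ in range(size)
        )

    print(generate_art(42))                       # looks "creative"...
    print(generate_art(42) == generate_art(42))   # True: fully reproducible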

 

Btw this doesn't do anything to advance the determinist position because human beings aren't programmed.

Posted

I consider myself a n00b in this area, so it could be that I'm totally wrong, but I'd prefer a demonstration of how I'm wrong rather than mere assertion.

 

I just wanted to share a few terms that are thrown around sometimes in these types of debates:

 

Epiphenomenalism - is a mind-body philosophy marked by the belief that basic physical events (sense organs, neural impulses, and muscle contractions) are causal with respect to mental events (thought, consciousness, and cognition). Mental events are viewed as completely dependent on physical functions and, as such, as having no independent existence or causal efficacy; they are mere appearances. Fear seems to make the heart beat faster, though according to epiphenomenalism it is the state of the nervous system that causes the heart to beat faster. Because mental events are a kind of overflow that cannot cause anything physical, epiphenomenalism is viewed as a version of monism.

 

Computationalism - In philosophy, the computational theory of mind is the view that the human mind and/or human brain is an information-processing system and that thinking is a form of computing. The theory was proposed in its modern form by Hilary Putnam in 1961, and developed by the MIT philosopher and cognitive scientist (and Putnam's PhD student) Jerry Fodor in the 1960s, 1970s, and 1980s. Despite being vigorously disputed in analytic philosophy in the 1990s (due to work by Putnam himself, John Searle, and others), the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology; in the 2000s and 2010s the view has resurfaced in analytic philosophy (Scheutz 2003, Edelman 2008).

 

Dualism - is the position that mental phenomena are, in some respects, non-physical, or that the mind and body are not identical. Thus, it encompasses a set of views about the relationship between mind and matter, and is contrasted with other positions, such as physicalism, in the mind–body problem.

 

Ontological subjectivity - Searle has argued that critics like Daniel Dennett, who (he claims) insist that discussing subjectivity is unscientific because science presupposes objectivity, are making a category error. Perhaps the goal of science is to establish and validate statements which are epistemically objective (i.e., whose truth can be discovered and evaluated by any interested party) but are not necessarily ontologically objective.

 

Searle calls any value judgment epistemically subjective. Thus, "McKinley is prettier than Everest" is "epistemically subjective", whereas "McKinley is higher than Everest" is "epistemically objective". In other words, the latter statement is evaluable (in fact, falsifiable) by an understood ('background') criterion for mountain height, like 'the summit is so many meters above sea level'. No such criteria exist for prettiness.

Beyond this distinction, Searle thinks there are certain phenomena (including all conscious experiences) that are ontologically subjective, i.e. can only exist as subjective experience. For example, although it might be subjective or objective in the epistemic sense, a doctor's note that a patient suffers from back pain is an ontologically objective claim: it counts as a medical diagnosis only because the existence of back pain is "an objective fact of medical science". But the pain itself is ontologically subjective: it is only experienced by the person having it.

Searle goes on to affirm that "where consciousness is concerned, the appearance is the reality". His view that the epistemic and ontological senses of objective/subjective are cleanly separable is crucial to his self-proclaimed biological naturalism.

Posted

The standard of behavior is based on responses to stimuli, and on the adaptive power of those responses to entirely new conditions.  I conclude that no computer can simply be programmed to provide any response you can imagine, because the problem space is too large.  Even random numbers generated by an algorithm are only pseudorandom; that behavior fails to qualify as unpredictable.  A computer program whose strategy is fixed by algorithm can be made to lose a game.

As an example of why I reject "who built and programmed the computer" as the criterion: a calculator must be pre-programmed with algorithms to add, multiply, etc., rather than possessing a mere lookup table that relates inputs to outputs, because the problem space is too large to be a table of responses.  The calculator is programmed by a human, but that fact is not essential to its continued operation; its correct behavior remains the decisive factor in whether the algorithm tracks the larger problem space.  Similarly, if you build a living human one molecule at a time, its unnatural origin does not automatically make it a non-thinking being.  The way a thing came to be is logically (though perhaps not biologically) separate from its behavior.  The full spectrum of its behavior will tell me whether a thing is a conscious being, just as we are able to distinguish, by behavior alone, an unconscious person from one who is wide awake.  We can also tell that a calculator works without consulting its inventor and their credentials.  The behavior is an observable fact, and consciousness is one of those things we observe at face value, without constantly contemplating whether we came from sperm or microchips.
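
To make the lookup-table point concrete, here is a minimal sketch in Python (a hypothetical illustration of my own): a table can only relate the inputs it was built from, while the addition algorithm tracks the entire, unboundedly large problem space.

    # Sketch only: lookup table vs. algorithm for addition.
    table = {(a, b): a + b for a in range(10) for b in range(10)}  # 100 entries

    def add_by_table(a, b):
        return table.get((a, b))   # None for anything outside the table

    def add_by_algorithm(a, b):
        return a + b               # generalizes to inputs never seen before

    print(add_by_table(3, 4))           # 7    -- covered by the table
    print(add_by_table(123, 456))       # None -- the table cannot scale
    print(add_by_algorithm(123, 456))   # 579  -- the algorithm tracks the space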
 
Since this discussion is big on quotations, here are four ways of thinking about AI that Roger Penrose describes (Shadows of the Mind, 1994, p. 12), which I believe are distinct viewpoints:
A.  All thinking is computation; in particular, feelings of conscious awareness are evoked merely by carrying out the appropriate computations.
B.  Awareness is a feature of the brain's physical action; and whereas any physical action can be simulated computationally, computational simulation cannot by itself evoke awareness.
C.  Appropriate physical action of the brain evokes awareness, but this physical action cannot even be properly simulated computationally.
D.  Awareness cannot be explained by physical, computational, or any other scientific terms.

 

A pure determinist (holding to strong AI and functionalism) basically believes A. Penrose says "A is regarded by some as the only viewpoint that an entirely scientific attitude allows."  Clearly Searle holds position B.  I agree with Penrose in defending position C.  Finally, position D is mysticism or anti-science.

The reason I reject position B is that it is too vague.  The other positions are vague also, but they can be remedied in principle.  B just postpones the definition of awareness into the definition of the word "brain".  It then seems to rule out a behavioral definition of what a brain is; it ignores whether the brain is conscious or not, alive or not, etc., and just sticks to whatever it is that brains might do.  But once you admit we need a good behavioral definition of what a conscious brain is, that seems to admit defeat.  Next, if all behavior can be simulated, you have to behaviorally define what constitutes simulation versus real action.  Without a physical metric, how do we do that without a pure faith assumption that human behavior is real and computer behavior is fake?

 

I believe the Chinese Room argument has merit, but it does not prove what it is believed to prove.  Searle mainly seeks to reject position A.  I do not feel that rejecting computationalism implies being committed to dualism or any subjective approach.  You may be surprised to learn Searle has said "of course the brain is a digital computer.  Since everything is a digital computer, brains are too." (Searle 1980).

