Chinese room argument

From Scholarpedia
John Searle (2009), Scholarpedia, 4(8):3100. doi:10.4249/scholarpedia.3100, revision #66188

Curator: John Searle

The Chinese Room Argument aims to refute a certain conception of the role of computation in human cognition. In order to understand the argument, it is necessary to see the distinction between Strong and Weak versions of Artificial Intelligence. According to Strong Artificial Intelligence, any system that implements the right computer program with the right inputs and outputs thereby has cognition in exactly the same literal sense that human beings have understanding, thought, memory, etc. The implemented computer program is sufficient for, because constitutive of, human cognition. Weak or Cautious Artificial Intelligence claims only that the computer is a useful tool in studying human cognition, as it is a useful tool in studying many scientific domains. Computer programs which simulate cognition will help us to understand cognition in the same way that computer programs which simulate biological processes or economic processes will help us understand those processes. The contrast is that according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind.

Statement of the argument

Strong AI is answered by a simple thought experiment. If computation were sufficient for cognition, then any agent lacking a cognitive capacity could acquire that capacity simply by implementing the appropriate computer program for manifesting that capacity. Imagine a native speaker of English, me for example, who understands no Chinese. Imagine that I am locked in a room with boxes of Chinese symbols (the database) together with a book of instructions in English for manipulating the symbols (the program). Imagine that people outside the room send in small batches of Chinese symbols (questions), and these form the input. All I know is that I am receiving sets of symbols which to me are meaningless. Imagine that I follow the program, which instructs me how to manipulate the symbols. Imagine that the programmers who design the program are so good at writing it, and I get so good at manipulating the Chinese symbols, that I am able to give correct answers to the questions (the output). The program makes it possible for me, in the room, to pass the Turing Test for understanding Chinese, but all the same I do not understand a single word of Chinese. The point of the argument is that if I do not understand Chinese on the basis of implementing the appropriate program for understanding Chinese, then neither does any other digital computer solely on that basis, because the computer, qua computer, has nothing that I do not have.
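A minimal sketch of such a purely syntactical rule-follower might look as follows (the particular rule entries, default reply, and function name are hypothetical illustrations added here, not part of the original argument). The program pairs strings of symbols with other strings of symbols; nothing in it represents what any of the symbols mean.

    # A toy "rule book": input symbol strings are paired with output symbol
    # strings. The pairing is purely formal; nothing here encodes what any
    # of the symbols mean.
    RULE_BOOK = {
        "你好吗?": "我很好。",            # hypothetical entries; the operator
        "你叫什么名字?": "我没有名字。",   # never needs to know what they say
    }

    def answer(question: str) -> str:
        """Return whatever symbol string the rule book dictates.

        Like the man in the room, this function only matches the shapes of
        incoming symbols against rules; it has no access to their meanings.
        """
        return RULE_BOOK.get(question, "对不起，请再说一遍。")

    print(answer("你好吗?"))   # emits a correct-looking Chinese answer

A realistic program would of course be vastly more complicated than a lookup table, but adding complexity changes nothing in principle: however many rules there are, the operations remain matchings of uninterpreted symbols.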

The argument proceeds by a thought experiment, but the thought experiment is underlain by a deductive proof. And the thought experiment illustrates a crucial premise in the proof. The proof contains three premises and a conclusion.

Premise 1: Implemented programs are syntactical processes.

The implemented programs are defined purely formally or syntactically. This, by the way, is the power of the digital computer. The computer operates purely by manipulating formal symbols, usually thought of as 0s and 1s, but they could be Chinese symbols or anything else, provided they are precisely specified formally. To put this in slightly technical terminology, the notion of the same implemented program specifies an equivalence class defined purely in terms of syntactical manipulation, completely independent of the physics of the realization. Any hardware will do, provided that it is stable enough and rich enough to carry out the steps in the program. This is the basis of the concept of multiple realizability, whereby the same program can be realized in an indefinite range of different hardware: electronic computers, people locked in Chinese Rooms, or any number of other physical systems.

The claim that implemented programs are syntactical processes is not like the claim that men are mortal. The essence of the program is constituted by its syntactical features; they are not merely incidental features. It is like saying that triangles are three-sided plane figures. There is nothing to the program qua program but its syntactical properties. Triangles may be pink or blue, but that has nothing to do with triangularity; analogously, programs may run in electronic circuits or Chinese Rooms, but that has nothing to do with the nature of the program.

Premise 2: Minds have semantic contents.

In order to think or understand a language, you have to have more than just the syntax. You have to understand the meanings or the thought contents that are associated with the symbols. And the problem with the man in the room is that he has the syntax but not the appropriate semantic content, because he does not understand Chinese.

Premise 3: Syntax by itself is neither sufficient for nor constitutive of semantics.

The Chinese Room thought experiment illustrates this truth. The purely syntactical operations of the computer program are not by themselves sufficient either to constitute, or to guarantee the presence of, semantic content of the sort that is associated with human understanding. The purpose of the Chinese Room thought experiment was to dramatically illustrate this point. It is obvious in the thought experiment that the man has all the syntax necessary to answer questions in Chinese, but he still does not understand a word of Chinese.

Conclusion: Therefore, the implemented programs are not by themselves constitutive of, nor sufficient for, minds. In short, Strong Artificial Intelligence is false.
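Schematically (a paraphrase added here, not notation used in the original), the derivation can be written with \(P(x)\) for "x implements the program", \(\mathrm{Syn}(x)\) for "x thereby has syntactic properties", \(\mathrm{Sem}(x)\) for "x has semantic content", and \(M(x)\) for "x has a mind":

P1: \(P(x) \Rightarrow \mathrm{Syn}(x)\), and nothing beyond \(\mathrm{Syn}(x)\)
P2: \(M(x) \Rightarrow \mathrm{Sem}(x)\)
P3: \(\mathrm{Syn}(x) \not\Rightarrow \mathrm{Sem}(x)\)
C: \(P(x) \not\Rightarrow M(x)\)

Since implementing the program guarantees nothing beyond syntax (P1), and syntax does not guarantee semantics (P3), implementing the program does not guarantee semantics; and since having a mind requires semantics (P2), implementing the program does not guarantee having a mind.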

The Chinese Room Argument is incidentally also a refutation of the Turing Test and other forms of logical behaviorism. I, in the Chinese Room, behave exactly as if I understood Chinese, but I do not. One can see this point by contrasting the Chinese case with the case of a man answering questions in English. Suppose I, in the same room, am also given questions in English and I pass out answers to the questions in English just as I pass out answers to the questions in Chinese. From the point of view of the outside observer, my behavior in answering the questions in Chinese is just as good as my behavior in answering questions in English. I pass the Turing Test for both. But from my point of view, there is a huge difference. What exactly is the difference? The difference can be stated in common sense terms. In the case of English, I understand both the questions and the answers. In the case of Chinese, I understand neither. In Chinese I am just a computer. This shows that the Turing Test, or any other purely behavioral test, is insufficient to distinguish genuine cognition from behavior which successfully imitates or simulates cognition.

The Chinese Room Argument thus rests on two simple but basic principles, each of which can be stated in four words.

First: Syntax is not semantics.

Syntax by itself is not constitutive of semantics nor by itself sufficient to guarantee the presence of semantics.

Second: Simulation is not duplication.

In order actually to create human cognition on a machine, one would not only have to simulate the behavior of the human agent, but one would have to be able to duplicate the underlying cognitive processes that account for that behavior. Because we know that all of our cognitive processes are caused by brain processes, it follows trivially that any system which was able to cause cognitive processes would have to have relevant causal powers at least equal to the threshold causal powers of the human brain. It might use some other medium besides neurons, but it would have to be able to duplicate and not just simulate the causal powers of the brain.

The systems reply

There have been a rather large number of discussions and objections to the Chinese Room, but none have shaken its fundamental insight as described above. Perhaps the most common attack is what I baptized as the Systems Reply. The claim of the Systems Reply is that though the man in the room does not understand Chinese, he is not the whole of the system, he is simply a cog in the system, like a single neuron in a human brain (this example of a single neuron was used by Herbert Simon in an attack he made on the Chinese Room Argument in a public lecture at the University of California, Berkeley). He is the central processing unit of a computational system, but Strong AI does not claim that the CPU by itself would be able to understand. It is the whole system that understands. The Systems Reply can be answered as follows. Suppose one asks, Why is it that the man does not understand, even though he is running the program that Strong AI grants is sufficient for understanding Chinese? The answer is that the man has no way to get from the syntax to the semantics. But in exactly the same way, the whole system, the whole room in which the man is located, has no way to pass from the syntax of the implemented program to the actual semantics (or intentional content or meaning) of the Chinese symbols. The man has no way to understand the meanings of the Chinese symbols from the operations of the system, but neither does the whole system. In the original presentation of the Chinese Room Argument, I illustrated this by imagining that I get rid of the room and work outdoors by memorizing the database, the program, etc., and doing all the computations in my head. The principle that the syntax is not sufficient for the semantics applies both to the man and to the whole system.

Three misinterpretations

The Chinese Room Argument is sometimes misinterpreted. Three of the most common misunderstandings are the following. First, it is sometimes said that the argument is supposed to show that computers can’t think. That is not the point of the argument at all. If a computer is defined as anything that can carry out computations, then every normal human being is a computer, and consequently, a rather large number of computers can think, namely every normal human. The point is not that computers cannot think. The point is rather that computation as standardly defined in terms of the manipulation of formal symbols is not by itself constitutive of, nor sufficient for, thinking.

A second misunderstanding is that the Chinese Room Argument is supposed to show that machines cannot think. Once again, this is a misunderstanding. The brain is a machine. If a machine is defined as a physical system capable of performing certain functions, then there is no question that the brain is a machine. And since brains can think, it follows immediately that some machines can think.

A third misunderstanding is that the Chinese Room Argument is supposed to show that it is impossible to build a thinking machine. But this is not claimed by the Chinese Room Argument. On the contrary, we know that thinking is caused by neurobiological processes in the brain, and since the brain is a machine, there is no obstacle in principle to building a machine capable of thinking. Furthermore, it may be possible to build a thinking machine out of substances unlike human neurons. At any rate, we have no theoretical argument against that possibility. What the Chinese Room Argument shows is that this project cannot succeed solely by building a machine that implements a certain sort of computer program. One can no more create consciousness and thought by running a computer simulation of consciousness and thought than one can build a flying machine simply by building a computer that can simulate flight. Computer simulations of thought are no more actually thinking than computer simulations of flight are actually flying or computer simulations of rainstorms are actually raining. The brain is above all a causal mechanism, and anything that thinks must be able to duplicate and not merely simulate the causal powers of that mechanism. The mere manipulation of formal symbols is not sufficient for this.

A brief history of the argument

The Chinese Room Argument had an unusual beginning and an even more unusual history. In the late 1970s, Cognitive Science was in its infancy and early efforts were often funded by the Sloan Foundation. Lecturers were invited to universities other than their own to lecture on foundational issues in cognitive science, and I went from Berkeley to give such lectures at Yale. In the terminology of the time we were called Sloan Rangers. I was invited to lecture at the Yale Artificial Intelligence Lab, and as I knew nothing about Artificial Intelligence, I brought a book by the leaders of the Yale group, in which they purported to explain story understanding. The idea was that they could program a computer that could answer questions about a story even though the answers to the questions were not made explicit in the story. Did they think the story understanding program was sufficient for genuine understanding? It seemed to me obvious that it was in no way sufficient, because using the programs that they designed, I could easily imagine myself answering questions about stories in Chinese without understanding any Chinese. Their story understanding program manipulated symbols according to rules, but it had no understanding. It had a syntax but not a semantics. These ideas came to me at 30,000 feet between cocktails and dinner on United Airlines on my flight east to lecture in New Haven. I knew nothing of Artificial Intelligence, and because my argument seemed so obvious to me, I assumed that it was probably a familiar argument and that the Yale group must have an answer to it. But when I got to Yale I was amazed to discover that they were surprised by the argument. Everybody agreed that the argument was wrong, but they did not seem to agree on exactly why it was wrong. And indeed most of the subsequent objections to the Chinese Room I heard, in early forms, in those days I spent lecturing at Yale. The article was subsequently published in Behavioral and Brain Sciences in 1980, and provoked twenty-seven simultaneously published responses, almost all of which were hostile to the argument and some of which were downright rude. I have since published the argument in other places, including my 1984 Reith Lectures book Minds, Brains and Science, Scientific American and The New York Review of Books.

I never really had any doubts about the argument, as it seems obvious that the syntax of the implemented program is not the same as the semantics of actual language understanding. And only someone with a commitment to the ideology that says the brain must be a digital computer ("What else could it be?") could still be convinced of Strong AI in the face of this argument. As I was invited by various universities to lecture on this issue, I discovered that the answers to it tended to fall into certain patterns, which I named the Systems Reply, the Robot Reply (if we put the computer inside a robot, it would acquire understanding because of the robot’s causal interaction with the world), the Wait ’Til Next Year Reply (better technology in the future will enable digital computers to understand), the Brain Simulator Reply (if we did a computer simulation of every neuron in the brain of a Chinese speaker, then the computer would have to understand Chinese), etc. I had no trouble answering these and other objections. I assumed that the orthodox Strong AI people would fasten on to the Robot Reply, because it seems to exemplify the behaviorism that was implicit in the whole project of assuming that the Turing Test was conclusive proof of cognition. But to my surprise the mainstream adopted the Systems Reply, which is, I think, obviously inadequate for reasons I state in this essay. The Chinese Room Argument has had a remarkable history since its original publication. The original article was published in at least twenty-four collections and translated into seven languages. Subsequent statements of the argument in Minds, Brains and Science were also reprinted in several collections, and the whole book was translated into twelve languages. I have lost count of the publications, reprintings and translations of other statements. Two decades after the original publication of the article, a book appeared edited by John Preston and Mark Bishop called Views into the Chinese Room. One web site currently lists 137 discussions (I assume mostly attacks) on the argument.

References

  • Preston, John and Mark Bishop (eds.). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford/New York: Oxford University Press, 2002.
  • Searle, John R. Minds, Brains, and Programs. Behavioral and Brain Sciences, Vol. 3, 1980.
  • Searle, John R. Minds, Brains and Science. London: BBC Publications, 1984; Penguin Books, 1989. Cambridge, MA: Harvard University Press, 1984.
  • Searle, John R. Is the Brain's Mind a Computer Program? Scientific American, January 1990.
  • Searle, John R. The Myth of the Computer. The New York Review of Books, April 29, 1982.

Recommended reading

  • Dietrich, Eric (ed.). Thinking Computers and Virtual Persons. San Diego: Academic Press, 1994.
  • Preston, John and Mark Bishop (eds.). Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford/New York: Oxford University Press, 2002.
  • Searle, John R. Minds, Brains, and Programs. Behavioral and Brain Sciences, Vol. 3, 1980.
  • Searle, John R. Minds, Brains and Science. London: BBC Publications, 1984; Penguin Books, 1989. Cambridge, MA: Harvard University Press, 1984.
  • Searle, John R. Is the Brain's Mind a Computer Program? Scientific American, January 1990.
  • Searle, John R. The Myth of the Computer. The New York Review of Books, April 29, 1982.

See also

Artificial Intelligence, Brain
