The Chinese Room is a thought experiment in artificial intelligence. John Searle proposed it as a way to falsify the claim that some computer algorithm could be written which could mimic the behavior of an intelligent human so precisely that we could call it an artificial intelligence. Searle proposed that we imagine a log cabin (though it has been observed that it must be an enormous log cabin, perhaps a log aircraft carrier), in which a person sits. Around that person lie reams of paper, full of rules in English, as well as a story written in Chinese (or any other language that the person doesn’t understand) and a set of all possible answers to questions about that story. On one side of the cabin is a slot through which pieces of paper come in. Those pieces of paper contain questions in Chinese. On the other side of the room is a slot and a notepad.
Our hypothetical cabin-dweller’s job is to take the note from the slot and use the instructions written on the reams of paper to compose a reply. Searle asks whether we would regard this arrangement as embodying some sort of intelligence if the notes passed out of the cabin in response to any note sent in are sufficiently sensible.
In Le Ton beau de Marot, Douglas Hofstadter points out some problems with this. Because Searle puts a person into the system, we identify that person as the locus of whatever intelligence might be present. This appeals to a dualist’s desire to think that intelligence must be some extrinsic force which breathes life into instructions. Searle then takes that intelligence out of the equation by stipulating that the person doesn’t understand the language of the questions or answers. The point of the exercise, though, is not the person. The person in the room is an automaton and a MacGuffin. The paper is where any artificial intelligence would lie, and Searle is careful to use all of a magician’s arts to direct attention away from it. Since even a short story can generate an infinite number of questions, any set of instructions for answering such questions could not provide rote answers to a finite list of questions; it would have to be some sort of adaptable set of rules for answering any question.
Stories also contain tremendous amounts of context. A story about people eating dinner might not specify any details at all about the table, but the reader generates those details mentally. The rules would have to explain how to describe the imagined details, and the unspecified rules of the story’s universe. The characters in the story are assumed to have brains, and are probably around 5′ 9″ tall (or maybe less if it’s set in China). The rules would have to include an excellent encyclopedia and a means to translate that encyclopedic knowledge into Chinese in response to questions. In short, the rules would not specify an answer for each possible question; they would specify a way to parse questions, to gather information in response, and to construct a grammatically and syntactically correct way to express the underlying ideas.
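The difference between a finite answer list and an adaptable rule system can be sketched in a few lines of toy code (everything here is invented for illustration — the facts, the function names, the crude keyword matching):

```python
# Toy contrast: Searle's caricature (a lookup table of stock answers)
# versus what the rules would actually have to be (parse the question,
# gather facts, compose a reply). All names and facts are made up.

# A finite lookup table fails on any question it has never seen.
lookup_table = {
    "Who ate dinner?": "The family ate dinner.",
}

def lookup_answer(question):
    # Returns None for the infinitely many questions not in the table.
    return lookup_table.get(question)

# An adaptable rule system parses the question, draws on the story plus
# background knowledge, and composes an answer.
story_facts = {"diners": "the family", "meal": "dinner"}
background = {"table_material": "wood"}  # unstated context a reader infers

def rule_based_answer(question):
    q = question.lower()
    if "who" in q and "dinner" in q:
        return f"{story_facts['diners'].capitalize()} ate {story_facts['meal']}."
    if "table" in q:
        # The story never mentions the table; the rules must supply
        # plausible background detail, just as a human reader would.
        return f"Presumably a {background['table_material']} one."
    return "The story does not say."

print(lookup_answer("What was the table made of?"))      # None
print(rule_based_answer("What was the table made of?"))  # a composed answer
```

The keyword matching is of course a cartoon of real parsing, but the shape of the program is the point: the second function handles a question its author never anticipated, while the first cannot.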
The more fundamental error lies in trying to subdivide meaning and intelligence. If we call this system intelligent, is the intelligence in the person, the slots, the logs of the cabin or the ink on the paper? To subdivide intelligence is to destroy it. The intelligence lies in the system of interacting parts. Hofstadter has pointed out elsewhere that “consciousness certainly seems to vanish when one mentally reduces a brain to a gigantic pile of individually meaningless chemical reactions. It is this reductio ad absurdum applying to any physical system, biological or synthetic, that forces (or ought to force) any thoughtful person to reconsider their initial judgment about both brains and computers, and to rethink what it is that seems to lead inexorably to the conclusion of an in-principle lack of consciousness ‘in there,’ whether the referent of ‘there’ is a machine or a brain.” The intelligence does not lie in the inert paper or the uncomprehending person, it lies in the interactions between parts.
Michael Egnor, creationist brane serjun, is not a terribly thoughtful person, which may explain why he feels comfortable defending dualism using the Chinese room, and why he butchers the description:
Imagine that P.Z. Myers went to China and got a job. His job is this: he sits in a room, and Chinese people pass questions, written on paper in Chinese, through a slot into the room. Myers, of course, doesn’t speak Chinese. Not a word. But he has a huge book, written entirely in Chinese, that contains every conceivable question, in Chinese, and a corresponding answer to each question, in Chinese. P.Z. just matches the characters in the submitted questions to the answers in the book, and passes the answers back through the slot.
In a very real sense, Myers would be just like a computer. He’s the processor, the Chinese book is the program, and questions and answers are the input and the output. And he’d pass the Turing test. A Chinese person outside of the room would conclude that Myers understood the questions, because he always gave appropriate answers. But Myers understands nothing of the questions or the answers. They’re in Chinese. Myers (the processor) merely had syntax, but he didn’t have semantics. He didn’t know the meaning of what he was doing. There’s no reason to think that syntax (a computer program) can give rise to semantics (meaning), and yet insight into meaning is a prerequisite for consciousness. The Chinese Room analogy is a serious problem for the view that A.I. is possible.
The idea that every possible question and every “appropriate” answer to that question could be contained in any book, no matter how “huge,” is laughably egnorant. Searle at least was smart enough to try to restrict the scope of the questioning (though any interesting story would be set in a world about which an infinite number of questions could be asked).
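The impossibility of such a book is easy to quantify with back-of-the-envelope arithmetic (the figures below — a few thousand common characters, a modest maximum question length — are rough assumptions, and they flatter Egnor by ignoring longer questions entirely):

```python
# Rough upper bound on Egnor's "huge book": count the character strings
# of length 1..30 over a conservative 3,000 common Chinese characters.
# The total dwarfs the roughly 10^80 atoms in the observable universe,
# and only a sliver of those strings even need to be valid questions.
common_characters = 3000
max_length = 30

possible_strings = sum(common_characters ** n for n in range(1, max_length + 1))
print(f"possible strings: about 10^{len(str(possible_strings)) - 1}")
```

Even if only one string in a trillion were a well-formed question, no physical book could list them, let alone pair each with an appropriate answer.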
Searle’s original argument was an argument from personal incredulity. Egnor multiplies that incredulity with his own ignorance, and slaps on the fallacy that anything that hasn’t been done in 50 years won’t happen.
Egnor’s atomistic view seemingly also extends to meaning, even though that’s easy to disprove using the photograph at the right.
I think we can agree that this digital photograph has meaning. No one is likely to object if I say that it represents a duckling, even though you, dear reader, didn’t actually see me photograph this duckling. The fact that the photons entering your eyes did not actually reflect off its fluffy feathers doesn’t prevent you from attaching meaning to it, nor does the fact that it is actually a 200×187 grid of colored dots.
It probably doesn’t stop you from attaching the same meaning if I remove a quarter of the dots.
Or half of them or three quarters of them. It’s harder to be sure what you are seeing now, but I suspect that you can still work it out.
Does that mean that the meaning of this picture resides in one of the pixels I didn’t remove?
No. That can’t be it, because for the final picture I revealed the hidden 25% from the first obscured photo, and hid the other 75%. If the meaning of the image resided in a single pixel, one of those images would be meaningless.
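The complementary-mask demonstration can be mimicked in a few lines (a sketch over a made-up pixel grid standing in for the actual photo; the pixel values are arbitrary):

```python
import random

# Stand-in for the 200x187 photo: a grid of (x, y) -> pixel value.
WIDTH, HEIGHT = 200, 187
pixels = {(x, y): (x + y) % 256 for x in range(WIDTH) for y in range(HEIGHT)}

# Keep a random 25% of the positions visible; hide the other 75%.
rng = random.Random(0)
all_positions = list(pixels)
kept_25 = set(rng.sample(all_positions, k=len(all_positions) // 4))
hidden_75 = set(all_positions) - kept_25

# The two masked images are complementary: every pixel is visible in
# exactly one of them, yet each can still read as "a duckling".
image_a = {p: v for p, v in pixels.items() if p in kept_25}
image_b = {p: v for p, v in pixels.items() if p in hidden_75}

assert not set(image_a) & set(image_b)              # no shared pixels
assert set(image_a) | set(image_b) == set(pixels)   # together: the whole photo
# If meaning lived in any single pixel, that pixel could appear in at
# most one of the two images, and the other would have to be meaningless.
```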
Meaning resides not in the individual pixels, but in the arrangement of pixels, their interactions on the screen, in the eye, the optic nerve, and the brain. Your description of the photograph draws on those interactions and adds another layer of interactions between neurons in the brain and on into the tongue, lips, lungs and cheeks. Slicing those interactions apart and expecting to cleanly separate intelligence from other aspects is as silly as thinking you can separate syntax from semantics or meaning from context.