Searle's Chinese Room paradox is an attempt to demonstrate that a computer program cannot possess general intelligence. As we've seen with other paradoxes, the Chinese Room is a thought experiment that magnifies our misconceptions, in this case about how our own minds work. This one paradox seems to cause quite a bit of mischief, so we stand to gain a lot if we untangle it.
The scenario goes like this. Suppose you have a computer program that can hold a conversation in Chinese. It takes Chinese symbols as input and produces replies in the same script. If the conversational back-and-forth is indistinguishable from how humans might respond in the same situation, we have to conclude at the very least that the computer understands Chinese. Not so, says Searle, who suggests replacing the computer with a room. This room contains a human-readable version of the program, filing cabinets to serve as memory, scratch paper and pencils to act as temporary storage, and a non-Chinese-speaking human being to execute the program. The symbols making up a prompt come in through one slot, the human runs the program by hand, and the reply is generated as more Chinese symbols and passed out through another slot. Because the human doesn't understand Chinese, Searle argues, and everything else is inanimate and incapable of understanding, this human-powered computer doesn't understand Chinese.
Searle's formal objection is that the computer program is only doing "symbol manipulation." Programs take symbols of some kind, perform operations over them, perhaps combining them with other symbols stored in data banks, and generate new symbols. At no point does the computer assign meaning to those symbols; that's done only by the human who enters the input symbols or interprets the output symbols.
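To see what "symbol manipulation" amounts to mechanically, here's a deliberately trivial sketch in Python. This is a toy illustration of the general idea, not the hypothetical conversational program Searle has in mind; the rule table and phrases are invented for the example. It matches input symbols against stored patterns and emits stored symbols in return, with nothing anywhere in the machinery that could be called meaning.

```python
# Toy "symbol manipulation": look up an input string and return a canned
# reply string. The program never assigns meaning to either side.
RULES = {
    "你好吗?": "我很好",            # "How are you?" -> "I'm fine"
    "你叫什么名字?": "我没有名字",   # "What's your name?" -> "I have no name"
}

def reply(prompt: str) -> str:
    """Return the stored reply for a known prompt, or a fallback symbol string."""
    return RULES.get(prompt, "请再说一遍")  # "Please say that again"

if __name__ == "__main__":
    print(reply("你好吗?"))  # prints 我很好
```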
This argument is really spectacularly bad. First of all, it's question-begging. Part of the question we're trying to answer is "can a collection of inanimate stuff understand things?" and Searle comes right out and assumes the answer is no. He also equivocates: by failing to define his use of the term "understand," he conflates the layman's everyday reading of the word with a philosopher's technical jargon.
So what do we mean when we say that someone or something "understands" something we said to them? Understanding means that the information content of the utterance was correctly added to that individual's existing store of information. That requires that we're dealing with something that has a mind, but it doesn't have to be human. Minds tend to be opaque things, so we can't observe "understanding" directly; we can only infer it through observation and experiment. If you tell your dog to "stay" and she stays, then you have some confirmation that she understood the command. If a student passes an exam, then you can be fairly certain that they understood the course material. The exam is a kind of probe of the student's model of the world, allowing us to see whether the new information was successfully integrated without having to open up their skull, which we couldn't do in any case.
Instead of this commonsense and testable definition, Searle insists on the epistemologist's concept that "understanding" relates to how humans assign meaning to symbols. The word "horse" isn't just a collection of letters or a sequence of sounds; to the English speaker who reads or hears it, it stands in for the real animal. This is a philosophical atom of comprehension, both indivisible and unseen. However, since he offers no explanation of how this magic is accomplished, nor any empirical test we might apply to non-human minds, it's entirely useless for the question at hand. He can simply assert that human minds can do this while inanimate systems cannot.
The argument also tries to trick us by creating the impression that an intelligent computer program would be a simple thing. A large filing cabinet might hold 100 megabytes, and a well-trained human might perform one operation per second (1 FLOP/s) on a good day. And Searle wants us to imagine that this would be sufficient. Of course it's not. This program accomplishes human-like conversational performance. Simple programs that have tried to do this have all failed. The human brain is estimated to have 10 terabytes of storage and to operate at 1 exaFLOP/s. Even if the program can converse in Chinese using 1% of those resources, we'd still need the "room" to be more like an office building containing a thousand filing cabinets, and the poor human would need hundreds of millions of years to compute the one second of real time it takes to read “你好吗?” and reply “我很好”. A later stage of his thought experiment assumes that the human operator can memorize the algorithm and all of the state information and run the program in their head. People can't do long division in their heads, let alone execute massive, data-intensive programs. This isn't just nitpicking either -- it's central to the intuition.
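Running the arithmetic makes the mismatch plain. Under the figures assumed above (rough guesses, not measurements), the room needs about a thousand filing cabinets, and the operator needs on the order of 300 million years to simulate one second of conversation:

```python
# Back-of-the-envelope check using the essay's assumed figures.
CABINET_BYTES = 100e6       # one filing cabinet ~ 100 MB
HUMAN_OPS_PER_SEC = 1       # a diligent human computing by hand
BRAIN_BYTES = 10e12         # ~10 TB of storage
BRAIN_OPS_PER_SEC = 1e18    # ~1 exaFLOP/s
FRACTION_NEEDED = 0.01      # suppose conversation uses 1% of those resources

cabinets = BRAIN_BYTES * FRACTION_NEEDED / CABINET_BYTES
ops_per_real_second = BRAIN_OPS_PER_SEC * FRACTION_NEEDED
years_per_real_second = ops_per_real_second / HUMAN_OPS_PER_SEC / (3600 * 24 * 365)

print(f"filing cabinets needed: {cabinets:,.0f}")                    # ~1,000
print(f"years to simulate one second: {years_per_real_second:.2e}")  # ~3e+08
```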
The pivotal issue is that this is again a fallacy of division, where the argument depends on the mistaken idea that a large collection of things can only have properties that are also shared by the individual components. This is why it was important to handle the sand heap problem first, as a much simpler example of the same fallacy. This hypothetical computer program and system state would be enormously complex -- many orders of magnitude beyond what a person can deal with by hand. Yes, each individual operation is a mechanical transform of symbols whose meaning is known only to the programmer, not the computer hardware, but there are hundreds of trillions of them. Neurons don't know the meaning of the signals they receive and pass along, and yet put enough of them together and you get us. Given that we don't know how brains work at that level, it's the height of hubris to suggest that minds can't emerge from other types of systems of similar depth.