The Chinese Room

Finn McBride
7 min read · Jan 5, 2023

Introduction

If you started with $1, and then doubled your money every day for a month, you would have over $500 million by the end of the month. And just a few days later, you would be a multibillionaire.
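
A quick sketch makes the arithmetic concrete (a minimal Python loop, counting the day you start with $1 as day 1):

```python
# Start with $1 and double it once a day.
balance = 1
for day in range(1, 33):
    if day in (30, 32):
        print(f"Day {day}: ${balance:,}")
    balance *= 2

# Day 30: $536,870,912   -> over $500 million by the end of the month
# Day 32: $2,147,483,648 -> a multibillionaire just 2 days later
```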

The moral of the story is that when things increase exponentially, they increase crazy fast. One of the best examples of this is the exponential increase in computing power. For over 50 years, computing power has doubled roughly every 18 months, and the number of transistors that can fit on a silicon chip has doubled roughly every 2 years. In fact, some of the transistors on silicon chips are now so small that quantum mechanical effects are starting to interfere with their functioning. Time Magazine has predicted that by 2045 the computing power of a single computer will exceed the combined brainpower of all 8 billion humans on Earth, while just a decade ago computers couldn't even compete with insect brains.
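
To get a feel for what those doubling rates imply, here is a rough back-of-the-envelope calculation using the figures quoted above:

```python
# Growth factors implied by the doubling periods quoted above, over 50 years.
years = 50
power_doublings = years / 1.5        # computing power doubles ~every 18 months
transistor_doublings = years / 2.0   # transistor count doubles ~every 2 years

print(f"Computing power:  ~{2 ** power_doublings:.1e}x")      # ~1.1e+10
print(f"Transistor count: ~{2 ** transistor_doublings:.1e}x")  # ~3.4e+07
# Roughly a ten-billion-fold increase in computing power and a
# thirty-million-fold increase in transistor count.
```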

Robots are getting smarter, and they're doing it fast. So will their intelligence eventually surpass our own? Many people believe this, scientists like Ray Kurzweil foremost among them. However, many others believe that there is something special about human beings that artificial intelligence will never be able to replicate. One of the most famous arguments for this idea was put forward by John Searle, a philosopher at Berkeley. According to Searle, computers might seem intelligent, but they can't actually understand things. His argument is known as the Chinese Room, and it has become famous in the artificial intelligence community.

The Chinese Room

In the argument, Searle asks us to imagine a man locked in a room. Under the door of the room, someone outside slides him slips of paper with questions written on them. The man’s task is to read the questions and then write answers to them, and then slide his answers back under the door.

But there’s a problem. The questions on the slips of paper are written in Chinese, and the man in the room only speaks and understands English. So to him, the questions just look like random squiggles.

The way that the man gets around this problem is by using a program. Basically, the man is given a massive set of instructions, in English, that tell him how to transform one set of Chinese characters into another set of Chinese characters. One of these instructions might be “when you see two vertical lines, write a slanted horizontal line, with a slight curve, unless the previous character contained the same kind of line.”

We are meant to think of these instructions as analogous to the kind of code that a Chinese-language chatbot would follow. By following the instructions, then, the man is able to produce answers to the questions that actually make sense, just like a Chinese chatbot does.
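
To make this concrete, here is a deliberately crude sketch of what "following the instructions" amounts to. The rule table is invented for illustration; the point is only that the program maps input symbols to output symbols without representing their meaning anywhere:

```python
# A toy "Chinese Room": purely syntactic rules mapping input strings to
# output strings. The rules below are invented for illustration.
RULES = {
    "你好吗": "我很好",        # "How are you?" -> "I'm fine"
    "你是谁": "我是一个人",    # "Who are you?" -> "I'm a person"
}
FALLBACK = "请再说一遍"        # "Please say that again"

def chinese_room(question: str) -> str:
    # The man in the room: look the squiggles up, copy the answer out.
    return RULES.get(question, FALLBACK)

print(chinese_room("你好吗"))  # prints 我很好, with zero understanding
```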

Imagine that this is possible, and that the man's answers make sense. From the outside, it would seem that the man understands Chinese. But really, he understands none of it at all.

According to Searle, this is how computers work. They might seem like they understand things, but really they are just mindlessly following the “instructions.” They are just mindlessly following their computer code.

When Searle first published this argument, he was met with a flurry of replies. The 4 most prominent of these are the systems reply, the robot reply, the brain simulator reply, and the other minds reply. In this post I want to go through the 4 replies, then propose a 5th.

1. The Systems Reply

According to the systems reply, the man doesn’t understand Chinese, but the whole system (man + instructions) does. According to this reply, the room understands Chinese, even though the man doesn’t, the same way that the brain can understand Chinese, even though a single neuron can’t.

This reply goes against some of our most hardwired intuitions. We all seem to "know" intuitively that a room can't understand something. Nonetheless, there is something to be said for it. It is entirely possible for a system to understand something while the parts that make up the system do not: as mentioned above, a brain can understand Chinese even though the individual neurons that make it up can't.

Searle responded to the systems reply by asking us to imagine that the man simply memorized the entire set of instructions for producing Chinese responses. Now the man is the entire system, Searle says. But he still doesn't understand Chinese. Therefore the entire system doesn't understand Chinese.

2. The Robot Reply

According to the robot reply, first proposed by Fodor, Searle’s argument is true of computers but not of robots. Computers are little boxes of virtual computation, Fodor argues, but robots are capable of interacting with the real world, which means that they can do more than simply instantiate a program. According to him, this makes them fundamentally different, and gives them a greater capacity for comprehension.

Searle responded to the robot reply by asking us to imagine putting the Chinese Room inside a giant robot's head. Even if a robot were involved, he maintains, the robot would still be able to pass the Turing test without understanding anything it was saying.

3. The Brain Simulator Reply

The brain simulator reply asks us to replace the instructions and the slips of paper inside the Chinese Room with a simulation of the neurons in a Chinese speaker's brain as that person understands Chinese questions and gives Chinese answers. According to this reply, we would then have to say that the Chinese Room is an intelligent, comprehending computer, since it is functionally identical to an actual person understanding Chinese.
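
To see what "simulating the neurons" would mean at the lowest level, here is a toy sketch of a leaky integrate-and-fire loop. It is nowhere near a network that could understand Chinese; it only illustrates that a neuron-level simulation is, once again, just a program stepping through state updates, something the man in the room could in principle do with pencil and paper:

```python
import random

# Toy leaky integrate-and-fire neurons. Each step is plain arithmetic:
# decay the membrane potential, add some input, fire past a threshold.
N_NEURONS, THRESHOLD, LEAK = 100, 1.0, 0.9
potentials = [0.0] * N_NEURONS

for step in range(10):
    spikes = 0
    for i in range(N_NEURONS):
        potentials[i] = potentials[i] * LEAK + random.uniform(0.0, 0.2)
        if potentials[i] >= THRESHOLD:  # the neuron fires
            potentials[i] = 0.0         # and resets
            spikes += 1
    print(f"step {step}: {spikes} spikes")
```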

Searle responded by saying that simulated comprehension isn’t real comprehension, in the same way that a simulated hurricane isn’t wet.

4. The Other Minds Reply

The other minds reply states that the only way we know that Chinese speakers actually understand Chinese is by observing their behavior. Therefore, if a computer were to exhibit the same behavior, we would have to extend to it the same assumption of understanding.

Searle responded by saying that the thought experiment isn’t about how he knows that other people understand stuff, but rather what that attributed understanding is in the first place. According to him, it’s more than just computer code.

5. My Reply

Searle’s argument boils down to 2 premises and a conclusion.

Premise 1: There could be a man who doesn't speak Chinese, shut in a room with a set of instructions, such that the room could pass the Turing test in Chinese.

Premise 2: Despite the room passing the Turing test in Chinese, neither the room nor anything inside it actually understands Chinese.

Conclusion: Therefore, a computer could pass the Turing test without actually understanding what it was saying.
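
Spelled out in rough logical notation (the formalization is mine, not Searle's), with T(x) for "x passes the Turing test in Chinese," U(x) for "x understands Chinese," and the diamond for "it is possible that":

```latex
\begin{align*}
\text{P1: } & \Diamond\, \exists r \, \bigl( \mathrm{Room}(r) \land T(r) \bigr) \\
\text{P2: } & \forall r \, \bigl( \mathrm{Room}(r) \rightarrow \lnot U(r) \bigr) \\
\text{C: }  & \Diamond\, \exists c \, \bigl( \mathrm{Computer}(c) \land T(c) \land \lnot U(c) \bigr)
\end{align*}
```

Getting from P1 and P2 to C also needs the bridging assumption that a computer running the same program is in the same position as the room. My reply targets P1: the possibility claim is doing all the work.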

My reply to Searle’s thought experiment is to question the first premise. I flat out don’t believe that the “instructions” that Searle speaks of could ever exist. If they could, then Searle’s argument makes sense. But they simply can’t, at least not today.

It remains an open question whether we will some day develop chatbot code that can pass the Turing test. Without confirmation that such code is possible, however, Searle’s syllogism basically boils down to this:

  1. Imagine something that could pass the Turing test without understanding what it was talking about.
  2. Therefore, something could pass the Turing test without understanding what it was talking about.

This argument is circular. Of course we can imagine a system of instructions that could magically produce coherent Chinese conversation. But what if such a system isn’t possible?

Searle might reply that such a system of instructions may not be possible in practice, but is possible in principle, and therefore still serves as a valid thought experiment. I completely disagree. Firstly, the idea that a system of instructions like this is possible, even if only in principle, is an assumption, not a fact. Whether such a system is possible remains to be seen, and may remain unknown forever.

Secondly, there is a problem with Searle asking us to imagine these instructions: it isn't actually possible to imagine them. It is only possible to visualize some comically long book of directions, and then to assume from there that this comically long book would be able to pass the Turing test. This isn't real imagination. It is partial imagination, followed by a leap of faith. "Hmm," Searle wants his readers to think. "I guess a big book of instructions really could be just as intelligent as a real Chinese speaker. Wow, that really says a lot about AI."

Just because you can convince yourself that you’re imagining something doesn’t mean that that thing is actually possible, in practice or principle. This is why I take Searle’s thought experiment, like many others, with a grain of salt. Are you really imagining what he’s asking you to imagine? Or are you just visualizing a vague image of a guy in a room with a big instruction book, and then assuming that this book would somehow be able to pass the Turing test, leaving the details a black box?

Conclusion

Searle is a smart guy, and his Chinese Room thought experiment gets us to think in interesting ways about AI. However, its utility as more than just a cool thought experiment (that is, its utility as an argument) is limited at best, nonexistent at worst.
