Chinese room

John Searle
Era: Second half of the 20th century · 1980
Region: North America · United States
Discipline: Philosophy

Explanation

In 1980, John Searle proposed one of the most cited thought experiments in the philosophy of mind: the Chinese Room. Imagine a monolingual English speaker locked in a room with a detailed manual of rules for matching Chinese characters to other Chinese characters. Questions in Chinese are slipped under the door, and by following the manual he produces Chinese answers that native speakers consider perfect. Does he understand Chinese?
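The mechanism Searle describes can be caricatured in a few lines of Python. This is only an illustrative sketch, not anything from Searle's paper, and the tiny RULE_BOOK table is a made-up stand-in for the manual; the point it makes is the one the thought experiment trades on: the program matches and copies strings of symbols without any representation of what they mean.

    # Illustrative sketch only: the "manual" as a lookup table of symbol strings.
    # Nothing in the program encodes what any of the characters mean; it only
    # matches an input string and copies out the paired output string.

    RULE_BOOK = {
        "你会说中文吗？": "会，我说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
        "今天天气怎么样？": "今天天气很好。",    # "How is the weather?"   -> "The weather is fine."
    }

    def chinese_room(question: str) -> str:
        """Return whatever answer the rule book pairs with this exact string of symbols."""
        # Pure syntax: string matching and copying, with a canned fallback reply.
        return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你会说中文吗？"))  # prints a fluent-looking Chinese answer

To an observer exchanging notes under the door, the output looks like (very limited) competence; inside, there is only table lookup.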

Searle's intuition is that he does not: the man manipulates symbols mechanically without understanding any of them. And yet, from the outside, he behaves as if he understood perfectly; the system passes the Turing test. If rule-governed symbol manipulation were enough to generate genuine understanding, the man in the room would understand Chinese. Since he does not, symbol manipulation is not sufficient for authentic cognition.

The argument's target was what Searle called strong AI: the thesis that a computational program with the right rules would literally have a mind, understanding and intentionality. For Searle, the room shows that this is false: syntax (rule-governed symbol manipulation) is not enough for semantics (understanding meanings). Something more is needed.

Searle proposes that the "something more" is the causal powers of the brain: specific biological properties that silicon systems do not replicate. This is his biological naturalism: the mind is caused by neurobiological processes, not by computational organisation realised in just any substrate. Computers simulate cognition without duplicating it, just as a simulation of a rainstorm does not actually make it rain.

Replies abound. The systems reply: although the man in the room does not understand Chinese, the system as a whole (man + manual + room) does. The robot reply: if the room were connected to sensors and actuators, giving its symbols a causal link to the world, it could have genuine intentionality. The brain simulator reply: if the program simulated, neuron by neuron, the brain of a Chinese speaker, at what point would understanding appear or disappear?

Forty years on, the Chinese Room is still debated, especially in the light of large language models (LLMs) such as ChatGPT. Does an LLM understand what it says? Is it a giant Chinese Room, or something new? Searle's thought experiment does not settle the debate, but it frames it precisely, and it forces every defender of artificial minds to specify what kind of understanding they require and how it is achieved.

Strengths

  • Memorable, comprehensible and philosophically fertile.
  • Articulates the difference between syntax and semantics.
  • Critically effective against naive strong AI.
  • Catalyst for refined responses.

Main critiques

  • Systems reply: the understander is the whole system, not Searle.
  • Robot reply: embodiment changes the picture.
  • Appeals to intuitions that may mislead with sufficiently complex systems.
  • Arguably fails against well-specified versions of computationalism.

Connections with other theories