Two Objections To Searle’s Chinese Room Argument

Searle’s Chinese Room Argument has always seemed unconvincing to me: it contains two fallacies.

First, it conflates a composition of elements with a single element:

Is a plank of wood a ship? No, but a whole bunch of planks, joined together, is. You can’t conclude that you can never sail across the ocean just because a single wooden plank isn’t a ship.

No, the operator passing messages in the room doesn’t speak Chinese. But the room *as a system* does. Are the individual neurons in your brain conscious? No; each one is a simple machine. But you as a whole are sentient. Likewise, software may be sentient even if its individual lines of code are not.

The second fallacy is a rhetorical trick of scale: it compares a single operator following one rulebook to the roughly 86 billion neurons in your brain. The hidden implication is that a “simple” system could never do the job of an ultra-complex one like the brain. But no one claims that sentience can be replicated by a simple system. Perhaps something on the order of the brain’s tens of billions of neurons is the minimum needed for intelligence, and that’s fine. In fact, GPT-4 is widely (though unofficially) estimated to have on the order of a trillion parameters, a scale that already exceeds the brain’s neuron count, even if it is still well short of its roughly 100 trillion synapses.
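As a back-of-the-envelope illustration of the scales involved, here is a small Python sketch. The brain figures are standard textbook estimates; GPT-4’s parameter count has never been officially disclosed, so the number used below is an unconfirmed public estimate, included purely as an assumption.

```python
# Rough order-of-magnitude comparison of biological and artificial "unit" counts.
# Brain figures are standard textbook estimates; the GPT-4 parameter count is an
# unconfirmed public estimate, used here only as an illustrative assumption.

BRAIN_NEURONS = 8.6e10    # ~86 billion neurons
BRAIN_SYNAPSES = 1.0e14   # ~100 trillion synaptic connections (rough estimate)
GPT4_PARAMETERS = 1.8e12  # rumored figure, not officially disclosed (assumption)

def ratio(a: float, b: float) -> float:
    """Return how many times larger a is than b."""
    return a / b

if __name__ == "__main__":
    print(f"GPT-4 params vs. brain neurons:  ~{ratio(GPT4_PARAMETERS, BRAIN_NEURONS):.0f}x")
    print(f"Brain synapses vs. GPT-4 params: ~{ratio(BRAIN_SYNAPSES, GPT4_PARAMETERS):.0f}x")
```

Under these assumptions the parameter count already dwarfs the neuron count by about a factor of twenty, while the synapse count still dwarfs the parameter count, which is exactly why the “simple rule engine vs. 86-billion-neuron brain” framing misleads: the interesting question is the whole system’s scale and organization, not the simplicity of any one part.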
