Two Objections To Searle’s Chinese Room Argument

Searle’s Chinese Room Argument has always seemed dumb to me.

The argument contains two fallacies:

First, it conflates a composition of elements with a single element:

Is a plank of wood a ship? No, but a whole bunch of them put together is. You can’t argue that sailing across the ocean is impossible just because a single wooden plank isn’t a ship.

No, the message operator in the room doesn’t speak Chinese. But the room *as a system* does speak it. Are the individual neurons in your brain conscious? No, they are simple machines. But you as a whole are sentient. Likewise, software may be sentient even if individual lines of code are not.
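
To make the systems reply concrete, here is a deliberately toy sketch in Python (the two-entry rulebook and the `chinese_room` function are my own invented stand-ins for Searle’s rule set): the interpreter that executes the lookup understands nothing about Chinese, yet the program as a whole maps Chinese questions to sensible Chinese replies.

```python
# Toy illustration of the "systems reply": the executor mechanically follows
# rules with no understanding, yet the room-as-a-whole converses in Chinese.
# The rulebook below is a tiny, made-up stand-in for Searle's rule set.

RULEBOOK = {
    "你好": "你好！",                   # "Hello" -> "Hello!"
    "你会说中文吗？": "会，我说中文。",  # "Do you speak Chinese?" -> "Yes, I speak Chinese."
}

def chinese_room(message: str) -> str:
    """Look up a reply without any notion of what the symbols mean."""
    return RULEBOOK.get(message, "请再说一遍。")  # Fallback: "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你会说中文吗？"))  # The "room" answers; no single part understands.
```

The point of the sketch is only that understanding is something we ascribe to the whole input-output system, not to the person (or interpreter) shuffling symbols inside it.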

The second fallacy is a rhetorical trick: it compares a one-rule operator to the roughly 100 billion neurons in your brain. The hidden implication is that a simple rule engine can never do the job of an ultra-complex system like the brain. But no one claims that sentience can be replicated with a simple system. Perhaps 100 billion neurons really are the minimum needed for intelligence, and that’s fine. In fact, GPT-4 is believed to have at least as many parameters as the human brain has neurons (though a parameter is a far simpler thing than a neuron).

Why the AI apocalypse IS coming

When most people think of AI as a threat, they imagine a malevolent AI. That isn’t very likely, so they dismiss the threat entirely.

But that’s not the danger of AI. The real danger is super-competence in the wrong hands. This is a difficult idea to convey because “super-competence” does not exist yet, so it takes some imagination to consider the risks it could bring.

I say “super-competence” instead of “super-intelligence” because we don’t have a firm grasp of intelligence, much less of “super”-intelligence in a non-human context. “Super-competent AI” is just software with a versatile utility function, without the baggage of self-awareness, the question of a soul, and so on. Everyone accepts that computers are super-competent in some narrow contexts. They can solve math problems far faster than we can, search enormous amounts of data, land rockets on a dime, and so on. These are all forms of narrow competence: algorithms that are good at very specific things.

Of course, with ChatGPT, we have an example of competence in a broader context. GPT-4 can solve problems in almost any domain of human knowledge, even if not very well (yet).

I predict that within the next 10 years, we will have super-competent AI (which could be augmented humans) that can deliver superhuman results across a wide array of problems. It doesn’t have to be a chatbot. It might be an oil rig builder that can handle everything from finding oil to building a pipeline. The initial domain doesn’t matter, because once we figure out how to make more adaptive AI tools, they will spread to every human endeavor. Most likely, it will be an informational tool of some kind (like a kids’ cartoon generator), because language models have already demonstrated superhuman abilities in this area.

Once we have super-competent tools, one of three disasters will inevitably happen:

1: A do-gooder will tell an AI algorithm something dumb like “design an airborne viral cancer cure” and the AI will decide that the most effective way to cure cancer is to kill the hosts.

2: A malicious human will use AI toward destructive ends. The most likely path to this is that the world gets so scared of AI that it bans all research, so that only parties with no regard for the law or for AI safety continue the work.

3: An AI really will develop a mind of its own and decide that the carbon in our bodies would be more useful for building more compute nodes. I consider this the least likely scenario, or rather a risk that will only materialize long after the first two.

As far as I can see, the only ways I could be wrong are if (1) super-competent AI turns out not to be possible, either because technological progress stalls or because human intelligence is at the apex of what is possible, or (2) we find a way to align software with human values, for example by integrating super-intelligence into everyone’s minds, or by creating an AI overlord with human values that can stop any possible disaster.

My confidence in the timeline of AI progress comes from both the state of AI research and the nature of information systems (such as human civilization), which tend to produce more complex iterations in ever shorter cycles. I do not think “no one will do anything dumb or malicious” is a realistic hope: with more than 8 billion humans, all of whom will eventually have access to these tools, it is a certainty that some will be smart enough to be dangerous but too dumb or malicious to know better.

So what’s the solution? As I mentioned, banning AI will only make a disaster more likely. I think the first step is to develop a consensus about AI’s existential risks, rather than, for example, worrying that someone will trick ChatGPT into saying something racist. The second step is to fund AI safety research on a scale proportional to AI research itself. The third step is to develop future-oriented safety protocols. (AI safety protocols exist, but they are backward-looking and completely inadequate for the scale of the problem.)
