The Chinese Room
One of my favourite things about the internet is the ease with which it enables us to find new information, broaden our minds and learn things that we didn’t know before.
In a glowing example of video games being educational, whilst watching a recent League of Legends YouTube video by Gbay99 I learnt about the “Chinese Room”, a thought experiment or Gedankenexperiment (like I said yesterday, have to get those niche words in to increase the page views 😀) presented back in the 1980s by John Searle to challenge the idea of computers having “real” intelligence, or “Strong AI” as he calls it.
Searle’s thought experiment begins with this hypothetical premise:
Suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test.
Passing the Turing Test means it convinces a human Chinese speaker that the program is itself a live Chinese speaker – see, now you have learnt two new things today!
The question Searle wants to answer is this: does the machine “understand” Chinese? Or is it merely simulating the ability to understand Chinese?
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program’s instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
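The room’s mechanics can be sketched as a toy program: a rule book that maps input symbols to output symbols, with no comprehension anywhere in the loop. The phrases and the rule book below are invented purely for illustration – they are not part of Searle’s argument, just a minimal sketch of what “following the program’s instructions” looks like:

```python
# A toy "Chinese Room": the operator (or computer) just looks up the
# incoming symbols in a rule book and copies out the answer. Nothing
# here "understands" Chinese -- it is pure symbol manipulation.
# The rule book entries are invented for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What's your name?" -> "I'm called Xiaoming."
}

def chinese_room(symbols_in: str) -> str:
    """Follow the rule book step by step; no comprehension required."""
    # Fallback reply: "Sorry, I don't understand."
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")

print(chinese_room("你好吗？"))       # a convincing reply, mechanically produced
print(chinese_room("天气怎么样？"))   # unseen input falls back to the stock reply
```

Whether the lookup is done by silicon or by Searle with pencil and filing cabinets, the process is identical – which is exactly his point.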
Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behaviour which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. (“I don’t speak a word of Chinese,” he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
Searle argues that without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking” and since it does not think, it does not have a “mind” in anything like the normal sense of the word.
Therefore he concludes that “strong AI” is false.
All interesting stuff that makes sense, but it got me thinking: isn’t the scenario he describes much the same as how we learn a new language? Or even how we learn our first language?
When you first learnt the word “tree”, it wasn’t because you understood what a tree is; it was because someone pointed one out to you and most likely repeated it a hundred times until you knew what a “tree” was.
Ultimately, associating that image with that word was down to the instructions you followed from your parent or caregiver – the “program’s instructions”, if you will.
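That repeat-until-associated process can itself be written as a little program, which is rather the point. This is only a sketch under my own assumptions – the `Learner` class and the hundred-repetition threshold are invented for illustration:

```python
# Toy sketch of learning a first word by repeated association:
# a caregiver pairs an image with a word until the pairing "sticks".
# The class, names, and threshold are invented for illustration.
from collections import Counter

class Learner:
    def __init__(self, threshold: int = 100):
        self.counts = Counter()     # how often each (image, word) pairing was seen
        self.threshold = threshold  # repetitions before the word counts as "known"

    def show(self, image: str, word: str) -> None:
        """A caregiver points at the image and says the word."""
        self.counts[(image, word)] += 1

    def knows(self, image: str, word: str) -> bool:
        return self.counts[(image, word)] >= self.threshold

learner = Learner()
for _ in range(100):                # "repeat it a hundred times"
    learner.show("🌳", "tree")

print(learner.knows("🌳", "tree"))  # True -- association formed by sheer repetition
```

Nothing in that loop “understands” trees any more than Searle understands Chinese; the association simply emerges from following instructions often enough.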
Is the more interesting question here not whether the computer can achieve artificial intelligence, but whether our own intelligence is, in a sense, artificial?