I stumped my philosophy professor today. We were discussing the Chinese Room thought experiment and how it supposedly proves that Strong AI is impossible (a Strong AI being a genuinely thinking machine). The experiment imagines a man sitting in a room who is handed Chinese symbols. He uses a rulebook that tells him which characters to write back in response to the symbols handed to him, and he understands neither the input nor the output. Because his responses are convincing, anyone standing outside the room would think that the person inside understands Chinese, even though he clearly does not.
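For the programmers in the audience, the setup is easy to caricature in code. Here is a minimal sketch of the man's job as a pure table lookup - the symbols and rules below are my own invention for illustration, not anything Searle actually specifies:

```python
# A toy "Chinese Room": map input symbols to output symbols using a rulebook
# the operator does not understand. The entries are invented for illustration.
RULEBOOK = {
    "你好吗": "我很好",            # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",    # "What is your name?" -> "My name is Xiao Ming"
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rulebook dictates; no meaning is involved anywhere."""
    return RULEBOOK.get(symbols, "对不起")  # fallback: "Sorry"

print(chinese_room("你好吗"))  # prints 我很好 - neither string means anything to the code
```

To someone outside, the replies look fluent; inside, there is only lookup.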
There are two glaring issues with this argument. The first is very simple - philosophers have never managed to pin down what "understanding" actually is. We will set that aside for now.
The second is what I stumped my philosophy teacher with - we learn and use language with a giant dictionary of words and their meanings, along with a set of grammar rules. How is this in any way different from a guy in a room with a giant rulebook? If this thought experiment is correct, all Searle has succeeded in doing is proving that humans understand nothing. Hence, the thought experiment doesn't prove anything at all, because humans obviously do understand things - otherwise I couldn't be raising this objection in the first place.
So what is incorrect about this thought experiment? This leads us back to the first problem - what is understanding? If we cannot differentiate between using a giant book of rules and actually understanding something, then my flimsy little laptop "understands" the English words I'm typing well enough to correct them for me, which is clearly not the case.
Hence, we are inevitably led to the problem of understanding. What differentiates following a bunch of rules from understanding a concept? The answer is simple: experience. Our experiences allow us to attach significance to symbols that would otherwise be totally meaningless to us. Someone can tell you an elephant is huge, but if you've never seen anything larger than a 10-foot-tall tree, you won't understand what that means.
This means that the Chinese Room experiment succeeds in proving something painfully obvious: no, the man in the room doesn't understand anything. Sadly, this conclusion has no significance whatsoever. In fact, if we modify the experiment so that the man uses all his previous experience and knowledge to interpret the symbols, and succeeds in doing so, then by definition he will understand them, and the outside observer will be correct in thinking that he does.
By defining understanding as experience of an abstract concept, we can identify the crucial difference between faking that you understand something and actually understanding it - having an experience of it.
Now we can construct an alternate version of the Chinese Room thought experiment, in which a robot can sense the world around it (sight, smell, touch, taste, hearing). This robot responds to stimuli based on a set of rules that it's programmed to follow. To an outside observer, the robot would appear to act human and to understand concepts. There are two possibilities:
1. The robot understands nothing and is simply using a very advanced rulebook to tell it what to do.
2. The robot does, in fact, understand what is going on, and by extension is a thinking, conscious being.
With our new definition of "understand," we can now differentiate between these two situations. If the robot is like today's robots and cannot store memories or experiences, then it is not a thinking, conscious being and cannot understand what it is doing.
If, however, the robot CAN store memories and experiences, and it is capable of assigning these memories and experiences to the abstract definitions in its rulebook, then it is capable of gaining an understanding of the world around it, and hence is a conscious, thinking being.
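To make that distinction concrete, here is a rough sketch of the two robots - a pure rule-follower and one that also stores experiences and attaches them to symbols. The class names and the shape of an "experience" are my own invention, not part of the thought experiment:

```python
# Two toy robots, invented for illustration. The first only follows rules;
# the second also stores sensory experiences and links them to symbols,
# which is the difference the argument above turns on.

class RuleFollowingRobot:
    def __init__(self, rulebook):
        self.rulebook = rulebook

    def respond(self, stimulus):
        # Pure lookup: nothing is retained, nothing is grounded in experience.
        return self.rulebook.get(stimulus, "...")


class ExperiencingRobot(RuleFollowingRobot):
    def __init__(self, rulebook):
        super().__init__(rulebook)
        self.experiences = {}  # symbol -> list of sensory records

    def perceive(self, symbol, sensory_data):
        # Attach a concrete experience (sight, smell, touch, ...) to a symbol.
        self.experiences.setdefault(symbol, []).append(sensory_data)

    def understands(self, symbol):
        # On the definition used in this post, "understanding" means having
        # experiences attached to an otherwise meaningless symbol.
        return bool(self.experiences.get(symbol))


robot = ExperiencingRobot({"elephant": "an elephant is huge"})
print(robot.understands("elephant"))   # False: it only has the rulebook entry
robot.perceive("elephant", {"sight": "grey animal towering over a 10-foot tree"})
print(robot.understands("elephant"))   # True: the symbol is now grounded in an experience
```

The first robot corresponds to possibility 1 above, the second to possibility 2.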
So what separates us from an extremely advanced robot?
Nothing.
In direct contradiction to Searle's argument, Strong AI must be possible, because human beings, by this definition, are Strong AIs. If Strong AIs are impossible, we are impossible. To prove this wrong, a philosopher would have to somehow differentiate between a robot understanding something and an organic being understanding something. If one cannot do that, then we come to the inevitable conclusion that science has been trying to tell us for decades - the human brain is a giant, organic computer.