Thursday, September 13, 2012

Homework #3: Chinese Room


Introduction:
In “Minds, Brains, and Programs,” John Searle argues against certain claims made on behalf of Artificial Intelligence (AI) by comparing the situation to a unique thought experiment. Searle is a member of the Department of Philosophy at the University of California, Berkeley. He claims that full Artificial Intelligence is not possible, and he defends that claim against multiple responses from researchers at other noteworthy universities.
Summary:
One of the first distinctions the paper draws is between weak Artificial Intelligence and strong Artificial Intelligence. Searle explains that weak AI only simulates understanding. This would be similar to someone giving you step-by-step instructions for a task: assuming the instructions are thorough enough, you will probably be able to complete the task, but you will not actually understand what you did. Strong AI, by contrast, is true understanding.
The metaphor Searle uses to illustrate this distinction is an English-speaking man shut inside a room, the “Chinese Room.” The man is first given a set of Chinese characters. He is then given instructions, written in English, that tell him how to correlate characters from the first set with characters provided in later sets. Finally, someone outside the room engages the man in a conversation in Chinese: they pass him a question, and he passes back an answer. The important point, Searle argues, is that the man does not actually understand the conversation taking place; he is merely following a formal list of instructions. If the man were given the same questions and answers in English, he would truly understand the conversation, because he would know what they were about.
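The man's procedure can be sketched as a simple lookup over uninterpreted symbols. The sketch below is my own illustration, not code from Searle's paper, and the Chinese question-and-answer pairs are hypothetical examples:

```python
# A minimal sketch of the Chinese Room as pure rule-following.
# The rulebook pairs each incoming string of symbols with an outgoing one;
# the operator just looks up a match, with no grasp of what either side means.
# (Illustrative example only; the sentences are hypothetical.)

RULEBOOK = {
    "你好吗？": "我很好。",            # "How are you?" -> "I am fine."
    "今天天气如何？": "天气很好。",     # "How is the weather?" -> "It is nice."
}

def operator_reply(symbols: str) -> str:
    """Return whatever answer the rulebook dictates for the given symbols.

    The operator never interprets the symbols; an unrecognized input just
    gets a fixed fallback string from the same rulebook-style procedure.
    """
    return RULEBOOK.get(symbols, "对不起。")  # fallback: "Sorry."

print(operator_reply("你好吗？"))
```

From the outside, the replies look fluent, yet nothing in the program encodes meaning, which is exactly the sense in which Searle says the man (or a computer) only simulates understanding.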
Several critics have written responses to the paper, and Searle defends his claim against each of them. For example, a group from Yale University devised a response termed “The Robot Reply.” The contributors from Yale propose placing a robot in the room instead of the human. Instructions in Chinese would be passed to the robot, which would look up the corresponding instruction in a language it knows and then perform some action, such as hammering a nail, drinking water, or opening a window. The team from Yale argues that in this case the robot would surely understand Chinese, because it has a physical action associated with each instruction. Searle defends his claim by noting that strong AI is defined as internal understanding, relying on nothing from the environment besides the input given. The team from Yale, he points out, relies on actions taking place in the environment, which contradicts that definition of strong AI.
Discussion:
“Minds, Brains, and Programs” is a very interesting paper that raises key questions about the current state of Artificial Intelligence and its future possibilities. Most of the AI that I have been exposed to, or could imagine, would fall into the weak AI category. I believe the main problem with the strong AI definition and theory is that it rests its proof on the idea of understanding, yet understanding has so many different levels. For example, if I read a book about a group of characters interacting in a fictional place, I would say that I understand what is happening. However, some of my understanding would be shaped by my past experiences related to the story line, so someone else reading the same story would come away with a different understanding of what is going on. Does a reader focus on the big picture, noting the high-level significance of the events? Or does the reader strictly follow the immediate interactions between characters?
The point I wish to make is that Searle and his critics base their arguments on the idea of understanding, but there are many levels of understanding. The English-speaking man in the example understands the Chinese characters he is manipulating in a limited sense, because he has a set of instructions explaining how to process them; he simply does not have the same level of understanding as a native Chinese speaker. Furthermore, if two Chinese-speaking individuals interacted with the man, they might themselves differ in their levels of understanding: the more intelligent and experienced of the two might understand the conversation at a higher level than the other. That does not mean the less experienced speaker does not understand at all.
