Wednesday, January 19, 2011

Chinese Room

Comments
Derek Landini


Reference Information
Title: Minds, Brains, and Programs
Author: John R. Searle
Also: Wikipedia article


Summary
The "Chinese Room" is an exercise in artificial intelligence. As Searle explains, it is an enclosed room with a person in it, who is given Chinese input from outside of the room, follows English rules associated with specified inputs, and outputs a Chinese result according to the rules. The person in the room doesn't understand Chinese, but for the people outside of the room, it appears as if the Chinese Room understands the input and is thinking to appropriately respond. This room is a visualization of how a computer works while following a program, and Seale argues that since the person inside the room is just following outlined rules without understanding, the system is not intelligent.

The author acknowledges that there are disagreements with his way of thinking and presents various replies to his own position. For example, one reply (the systems reply) holds that the room as a whole understands, and the person is just a part of the whole who does not need to understand; another (the brain simulator reply) holds that a system simulating the neurons and neuron firings of a human brain would be indistinguishable from a person. Searle disputes these replies, summarizing by saying that simulation of thought does not indicate intentionality.

(Figure from the original post: an example of a rule the person inside the room would follow.)
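To make the rule-following concrete, here is a minimal sketch in Python (my own illustration; the rule book contents and the chinese_room function are invented for this example, not taken from Searle's paper). It shows that the room's behavior is pure symbol lookup: input strings map to output strings, and nothing in the process represents what the symbols mean.

```python
# A hypothetical rule book: a lookup from Chinese input symbols to Chinese
# output symbols. Neither keys nor values carry any meaning for the system.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",    # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever the rules dictate; the symbols are never interpreted."""
    # Default reply if no rule matches: "Please say that again."
    return RULE_BOOK.get(input_symbols, "请再说一遍。")

if __name__ == "__main__":
    # From outside the room this looks like a fluent reply,
    # even though no understanding is involved anywhere in the code.
    print(chinese_room("你好吗？"))
```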

Discussion
There are many different definitions of "understanding" and "intelligence". Searle chose specific definitions under which computers cannot have understanding or intelligence, and claims that this means there cannot be strong AI. However, the current goal of the artificial intelligence field is not to recreate a human mind, but to build systems that appear to have human intelligence. Therefore, I believe that, while Searle's definitions of intelligence are valid, for the purposes of artificial intelligence the Turing test should be sufficient to determine whether a system is intelligent. By this definition, a system would merely need to appear to have human intelligence, and the Chinese Room itself would be intelligent, even though the person inside would not be.

3 comments:

  1. I'd agree that these questions all depend on what constitutes understanding and intelligence and what it means to have a mind. So a lot of this will always be open to interpretation.

    But I disagree that AI does not aim to create a human mind; even though we're far from it, it seems to me that the ultimate goal of AI researchers is to create a machine capable of performing "any intellectual task a human being can". Regardless of anyone's definition of intelligence, this is literally creating a human mind.

  2. Searle's argument does come across as staged. I would bet that he tailored this definition of 'understanding' specifically to suit his argument that programs do not understand.

  3. I agree that the examples the author uses are set up to make his point and are biased toward his goal. I also agree that the Turing test is sufficient, since what matters is the appearance of human intelligence, not whether the person inside the room is actually intelligent.
