Is the Turing Test still controversial after 60 years?
Turing's Machine vs Searle's Idea
60 years ago, Alan Turing proposed a thought experiment intended to determine whether machines could think intelligently. The thought experiment, known as the "Imitation Game" or "Turing Test", has become one of the most controversial and fundamental approaches to studying artificial intelligence (AI) today.
John Searle disagreed with Turing's view and argued, using the "Chinese Room" thought experiment, that Turing's test cannot establish that machines think. In other words, the validity of the Turing Test is questionable.
In this article, I argue that passing the Turing Test can indeed show that a machine has certain abilities to perform tasks resembling human thinking. However, passing the test by itself is not sufficient to show that machines can think independently.
What is Turing Test?
In order to argue that machines can think, Turing proposes the "Imitation Game" in his paper Computing Machinery and Intelligence. The game involves a person, a machine, and an interrogator. The interrogator sits in a room separate from the person and the machine, and his aim is to determine which of the two is the person and which is the machine.
The interrogator knows them only by the labels X and Y, and does not know which of the two is X and which is Y. However, he is allowed to ask them certain kinds of questions, such as, "Will X please tell me the length of his or her hair? (Turing, 433)" If the interrogator cannot reliably distinguish the machine from the person, the machine is said to have passed the game.
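The protocol above can be sketched in a few lines of code. This is a hypothetical toy, not anything Turing specifies: the canned replies, the function names, and the interrogator's guessing strategy are all invented for illustration.

```python
import random

# A toy sketch of the Imitation Game protocol. Only text passes between
# the rooms; the interrogator never sees who produced which answer.

def person(question: str) -> str:
    return "It is shoulder length."

def machine(question: str) -> str:
    return "It is shoulder length."  # the machine imitates the person's reply

def one_round(guess_machine) -> bool:
    """One round: X and Y are assigned at random; the interrogator sees only
    the text answers and guesses which label hides the machine."""
    labels = {"X": person, "Y": machine}
    if random.random() < 0.5:  # random assignment hides the identities
        labels = {"X": machine, "Y": person}
    question = "Will X please tell me the length of his or her hair?"
    answers = {label: reply(question) for label, reply in labels.items()}
    return labels[guess_machine(answers)] is machine  # True = correct guess

# When the answers are indistinguishable, guessing does no better than chance.
correct = sum(one_round(lambda answers: "X") for _ in range(1000))
print(correct)  # close to 500 of 1000
```

The point of the sketch is Turing's success criterion: if the machine's answers are indistinguishable from the person's, the interrogator's identification rate falls to roughly chance.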
This game, also known as the Turing Test, raises the question of whether a programmed computer can play this game indistinguishably from a human being [Footnote 1]. As Turing argues, it draws a clear distinction "between the physical and the intellectual capacities of a man (Turing, 434)."
Moreover, Turing predicts that as programmed computers improve in storage capacity, an average interrogator will have no "more than 70 percent chance of making the right identification (Turing, 442)." By this standard, modern computers are still far from passing the test, and it is controversial whether a computer that did pass would thereby exhibit the ability to think. It should be noted that in the Turing Test, Turing identifies thoughts with states of a system by their roles in producing other internal states and verbal outputs. This view has much in common with the functionalist view of the mind [Footnote 2].
Searle's Chinese Room Thought Experiment
In Minds, Brains, and Programs, Searle attacks Turing's theory of mind. Proposing the "Chinese Room" thought experiment as a refutation of the Turing Test, he argues against the claim that an "appropriately programmed computer literally has cognitive states (Searle, 1980)."
In the Chinese Room experiment, an agent who does not understand Chinese is locked in a room and receives Chinese texts. He merely follows rules from an English rulebook that tell him which Chinese texts to send out of the room in response to the texts he receives.
Hence, he can "behave exactly" as if he understood Chinese: he receives Chinese characters, processes them according to the instructions in the rulebook, and produces Chinese characters as output (Searle, 1980). Since a computer passes the Turing Test in just this way, it is plausible to say that the human agent is simply running the program manually.
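The procedure Searle describes is purely formal symbol manipulation, which can be sketched as a lookup table. The rulebook entries below are invented placeholders, not anything from Searle's paper; the point is that the lookup consults only the shapes of the symbols, never their meanings.

```python
# A minimal, hypothetical sketch of the Chinese Room procedure: the operator
# applies formal rules mapping input symbols to output symbols.
RULEBOOK = {
    "你好吗": "我很好",           # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",   # "What is your name?" -> "My name is Xiaoming"
}

def chinese_room(symbols: str) -> str:
    """Look up a reply by symbol shape alone; meanings are never consulted."""
    return RULEBOOK.get(symbols, "对不起")  # a stock reply for unmatched input

print(chinese_room("你好吗"))  # → 我很好 (a correct reply, with no understanding)
```

An operator executing this table by hand produces fluent-looking answers while understanding none of them, which is exactly the gap Searle exploits.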
Searle holds that there is no difference in role between the machine and the human operator, since both simply follow a program to simulate intelligent behavior. The crucial point, however, is that the human agent in the room does not understand any Chinese, yet still produces the correct output. Searle argues that since the agent does not understand Chinese, it is reasonable to conclude that the computer does not understand Chinese either (Searle, 1980).
According to functionalism, what makes something a mental state of a particular type depends on its function. If this view is the correct theory of the mind, mental states such as understanding a given text in an unknown language are also functional states.
In fact, such states have two kinds of intentionality. First, they represent the world as containing symbols. Second, they represent the meanings of those symbols.
However, Searle argues that a machine cannot think without understanding the meaning of the symbols. Since the machine in the room does not understand, it cannot think, and so it has no mental states or mind.
Moreover, Searle holds that merely realizing the right functional states is not enough for an agent to understand the Chinese symbols. Rather, only something that realizes the right functional states in the right materials can have a mind or mental representations. The brain, on this view, causes the mind through its specific causal powers; anything else that causes a mind, Searle holds, must have causal powers equivalent to the brain's.
Turing might defend his view as follows. One could put a robot inside the room instead of a human. The robot is programmed with the rules from the rulebook, and programmed so as to pass the Turing Test. For each Chinese request the robot receives, it processes the request and sends the response out of the room. A person outside the room who knows nothing about what is inside could then say that whatever is in the room has "genuine understanding and other mental states (Searle, 1980)."
I do not think this defense works; it is implausible. First, it does not show that the robot knows what the Chinese symbols mean. The robot only follows the rules from the rulebook. Its capacity does not display any intentionality or "way of understanding."
Second, I think neither functionalism nor the Turing Test is good enough to establish that entities can think intelligently, because functionalism holds that materials are irrelevant to functional states. Even if a computer program in a robot can pass the Turing Test, the robot, it seems to me, has "no intentional states at all" while it is merely executing "its program." In other words, merely realizing the right program is not enough to say that the robot understands Chinese. Similarly, if a person in the room has no knowledge of any Chinese symbols, he has "no intentional states of the relevant type." All he does is "follow formal instructions about manipulating formal symbols (Searle, 1980)."
In short, Turing tried to demonstrate that his thought experiment is a good test, and that machines passing it can be counted as intelligent.
While Turing believes that passing the Turing Test can show that a thing is intelligent, Searle holds that realizing a program is not sufficient for having representational mental states. Instead, the materials of the entity must also be taken into account, since only certain causal processes arising from both material and function are sufficient for mental states.
There are possible replies to Searle's argument, but for all the reasons above, it seems to me that the Turing Test is still not good enough: passing it does not show that an entity is intelligent.
[Footnote 1] The Turing Test is proposed as a way of answering the question "Can machines think? (Turing, 433)" However, Turing rejects the question in that form, because it invites a debate over the meanings of the terms "machines" and "think". Instead, he replaces it with a new question that he believes is empirically answerable.
[Footnote 2] To object to functionalism, Block (2002) presents Putnam's actor example in his paper Psychologism and Behaviorism. The core idea of functionalism, or functionalist theories of the mind, is that a mental state does not depend on its internal constitution; rather, mental states are constituted by their functional roles. In other words, entities that stand in the right causal relations to other mental states, inputs, and outputs can be considered to have mental states. For example, in the Turing Test, the interrogator who conducts the test has the mental states, the collection of questions forms the inputs, and the answers from both the person and the machine are the outputs.
- Turing, A. M. "Computing Machinery and Intelligence." Mind 59:236 (1950): 433–460.
- Searle, J. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (1980). http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html
- Block, N. "Psychologism and Behaviorism." (2002). http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/Psychologism.pdf