
Is The Turing Test Really A Measure Of Machine Intelligence?

Artificial intelligence permeates all aspects of modern life, from booking movie tickets and banking to dating and making restaurant reservations. It all began with Alan Turing, the British mathematician and computer scientist widely regarded as the “father of #artificialintelligence (AI).” He proposed a test (the #Turingtest) for assessing machine intelligence: if a machine can reliably convince a human conversational partner that it is human, the machine demonstrates genuine artificial intelligence.

The original version of the test requires a human and a computer to converse with a human judge solely by means of text. If the judge cannot reliably tell the machine from the human, the machine has passed the Turing test for machine intelligence. Turing predicted that by the year 2000, computer programs would fool an average human judge 30% of the time after 5 minutes of questioning. The “5 minutes” part is important, though, to give Turing his due, he did not frame the time limit as an integral part of the test. It can be argued that for a machine to truly pass the Turing test, it must withstand any amount of questioning: the shorter the conversation, the greater the machine’s advantage; the longer the conversation, the greater the odds of the machine giving itself away.
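To make the setup concrete, here is a minimal sketch of one such text-only session in Python. The judge, the human, and the machine are all hypothetical stand-ins supplied by the caller; only the overall shape of the game follows Turing's description.

```python
# A toy, text-only imitation-game session. All four callables are hypothetical
# stand-ins; only the overall shape of the game follows Turing's description.
import random

def imitation_game(ask, human_reply, machine_reply, identify, num_questions=5):
    """Return True if the judge fails to identify the machine after questioning."""
    transcript = []
    for _ in range(num_questions):
        question = ask(transcript)                   # judge poses the next question
        answers = {"A": human_reply(question),       # both respondents answer in text
                   "B": machine_reply(question)}     # "B" is secretly the machine
        transcript.append((question, answers))
    guess = identify(transcript)                     # judge names the suspected machine
    return guess != "B"                              # fooled => the machine passes this round

# Toy usage: a judge who asks one fixed question and then guesses at random.
fooled = imitation_game(
    ask=lambda t: "What did you have for breakfast?",
    human_reply=lambda q: "Toast, and far too much coffee.",
    machine_reply=lambda q: "Just toast; I overslept.",
    identify=lambda t: random.choice(["A", "B"]),
)
print("Judge fooled this round:", fooled)
```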

Many people had long felt that machines would never upstage humans in intelligence. However, when IBM’s Deep Blue defeated the world chess champion Garry Kasparov in a six-game match, people finally began to have second thoughts. Since then, machines have only kept getting smarter. But are they smart enough to pass the Turing test? Proponents of the test argue that a successful artificial intelligence (AI) is worthless if its intelligence stays trapped inside an unresponsive program.

Critics of the Turing test, however, argue that it tests only a machine’s ability to imitate human intelligence, not the means by which intelligence is attained. For instance, a person would struggle to calculate 39877/139, which is trivial for a machine; yet if the machine is to mimic #humanintelligence, it has to feign taking time to arrive at the answer.
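A rough illustration of that point (the delay and rounding here are mine, not the article's): the division is instantaneous for a machine, so a machine trying to pass as human has to slow down and coarsen its answer the way a person working it out on paper would.

```python
import time

exact = 39877 / 139              # instant for a machine: roughly 286.8849
print(f"Machine answer, instantly: {exact:.4f}")

time.sleep(5)                    # feigned "thinking" time; the delay is purely illustrative
human_like = round(exact, 1)     # a person would likely offer a rougher figure
print(f"Human-like answer, after a pause: about {human_like}")
```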

Furthermore, the innate #complexityoflanguage is what makes building a convincing talking machine so hard. Context plays a central role in language; for us, grasping context is easy; for machines, not so much. A chatbot’s arsenal is therefore a bag of tricks designed to hide the program’s limitations, such as memorizing megabytes of canned responses and scouring the internet for dialogue relevant to the conversation at hand. Essentially, whatever a machine lacks in intelligence it makes up for in raw computing power. This is why Alexa or Siri seem so smart to us.
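As an illustration of that bag of tricks, here is a minimal keyword-matching chatbot in Python. The canned phrases and deflections are invented for the example; real systems draw on far larger stores of dialogue, but the pattern of matching, replying, and deflecting is the same.

```python
# A minimal canned-response chatbot: match keywords, otherwise deflect.
# The phrases below are invented for illustration only.
CANNED = {
    ("weather", "rain", "sunny"): "I love a sunny day, don't you?",
    ("music", "song", "band"):    "I've had the same song stuck in my head all week.",
    ("work", "job", "office"):    "Work has been hectic lately. How about you?",
}
DEFLECTIONS = ["Interesting, tell me more.", "Why do you say that?", "What makes you ask?"]

def reply(message: str, turn: int) -> str:
    words = set(message.lower().split())
    for keywords, response in CANNED.items():
        if words & set(keywords):                    # any keyword overlap -> canned reply
            return response
    return DEFLECTIONS[turn % len(DEFLECTIONS)]      # nothing matched -> change the subject

print(reply("Do you think it will rain tomorrow?", 0))   # hits the weather response
print(reply("What is the meaning of life?", 1))          # falls back to a deflection
```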

Contributed by Praveena L Ramanujam, PhD (IISER, Pune)