Judging Partner Identity In A Turing Test

In a Turing test, a human participant interacts with two agents through a text interface. One of the agents is human, while the other is an artificial agent. After a period of time, the participant is asked to decide which of the agents is human and which is artificial. Some AI researchers would consider a machine (or software program) to be intelligent if it is indistinguishable from a human agent.

Much work has gone into improving artificial conversational agents so that they can pass the Turing test, but relatively little into the factors a participant uses to determine the agents' identities. The current work describes two such factors.

When interacting with an artificial agent, such as a chatbot, it often becomes clear quite quickly that one is talking to a machine, because the agent does not seem to track the context of the conversation. While individual sentences may be grammatically correct, they can seem ad hoc, irrelevant to the topic at hand, or inconsistent with earlier utterances. For example, an agent that claims to be vegetarian would be unlikely to say later in the conversation that its favorite food is a hamburger. In the current study, we manipulated the context of Turing test transcripts to investigate its effect on perceived human-likeness.
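To make the idea of contextual consistency concrete, here is a minimal sketch of such a check, assuming a toy store of an agent's earlier claims. The fact keys, the contradiction table, and the helper function are hypothetical illustrations, not the study's materials or any real chatbot's implementation.

```python
# Toy consistency check over an agent's earlier claims.
# All names here are hypothetical illustrations.

# Facts the agent has asserted earlier in the conversation.
stated_facts = {"diet": "vegetarian"}

# Claims that would contradict a stored fact.
contradictions = {
    ("diet", "vegetarian"): {"favorite_food": "hamburger"},
}

def is_consistent(new_claim):
    """Return False if the new claim contradicts an earlier assertion."""
    claim_key, claim_value = new_claim
    for (key, value), conflicting in contradictions.items():
        if stated_facts.get(key) == value and conflicting.get(claim_key) == claim_value:
            return False
    return True

print(is_consistent(("favorite_food", "hamburger")))  # False: contradicts "vegetarian"
print(is_consistent(("favorite_food", "salad")))      # True: no recorded conflict
```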

But even when an artificial agent produces grammatically correct sentences, the type of grammar being used could provide clues to its identity. While both humans and animals can learn linear grammar, only humans seem able to learn recursive grammar. Recursive grammar can embed one unit of information within another (e.g. sentences within a sentence) and requires forward and backward shifts of attention. For example, the sentence [The boy the girl kisses laughs] requires the reader to bind the verb [laughs] to its subject [the boy]. In contrast, the linear construction [The girl kisses the boy who laughs] requires no such attention shifts. In this study, we manipulated the grammatical structure of Turing test transcripts into either recursive or linear form to investigate their effect on perceived human-likeness.
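As an illustration, the sketch below assembles the two constructions from the example above. The builder functions are hypothetical and are not part of the study's stimuli; they simply show where the backward binding arises in the recursive form.

```python
# Contrasting recursive (center-embedded) and linear (right-branching)
# constructions. Function names are illustrative only.

def center_embedded(noun, embedded_subject, embedded_verb, main_verb):
    """Recursive form: 'main_verb' must be bound back to 'noun'
    across the embedded clause, requiring a backward attention shift."""
    return f"The {noun} the {embedded_subject} {embedded_verb} {main_verb}"

def right_branching(subject, verb, noun, relative_verb):
    """Linear form: each verb directly follows its subject,
    so no backward attention shift is needed."""
    return f"The {subject} {verb} the {noun} who {relative_verb}"

print(center_embedded("boy", "girl", "kisses", "laughs"))
# -> The boy the girl kisses laughs
print(right_branching("girl", "kisses", "boy", "laughs"))
# -> The girl kisses the boy who laughs
```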

In our first experiment, we presented individual sentences to a group of 53 participants, who were asked to rate each sentence on how human-like or artificial it was, on a scale from 1 to 7. We found that recursive sentences, even though recursion is unique to human language, were considered less human-like than linear sentences (see figure).

[Figure: human-likeness ratings for recursive and linear sentences. Courtesy Roy de Kleijn]
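For readers curious how such a between-condition comparison might be analyzed, here is a minimal sketch assuming independent ratings and a simple two-sample t-test. The rating values below are hypothetical placeholders, and the study's actual data and analysis may differ.

```python
# Hypothetical comparison of 1-7 human-likeness ratings between
# the two sentence conditions; values are placeholders, not study data.
from scipy import stats

recursive_ratings = [3, 4, 2, 3, 4, 3, 2, 4]  # hypothetical
linear_ratings    = [5, 6, 4, 5, 6, 5, 4, 5]  # hypothetical

t, p = stats.ttest_ind(recursive_ratings, linear_ratings)
print(f"mean recursive: {sum(recursive_ratings) / len(recursive_ratings):.2f}")
print(f"mean linear:    {sum(linear_ratings) / len(linear_ratings):.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```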

In a second experiment, we manipulated conversation transcripts from an annual Turing test competition (the Loebner Prize) to show either correct or incorrect use of conversational context. Participants were shown a conversation between a human and another agent and were asked to rate whether this agent was likely to be human or artificial. Surprisingly, we found no effect of context on ratings of humanness. That is, it did not matter whether the agent correctly used information from earlier in the conversation in its later utterances.

In conclusion, the grammatical construction of sentences provides a judge with clues to identity. When an agent uses recursive grammar, it is more likely to be judged artificial, even though the use of recursive grammar is uniquely human. Second, whether or not an agent uses contextual information correctly does not matter for ratings of humanness. This was a surprising finding, but, on the other hand, we do not expect humans to be perfect stores of information; that is more a characteristic of computers. The current study did not allow us to distinguish between storing information correctly and using it correctly. There are likely many other factors that a judge can use to determine the identity of a conversational partner, and we will continue the search for them.

These findings are described in the article entitled "The effect of context-dependent information and sentence constructions on perceived humanness of an agent in a Turing test," recently published in the journal Knowledge-Based Systems.