Judging Partner Identity In A Turing Test

In a Turing test, a human participant interacts with two agents through a text interface. One of the agents is human, while the other is an artificial agent. After a period of time, the participant is asked to decide which of the agents is human and which is artificial. Some AI researchers would consider a machine (or software program) to be intelligent if it is indistinguishable from a human agent.

Much work has been done to improve artificial conversational agents in order to pass the Turing test, but relatively little work has been done to investigate the factors that a participant uses to determine the identity of the agents. The current work describes two such factors.

When interacting with an artificial agent, such as a chatbot, it often becomes clear quite quickly that one is talking to a machine, because the conversation lacks context. While individual sentences may be grammatically correct, they can appear ad hoc, irrelevant to the topic at hand, or inconsistent with earlier utterances. For example, an agent that claims to be a vegetarian would be unlikely to say later in the conversation that its favorite food is a hamburger. In the current study, we manipulated the context of Turing test transcripts to investigate its effect on perceived human-likeness.
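The kind of inconsistency a judge might notice can be sketched as a toy consistency check. This is purely illustrative and not code from the study (which manipulated transcripts by hand); the claim topics and the step of extracting them from utterances are hand-coded assumptions here:

```python
def find_contradictions(claims):
    """Flag later claims that conflict with earlier ones on the same topic.

    claims: list of (topic, value) pairs extracted from a conversation.
    Returns a list of (topic, earlier_value, conflicting_value) tuples.
    """
    seen = {}
    conflicts = []
    for topic, value in claims:
        if topic in seen and seen[topic] != value:
            conflicts.append((topic, seen[topic], value))
        else:
            seen.setdefault(topic, value)
    return conflicts

# An agent that first claims to be vegetarian and later names a hamburger
# as its favorite food would be flagged:
claims = [("diet", "vegetarian"), ("hometown", "Leiden"), ("diet", "eats hamburgers")]
print(find_contradictions(claims))  # [('diet', 'vegetarian', 'eats hamburgers')]
```

In reality, spotting such a contradiction requires world knowledge (that hamburgers contain meat), which is exactly what makes contextual consistency a plausible cue for judging humanness.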

But even when an artificial agent produces grammatically correct sentences, the type of grammar it uses can provide clues to its identity. While both humans and animals can learn linear grammar, only humans seem able to learn recursive grammar. Recursive grammar embeds units of information within other units (e.g., clauses within a sentence) and requires forward and backward shifts of attention. For example, the sentence [The boy the girl kisses laughs] requires the reader to bind the verb [laughs] to its subject [the boy]. In contrast, the linear construction [The girl kisses the boy who laughs] requires no such attention shifts. In this study, we manipulated the grammatical structure of Turing test transcripts into either recursive or linear form to investigate the effect on perceived human-likeness.
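The two constructions can be illustrated with a small generator that builds both forms from the same material. This is an illustrative sketch only, not code or stimuli from the study; the function names and the (noun, verb) encoding are our own:

```python
def center_embedded(pairs):
    """Recursive (center-embedded) form, e.g. 'the boy the girl kisses laughs'.

    pairs: list of (noun, verb) clauses, outermost first. All nouns are
    stacked before any verb appears, so the reader must hold every subject
    in memory and bind the verbs back to them in reverse order.
    """
    nouns = [f"the {noun}" for noun, _ in pairs]
    verbs = [verb for _, verb in reversed(pairs)]
    return " ".join(nouns + verbs)

def right_branching(pairs):
    """Linear form of the same material, e.g. 'the girl kisses the boy who laughs'."""
    noun, verb = pairs[-1]  # the innermost clause comes first
    sentence = f"the {noun} {verb}"
    for noun, verb in reversed(pairs[:-1]):
        sentence += f" the {noun} who {verb}"  # attach each outer clause with 'who'
    return sentence

pairs = [("boy", "laughs"), ("girl", "kisses")]
print(center_embedded(pairs))   # the boy the girl kisses laughs
print(right_branching(pairs))   # the girl kisses the boy who laughs
```

Adding a third (noun, verb) pair deepens the embedding by one level in the recursive form, while the linear form simply grows rightward, which is why center-embedding taxes working memory so much more heavily.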

In our first experiment, we presented individual sentences to a group of 53 participants. They were asked to rate each sentence on how human-like or artificial it was, on a scale from 1 to 7. We found that recursive sentences, even though they are unique to humans, were rated as less human-like than linear sentences (see figure).

Figure courtesy Roy de Kleijn

In a second experiment, we manipulated conversations taken from an annual Turing test competition (the Loebner Prize) to show either correct or incorrect use of conversational context. Participants were shown a conversation between a human and another agent and were asked to rate whether that agent was likely to be human or artificial. Surprisingly, we found no effect of context on ratings of humanness. That is, it did not matter whether the agent correctly used earlier information from the conversation in its later utterances.

In conclusion, the grammatical construction of sentences provides a judge with clues about identity. First, an agent that uses recursive grammar is more likely to be judged artificial, even though the use of recursive grammar is uniquely human. Second, whether or not an agent uses contextual information correctly does not affect ratings of humanness. This was a surprising finding, but, on the other hand, we do not expect humans to be perfect stores of information; indeed, that is more a characteristic of computers. The current study did not allow us to distinguish between storing information correctly and using it correctly. There are likely many other factors a judge can use to determine the identity of a conversational partner, and we will continue the search for them.

These findings are described in the article entitled "The effect of context-dependent information and sentence constructions on perceived humanness of an agent in a Turing test," recently published in the journal Knowledge-Based Systems.
