How to use the Turing Test to play pranks on unwitting humans


Jerker Westin and colleagues at the Department of Culture, Media and Computer Science, Högskolan Dalarna, Borlänge, Sweden, have developed a new variant of the now-famous Turing Test.


The Turing Test was the first experimental procedure devised to try to determine whether an Artificial Intelligence (AI) machine can, in fact, "think". The new version of the test, described in the paper 'The April Fool Turing Test' (published in the journal tripleC [Cognition, Communication, Co-operation], Vol. 4, No. 2), addresses the shortcomings of the original test as devised by Alan Turing in 1950, pointing out that:

" … the Turing Test defines human intelligence in terms of a judgement made by human intelligence. This definition is quite obviously circular and it is not perhaps so surprising that it can lead to paradoxes and confusion."


The new paradigm seeks to sidestep the apparent confusion with an ingenious, yet simple, manoeuvre – the removal of the computer-in-the-loop.

"This paper explores certain issues concerning the Turing test; non-termination, asymmetry and the need for a control experiment. A standard diagonalisation argument to show the non-computability of AI is extended to yields [sic] a so called ‘April fool Turing test', which bears some relationship to Wizard of Oz experiments and involves placing several experimental participants in a symmetrical paradox – the ‘April Fool Turing Test'."

The team's experiments involved three participants, each under the impression that they were conversing with a computer; in reality, their Q&A exchanges were with the other two participants. Observing the strictly human three-way discussions, the researchers concluded that " … the results clearly illustrate some of the difficulties [of the original test]." They caution, however:
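The routing at the heart of the setup can be sketched in a few lines (the names, messages, and function below are invented purely for illustration, not taken from the paper): each participant's questions are secretly forwarded to another human, who answers in the machine's place.

```python
def april_fool_round(participants, reply):
    """Route each participant's question to the next participant in a ring.

    Every participant believes they are interrogating a machine; in fact
    each question is answered by another human. `reply` stands in for a
    human response (here, a callable on the responder and question text).
    """
    transcript = {}
    n = len(participants)
    for i, asker in enumerate(participants):
        # The hidden human who answers while posing as the "machine".
        responder = participants[(i + 1) % n]
        question = f"{asker} asks: are you a computer?"
        transcript[asker] = (responder, reply(responder, question))
    return transcript

# Hypothetical three-person ring, as in the team's experiments.
log = april_fool_round(
    ["Alice", "Bob", "Carol"],
    lambda who, q: f"{who} (posing as the machine): no comment",
)
```

The symmetry the paper exploits is visible in the ring structure: every participant is simultaneously a deceived interrogator and an unwitting "machine" for someone else.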

"Just as when the machines are playing, we, the experimenter [sic], can be fooled by the participants just as we are trying to fool them."


The paper can be read in full here.

Ethical note from the researchers.

"For the experiment to work, the participants have to be deceived by the experimenter. Is this justifiable? We answer in the affirmative. Firstly it seems highly unlikely that anybody could be harmed by participating. On the contrary, taking part could actually be an amusing and interesting experience. Secondly, ‘April Fool' jokes are in general not seen as taboo in western society."


This post originally appeared on Improbable Research.




Interesting take on the ethics.

Consider that participants are told they'll communicate with both a human and a machine and must decide which is which. In reality, there is no machine; there is a second human.

It seems a simpler ethical note would observe that people are unlikely to be offended at NOT getting to talk to a machine as promised.