Imitation Game: Threshold or Watershed?



Eric Neufeld · Sonje Finnestad

Received: 29 April 2020 / Accepted: 1 October 2020
© Springer Nature B.V. 2020

Abstract

Showing remarkable insight into the relationship between language and thought, Alan Turing in 1950 proposed the Imitation Game as a proxy for the question "Can machines think?", and its meaning and practicality have been hotly debated ever since. The Imitation Game has come under criticism within the Computer Science and Artificial Intelligence communities, with leading scientists proposing alternatives, revisions, or even that the Game be abandoned entirely. Yet Turing's imagined conversational fragments between human and machine are rich with complex instances of inference of implied information, reasoning from generalizations, and meta-reasoning: challenges that AI practitioners have wrestled with since at least 1980 and continue to study. We argue that the very difficulty of the Imitation Game may be the reason it should not be changed or abandoned. The semi-decidability of the Game at this point hints at the possibility of a hard limit to the powers of technology.

Keywords: Turing test · Artificial intelligence · Philosophy of mind · Linguistics

Department of Computer Science, University of Saskatchewan, 110 Science Place, Saskatoon S7N 5C9, Canada

1 Introduction

Turing, in a seminal paper (Turing 1950), constructed sample conversational fragments between an interrogator (a rigorous human) and a witness (possibly a digital computer), illustrative of his expectations of machine conversation that might be regarded as human by other humans in conversation with the machine. Here is a snippet:

Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?

Witness: It wouldn't scan.
Interrogator: How about "a winter's day"? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

Turing considered this an example of a machine giving "satisfactory and sustained" responses. He argued that answering the question "Can machines think?" might require definitions of the terms 'machine' and 'think', which to this day have eluded agreed and satisfactory definitions, and he suggested instead the well-known Imitation Game, in which an interrogator decides whether an unseen witness he is communicating with is human or not. Such outcomes could be judged statistically. Some say that a machine passes the Turing Test if (say) 30% of judges consider the machine human after a short period of communication, and s