5 - History of Conversation Systems: From Eliza to Eugene Goostman
- from PART ONE
- Kevin Warwick, Coventry University, Huma Shah, Coventry University
-
- Book:
- Turing's Imitation Game
- Published online:
- 12 October 2016
- Print publication:
- 22 September 2016, pp 69-80
Summary
In the following sections we consider early artificial conversationalists and implementations of experiments to answer Turing's question, 'Can a machine think?'. Hugh Loebner, sponsor of the annual Loebner Prize for Artificial Intelligence, claims to have staged the first Turing test; his first contest took place in 1991. However, more than a decade earlier a computer program simulating a paranoid human confounded psychiatrists: they were not able to distinguish the simulation from a real patient (Heiser et al., 1979).
As we said in the previous chapter, during a practical Turing test the actual goal of the machine is to provide satisfactory and sustained answers to any questions; in the realm of paranoia, PARRY, the program created by Colby et al. (1971, 1972), served this purpose. Christopher Strachey, a contemporary of Turing and a fellow student at King's College, Cambridge, wrote an algorithm in 1952 that generated text intended to express and arouse emotions. It was the first program to produce digital literature (Wardrip-Fruin, 2005). Here's an example of its output:
DARLING SWEETHEART
YOU ARE MY AVID FELLOW FEELING.
MY AFFECTION CURIOUSLY CLINGS TO YOUR
PASSIONATE WISH.
MY LIKING YEARNS FOR YOUR HEART.
YOU ARE MY WISTFUL SYMPATHY: MY TENDER LIKING.
YOURS BEAUTIFULLY.
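Strachey's generator worked by slotting randomly chosen words into fixed sentence templates between a stock salutation and sign-off. A minimal sketch of that approach follows; the vocabulary and templates here are illustrative stand-ins, not Strachey's original word lists:

```python
import random

# Illustrative word lists -- not Strachey's original vocabulary.
ADJECTIVES = ["AVID", "WISTFUL", "TENDER", "PASSIONATE", "CURIOUS"]
NOUNS = ["FELLOW FEELING", "SYMPATHY", "LIKING", "AFFECTION", "WISH"]

# Sentence skeletons with slots for the randomly chosen words.
TEMPLATES = [
    "YOU ARE MY {adj} {noun}.",
    "MY {adj} {noun} CLINGS TO YOUR {adj2} {noun2}.",
    "MY {noun} YEARNS FOR YOUR {noun2}.",
]

def love_letter(rng: random.Random, sentences: int = 4) -> str:
    """Fill randomly chosen templates with randomly chosen words."""
    lines = ["DARLING SWEETHEART"]
    for _ in range(sentences):
        template = rng.choice(TEMPLATES)
        lines.append(template.format(
            adj=rng.choice(ADJECTIVES), adj2=rng.choice(ADJECTIVES),
            noun=rng.choice(NOUNS), noun2=rng.choice(NOUNS)))
    lines.append("YOURS BEAUTIFULLY.")
    return "\n".join(lines)

print(love_letter(random.Random(2016)))
```

Even this toy version shows why the output reads as it does: the grammatical scaffolding is fixed, and only the emotional vocabulary varies.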
The distinguished linguist Noam Chomsky (2008) viewed the linguistic performance of a machine as preferable to other ways of improving machine capacity and studying human intelligence. He considered that the Turing test provided a stimulus for two useful lines of research:
(a) improvement of the capacities of machines;
(b) investigating the intellectual properties of a human.
Chomsky therefore believed the imitation game to be uncontroversial. He accepted Turing's intention as wanting to learn something about living things through the construction of a thinking machine. Moor (2004) contended that Turing's linguistic measure “is not essential to our knowledge about computer thinking … it provides one good format for gathering evidence”, so that if a machine were to succeed against Turing's satisfactory and sustained response criterion “one would certainly have very adequate grounds for inductively inferring that the computer could think”.
4 - The Controversy Surrounding Turing's Imitation Game
- from PART ONE
- pp 56-68
Summary
In Chapter 2 we saw that Turing described a number of scenarios featuring a machine being interrogated by a human. In that pre-Internet era Turing foresaw a way his game could be played across two rooms with the participants, human interrogator and machine, hidden from each other and communicating by typing.
Turing's imitation game is less concerned with actually deceiving a naïve human interrogator and more with a machine's ability to provide satisfactory answers to any questions a human might put to it (see Chapter 2), answers that must also be sustained. So that the machine is not judged on its beauty or its tone of voice, it must be out of sight and hearing of the interrogator, who is tasked with focussing on the answers to any questions they might want to ask; we contend five minutes is adequate for a first impression. Why should machines be able to do this? More and more robots are being developed to interact with humans, such as Hector, a care robot built to look after the elderly isolated in their own homes. Future machines should be able to talk to us just as we talk with others.
Since the publication of Computing Machinery and Intelligence (CMI) (Turing, 1950) and his proposed imitation game for investigating machine thinking through two different tests, both pivoting on linguistic inputs and outputs as criteria for intelligence (Schweizer, 2010), a whole industry has grown. According to Loebner (2010) the correct design for a Turing test involves three participants, in which an interrogator questions a machine and a human in parallel, an interpretation which Loebner claims he realised after implementing thirteen one-to-one tests in his annual Loebner Prize, “the oldest Turing test contest”. Loebner's interpretation overlooks Turing's own description of his imitation game and the two methods for realising it:
(i) a simultaneous comparison of a machine against a human, both questioned by a human interrogator (Turing, 1950); and
(ii) the direct examination of a machine by a human interrogator in a viva voce-type scenario (Turing, 1950; Braithwaite et al., 1952).
Turing's biographer Andrew Hodges tells us Turing was working at Manchester University at the time of CMI's publication in 1950.
PART TWO
- pp 97-98
8 - 2012 Tests – Bletchley Park
- from PART TWO
- pp 128-158
Summary
Between the end of the October 2008 experiment at Reading University and a special event at Bletchley Park in June 2012, an exciting and historic development took place in the continuing man-versus-machine narrative.
IBM once again produced a machine that beat human champions at their own game, following Deep Blue's defeat of Garry Kasparov.
Back in the late 1990s the analysis of Deep Blue's performance was that it used brute force to look ahead millions of chess moves, but lacked intelligence. Recall that Turing (1948) had stated that “research into intelligence of machinery will probably be very greatly concerned with searches”. Is ‘search’ not part of our daily decision-making, even if done in an instant, to decide what the next best move is, no matter what activities we are planning?
In February 2011 the Watson machine, named after IBM's founder Thomas J. Watson, was seen on TV in the US and across the Internet playing a game that involved identifying the correct answer to a clue. In the TV show, IBM presented another ‘super’ machine (see Figure 8.1), the Watson system (Ferrucci et al., 2010). This time, rather than have a machine compete with a human in a chess match, IBM chose a contest featuring natural language: the American general knowledge quiz show Jeopardy! (Baker, 2011).
The IBM team had conceded that this was a formidable challenge:
Understanding natural language, what we humans use to communicate with one another every day, is a notoriously difficult challenge for computers. Language to us is simple and intuitive and ambiguity is often a source of humor and not frustration.
Designing the Watson system around a deep search question–answer strategy, the IBM team were fully aware that:
As we humans process language, we pare down alternatives using our incredible abilities to reason based on our knowledge. We also use any context couching the language to further promote certain understandings. These things allow us to deal with the implicit, highly contextual, ambiguous and often imprecise nature of language.
The machine successfully challenged two Jeopardy! masters, Ken Jennings and Brad Rutter, in a Final Jeopardy! general knowledge, human-versus-machine exhibition contest.
Introduction to Part Two
- from PART TWO
- pp 99-102
Summary
In Part One we cast Turing and the influence of his notions of machine thinking and intelligence in the context of philosophy and cognition. In this part of the book we show what happened when his ideas were put into practice.
Early attempts at building social machines that could interact with humans using everyday language were explored through single-topic systems, beginning with Eliza, the imitative psychotherapist, and PARRY, the paranoia simulation.
Although the ability to imitate human conversation, and the capacity to deceive human interrogators into believing they are interacting with another human, seem central to Turing's game, the whole performance of the test depends on the individual judgement of satisfactory responses by panels of interrogators. One person's idea of a satisfactory response to a question or statement might seem inadequate or even evasive in another person's estimation. Perhaps, as Turing felt, the journey towards building a machine to take part and succeed in his imitation game would teach humans a great deal about themselves and how they ‘tick’.
While our research is ongoing, in the following chapters we present results from three public experiments investigating how difficult (or easy) it is to distinguish artificial conversation from natural discourse, what it means to be human in terms of using language, and whether machines are now able to answer questions in a satisfactory and sustained manner.
Between the first (2008) and second (2012) Turing test experiments that we conducted, IBM achieved another momentous machine-versus-human moment. Following Deep Blue's 1997 victory over chess grandmaster and former world champion Garry Kasparov, the first time a computer had beaten him over a series, in 2011 IBM's Watson machine beat two human champions on the American general knowledge quiz show Jeopardy!.
Watson's team were invited to take part in the 2012 Turing test experiment but they declined. Disappointing as it was, their decision was not a surprise. IBM's latest supercomputer was not programmed to hold conversations; it was a reverse question–answer system sifting through massive amounts of data to find the most probable response (see Stephen Baker's book Final Jeopardy: Man vs. Machine and the Quest to Know Everything, 2011).
2 - Turing's Ideas on Machine Thinking and Intelligence
- from PART ONE
- pp 23-40
Summary
In the previous chapter we learnt a little about Turing. Now we explain his ideas on intelligent machinery. Turing's investigation into the question of whether it was possible for machines to show intelligent behaviour led him to propose one of the most controversial tests for examining a machine's performance through a question-and-answer imitation game.
Human computers
Turing's 1936 paper On Computable Numbers, with an Application to the Entscheidungsproblem revolutionised the way in which modern computer science was considered and his later papers formed the basis for the digital computing era. The readers are encouraged to view this paper, which first appeared in the Proceedings of the London Mathematical Society and is included along with other works in Cooper and van Leeuwen (2013). Hodges (1992) provides an excellent and less technical explanation.
At the time of its publication it was humans who computed, carrying out calculations with pencil and paper (Bush, 1945), so it was humans who were known as computers, employed in all sorts of industries including government and business (Copeland, 2004). Computation was done by humans using their agency to write meaningful symbols on paper. Turing believed this work could be managed by a machine; he declared that it was “possible to invent a single machine which can be used to compute any computable sequence” (Turing, 1936). The machines of that period mechanised elements of the human computer's work, such as adding and subtracting, but did so more quickly. The use of the term digital distinguished the machine from the human. Turing's idea for a universal machine emerged in this paper. It is worth noting here comments from Turing's 1938 Princeton University doctoral dissertation: that mathematical reasoning involves both intuition and ingenuity, the former allowing spontaneous judgements that are not the result of conscious trains of reasoning, the latter supplying suitable arrangements of propositions, geometric figures and drawings; the role each plays differs from occasion to occasion. In this 1936 paper Turing proposed ideas that would be contentious and remain so in the century following his untimely death.
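Turing's claim that a single machine can compute any computable sequence rests on the machine-table formalism of the 1936 paper. As a toy illustration, here is a simulator in the spirit of Turing's first example machine, which prints the alternating sequence 0101…; the state names and dictionary encoding are ours, not Turing's notation:

```python
def run(steps: int) -> str:
    """Run a two-state machine table that writes 0 and 1 alternately,
    moving right one square each step (a simplified form of the first
    example machine in Turing's 1936 paper)."""
    tape: dict[int, str] = {}
    pos, state = 0, "b"
    # state -> (symbol to write, head movement, next state)
    table = {"b": ("0", +1, "c"), "c": ("1", +1, "b")}
    for _ in range(steps):
        symbol, move, state = table[state]
        tape[pos] = symbol
        pos += move
    return "".join(tape[i] for i in sorted(tape))

print(run(6))  # → 010101
```

A universal machine, in Turing's sense, is one whose table can read another machine's table from the tape and simulate it; the simulator above hard-codes a single table for clarity.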
9 - Interviews with Elite Machine Developers
- from PART TWO
- pp 159-170
Summary
In 2014 five of the machine developers who took part in the 2012 Bletchley Park Turing test experiment were again invited to take part in a further experiment at The Royal Society in London (see Chapter 10). These were:
• Rollo Carpenter, developer of Cleverbot
• Fred Roberts, developer of Elbot
• Robby Garner, developer of JFRED/TuringHub
• Robert Medeksza, developer of Ultra Hal
• Vladimir Veselov, lead developer of Eugene Goostman.
All of these except Robby Garner had also taken part in the 2008 Reading University Turing test experiment (see Chapter 7). In addition, all the developers were asked about their views on conversation systems and their opinion as to whether the Turing test was relevant to them. Veselov, who led the team behind Eugene Goostman, was not able to provide details: the system and its technology are owned by iFree, a Russian company.
This chapter recounts those views, which we think give the reader a flavour of each machine's conversation style. We begin by presenting questions put to four of the developers, with their answers, which together provide some insight into approaches to building dialogue agents. We then provide brief commentary on the five-minute exchanges between Shah and the 2014 online versions of the five systems. Where the website recorded them, the exchanges are presented.
Developer questions
During the preparations for the 2014 Turing tests at the Royal Society, each of the five invited developers was asked, amongst other things:
(1) To explain their background and their views on the Turing test.
(2) If and how they negotiate ‘Winograd schemas’, i.e. disambiguating ambiguous input from users who interact with the systems (Winograd, 1970; Levesque, 2011).
(3) How the system deals with linguistic and cultural differences, for example different varieties of English.
(4) To describe the abuse their systems have received via the web.
Developer responses
Ultra Hal – Robert Medeksza
I am president of Zabaware, Inc. and creator of the Ultra Hal chatbot. Ultra Hal is a learning bot that learns from past conversations. Every conversation it has had is recorded in a large database (currently over 21 million sentences) and it does a statistical analysis to find similar conversational threads in order to come up with responses.
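The retrieval approach Medeksza describes, matching new input against stored conversational threads, can be sketched with a simple word-overlap measure; Ultra Hal's actual statistical analysis is proprietary, so the similarity function and sample data below are only illustrative stand-ins:

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def respond(user_input: str, history: list[tuple[str, str]]) -> str:
    """Reply with the stored response whose recorded prompt
    best matches the new input."""
    _, best_response = max(
        history, key=lambda pair: similarity(user_input, pair[0]))
    return best_response

# A toy stand-in for the multi-million-sentence database.
history = [
    ("hello there", "Hi! How are you today?"),
    ("what is your name", "People call me Hal."),
    ("do you like chess", "I prefer conversation to chess."),
]
print(respond("hello", history))  # → Hi! How are you today?
```

The design choice is the essence of a learning bot of this kind: coverage grows with every conversation logged, while the matching machinery stays fixed.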
Index
- pp 194-195
Contents
- pp v-viii
7 - The 2008 Reading University Turing Tests
- from PART TWO
- pp 103-127
Summary
The 18th Loebner Prize, hosted at Reading University in 2008, began with an experiment using one-to-one text-based interrogation of thirteen machines. The six machines with conversational ability scores of 50 or over were invited to participate in simultaneous comparison tests; that is, each machine would be interrogated simultaneously with another hidden entity.
Five of the invited systems accepted and participated in the simultaneous comparison phase, with the best machine to be awarded the Bronze Medal of the Loebner Prize. Hugh Loebner allowed the authors to design the experiment around simultaneous tests featuring five machines, Elbot, Eugene Goostman, Brother Jerome, Jabberwacky and Ultra Hal.
Unknown to the judges, control pairs of two humans were embedded among the machine set-ups. This was to make sure the judges were paying attention to the utterances and not returning a result of one machine and one human each time without proper consideration.
Because one of the six invited entries could not attend the experiment, it was decided to use that entry's human comparisons for a control pair of machine–machine tests. The Elbot and Eugene Goostman machines received the most scores of ‘50 or over’ in the preliminary online one-to-one phase. Their developers agreed that their entries could participate in the machine control pair tests. This was the first time in Turing test implementations that control pairs of two humans and two machines had been embedded among machine–human pairs hidden from the interrogators.
The experimental set-ups
A panel of 24 judges, or two juries, was convened for the experiment. The rationale for this was:
(a) to evaluate Turing's five-minute simultaneous-comparison imitation game;
(b) to test the hypothesis that a five-minute interrogation giving a thin slice of conversation is sufficient for differentiating the machine and the human;
(c) to test the hypothesis that, without being explicitly told of machine–human and control pairs of human–human and machine–machine, an interrogator's gut reaction would correctly identify the nature of each hidden interlocutor.
This experiment drew on first-impressions observations (Willis and Todorov, 2006) and thin-slice experiments (Albrechtsen et al., 2009).
1 - Turing the Man
- pp 11-20
Summary
It is not our intention in this book to give a detailed life history and background to Alan Turing in all its complexity with the many issues that it raised. This is done very well elsewhere, for example in Andrew Hodges’ excellent biography of Turing (Hodges, 1992). What this book is about is Turing's imitation game, pure and simple.
However the game, which came into existence in the late 1940s and was fine tuned in the early 1950s, continues to spark a plethora of misunderstandings, arguments and much controversy particularly with regard to its philosophical context. As a result we feel that it is well worthwhile taking a look at some aspects of Turing's life so that we can get to grips with the game a little better.
In order for us to comprehend the particular phenomenon that is the imitation game, what we try to do in this chapter is to understand more about the actual person and what he was like. We do this by drawing on some of his lectures, on some of the biographies about him and through the comments of people that knew him. Hopefully with this background we will be able to get to grips with what the imitation game is really all about.
Bletchley Park
During the Second World War, Turing worked at Bletchley Park, the UK's main decryption centre, which subsequently became famous as the place where the code for the German Enigma machine was cracked. Some of the UK's leading mathematicians were brought together in one place and Turing was one of them.
Turing had realized early on that a large part of deciphering amounted to performing a whole series of manipulations and calculations in an automated fashion and that this could be achieved much better by a machine than by a human, partly because the machine could keep working on the problem without sleep and partly because it didn't make mistakes. Along with Gordon Welchman (see Davies, 1999), who like Turing had previously been at Cambridge University, he redesigned a Polish electromechanical cipher-breaking machine and, in 1940, they called their device the Bombe. In fact it consisted largely of rows and rows of uniselectors which were the standard telecommunications technology at the time for telephone routing.
6 - Matters Arising from Early Turing Tests
- from PART ONE
- pp 81-96
Summary
As we already mentioned, realising Turing's tests is, in the opinion of Hayes and Ford (1995), harmful to the science of AI. We contest this position: avoiding difficult goals or appeasing the sceptical would be a dereliction of the duty of science, which should pursue innovation and advance technology for the benefit of humanity.
If realising Turing's two tests of imitation, deception and intelligence can help us ascertain what does and does not fool people, thus improving deception detection, then this cannot be contrary to the goals of good science, especially as many researchers (including Block, Pinker and Shieber) have pointed out, and others (Colby et al., Heiser et al., Weizenbaum) have demonstrated through experiments, that some intelligent humans are gullible.
The current climate of increasing cybercrime sees opportunists turning to innovative means of defrauding people – stealing their identity, swindling funds – including text-based chatting across the Internet. So now is a very good time to engineer virtuous artificial conversationalists to counter attacks from malware such as CyberLover. In this chapter we look at some of the common arguments over the Turing test and early Turing test implementations, considering the key questions of duration, knowledge, memory and cultural bias. We begin by asking what, if anything, is actually being measured.
What is being measured?
Is it intelligence, or a type of human intelligence, that is being measured in a Turing test? Turing (1950) believed a sustained level of answering any questions was sufficient to assess a machine's performance in thinking at a satisfactory level. But what then is thinking? To Moor (2004) it is information processing in ways which involve recognition, imagination, evaluation and decision. For Baum (2004), thought is a matter of semantics: capturing and exploiting the compact structure of the world. Demchenko and Veselov (2008) ask if the proven ability to think shortens the distance between machines and humankind.
For a machine to succeed at providing sustained satisfactory responses in an imitation game, these comments imply that it would need to process information with the sophistication of a normal, living adult human being; that is, the machine must be a consummate actor.
Frontmatter
- pp i-iv
3 - A Brief Introduction to Artificial Intelligence
- from PART ONE
- pp 41-55
Summary
At this stage in the book we take a break from looking at Alan Turing himself and the imitation game and consider the wider field of artificial intelligence (AI). Whilst the game itself has proved to be arguably one of the most iconic and controversial aspects of AI, it is useful, we feel, to assess just how the game fits into the field and perhaps to give some sort of understanding as to why it is so important. We also take a look at such things as natural language processing but we avoid heavy mathematics. Anyone who is already well versed in AI may well wish to move straight to Chapter 4.
Alan Turing is frequently referred to as the father of artificial intelligence. He was around at the dawn of the computer age and was himself directly involved in early computer systems such as the Bombe, which he designed, and the Colossus, on which his work was used. The field of AI itself however was, some claim, first so named after Turing's death, around 1956 (Russell and Norvig, 2012) although in general it could be said to have come into existence as the first computers appeared in the 1940s and 1950s.
In AI's formative years attention was focussed mostly on getting computers to do things that, if done by a human, would be regarded as intelligent acts. Essentially it was very human-centred. When Turing proposed his imitation game in 1950, it was perfectly timed to be grabbed hungrily by the young and burgeoning soon-to-become AI community, particularly those interested in the philosophical aspects of the new field. As was shown in the previous chapter, even mainstream radio broadcasting was not scared to encompass the topic.
The game and AI
Turing wanted to come up with a realisable concept of intelligence in machines. Rather than give a long list of definitions, many of which would be controversial, or to construct a series of mathematical statements, most of which would be impracticable, he put the human at the centre and used a form of science involving actual experimentation to confirm the hypothesis.
Introduction
- pp 1-10
Summary
Turing's imitation game, also commonly known as the Turing test, is undoubtedly a key component in any study of artificial intelligence or computer science. But it is much more than this as it also provides insight into how humans communicate, our unconscious biases and prejudices, and even our gullibility. The imitation game helps us to understand why we make assumptions, which often turn out to be incorrect, about someone (or something) with whom we are communicating and perhaps it helps to shed light on why we sometimes make seemingly irrational conclusions about them.
In the chapters ahead we'll look at the game in much more detail; however in essence it involves a direct conversational comparison between a human and a machine. Basically, the goal of the machine is to make you believe that it is in fact the human taking part, whereas the human involved is merely being themselves. Both the human and the machine are hidden, so cannot be seen or heard. The conversation is purely textual with slang, idiom, spelling mistakes, poor grammar and factual errors all being part of the mix.
If you put yourself in the role of an interrogator in the parallel test then it is your job to converse with both a hidden human and a hidden machine at the same time and, after a five-minute period as stipulated by Alan Turing himself, to decide which hidden entity is which. If you make the right identification then that's a point against the machine whereas if you make a mistake or you simply don't know which is which then that's a point for the machine. If a machine is successful in fooling enough average interrogators (one interpretation of Turing's 1950 work is 30%), then it can be said to have passed the Turing test.
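The scoring rule just described can be written down directly; the verdict labels below are ours, and the 30% threshold is, as noted, only one interpretation of Turing's 1950 prediction:

```python
def machine_pass_rate(verdicts: list[str]) -> float:
    """Fraction of interrogations the machine 'won': a verdict of
    'wrong' (the judge picked the machine as the human) or 'unsure'
    scores for the machine; 'right' scores against it."""
    wins = sum(v in ("wrong", "unsure") for v in verdicts)
    return wins / len(verdicts)

# Ten judges: three misidentified the machine, one could not decide.
verdicts = ["wrong"] * 3 + ["unsure"] + ["right"] * 6
rate = machine_pass_rate(verdicts)
print(f"{rate:.0%} passed: {rate > 0.30}")  # → 40% passed: True
```

Note that the rule is asymmetric by design: a judge's indecision counts in the machine's favour, since failing to identify the machine is itself a failure of detection.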
Restricting the topic to a specific subject area actually makes it somewhat easier for the machine, because it can direct the interrogation to its knowledge base. However, Turing advocated that the machine be investigated for its intellectual capacity. Thus we should not restrict the topic of conversation at all, which we believe is an appropriate challenge for machines of today, much more interesting for the interrogator, and in the spirit of the game as (we feel) Turing intended.
11 - The Reaction to Turing2014
- from PART TWO
- pp 187-193
Summary
In this book we presented Alan Turing the man before taking the reader on a journey through the prescient beliefs of this mathematical genius, WWII code-breaker and all-round polymath. From his earliest works and notions about thinking machines, to implementations of his thought experiment about a machine's ability to answer any questions put to it by a human interrogator, you can see that Turing's ideas are as relevant today as when he originally described them.
Consider the World Economic Forum (WEF) annual gathering of world leaders in Davos, in 2016. The Fourth Industrial Revolution was one of the eight themes of the conference with technology at the forefront of discussions. These included ‘The Transformation of Tomorrow’, ‘What if robots go to war?’ and the ‘State of artificial intelligence’. Nearly 70 years before this, in his 1948 report, Intelligent Machinery, Turing first proposed his test.
As we have seen, machines can now converse, improving on the Eliza program's technique of transforming an input message into an output question, thereby getting human interlocutors to talk about themselves (see Chapter 9). Of course there remain sophistications in human language that need mastering in machine talk; for example, creating metaphors and analogies to explain unexpected or unusual occurrences through similar or common experiences. This will take time to develop, especially to understand more fully how humans do it.
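The Eliza technique of turning an input message into an output question can be sketched with a small pronoun-reflection table and a single question template; the real Eliza used a much larger script of keyword patterns and reflections, so this is only a minimal stand-in:

```python
# A tiny subset of first-to-second-person reflections.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza_reply(statement: str) -> str:
    """Reflect the speaker's pronouns and bounce the statement
    back as a question, Eliza-style."""
    words = statement.lower().rstrip(".!?").split()
    reflected = " ".join(REFLECT.get(w, w) for w in words)
    return f"Why do you say {reflected}?"

print(eliza_reply("I am unhappy with my work."))
# → Why do you say you are unhappy with your work?
```

The trick turns the interlocutor's own words into a prompt about themselves, which is why Eliza could sustain conversation without any model of meaning.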
Do we feel then that the three practical Turing test experiments, especially the 2014 event, realised the kind of thinking machine Turing might have envisaged in 1948? Of course not … yet. A stronger test would last longer than five minutes, and a machine would need to convince more than half a panel of interrogator–judges that it was human, as Turing felt in 1952, two years after his prediction in the Mind paper.
The three experiments, in 2008, 2012 and 2014, were a start in seriously examining Turing's ideas for a thinking machine. An unpredicted backlash followed the announcement of the machine Eugene Goostman's performance in the 2014 Turing test experiment.
Just as Turing had predicted more than half a century before, academics like Stevan Harnad were unimpressed. Harnad tweeted his doubt that the test had actually been passed. Criticism also came from the psychologist and linguist Gary Marcus.
Turing's Imitation Game
- Conversations with the Unknown
- Kevin Warwick, Huma Shah
Can you tell the difference between talking to a human and talking to a machine? Or, is it possible to create a machine which is able to converse like a human? In fact, what is it that even makes us human? Turing's Imitation Game, commonly known as the Turing Test, is fundamental to the science of artificial intelligence. Involving an interrogator conversing with hidden identities, both human and machine, the test strikes at the heart of any questions about the capacity of machines to behave as humans. While this subject area has shifted dramatically in the last few years, this book offers an up-to-date assessment of Turing's Imitation Game, its history, context and implications, all illustrated with practical Turing tests. The contemporary relevance of this topic and the strong emphasis on example transcripts make this book an ideal companion for undergraduate courses in artificial intelligence, engineering or computer science.
PART ONE
- pp 21-22
10 - Turing2014: Tests at The Royal Society, June 2014
- from PART TWO
- pp 171-186
Summary
Turing's imitation experiment can be regarded as:
(a) A game for judges: to avoid being persuaded by a machine that it is human;
(b) A game for the machines: to persuade the judges that it is the human;
(c) A game for hidden humans: to be human, with all the constraints about not revealing personal identity;
(d) A game for the observer: to study and compare the results.
It is worth pointing out here that turning Turing's idea into a practical means of examining machine thinking is fraught with problems, for example:
(a) matching the machines with ‘like’ humans for simultaneous comparison or in a viva voce experiment;
(b) lack of resources for conducting psychological tests, e.g. of the personality and typing speed of the judges and hidden humans.
In the case of Eugene Goostman, a machine which simulates an English-speaking young teen from Odessa, Ukraine, it is not difficult to pair the machine with a human teenager, the pair being interrogated by a teenage judge. However, what is difficult is recruiting the exact match: a teenager from Ukraine who speaks English.
At the other end of the machine conversation spectrum, Elbot is a machine with a robot personality; it would be futile to recruit a human to act like a robot, because the Turing test is concerned with the hidden entities providing satisfactory and sustained answers to any questions.
Testing the personality of the human interrogator judges and the hidden humans, or checking characteristics such as typing speed, would add an interesting dimension to the experiment.
A Turing test is a scientific experiment in that a set of conditions – the duration of tests, the number and nature of participants, human or machine – can be put in place for observation and measurement, and the experiment can be repeated. Adding new features, therefore, would also mean new challenges.
In Chapter 8 we presented the Turing test experiment at Bletchley Park in 2012. That was concerned mainly with finding which of Turing's two scenarios for implementing his imitation game – the one-to-one viva voce or the simultaneous comparison – was harder for the machine when trying to convince the human judges that it was a human.