Schulte-Geers and Stadje [Journal of Applied Probability, 2015, 52: 180–190] gave several closed-form expressions for the exact distribution of the all-time maximal percentage in Pólya's urn model, but all of these expressions correspond to an integer parameter taking the value 1. Here, we derive much more general closed-form expressions applicable for all possible values of the integer parameter, and we illustrate their computational efficiency.
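For readers who want a feel for the quantity involved, the following is a minimal Monte Carlo sketch, not the paper's closed-form expressions: it assumes the standard urn scheme (one ball of each colour initially, one ball of the drawn colour added per step) and a finite horizon standing in for 'all time'.

```python
import random

def max_fraction(steps=5_000, white=1, black=1):
    """Simulate a standard Polya urn (draw a ball, return it plus one of
    the same colour) and track the maximal fraction of white balls seen
    over the run -- a finite-horizon proxy for the all-time maximum."""
    best = white / (white + black)
    for _ in range(steps):
        if random.random() < white / (white + black):
            white += 1
        else:
            black += 1
        best = max(best, white / (white + black))
    return best

# Estimate P(all-time maximal percentage <= 0.8) by Monte Carlo.
runs = 1_000
hits = sum(max_fraction() <= 0.8 for _ in range(runs))
print(f"P(max fraction <= 0.8) ~ {hits / runs:.3f}")
```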
It is not our intention in this book to give a detailed life history of and background to Alan Turing in all its complexity, with the many issues that it raised. This is done very well elsewhere, for example in Andrew Hodges’ excellent biography of Turing (Hodges, 1992). What this book is about is Turing's imitation game, pure and simple.
However, the game, which came into existence in the late 1940s and was fine-tuned in the early 1950s, continues to spark a plethora of misunderstandings, arguments and much controversy, particularly with regard to its philosophical context. As a result we feel that it is well worthwhile taking a look at some aspects of Turing's life so that we can get to grips with the game a little better.
In order to comprehend the particular phenomenon that is the imitation game, in this chapter we try to understand more about the actual person and what he was like. We do this by drawing on some of his lectures, on some of the biographies about him and on the comments of people who knew him. Hopefully, with this background, we will be able to get to grips with what the imitation game is really all about.
Bletchley Park
During the Second World War, Turing worked at Bletchley Park, the UK's main decryption centre, which subsequently became famous as the place where the code for the German Enigma machine was cracked. Some of the UK's leading mathematicians were brought together in one place, and Turing was one of them.
Turing had realised early on that a large part of deciphering amounted to performing a whole series of manipulations and calculations in an automated fashion, and that this could be achieved much better by a machine than by a human: partly because the machine could keep working on the problem without sleep, and partly because it didn't make mistakes. Along with Gordon Welchman (see Davies, 1999), who like Turing had previously been at Cambridge University, he redesigned a Polish electromechanical cipher-breaking machine; in 1940 they called their device the Bombe. In fact it consisted largely of rows and rows of uniselectors, which were the standard telecommunications technology of the time for telephone routing.
As we already mentioned, to realise Turing's tests is, in the opinion of Hayes and Ford (1995), harmful to the science of AI. We contest this position and feel it is a dereliction of the duty of science, whose remit should not be to avoid difficult goals or to appease the sceptical; science should pursue innovation and advance technology for the benefit of humanity.
If realising Turing's two tests of imitation, deception and intelligence can help us ascertain what does and does not fool people, thus improving deception detection, then this cannot be contrary to the goals of good science, especially as many researchers (including Block, Pinker and Shieber) have pointed out, and others (Colby et al., Heiser et al., Weizenbaum) have demonstrated through experiments, that some intelligent humans are gullible.
The current climate of increasing cybercrime sees opportunists turning to innovative means of defrauding people – stealing their identity, swindling funds – including using text-based chatting across the Internet. So now is a very good time to engineer virtuous artificial conversationalists to counter the attack from malware such as CyberLover. In this chapter we look at some of the common arguments over the Turing test and early Turing test implementations, considering the key questions of duration, knowledge, memory and cultural bias. We begin by asking what, if anything, is actually being measured.
What is being measured?
Is it intelligence, or a type of human intelligence, that is being measured in a Turing test? Turing (1950) believed that a sustained level of answering any questions was sufficient to assess a machine's performance in thinking at a satisfactory level. But what then is thinking? To Moor (2004) it is information processing in ways which involve recognition, imagination, evaluation and decision. For Baum (2004), semantics is the concern of thought, equivalent to capturing and exploiting the compact structure of the world. Demchenko and Veselov (2008) ask if the proven ability to think shortens the distance between machines and humankind.
These comments imply that, for a machine to succeed at providing sustained satisfactory responses in an imitation game, it would necessarily be able to process information with the sophistication of a normal, living adult human being; that is, the machine must be a consummate actor.
We investigate the expressiveness of two classical distributed paradigms by defining the first encoding of the pure mobile ambient calculus into the synchronous π-calculus. Our encoding, whose correctness has been proved by relying on the notion of operational correspondence, shows how the hierarchical ambient structure can be reformulated within a flat channel interconnection amongst independent processes, without centralised control. To handle the computation for simulating a capability easily, we introduce the notions of simulating trace (representing the computation that a π-calculus process has to execute to mimic a capability) and of aborting trace (representing the computation that a π-calculus process executes when the simulation of a capability cannot succeed). Thus the encoding may introduce loops but, as will be shown, the number of steps of any trace, and therefore of any aborting trace, is bounded, and the number of states of the transition system of the encoding processes remains finite. In particular, an aborting trace performs a sort of backtracking, leaving the involved sub-processes in their starting configurations. We also discuss two run-time support methods to make these loops harmless at execution time. Our work defines a relatively simple, direct and precise translation that reproduces the ambient structure by means of channel links and keeps track of the dissolving of an ambient.
Effect systems have the potential to help software developers, but their practical adoption has been very limited. We conjecture that this limited adoption is due in part to the difficulty of transitioning from a system where effects are implicit and unrestricted to a system with a static effect discipline, which must settle for conservative checking in order to be decidable. To address this hindrance, we develop a theory of gradual effect checking, which makes it possible to incrementally annotate and statically check effects, while still rejecting statically inconsistent programs. We extend the generic type-and-effect framework of Marino and Millstein with a notion of unknown effects, which turns out to be significantly more subtle than unknown types in traditional gradual typing. We appeal to abstract interpretation to develop and validate the concepts of gradual effect checking. We also demonstrate how an effect system formulated in the framework of Marino and Millstein can be automatically extended to support gradual checking. We use gradual effect checking to develop a fully gradual type-and-effect framework, which permits interaction between static and dynamic checking for both effects and types.
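As a flavour of the idea, here is a toy Python sketch of the central mechanism: an unknown effect that the static check accepts optimistically, with enforcement deferred to run time. The names (UNKNOWN, statically_ok, runtime_check) are our hypothetical illustration; the paper's formal treatment, as noted, is considerably more subtle.

```python
# UNKNOWN stands in for the unknown effect of a gradual effect system.
UNKNOWN = object()

def statically_ok(declared, used):
    """Consistency check: concrete effect sets are compared by inclusion,
    while UNKNOWN on either side is accepted optimistically, with
    enforcement deferred to run time (the essence of gradual checking)."""
    if declared is UNKNOWN or used is UNKNOWN:
        return True
    return used <= declared

def runtime_check(granted, used):
    """Residual dynamic check inserted wherever UNKNOWN was involved."""
    if not used <= granted:
        raise RuntimeError(f"effect violation: {used - granted}")

print(statically_ok({"io"}, {"io"}))    # True: statically safe
print(statically_ok({"io"}, UNKNOWN))   # True: optimistic, checked later
try:
    runtime_check({"io"}, {"io", "alloc"})  # the deferred check fires
except RuntimeError as e:
    print(e)
```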
At this stage in the book we take a break from looking at Alan Turing himself and the imitation game and consider the wider field of artificial intelligence (AI). Whilst the game itself has proved to be arguably one of the most iconic and controversial aspects of AI, it is useful, we feel, to assess just how the game fits into the field and perhaps to give some sort of understanding as to why it is so important. We also take a look at such things as natural language processing, but we avoid heavy mathematics. Anyone who is already well versed in AI may well wish to move straight to Chapter 4.
Alan Turing is frequently referred to as the father of artificial intelligence. He was around at the dawn of the computer age and was himself directly involved in early computer systems such as the Bombe, which he designed, and the Colossus, on which his work was used. The field of AI itself, however, was, some claim, first so named around 1956, after Turing's death (Russell and Norvig, 2012), although in general it could be said to have come into existence as the first computers appeared in the 1940s and 1950s.
In AI's formative years attention was focussed mostly on getting computers to do things that, if done by a human, would be regarded as intelligent acts; essentially it was very human-centred. When Turing proposed his imitation game in 1950, it was perfectly timed to be grabbed hungrily by the young and burgeoning community that would soon become AI, particularly those interested in the philosophical aspects of the new field. As was shown in the previous chapter, even mainstream radio broadcasting was not scared to encompass the topic.
The game and AI
Turing wanted to come up with a realisable concept of intelligence in machines. Rather than give a long list of definitions, many of which would be controversial, or construct a series of mathematical statements, most of which would be impracticable, he put the human at the centre and used a form of science involving actual experimentation to confirm the hypothesis.
Turing's imitation game, also commonly known as the Turing test, is undoubtedly a key component in any study of artificial intelligence or computer science. But it is much more than this, as it also provides insight into how humans communicate, our unconscious biases and prejudices, and even our gullibility. The imitation game helps us to understand why we make assumptions, which often turn out to be incorrect, about someone (or something) with whom we are communicating, and perhaps it helps to shed light on why we sometimes draw seemingly irrational conclusions about them.
In the chapters ahead we'll look at the game in much more detail; however in essence it involves a direct conversational comparison between a human and a machine. Basically, the goal of the machine is to make you believe that it is in fact the human taking part, whereas the human involved is merely being themselves. Both the human and the machine are hidden, so cannot be seen or heard. The conversation is purely textual with slang, idiom, spelling mistakes, poor grammar and factual errors all being part of the mix.
If you put yourself in the role of an interrogator in the parallel test, then it is your job to converse with both a hidden human and a hidden machine at the same time and, after a five-minute period as stipulated by Alan Turing himself, to decide which hidden entity is which. If you make the right identification, that's a point against the machine, whereas if you make a mistake, or you simply don't know which is which, that's a point for the machine. If a machine is successful in fooling enough average interrogators (one interpretation of Turing's 1950 work is 30%), then it can be said to have passed the Turing test.
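To make the scoring rule concrete, here is a minimal Python sketch of how a set of parallel-test verdicts might be tallied under this interpretation; the function and verdict labels are our own illustrative choices, not part of any official protocol.

```python
def machine_score(verdicts):
    """Score a set of parallel-test sessions.
    Each verdict is 'right' (interrogator identified the machine),
    'wrong' (mistook the machine for the human) or 'unsure'.
    A wrong or unsure verdict counts for the machine, per the rule above."""
    points = sum(v in ('wrong', 'unsure') for v in verdicts)
    return points / len(verdicts)

verdicts = ['right', 'wrong', 'unsure', 'right', 'wrong',
            'right', 'unsure', 'right', 'wrong', 'right']
rate = machine_score(verdicts)
# One reading of Turing (1950): the machine 'passes' above 30%.
print(f"machine scored {rate:.0%} -> {'pass' if rate > 0.30 else 'fail'}")
```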
Restricting the topic to a specific subject area actually makes things somewhat easier for the machine, because it can direct the interrogation towards its knowledge base. However, Turing advocated that the machine be investigated for its intellectual capacity. Thus we should not restrict the topic of conversation at all, which we believe is an appropriate challenge for the machines of today, is much more interesting for the interrogator, and is in the spirit of the game as (we feel) Turing intended.
In this book we presented Alan Turing the man, before taking the reader on a journey through the prescient beliefs of this mathematical genius, WWII code-breaker and all-round polymath. From his earliest works and notions about thinking machines, to implementations of his thought experiment about a machine's ability to answer any questions put to it by a human interrogator, you can see how Turing's ideas are as relevant today as when he originally described them.
Consider the World Economic Forum (WEF) annual gathering of world leaders in Davos in 2016. The Fourth Industrial Revolution was one of the eight themes of the conference, with technology at the forefront of discussions; these included ‘The Transformation of Tomorrow’, ‘What if robots go to war?’ and the ‘State of artificial intelligence’. Nearly 70 years before this, in his 1948 report Intelligent Machinery, Turing first proposed his test.
As we have seen, machines can now converse, improving on the Eliza program's technique of transforming an input message into an output question, thereby getting human interlocutors to talk about themselves (see Chapter 9). Of course there remain sophistications in human language that need mastering in machine talk; for example, creating metaphors and analogies to explain unexpected or unusual occurrences through similar or common experiences. This will take time to develop, especially to understand more fully how humans do it.
Do we feel then that the three practical Turing test experiments, especially the 2014 event, realised the kind of thinking machine Turing might have envisaged in 1948? Of course not, … yet. A stronger test would last for longer than five minutes and a machine would need to convince more than half a panel of interrogator–judges that it was human, as Turing felt in 1952, two years after his prediction in the Mind paper.
The three experiments, in 2008, 2012 and 2014, were a start in seriously examining Turing's ideas for a thinking machine. An unpredicted backlash followed the announcement of the machine Eugene Goostman's performance in the 2014 Turing test experiment.
Just as Turing had predicted more than half a century before, academics such as Stevan Harnad were unimpressed. Harnad tweeted his doubt that the test had actually been passed. Criticism also came from the psychologist and linguist Gary Marcus.
Turing's imitation experiment can be regarded as:
(a) a game for the judges: to avoid being persuaded by a machine that it is human;
(b) a game for the machines: to persuade the judges that they are human;
(c) a game for the hidden humans: to be human, within the constraints of not revealing personal identity;
(d) a game for the observer: to study and compare the results.
It is worth pointing out here that turning Turing's idea into a practical means of examining machine thinking is fraught with problems, for example:
(a) matching the machines with ‘like’ humans for simultaneous comparison or in a viva voce experiment;
(b) the lack of resources for conducting psychological tests, such as assessing the personality and typing speed of the judges and the hidden humans.
In the case of Eugene Goostman, a machine which simulates an English-speaking young teen from Odessa, Ukraine, it is not difficult to pair the machine with a human teenager, the pair being interrogated by a teenage judge. However, what is difficult is recruiting the exact match: a teenager from Ukraine who speaks English.
At the other end of the machine conversation spectrum, Elbot is a machine with a robot personality; it would be futile to recruit a human to act like a robot, because the Turing test is concerned with the hidden entities providing satisfactory and sustained answers to any questions.
Testing the personality of the human interrogator judges and the hidden humans, or checking for characteristics such as typing speed, would add an interesting dimension to the experiment.
A Turing test is a scientific experiment in that a set of conditions – the duration of tests, the number and nature of participants, human or machine – can be put in place for observation and measurement, and the experiment can be repeated. Adding new features, therefore, would also mean new challenges.
In Chapter 8 we presented the Turing test experiment at Bletchley Park in 2012. That was concerned mainly with finding which of Turing's two scenarios for implementing his imitation game – the one-to-one viva voce or the simultaneous comparison – was harder for the machine when trying to convince the human judges that it was a human.
This paper presents our preliminary research into the autonomous control of an alpine skiing robot. Based on our previous experience with active balancing on difficult terrain and developing an ice-skating robot, we have implemented a simple control system that allows the humanoid robot Jennifer to steer around a simple alpine skiing course, brake, and actively control the pitch and roll of the skis in order to maintain stability on hills with variable inclination.
The robot steers and brakes by using the edges of the skis to dig into the snow: by inclining both skis to one side, the robot can turn in an arc; by rolling the skis outward and pointing the toes together, the robot creates a snowplough shape that rapidly reduces its forward velocity.
To keep the skis in constant contact with the hill, we use two independent proportional-integral-derivative (PID) controllers to continually adjust the robot's inclination in the frontal and sagittal planes.
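To sketch what such a controller pair might look like, the following Python fragment is our illustration only; the gains, the 50 Hz loop rate and the sensor names (imu_pitch, hill_roll, etc.) are hypothetical, not details of Jennifer's actual control system.

```python
class PID:
    """Textbook PID controller; the gains below are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

DT = 0.02                             # 50 Hz control loop (assumed)
pitch_pid = PID(2.0, 0.1, 0.05, DT)   # sagittal-plane controller
roll_pid  = PID(2.0, 0.1, 0.05, DT)   # frontal-plane controller

def control_step(imu_pitch, imu_roll, hill_pitch, hill_roll):
    """Return corrective lean commands that keep the skis flat on the
    slope: imu_* are measured body angles, hill_* the desired angles
    matching the local inclination (all hypothetical names)."""
    return (pitch_pid.step(hill_pitch, imu_pitch),
            roll_pid.step(hill_roll, imu_roll))

print(control_step(imu_pitch=0.05, imu_roll=-0.02,
                   hill_pitch=0.0, hill_roll=0.0))
```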
Our experiments show that these techniques are sufficient to allow a small humanoid robot to alpine ski autonomously down hills of different inclination with variable snow conditions.
In this paper I argue that negation in The Port Royal Logic is not a failed or incoherent approximation of Boolean complementation as maintained by Sylvain Auroux and Marc Dominicy, but is rather a version of privative negation from medieval logic, and that as such it has a perfectly coherent semantics. The discussion reviews the critiques of Auroux and Dominicy as well as the semantics of privative negation as found in Aristotle, Proclus, Ockham, Buridan, Descartes, and Arnauld.
We say a graph is (Q_n, Q_m)-saturated if it is a maximal Q_m-free subgraph of the n-dimensional hypercube Q_n. A graph is said to be (Q_n, Q_m)-semi-saturated if it is a subgraph of Q_n and adding any edge forms a new copy of Q_m. The minimum number of edges a (Q_n, Q_m)-saturated graph (respectively, (Q_n, Q_m)-semi-saturated graph) can have is denoted by sat(Q_n, Q_m) (respectively, s-sat(Q_n, Q_m)). We prove that
for fixed m, disproving a conjecture of Santolupo that, when m = 2, this limit is 1/4. Further, we show by a different method that sat(Q_n, Q_2) = O(2^n) and that s-sat(Q_n, Q_m) = O(2^n) for fixed m. We also prove the lower bound
In this paper, we propose a novel method for constructing upper bounds of the quasi-stationary distribution of SIS processes. Using this method, we obtain an upper bound that is better than the state-of-the-art upper bound. Moreover, we prove that the fixed point map Φ [7] actually preserves the equilibrium reversed hazard rate order under a certain condition, which allows us to further improve the upper bound. Some numerical examples are presented to illustrate these results.
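For orientation, the following Python sketch computes the quasi-stationary distribution of a standard complete-graph SIS birth-death chain by power iteration; it is not the paper's bound construction or its fixed point map Φ, and the model and parameter values are our assumptions.

```python
import numpy as np

def sis_qsd(n=50, lam=1.5, gamma=1.0, iters=10_000):
    """Quasi-stationary distribution of a complete-graph SIS birth-death
    chain with states 1..n infected (state 0 is absorbing and excluded).
    Infection rate lam*k*(n-k)/n, recovery rate gamma*k.  The QSD is the
    normalised left eigenvector of the generator restricted to the
    transient states; we find it by iterating the uniformised kernel."""
    Q = np.zeros((n, n))
    for k in range(1, n + 1):
        up, down = lam * k * (n - k) / n, gamma * k
        if k < n:
            Q[k - 1, k] = up
        if k > 1:
            Q[k - 1, k - 2] = down
        Q[k - 1, k - 1] = -(up + down)   # absorption from k=1 stays implicit
    u = 1.1 * max(-Q.diagonal())         # uniformisation constant
    P = np.eye(n) + Q / u                # sub-stochastic transition kernel
    q = np.full(n, 1.0 / n)
    for _ in range(iters):
        q = q @ P
        q /= q.sum()                     # renormalise: condition on survival
    return q

qsd = sis_qsd()
print("QSD mode at", qsd.argmax() + 1, "infected")
```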
Haptic devices have proven effective in stimulating proprioceptive sensing in post-stroke patients. In this work, pre-existing devices were used together in a remote environment for the teleassessment of impaired hands. A four-channel bilateral control system operating in the presence of large and variable time delays is proposed as a proof of concept. Time delay is managed with a novel communication disturbance observer (CDOB), and the system also employs a scaling-down compensation value (SDCV) for the CDOB. The proposed control system was tested successfully in bilateral haptic interaction, simulating a remote motor and functional evaluation of patients' hands and guaranteeing safe and stable interaction, even in the presence of large network delays.
The knowledge gradient (KG) policy was originally proposed for offline ranking and selection problems but has recently been adapted for use in online decision-making in general, and multi-armed bandit problems (MABs) in particular. We study its use in a class of exponential family MABs and identify weaknesses, including a propensity to take actions which are dominated with respect to both exploitation and exploration. We propose variants of KG which avoid such errors. These new policies include an index heuristic, which deploys a KG approach to develop an approximation to the Gittins index. A numerical study shows this policy to perform well over a range of MABs, including those for which index policies are not optimal. While KG does not take dominated actions when bandits are Gaussian, it fails to be index consistent and, when arms are correlated, appears not to enjoy a performance advantage over competitor policies sufficient to compensate for its greater computational demands.
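As a reference point, here is a minimal Python sketch of the Gaussian KG computation (the case in which KG takes no dominated actions); the exponential family setting and the paper's proposed variants are not reproduced here, and the parameter values are illustrative.

```python
import math

def kg_factor(mu, sigma, noise_sd):
    """Gaussian knowledge-gradient factor for each arm:
    nu_i = sigma_tilde_i * f(-|mu_i - max_{j!=i} mu_j| / sigma_tilde_i),
    where f(z) = z*Phi(z) + phi(z) uses the standard normal cdf/pdf and
    sigma_tilde_i = sigma_i^2 / sqrt(sigma_i^2 + noise_sd^2)."""
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    nu = []
    for i, (m, s) in enumerate(zip(mu, sigma)):
        s_tilde = s * s / math.sqrt(s * s + noise_sd * noise_sd)
        best_other = max(m2 for j, m2 in enumerate(mu) if j != i)
        z = -abs(m - best_other) / s_tilde
        nu.append(s_tilde * (z * Phi(z) + phi(z)))
    return nu

# Online KG plays the arm maximising mu_i + remaining_horizon * nu_i.
mu, sigma = [0.3, 0.5, 0.45], [0.4, 0.1, 0.3]
nu = kg_factor(mu, sigma, noise_sd=1.0)
remaining = 50
choice = max(range(len(mu)), key=lambda i: mu[i] + remaining * nu[i])
print(nu, "-> play arm", choice)
```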
In this paper, we study an M/M/1 queue where the server continues to work during idle periods, building up inventory. This inventory is used to serve newly arriving service requirements, but it is completely emptied at random epochs of a non-homogeneous Poisson process whose rate depends on the current level of the acquired inventory. For several shapes of the depletion rate, we derive differential equations for the stationary density of the workload and the inventory level and solve them explicitly. Finally, numerical illustrations are given for some particular examples, and the effects of this depletion mechanism are discussed.
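As a rough companion to the analysis, here is a crude time-stepped Python simulation of the model under one illustrative depletion shape (rate linear in the inventory level); the parameter choices are our assumptions, and the paper's explicit stationary densities are not reproduced.

```python
import random

def simulate(T=10_000.0, dt=0.01, lam=0.8, mu=1.0, deplete_rate=0.1):
    """Time-stepped simulation of the model sketched above: Poisson(lam)
    arrivals bring Exp(mu) amounts of work; the server works at unit
    rate, on the queue while workload > 0 and otherwise on inventory;
    arriving work is first paid out of the inventory; the inventory is
    wiped at Poisson epochs whose rate deplete_rate * level grows with
    the current level.  All parameter values are illustrative."""
    w = inv = 0.0
    w_sum = inv_sum = 0.0
    steps = int(T / dt)
    for _ in range(steps):
        if random.random() < lam * dt:                 # arrival
            job = random.expovariate(mu)
            used = min(inv, job)                       # serve from inventory first
            inv -= used
            w += job - used
        if random.random() < deplete_rate * inv * dt:  # depletion epoch
            inv = 0.0
        if w > 0:
            w = max(0.0, w - dt)                       # work off the queue
        else:
            inv += dt                                  # build inventory when idle
        w_sum += w
        inv_sum += inv
    return w_sum / steps, inv_sum / steps

mean_w, mean_inv = simulate()
print(f"mean workload  ~ {mean_w:.3f}")
print(f"mean inventory ~ {mean_inv:.3f}")
```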