The field of quantum programming languages is developing rapidly and there is a surprisingly large literature. Research in this area includes the design of programming languages for quantum computing, the application of established semantic and logical techniques to the foundations of quantum mechanics, and the design of compilers for quantum programming languages. This article justifies the study of quantum programming languages, presents the basics of quantum computing, surveys the literature in quantum programming languages, and indicates directions for future research.
In recent years several new models of computation have emerged that have been inspired by the physical sciences, biology and logic, to name but a few (for example, quantum computing, chemical machines and bio-computing). Also, many developments of traditional computational models have been proposed with the aim of taking into account the new demands of computer systems users and the new capabilities of computation engines.
The $\lambda$-calculus is destructive: its main computational mechanism, beta reduction, destroys the redex, which makes replaying the computational steps impossible. Combinatory logic is a variant of the $\lambda$-calculus that maintains this irreversibility. Recently, reversible computational models have been studied mainly in the context of quantum computation, since (in the absence of measurements) quantum physics is inherently reversible. However, reversibility also fundamentally changes the semantic framework in which classical computation has to be investigated. We describe an implementation of classical combinatory logic in a reversible calculus, for which we present an algebraic model based on a generalisation of the notion of a group.
We present a formalism called addressed term rewriting systems, which can be used to model implementations of theorem proving, symbolic computation and programming languages, especially aspects of sharing, recursive computations and cyclic data structures. Addressed Term Rewriting Systems are therefore well suited to describing object-based languages, and as an example we present a language called $\lambda{\cal O}bj^{a}$, incorporating both functional and object-based features. As a case study in how reasoning about languages is supported in the ATRS formalism, we define a type system for $\lambda{\cal O}bj^{a}$ and prove a type soundness result.
Gamma is a programming model in which computation can be seen as chemical reactions between data represented as molecules floating in a chemical solution. This model can be formalised as associative, commutative, conditional rewritings of multisets where rewrite rules and multisets represent chemical reactions and solutions, respectively. In this article we generalise the notion of multiset used by Gamma and present applications through various programming examples. First, multisets are generalised to include rewrite rules, which become first-class citizens. This extension is formalised by the $\gamma$-calculus, which is a chemical model that summarises in a few rules the essence of higher-order chemical programming. By extending the $\gamma$-calculus with constants, operators, types and expressive patterns, we build a higher-order chemical programming language called HOCL. Finally, multisets are further generalised by allowing elements to have infinite and negative multiplicities. Semantics, implementation and applications of this extension are considered.
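The reaction metaphor can be made concrete with the classic Gamma example of computing a maximum. The sketch below is our own illustration in plain Python, not HOCL, and the function name is ours: it repeatedly fires the reaction x, y → max(x, y) on randomly chosen pairs of molecules until the solution stabilises with a single molecule.

```python
import random

def gamma_max(multiset):
    """Chemical-style computation of the maximum: repeatedly pick two
    molecules x and y; the reaction consumes both and produces max(x, y).
    The solution is stable when only one molecule remains."""
    solution = list(multiset)
    while len(solution) > 1:
        i, j = random.sample(range(len(solution)), 2)
        a, b = solution[i], solution[j]
        # remove the two reactants, add the reaction product
        solution = [v for k, v in enumerate(solution) if k not in (i, j)]
        solution.append(max(a, b))
    return solution[0]

print(gamma_max([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 9
```

Note that although the pairs react in a random order, the result is deterministic: the reaction is associative and commutative, which is exactly the property the multiset-rewriting formalisation captures.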
In this paper we discuss a persistent problem arising from polysemy: namely the difficulty of finding consistent criteria for making fine-grained sense distinctions, either manually or automatically. We investigate sources of human annotator disagreements stemming from the tagging for the English Verb Lexical Sample Task in the SENSEVAL-2 exercise in automatic Word Sense Disambiguation. We also examine errors made by a high-performing maximum entropy Word Sense Disambiguation system we developed. Both sets of errors are at least partially reconciled by a more coarse-grained view of the senses, and we present the groupings we use for quantitative coarse-grained evaluation as well as the process by which they were created. We compare the system's performance with our human annotator performance in light of both fine-grained and coarse-grained sense distinctions and show that well-defined sense groups can be of value in improving word sense disambiguation by both humans and machines.
Motion planning for manipulators with many degrees of freedom is a complex task, and research in this area has mostly been restricted to static environments. This paper presents a comparative analysis of three reactive on-line path-planning methods for manipulators: the elastic-strip, strategy-based and potential-field methods. Both the elastic-strip method [O. Brock and O. Khatib, “Elastic strips: A framework for integrated planning and execution,” Int. Symp. Exp. Robot. 245–254 (1999)] and the potential-field method [O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,” Int. J. Robot. Res. 5(1), 90–98 (1986)] have been adapted by the authors to the problem at hand: our multi-manipulator system (MMS) of three manipulators with five degrees of freedom each. The strategy-based method is an original contribution by the authors [M. Mediavilla, J. L. González, J. C. Fraile and J. R. Perán, “Reactive approach to on-line path planning for robot manipulators in dynamic environments,” Robotica 20, 375–384 (2002); M. Mediavilla, J. C. Fraile, T. González and I. J. Galindo, “Selection of strategies for collision-free motion in multi-manipulator systems,” J. Intell. Robot. Syst. 38, 85–104 (2003)].
The three methods facilitate on-line path planning for our MMS in dynamic environments with collision avoidance, where the three manipulators may move at the same time in their common workspace. We have defined a set of ‘basic motion problems’ for the MMS and run a series of simulations to assess how effective each path-planning method is; the simulations were performed, and the results analysed, using a software program developed by the authors.
The paper also presents experimental results obtained by applying the path-planning methods to our MMS while it performs pick-and-place tasks in shared working areas.
In these early decades of the information age, the flow of information is becoming more and more central to our daily lives. It has therefore become important that information transmission be protected against eavesdropping (as, for example, when one sends credit card information over the Internet) and against noise (which might occur in a cell phone transmission, or when a compact disk is accidentally scratched). Though most of us depend on schemes that protect information in these ways, most of us also have a rather limited understanding of how this protection is done. Part of the aim of this book is to introduce the basic concepts underlying this endeavor.
Besides its practical significance, it happens that the subject of protecting information is intimately related to a number of central ideas in mathematics and computer science, and also, perhaps surprisingly, in physics. Thus in addition to its significance for society, the subject provides an ideal context for bringing ideas from these disciplines together. This interdisciplinarity is part of what has attracted us to the subject, and we hope it will appeal to the reader as well.
Among undergraduate texts on coding or cryptography, this book is unusual in its inclusion of quantum physics and the emerging technology of quantum information. Quantum cryptography, in which an eavesdropper is detected by his or her unavoidable disturbance of delicate quantum signals, was proposed in the 1980s and since then has been investigated and developed in a number of laboratories around the world.
The early twentieth century was a revolutionary time in the history of physics. People often think of Einstein's special theory of relativity of 1905, which changed our conceptions of time and space. But among physicists, quantum mechanics is usually regarded as an even more radical change in our thinking about the physical world. Quantum mechanics, which was developed between 1900 and 1926, began as a theory of atoms and light but has now become the framework in terms of which all basic physical theories are expected to be cast. We need quantum ideas not only to understand atoms, molecules, and elementary particles, but also to understand the electronic properties of solids and even certain astronomical phenomena such as the stability of white dwarf stars. The theory was radical in part because it introduced probabilistic behavior as a fundamental aspect of the world, but even more because it seems to allow mutually exclusive situations to exist simultaneously in a “quantum superposition.” We will see later that the possibility of quantum superposition is largely responsible for a quantum computer's distinctive advantage over an ordinary computer. The present chapter is devoted to introducing the basic principles of quantum mechanics.
There are essentially four components of the mathematical structure of quantum mechanics. We need to know how to represent (i) states, (ii) measurements, (iii) reversible transformations, and (iv) composite systems. We will develop the first three in stages, starting with a very simple case – linear polarization of photons – and working toward the most general case.
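The simple case of linearly polarized photons already illustrates three of these ingredients. The sketch below is our own illustration (function names are ours, and ideal noiseless optics is assumed): a state is a real unit 2-vector, a measurement probability comes from the Born rule (the squared overlap of state and analyzer direction), and a rotation of the polarization plane is a reversible transformation.

```python
import math

def polarization_state(theta):
    """A linearly polarized photon as a real unit 2-vector
    (cos theta, sin theta) in the horizontal/vertical basis."""
    return (math.cos(theta), math.sin(theta))

def measure_prob(state, phi):
    """Born rule: the probability that an analyzer set at angle phi
    passes the photon is the squared inner product of the state
    with (cos phi, sin phi)."""
    cx, cy = state
    amp = cx * math.cos(phi) + cy * math.sin(phi)
    return amp ** 2

def rotate(state, alpha):
    """A reversible transformation: rotation of the polarization plane
    by alpha (a 2x2 orthogonal, hence unitary, matrix)."""
    cx, cy = state
    return (math.cos(alpha) * cx - math.sin(alpha) * cy,
            math.sin(alpha) * cx + math.cos(alpha) * cy)

s = polarization_state(math.pi / 4)    # diagonal polarization
print(round(measure_prob(s, 0.0), 3))  # -> 0.5 (half pass a horizontal filter)
```

The fourth ingredient, composite systems, requires tensor products of such vectors and is deferred to the general development in the text.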
Recall that in the Bennett–Brassard key distribution scheme, after Alice and Bob have obtained bit strings that ideally are supposed to be identical, they have to do some further processing to make sure they end up with strings that really are identical. Our first goal in this chapter is to show how they can do this.
As you might expect, we will use error-correcting codes of the sort we have been discussing in the preceding chapter. However, the way we use error-correcting codes in quantum key distribution is not quite the same as in classical communication. Normally, one corrects errors by encoding one's message into special codewords that are sufficiently different from each other that they will still be distinguishable after passing through a noisy channel. But in quantum key distribution the “noise” of the channel might actually be the effect of an eavesdropper who is free to manipulate the signals sent by Alice. Ordinary error correction is not designed for such a setting. So instead of using codewords to encode the original transmission, we wait until all the bits have been sent and then use an error-correcting code after the fact. In this respect error correction in quantum key distribution is similar to the use of an error-correcting code in the “hat problem” discussed at the end of Chapter 4. There also, the error-correcting code is applied only after all the data – in that case the vector of hat colors – has been generated and conveyed to the participants.
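This "after the fact" use of error correction can be illustrated with a toy parity-comparison step. The sketch below is a simplified, hypothetical reconciliation routine of our own (real protocols such as Cascade iterate over shuffled blocks and must account for the parity bits disclosed publicly to a potential eavesdropper): if Alice's and Bob's block parities differ, a binary search over publicly compared half-block parities locates and flips one erroneous bit in Bob's string.

```python
def parity(bits):
    return sum(bits) % 2

def locate_error(alice, bob, lo, hi):
    """Binary search for a single differing bit inside [lo, hi),
    by comparing parities of half-blocks over the public channel."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

def reconcile_block(alice, bob):
    """One pass: if the block parities differ, find and flip one error.
    (A real protocol repeats this over shuffled blocks and discounts
    the key by the information leaked through the revealed parities.)"""
    if parity(alice) != parity(bob):
        i = locate_error(alice, bob, 0, len(alice))
        bob[i] ^= 1
    return bob

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob   = [1, 0, 1, 0, 0, 0, 1, 0]   # one transmission error at index 3
print(reconcile_block(alice, bob) == alice)  # -> True
```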
In the preceding chapter we mentioned the inevitable errors that occur when one tries to send quantum signals over, say, an optical fiber, even when there is no eavesdropper. But errors in transmission are not a problem just for quantum cryptography. For this entire chapter we forget about sending quantum information and instead focus on simply transmitting ordinary data faithfully over some kind of channel. Moreover, we assume that the data either is not sensitive or has already been encrypted. Unfortunately, many methods for transmitting data are susceptible to outside influences that can cause errors. How do we protect information from these errors? Error-correcting codes provide a mathematical method of not only detecting these errors, but also correcting them. Nowadays error-correcting codes are ubiquitous; they are used, for example, in cell-phone transmissions and satellite links, in the representation of music on a compact disk, and even in the bar codes in grocery stores.
The story of modern error-correcting codes began with Claude Shannon's famous paper “A Mathematical Theory of Communication,” which was published in 1948. Shannon worked for Bell Labs where he specialized in finding solutions to problems that arose in telephone communication. Quite naturally, he started considering ways to correct errors that occurred when information was transmitted over phone lines. Richard Hamming, who also worked at Bell Labs on this problem, published a groundbreaking paper in 1950 on the subject.
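Hamming's 1950 construction is simple enough to sketch in full. The toy implementation below is our own code (not drawn from either paper): it encodes 4 data bits into a 7-bit codeword whose three parity checks, recomputed at the receiver, form a syndrome that equals the 1-indexed position of any single flipped bit.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword with
    parity bits at positions 1, 2 and 4 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Recompute the parity checks; their pattern (the syndrome) is
    the 1-indexed position of a single flipped bit, or 0 if none."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]   # recover the data bits

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                          # flip one bit in transit
print(hamming74_decode(word))         # -> [1, 0, 1, 1]
```

Any single error, whether in a data bit or a parity bit, is corrected; the code cannot, however, cope with two errors in one codeword, which is why stronger codes were developed.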
As we have said, quantum mechanics has become the standard framework for most of what is done in physics and indeed has played this role for three-quarters of a century. For just as long, physicists and philosophers have, as we have already suggested, raised and discussed questions about the interpretation of quantum mechanics: Why do we single out measurement as a special kind of interaction that evokes a probabilistic, irreversible response from nature, when other kinds of interaction cause deterministic, reversible changes? How should we talk about quantum superpositions? What does entanglement tell us about the nature of reality? These are interesting questions and researchers still argue about the answers. However, in the last couple of decades researchers have also been thinking along the following line: Let us accept that quantum objects act in weird ways, and see if we can use this weirdness technologically. The two best examples of potential uses of quantum weirdness are quantum computation and quantum cryptography.
One of the first quantum cryptographic schemes to appear in the literature, and the only one we consider in detail in this book, was introduced by Charles Bennett and Gilles Brassard in 1984. Their idea is not to use quantum signals directly to convey secret information. Rather, they suggest using quantum signals to generate a secret cryptographic key shared between two parties. Thus the Bennett–Brassard scheme is an example of “quantum key distribution.”
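The sifting phase of such a scheme can be simulated classically. The sketch below is a toy model of our own, assuming an ideal channel with no eavesdropper and no noise, and eliding the quantum transmission itself: when Bob's randomly chosen measurement basis matches Alice's preparation basis he recovers her bit exactly; otherwise his outcome is an independent coin flip and the position is discarded during the public basis comparison.

```python
import random

def bb84_sift(n):
    """Toy BB84 sifting on an ideal channel: Alice sends random bits in
    random bases ('+' rectilinear, 'x' diagonal); Bob measures in random
    bases; both keep only positions where the bases happened to agree."""
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice('+x') for _ in range(n)]
    bob_bases   = [random.choice('+x') for _ in range(n)]
    # Matching basis: Bob reads Alice's bit; mismatched: a random outcome.
    bob_bits = [b if ab == bb else random.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Public discussion compares bases only, never the bits themselves.
    key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
             if ab == bb]
    key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
             if ab == bb]
    return key_a, key_b

ka, kb = bb84_sift(64)
print(ka == kb)  # -> True: on an ideal channel the sifted keys agree
```

On average half the positions survive sifting. With an eavesdropper or channel noise the sifted keys would disagree in some positions, which is precisely what the reconciliation machinery of the later chapters is for.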
In an ordinary computer, information is stored in a collection of tiny circuits each of which is designed to have two stable and easily distinguishable configurations: each represents a bit. In our study of quantum cryptography, we have seen how it can be useful to express information not in ordinary bits but in qubits. Whereas a bit can have only two values, say 0 and 1, a qubit can be in any quantum superposition of ∣0〉 and ∣1〉. Moreover, a qubit can be entangled with other qubits. Thus one might wonder whether a quantum computer, in which the basic elements for storing and processing information are qubits, can outperform an ordinary (classical) computer in certain ways. This question was addressed by researchers starting in the 1980s. In terms of practical consequences, perhaps the most dramatic answer has been given by Peter Shor in his 1994 factoring algorithm for a quantum computer, an algorithm that is exponentially faster than any known classical algorithm. As we have seen in Chapter 1, the difficulty of factoring a product of two large primes is the basis of the security of the RSA cryptosystem. So if one could build a large enough quantum computer – and there is no reason in principle why this could not be done – the RSA system would be rendered ineffective. In this chapter we present the basics of quantum computation and then focus on Shor's factoring algorithm.
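The reduction of factoring to period finding at the heart of Shor's algorithm has a purely classical half that is easy to sketch. In the toy code below (our own illustration), the order-finding step is done by brute force, which is exactly the part a quantum computer performs exponentially faster; the remainder is Shor's classical post-processing, which turns a suitable period into a pair of factors via greatest common divisors.

```python
from math import gcd

def order(a, n):
    """Find the multiplicative order r of a mod n by brute force:
    the smallest r >= 1 with a**r = 1 (mod n). This is the step
    Shor's quantum subroutine accelerates."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Shor's classical post-processing: if the order r of a mod n is
    even and a**(r/2) is not -1 mod n, then gcd(a**(r/2) +/- 1, n)
    yields nontrivial factors of n."""
    r = order(a, n)
    if r % 2:
        return None                  # unlucky base a: retry with another
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                  # also unlucky: retry
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15, 7))  # -> (3, 5)
```

For example, 7 has order 4 mod 15, so y = 7² mod 15 = 4, and gcd(3, 15) = 3 and gcd(5, 15) = 5 recover the prime factors. A random base a succeeds with high probability, so a few retries suffice.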
Fields are important algebraic structures used in almost all branches of mathematics. Here we only cover the definitions and theorems needed for the purposes of this book.
Definition. A field F is a set along with two operations (denoted with addition and multiplication notation) on pairs of elements of F such that the following properties are satisfied.
For all a and b in F, we have that a + b ∈ F.
For all a, b, and c in F, we have that (a + b) + c = a + (b + c).
There exists an element 0 in F satisfying a + 0 = a for all a ∈ F.
For every a ∈ F there exists a b in F such that a + b = 0.
For all a and b in F we have that a + b = b + a.
For all a and b in F we have that ab ∈ F.
For all a, b, and c in F we have that (ab)c = a(bc).
There is an element 1 in F satisfying 1a = a for all a ∈ F.
For every a ∈ F with a ≠ 0, there exists a b ∈ F such that ab = 1.
For every a and b in F we have that ab = ba.
For every a, b, and c in F we have that a(b + c) = ab + ac.
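For a finite candidate field, the axioms above can be checked mechanically. The brute-force checker below is our own illustration, written for this appendix rather than drawn from it: it verifies closure, commutativity, associativity, distributivity, identities and inverses over a finite carrier set, and confirms that the integers mod p form a field exactly when p is prime.

```python
from itertools import product

def is_field(elems, add, mul):
    """Brute-force check of the field axioms on a finite carrier set.
    Assumes additive and multiplicative identities exist (true for the
    examples below); returns True iff all remaining axioms hold."""
    E = list(elems)
    zero = next(z for z in E if all(add(z, a) == a for a in E))
    one = next(u for u in E if u != zero and all(mul(u, a) == a for a in E))
    for a, b in product(E, repeat=2):
        if add(a, b) not in E or mul(a, b) not in E:
            return False                     # closure
        if add(a, b) != add(b, a) or mul(a, b) != mul(b, a):
            return False                     # commutativity
    for a, b, c in product(E, repeat=3):
        if add(add(a, b), c) != add(a, add(b, c)):
            return False                     # additive associativity
        if mul(mul(a, b), c) != mul(a, mul(b, c)):
            return False                     # multiplicative associativity
        if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)):
            return False                     # distributivity
    if not all(any(add(a, b) == zero for b in E) for a in E):
        return False                         # additive inverses
    # multiplicative inverses for every nonzero element
    return all(any(mul(a, b) == one for b in E) for a in E if a != zero)

p = 5
print(is_field(range(p), lambda a, b: (a + b) % p,
               lambda a, b: (a * b) % p))   # -> True: Z_5 is a field
print(is_field(range(6), lambda a, b: (a + b) % 6,
               lambda a, b: (a * b) % 6))   # -> False: 2 has no inverse mod 6
```

The failure for modulus 6 pinpoints the one axiom that composite moduli violate: the existence of multiplicative inverses for nonzero elements.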