The distribution of secret keys through quantum means has certainly become the most mature application of quantum information science. Much has been written on quantum cryptography today, two decades after its inception by Gilles Brassard and Charles Bennett, and even longer after the pioneering work of Stephen Wiesner on non-counterfeitable quantum money, which is often considered the key to quantum cryptography. Quantum key distribution has gone from a bench-top experiment to a practical reality, with products beginning to appear. As such, while scientific challenges remain, the shift from basic science to engineering is underway. The wider interest from both the scientific and engineering communities has raised the need for a fresh perspective that addresses both.
Gilles Van Assche has taken up the challenge of approaching this exciting field from a non-traditional perspective, in which classical cryptography and quantum mechanics are closely intertwined. Most available papers on quantum cryptography suffer from focusing on one of these aspects alone, being written either by physicists or by computer scientists. In contrast, probably as a consequence of his twofold background in engineering and physics, Gilles Van Assche has succeeded in writing a comprehensive monograph on this topic that takes a very original view. It also reflects the type of challenge this field now faces: moving from basic science to engineering. His emphasis is on the classical procedures of authentication, reconciliation and privacy amplification as much as on the basic concepts of quantum mechanics. Another notable feature of this book is that it provides detailed material on the very recent quantum cryptographic protocols using continuous variables, to which the author has personally contributed.
In this chapter, I will introduce the concepts of quantum information that are necessary to the understanding of quantum cryptography. I first review the main principles of quantum mechanics. I then introduce concepts that somehow translate information theory into the quantum realm and discuss their unique features. Finally, I conclude this chapter with some elements of quantum optics.
Fundamental definitions in quantum mechanics
In classical mechanics, a system is described by physical quantities that take definite values at a given moment in time; we say that it has a state. For instance, the state of an elevator comprises its position and speed at a given time. As common sense dictates, if the elevator is at a given height, it cannot simultaneously be found at another location. With quantum mechanics, things are fundamentally different, and elementary particles can behave against common sense. For instance, a quantum system can be in different energy levels simultaneously. This is not because our knowledge of the state of the system is incomplete: the behavior of the quantum system is consistent with it genuinely occupying several energy levels at once.
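To make this idea concrete, a two-level quantum system (a qubit) in an equal superposition of two energy levels can be written |ψ⟩ = (|0⟩ + |1⟩)/√2, and a measurement finds each level with probability 1/2. A minimal numerical sketch of this, where the numpy state-vector representation and variable names are my own illustration rather than anything from the text:

```python
import numpy as np

# Basis states |0> and |1> of a two-level system (qubit).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Equal superposition: the system is genuinely in both levels at once,
# not merely in one unknown level with our knowledge incomplete.
psi = (ket0 + ket1) / np.sqrt(2)

# A measurement of the energy yields each level with probability |amplitude|^2.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]
```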
It would go beyond the scope of this book to describe quantum mechanics in detail. Instead, I offer a concise introduction suitable for understanding quantum cryptography. For a more detailed treatment, please refer to the books listed at the end of this chapter.
Founded by Shannon, information theory deals with the fundamental principles of communication. The two most important questions answered by this theory are how much we can compress a given data source and how much data we can transmit in a given communication channel.
Information theory is essentially statistical in nature. Data sources are modeled as random processes, and transmission channels are also modeled in probabilistic terms. The theory does not deal with the content of information: it deals with the frequency at which symbols (letters, figures, etc.) appear or are processed, not with their meaning. A statistical model is not the only option, and non-statistical theories also exist (e.g., Kolmogorov complexity). However, in this section and throughout this book, we will only use the statistical tools.
Information theory is of central importance in quantum cryptography. It may be used to model the transmission of the key elements from Alice to Bob. Note that what happens on the quantum channel itself is better described using quantum information theory (see Chapter 4). Yet the key elements chosen by Alice and those obtained by Bob after measurement are classical values, so transmission errors, for instance, can accurately be modeled using classical information theory. Reconciliation, in particular, requires classical information-theoretic techniques.
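As an example of the kind of quantity classical information theory supplies here, the capacity of a binary symmetric channel with crossover probability p is C = 1 − h(p), where h is the binary entropy function. This is the standard textbook formula, not a result specific to this book; the sketch below and its function names are mine:

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity C = 1 - h(p) of a binary symmetric channel."""
    return 1.0 - binary_entropy(p)

# A key transmission with a 5% bit error rate can convey
# at most about 0.71 bits of information per bit sent.
print(bsc_capacity(0.05))  # ~0.7136
```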
Source coding
Source coding is the first problem that information theory addresses. Assume that a source emits symbols x_i from an alphabet 𝒳 and that it can be modeled by a random variable X on 𝒳. For instance, the source can be the temperature measured by some meteorological station at regular intervals, or the traffic on a network connection.
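Shannon's source coding theorem states that such a source can be compressed to no fewer than H(X) = −Σ_x p(x) log2 p(x) bits per symbol on average. A quick sketch with a made-up distribution of temperature readings (the probabilities are purely illustrative):

```python
import math

# Hypothetical distribution of four coarse temperature readings.
probabilities = {"cold": 0.5, "mild": 0.25, "warm": 0.125, "hot": 0.125}

# H(X) = -sum p(x) log2 p(x): the optimal average code length, in bits/symbol.
entropy = -sum(p * math.log2(p) for p in probabilities.values())
print(entropy)  # 1.75 bits per symbol
```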
Some QKD protocols, as I will detail in Chapter 11, produce Gaussian key elements. The reconciliation methods of the previous chapter are therefore not directly applicable. In this chapter, we build upon the previous techniques to treat the case of continuous-variable key elements or, more generally, of non-binary key elements.
In the first two sections, I describe two techniques to process non-binary key elements, namely sliced error correction and multistage soft decoding. Then, I conclude the chapter by giving more specific details on the reconciliation of Gaussian key elements.
Sliced error correction
Sliced error correction (SEC) is a generic reconciliation protocol that corrects strings of non-binary elements using binary reconciliation protocols as primitives [173]. Its purpose is to start from a list of correlated values and to give, with high probability, equal binary strings to Claude and Dominique. The underlying idea is to convert Claude's and Dominique's values into strings of bits, to apply a binary correction protocol (BCP) to each of them, and to take advantage of all available information to minimize the number of exchanged reconciliation messages. It enables Claude and Dominique to reconcile a wide variety of correlated variables X and Y while relying on BCPs that are optimized to correct errors modeled by a binary symmetric channel (BSC).
An important application of sliced error correction is to correct correlated Gaussian random variables, namely X ∼ N(0, Σ) and Y = X + ε with ε ∼ N(0, σ). This important particular case is needed for QKD protocols that use a Gaussian modulation of Gaussian states, as described in Chapter 11.
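The sketch below illustrates only the setting, not the full SEC protocol: Claude's Gaussian values and Dominique's noisy copies are each quantized into a few binary "slices", and a BCP would then be run on corresponding slices. The uniform slicing and all parameter values are my own simplification; optimized slice boundaries are the subject of [173].

```python
import numpy as np

rng = np.random.default_rng(1)
Sigma, sigma, n, n_slices = 1.0, 0.3, 8, 3

x = rng.normal(0.0, Sigma, n)      # Claude's key elements, X ~ N(0, Sigma)
y = x + rng.normal(0.0, sigma, n)  # Dominique's copy, Y = X + eps

def slices(v, lo=-3.0, hi=3.0, m=n_slices):
    """Quantize each value into m bits (uniform intervals; a simplification)."""
    idx = np.clip(((v - lo) / (hi - lo) * 2**m).astype(int), 0, 2**m - 1)
    return [(idx >> k) & 1 for k in range(m)]  # one bit string per slice

# A binary correction protocol (BCP) would now be run on corresponding
# slices of x and y, least significant slice first.
for k, (sx, sy) in enumerate(zip(slices(x), slices(y))):
    print(f"slice {k}: Claude {sx} | Dominique {sy}")
```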
This chapter develops more tools for working with random variables. The probability generating function is the key tool for working with sums of independent, nonnegative integer-valued random variables. When random variables are merely uncorrelated, we can work with averages (normalized sums) by using the weak law of large numbers. We emphasize that the weak law makes the connection between probability theory and the everyday practice of using averages of observations to estimate probabilities of real-world measurements. The last two sections introduce conditional probability and conditional expectation. The three important tools here are the law of total probability, the law of substitution, and, for independent random variables, “dropping the conditioning.”
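As a small illustration of that connection (my own example, not from the text), the relative frequency of an event among n independent observations approaches its probability as n grows:

```python
import numpy as np

rng = np.random.default_rng(42)

# Estimate P(die shows 6) by the average of indicator observations.
for n in (100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, n)
    print(n, np.mean(rolls == 6))  # approaches 1/6 ~ 0.1667 as n grows
```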
The foregoing concepts are developed here for discrete random variables, but they will all be extended to more general settings in later chapters.
Probability generating functions
In many problems we have a sum of independent random variables, and we would like to know its probability mass function. For example, in an optical communication system, the received signal might be Y = X + W, where X is the number of photoelectrons due to incident light on a photodetector, and W is the number of electrons due to dark current noise in the detector. An important tool for solving these kinds of problems is the probability generating function. The name derives from the fact that it can be used to compute the probability mass function.
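For instance, if X and W are independent Poisson random variables with means λ and μ, then G_X(z) = e^{λ(z−1)} and G_W(z) = e^{μ(z−1)}, so G_Y(z) = G_X(z)G_W(z) = e^{(λ+μ)(z−1)}: the sum is again Poisson, with mean λ + μ. A numerical check of the same fact via convolution of PMFs; treating the counts as Poisson and the truncation length are my assumptions for this sketch:

```python
import numpy as np
from scipy.stats import poisson

lam, mu, N = 2.0, 3.0, 30  # truncate PMFs at N terms

px = poisson.pmf(np.arange(N), lam)  # PMF of X (photoelectrons)
pw = poisson.pmf(np.arange(N), mu)   # PMF of W (dark-current electrons)

# Multiplying PGFs corresponds to convolving PMFs.
py = np.convolve(px, pw)[:N]

# The product of the Poisson PGFs is the PGF of Poisson(lam + mu).
print(np.allclose(py, poisson.pmf(np.arange(N), lam + mu)))  # True
```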
Why do electrical and computer engineers need to study probability?
Probability theory provides powerful tools to explain, model, analyze, and design technology developed by electrical and computer engineers. Here are a few applications.
Signal processing. My own interest in the subject arose when I was an undergraduate taking the required course in probability for electrical engineers. We considered the situation shown in Figure 1.1. To determine the presence of an aircraft, a known radar pulse v(t) is sent out. If there are no objects in range of the radar, the radar's amplifiers produce only a noise waveform, denoted by Xt. If there is an object in range, the reflected radar pulse plus noise is produced. The overall goal is to decide whether the received waveform is noise only or signal plus noise. To get an idea of how difficult this can be, consider the signal plus noise waveform shown at the top in Figure 1.2. Our class addressed the subproblem of designing an optimal linear system to process the received waveform so as to make the presence of the signal more obvious. We learned that the optimal transfer function is given by the matched filter. If the signal at the top in Figure 1.2 is processed by the appropriate matched filter, we get the output shown at the bottom in Figure 1.2. You will study the matched filter in Chapter 10.
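A minimal simulation of the matched-filter idea (the pulse shape, noise level, and pulse location below are my choices, not those of Figures 1.1 and 1.2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Known radar pulse v(t): a short windowed sinusoidal burst.
t = np.arange(64)
v = 2.0 * np.sin(2 * np.pi * t / 16) * np.hanning(64)

# Received waveform: the pulse buried at offset 200 in white noise X_t.
received = rng.normal(0.0, 1.0, 512)
received[200:264] += v

# Matched filter: convolve with the time-reversed pulse
# (equivalently, correlate with the pulse itself).
output = np.convolve(received, v[::-1], mode="same")

# The filter output peaks near the pulse, making detection much easier.
print(int(np.argmax(np.abs(output))))  # near 232, the pulse center
```

Correlating the received waveform with a time-reversed copy of the known pulse concentrates the pulse energy into a single peak, which is why the signal that is invisible in the raw waveform becomes obvious at the filter output.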
In Chapters 2 and 3, the only random variables we considered specifically were discrete ones such as the Bernoulli, binomial, Poisson, and geometric. In this chapter we consider a class of random variables allowed to take a continuum of values. These random variables are called continuous random variables and are introduced in Section 4.1. Continuous random variables are important models for integrator output voltages in communication receivers, file download times on the Internet, velocity and position of an airliner on radar, etc. Expectation and moments of continuous random variables are computed in Section 4.2. Section 4.3 develops the concepts of moment generating function (Laplace transform) and characteristic function (Fourier transform). In Section 4.4 expectation of multiple random variables is considered. Applications of characteristic functions to sums of independent random variables are illustrated. In Section 4.5 the Markov inequality, the Chebyshev inequality, and the Chernoff bound illustrate simple techniques for bounding probabilities in terms of expectations.
Densities and probabilities
Introduction
Suppose that a random voltage in the range [0,1) is applied to a voltmeter with a one-digit display. Then the display output can be modeled by a discrete random variable Y taking values .0, .1, .2, …, .9 with P(Y = k/10) = 1/10 for k = 0, …, 9.
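A quick simulation of this display model (the uniform voltage on [0,1) is from the text; the variable names and sample size are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random voltage uniform on [0, 1); a one-digit display shows Y = floor(10 X)/10.
x = rng.uniform(0.0, 1.0, 100_000)
y = np.floor(10 * x) / 10

# Each display value .0, .1, ..., .9 occurs with probability 1/10.
values, counts = np.unique(y, return_counts=True)
print(dict(zip(values, counts / len(y))))  # each frequency close to 0.1
```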