Entropy is a key concept of quantum information theory. It measures how much uncertainty there is in the state of a physical system. In this chapter we review the basic definitions and properties of entropy in both classical and quantum information theory. In places the chapter contains rather detailed and lengthy mathematical arguments. On a first reading these sections may be read lightly and returned to later for reference purposes.
Shannon entropy
The key concept of classical information theory is the Shannon entropy. Suppose we learn the value of a random variable X. The Shannon entropy of X quantifies how much information we gain, on average, when we learn the value of X. An alternative view is that the entropy of X measures the amount of uncertainty about X before we learn its value. These two views are complementary; we can view the entropy either as a measure of our uncertainty before we learn the value of X, or as a measure of how much information we have gained after we learn the value of X.
Intuitively, the information content of a random variable should not depend on the labels attached to the different values that may be taken by the random variable. For example, we expect that a random variable taking the values ‘heads’ and ‘tails’ with respective probabilities ¼ and ¾ contains the same amount of information as a random variable that takes the values 0 and 1 with respective probabilities ¼ and ¾.
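This invariance is easy to check numerically. The sketch below (a pure-Python illustration of our own; the function name `shannon_entropy` is not from the text) computes the Shannon entropy H(X) = −Σ_x p(x) log₂ p(x) for the heads/tails example above and confirms that relabelling the outcomes leaves it unchanged:

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum p*log2(p), skipping zero-probability outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

# 'heads'/'tails' with probabilities 1/4 and 3/4, and a relabelled 0/1 version.
coin = {"heads": 0.25, "tails": 0.75}
bit = {0: 0.25, 1: 0.75}

h_coin = shannon_entropy(coin.values())   # about 0.811 bits
h_bit = shannon_entropy(bit.values())
assert abs(h_coin - h_bit) < 1e-12        # labels do not affect the entropy
```

Only the probabilities enter the formula, so any relabelling of the outcomes gives the same value.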
What does it mean to say that two items of information are similar? What does it mean to say that information is preserved by some process? These questions are central to a theory of quantum information processing, and the purpose of this chapter is the development of distance measures giving quantitative answers to these questions. Motivated by our two questions we will be concerned with two broad classes of distance measures, static measures and dynamic measures. Static measures quantify how close two quantum states are, while dynamic measures quantify how well information has been preserved during a dynamic process. The strategy we take is to begin by developing good static measures of distance, and then to use those static measures as the basis for the development of dynamic measures of distance.
There is a certain arbitrariness in the way distance measures are defined, both classically and quantum mechanically, and the community of people studying quantum computation and quantum information has found it convenient to use a variety of distance measures over the years. Two of those measures, the trace distance and the fidelity, have particularly wide currency today, and we discuss both in detail in this chapter. For the most part their properties are quite similar; however, for certain applications one may be easier to deal with than the other. For this reason, and because both are widely used within the quantum computation and quantum information community, we discuss both measures.
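For pure states the two measures are simply related, which gives a small concrete example of a static measure (a pure-Python sketch of our own; the function names are not standard, and the relation D = √(1 − F²) used here holds only for pure states, not general density matrices):

```python
from math import sqrt, isclose

def fidelity_pure(psi, phi):
    """Fidelity of two pure states given as amplitude lists: F = |<psi|phi>|."""
    inner = sum(complex(a).conjugate() * complex(b) for a, b in zip(psi, phi))
    return abs(inner)

def trace_distance_pure(psi, phi):
    """For pure states the trace distance satisfies D = sqrt(1 - F^2)."""
    f = fidelity_pure(psi, phi)
    return sqrt(max(0.0, 1.0 - f * f))

ket0 = [1, 0]
ket_plus = [1 / sqrt(2), 1 / sqrt(2)]

f = fidelity_pure(ket0, ket_plus)        # 1/sqrt(2), about 0.707
d = trace_distance_pure(ket0, ket_plus)
assert isclose(d * d + f * f, 1.0)       # D^2 + F^2 = 1 for pure states
```

Identical states give F = 1 and D = 0; orthogonal states give F = 0 and D = 1, the two extremes of closeness.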
Science offers the boldest metaphysics of the age. It is a thoroughly human construct, driven by the faith that if we dream, press to discover, explain, and dream again, thereby plunging repeatedly into new terrain, the world will somehow come clearer and we will grasp the true strangeness of the universe. And the strangeness will all prove to be connected, and make sense.
– Edward O. Wilson
Information is physical.
– Rolf Landauer
What are the fundamental concepts of quantum computation and quantum information? How did these concepts develop? To what uses may they be put? How will they be presented in this book? The purpose of this introductory chapter is to answer these questions by developing in broad brushstrokes a picture of the field of quantum computation and quantum information. The intent is to communicate a basic understanding of the central concepts of the field, to give perspective on how they have been developed, and to help you decide how to approach the rest of the book.
Our story begins in Section 1.1 with an account of the historical context in which quantum computation and quantum information has developed. Each remaining section in the chapter gives a brief introduction to one or more fundamental concepts from the field: quantum bits (Section 1.2), quantum computers, quantum gates and quantum circuits (Section 1.3), quantum algorithms (Section 1.4), experimental quantum information processing (Section 1.5), and quantum information and communication (Section 1.6).
Quantum mechanics has the curious distinction of being simultaneously the most successful and the most mysterious of our scientific theories. It was developed in fits and starts over a remarkable period from 1900 to the 1920s, maturing into its current form in the late 1920s. In the decades following the 1920s, physicists had great success applying quantum mechanics to understand the fundamental particles and forces of nature, culminating in the development of the standard model of particle physics. Over the same period, physicists had equally great success in applying quantum mechanics to understand an astonishing range of phenomena in our world, from polymers to semiconductors, from superfluids to superconductors. But, while these developments profoundly advanced our understanding of the natural world, they did only a little to improve our understanding of quantum mechanics.
This began to change in the 1970s and 1980s, when a few pioneers were inspired to ask whether some of the fundamental questions of computer science and information theory could be applied to the study of quantum systems. Instead of looking at quantum systems purely as phenomena to be explained as they are found in nature, they looked at them as systems that can be designed. This seems a small change in perspective, but the implications are profound. No longer is the quantum world taken merely as presented, but instead it can be created.
This book provides an introduction to the main ideas and techniques of the field of quantum computation and quantum information. The rapid rate of progress in this field and its cross-disciplinary nature have made it difficult for newcomers to obtain a broad overview of the most important techniques and results of the field.
Our purpose in this book is therefore twofold. First, we introduce the background material in computer science, mathematics and physics necessary to understand quantum computation and quantum information. This is done at a level comprehensible to readers with a background at least the equal of a beginning graduate student in one or more of these three disciplines; the most important requirements are a certain level of mathematical maturity, and the desire to learn about quantum computation and quantum information. The second purpose of the book is to develop in detail the central results of quantum computation and quantum information. With thorough study the reader should develop a working understanding of the fundamental tools and results of this exciting field, either as part of their general education, or as a prelude to independent research in quantum computation and quantum information.
Structure of the book
The basic structure of the book is depicted in Figure 1. The book is divided into three parts. The general strategy is to proceed from the concrete to the more abstract whenever possible.
In Chapter 4 we showed that an arbitrary unitary operation U may be implemented on a quantum computer using a circuit consisting of single qubit and controlled-not gates. Such universality results are important because they ensure the equivalence of apparently different models of quantum computation. For example, the universality results ensure that a quantum computer programmer may design quantum circuits containing gates which have four input and output qubits, confident that such gates can be simulated by a constant number of controlled-not and single qubit unitary gates.
An unsatisfactory aspect of the universality of controlled-not and single qubit unitary gates is that the single qubit gates form a continuum, while the methods for fault-tolerant quantum computation described in Chapter 10 work only for a discrete set of gates. Fortunately, we also saw in Chapter 4 that any single qubit gate may be approximated to arbitrary accuracy using a finite set of gates, such as the controlled-not gate, Hadamard gate H, phase gate S, and π/8 gate. We also gave a heuristic argument that approximating a chosen single qubit gate to an accuracy ε requires only Θ(1/ε) gates chosen from the finite set. Furthermore, in Chapter 10 we showed that the controlled-not, Hadamard, phase and π/8 gates may be implemented in a fault-tolerant manner.
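The algebraic relations among these discrete gates are easy to verify directly. The sketch below (a pure-Python illustration of our own; no quantum library is assumed) writes the gates as 2×2 complex matrices and checks that two π/8 gates compose to a phase gate, two phase gates to a Pauli Z, and that the Hadamard gate squares to the identity:

```python
import cmath

def matmul2(a, b):
    """Multiply two 2x2 complex matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close2(a, b, tol=1e-12):
    """Entrywise comparison of two 2x2 matrices."""
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

s = 1 / 2 ** 0.5
H = [[s, s], [s, -s]]                                # Hadamard gate
S = [[1, 0], [0, 1j]]                                # phase gate
T = [[1, 0], [0, cmath.exp(1j * cmath.pi / 4)]]      # pi/8 gate
Z = [[1, 0], [0, -1]]                                # Pauli Z
I = [[1, 0], [0, 1]]

assert close2(matmul2(T, T), S)   # T^2 = S
assert close2(matmul2(S, S), Z)   # S^2 = Z
assert close2(matmul2(H, H), I)   # H^2 = I
```

These identities show why only the π/8 gate adds genuinely new phases: the coarser gates in the set are powers of it.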
Cryptography plays a crucial role in many aspects of today's world, from internet banking and ecommerce to email and web-based business processes. Understanding the principles on which it is based is an important topic that requires a knowledge of both computational complexity and a range of topics in pure mathematics. This book provides that knowledge, combining an informal style with rigorous proofs of the key results to give an accessible introduction. It comes with plenty of examples and exercises (many with hints and solutions), and is based on a highly successful course developed and taught over many years to undergraduate and graduate students in mathematics and computer science.
Coding theory is concerned with successfully transmitting data through a noisy channel and correcting errors in corrupted messages. It is of central importance for many applications in computer science and engineering. This book gives a comprehensive introduction to coding theory whilst only assuming basic linear algebra. It contains a detailed and rigorous introduction to the theory of block codes and moves on to more advanced topics like BCH codes, Goppa codes and Sudan's algorithm for list decoding. The issues of bounds and decoding, essential to the design of good codes, feature prominently. The authors have, for several years, successfully taught a course on coding theory to students at the National University of Singapore. This book is based on their experiences and provides a thoroughly modern introduction to the subject. There are numerous examples and exercises, some of which introduce students to novel or more advanced material.
Cryptography is concerned with the conceptualization, definition and construction of computing systems that address security concerns. The design of cryptographic systems must be based on firm foundations. Foundations of Cryptography presents a rigorous and systematic treatment of foundational issues, defining cryptographic tasks and solving cryptographic problems. The emphasis is on the clarification of fundamental concepts and on demonstrating the feasibility of solving several central cryptographic problems, as opposed to describing ad-hoc approaches. This second volume contains a thorough treatment of three basic applications: Encryption, Signatures, and General Cryptographic Protocols. It builds on the previous volume, which provided a treatment of one-way functions, pseudorandomness, and zero-knowledge proofs. It is suitable for use in a graduate course on cryptography and as a reference book for experts. The author assumes basic familiarity with the design and analysis of algorithms; some knowledge of complexity theory and probability is also useful.
The problem of evaluating integrals is well known to every student who has had a year of calculus. It was an especially important subject in 19th century analysis and it has now been revived with the appearance of symbolic languages. In this book, the authors use the problem of exact evaluation of definite integrals as a starting point for exploring many areas of mathematics. The questions discussed in this book, first published in 2004, are as old as calculus itself. In presenting the combination of methods required for the evaluation of most integrals, the authors take the most interesting, rather than the shortest, path to the results. Along the way, they illuminate connections with many subjects, including analysis, number theory, algebra and combinatorics. This will be a guided tour of exciting discovery for undergraduates and their teachers in mathematics, computer science, physics, and engineering.
Iterative processing is an important technique with numerous applications. Exploiting the power of factor graphs, this detailed survey provides a general framework for systematically developing iterative algorithms for digital receivers, and highlights connections between important algorithms. Starting with basic concepts in digital communications, progressively more complex ideas are presented and integrated, resulting in the development of cutting-edge algorithms for iterative receivers. Real-world applications are covered in detail, including decoding for turbo and LDPC codes, and detection for multi-antenna and multi-user systems. This accessible framework will allow the reader to apply factor graphs to practical problems, leading to the design of new algorithms in applications beyond digital receivers. With many examples and algorithms in pseudo-code, this book is an invaluable resource for graduate students and researchers in electrical engineering and computer science, and for practitioners in the communications industry. Additional resources for this title are available online at www.cambridge.org/9780521873154.
Compression for Multimedia grew primarily out of class notes for the course on techniques for compressing data, speech, music, pictures, and video that I have taught for more than ten years at the University of Aerospace Instrumentation, St Petersburg.
During spring 2005 I worked at Lund University as the Lise Meitner Visiting Professor. I have used part of this time to thoroughly revise and substantially extend my previous notes, resulting in the present version.
I would also like to mention that this task could not have been fulfilled without support. Above all, I am indebted to my colleague and husband Boris Kudryashov. Without our collaboration I would not have reached my view of how various compression techniques could be developed and should be taught. Boris' help in solving many TEX problems was invaluable. Special thanks go to Grigory Tenengolts, who supported our research and development of practical methods for multimedia compression. Finally, I am grateful to Rolf Johannesson, who nominated me as a Lise Meitner Visiting Professor, and, needless to say, to the Engineering faculty of Lund University, who made his recommendation come true! Rolf also suggested that I give an undergraduate course on compression for multimedia at Lund University, develop these notes, and eventually publish them as a book. Thanks!
Rate distortion theory is the part of information theory that studies data compression with a fidelity criterion. In this chapter we consider the notion of the rate-distortion function, which is a theoretical limit on quantizer performance. The Blahut algorithm for computing the rate-distortion function numerically is presented. In order to compare the performance of different quantizers, some results of high-resolution quantization theory are discussed. Finally, quantization procedures for a source with the generalized Gaussian distribution are compared.
Rate-distortion function
Each quantization procedure is characterized by its average distortion D and its quantization rate R. The goal of compression system design is to optimize the rate-distortion tradeoff. In order to compare different quantizers, the rate-distortion function R(D) (Cover and Thomas 1971) is introduced. Our goal is to find the best quantization procedure for a given source. We say that, for a given source at a given distortion D = D0, a quantization procedure with rate-distortion function R1(D) is better than another with rate-distortion function R2(D) if R1(D0) ≤ R2(D0). Unfortunately, it is often difficult to identify the best quantization procedure: the best quantizer can have very high computational complexity, or it may even be unknown. On the other hand, the theoretical lower limit on the rate at a given distortion can be found without finding the best quantization procedure; this limit is provided by the information rate-distortion function (Cover and Thomas 1971).
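The Blahut algorithm mentioned in this chapter can be sketched in a few lines (a pure-Python illustration of our own; the function name, the number of iterations, and the binary Hamming-distortion example are our choices). For a fixed slope parameter s < 0 the iteration alternates between updating the test channel q(y|x) and the output distribution r(y), then reads off one (D, R) point of the rate-distortion curve:

```python
from math import exp, log, log2

def blahut_point(p, d, s, iters=200):
    """One (D, R) point of the rate-distortion curve via the Blahut algorithm.

    p: source distribution p(x); d: distortion matrix d[x][y]; s < 0: slope parameter.
    """
    ny = len(d[0])
    r = [1.0 / ny] * ny                   # output distribution r(y), uniform start
    for _ in range(iters):
        q = []                            # test channel q(y|x) ~ r(y)*exp(s*d(x,y))
        for x in range(len(p)):
            w = [r[y] * exp(s * d[x][y]) for y in range(ny)]
            z = sum(w)
            q.append([wy / z for wy in w])
        r = [sum(p[x] * q[x][y] for x in range(len(p))) for y in range(ny)]
    D = sum(p[x] * q[x][y] * d[x][y] for x in range(len(p)) for y in range(ny))
    R = sum(p[x] * q[x][y] * log2(q[x][y] / r[y])
            for x in range(len(p)) for y in range(ny) if q[x][y] > 0)
    return D, R

# Binary symmetric source with Hamming distortion, where R(D) = 1 - H(D) is known.
p = [0.5, 0.5]
hamming = [[0, 1], [1, 0]]
D, R = blahut_point(p, hamming, s=log(1 / 9))
# At this slope D is 0.1 and R is about 0.531 bits, matching 1 - H(0.1).
```

Sweeping s over negative values traces out the whole R(D) curve, one point per slope.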
Multimedia data can be considered as data observed at the output of a source with memory. We often say that speech and images have considerable redundancy, meaning statistical correlation or dependence between the samples of such sources; this dependence is referred to as memory in the information theory literature. Scalar quantization does not exploit this redundancy or memory. As was shown in Chapter 3, scalar quantization of sources with memory yields a rate-distortion function which is rather far from the achievable rate-distortion function H(D) for a given source. Vector quantization can attain better rate-distortion performance, but usually at the cost of significantly increased computational complexity. Another approach, which leads to a better rate-distortion function while preserving rather low computational complexity, combines linear processing with scalar quantization: first we remove redundancy from the data, and then apply scalar quantization to the output of the resulting memoryless source. The outputs of this memoryless source can instead be vector quantized, with lower average distortion but higher computational complexity. The two most important approaches of this variety are predictive coding and transform coding (Jayant and Noll 1984). The first approach is mainly used for speech compression, and the second is applied to image, audio, and video coding. In this chapter, we consider predictive coding systems, which use time-domain operations to remove redundancy and thereby reduce the bit-rate for given quantization error levels.
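To make the predictive-coding idea concrete, here is a minimal first-order DPCM sketch (a pure-Python illustration of our own; the predictor coefficient and step size are arbitrary choices, not values from the text). The encoder quantizes the prediction residual, and because the predictor runs on reconstructed samples rather than originals, quantization errors do not accumulate along the signal:

```python
import math

def dpcm_encode(samples, a=0.95, step=0.5):
    """First-order closed-loop DPCM: quantize residual e = x - a*x_rec_prev."""
    indices = []
    x_rec = 0.0                          # decoder-side reconstruction, tracked by encoder
    for x in samples:
        pred = a * x_rec
        idx = round((x - pred) / step)   # uniform quantizer index for the residual
        indices.append(idx)
        x_rec = pred + idx * step        # predictor uses the reconstructed value
    return indices

def dpcm_decode(indices, a=0.95, step=0.5):
    out = []
    x_rec = 0.0
    for idx in indices:
        x_rec = a * x_rec + idx * step
        out.append(x_rec)
    return out

signal = [math.sin(0.1 * n) for n in range(100)]
rec = dpcm_decode(dpcm_encode(signal))
# Closed-loop prediction bounds each sample's error by step/2.
assert max(abs(x - y) for x, y in zip(signal, rec)) <= 0.25 + 1e-12
```

Only the (entropy-coded) residual indices need to be transmitted; for correlated samples the residuals are small, which is where the rate saving comes from.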