This paper proposes a new method for dealing with geometrical layout constraints. Geometrical layout constraints are classified into three classes: dimensional, regional, and interference constraints. Dimensional constraints are handled by an existing methodology. A method is proposed to translate the other two classes of constraints into dimensional constraints, so that all geometrical layout constraints can be dealt with uniformly. The method is twofold. First, it converts regional and interference constraints into a set of simple inequalities. Then each inequality is solved by a geometric gadget, a structured set of dimensional constraints. A prototype system has been developed and applied to several layout design examples.
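To make the conversion step concrete, here is a minimal Python sketch, under my own assumptions, of how one regional constraint (a point confined to a rectangle) reduces to four simple dimensional inequalities; the names Point, Rect, and region_to_inequalities are illustrative and not taken from the paper.

```python
# Minimal sketch: reducing a regional constraint to dimensional inequalities.
# All names (Point, Rect, region_to_inequalities) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Rect:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def region_to_inequalities(p: Point, r: Rect):
    """A 'point inside rectangle' regional constraint becomes four
    simple inequalities on the point's coordinates (dimensions)."""
    return [
        ("x >= xmin", p.x - r.xmin),
        ("x <= xmax", r.xmax - p.x),
        ("y >= ymin", p.y - r.ymin),
        ("y <= ymax", r.ymax - p.y),
    ]

def satisfied(inequalities) -> bool:
    # Each residual must be non-negative for the constraint to hold.
    return all(residual >= 0 for _, residual in inequalities)

if __name__ == "__main__":
    ineqs = region_to_inequalities(Point(2.0, 3.0), Rect(0.0, 0.0, 5.0, 5.0))
    print(satisfied(ineqs))  # True
```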
Few existing Computer Aided Design (CAD) systems provide assistance to designers in developing geometric concepts at the early design stages; instead, they require a level of precision and detail suited to detail design. To support early geometric design, a CAD system should provide utilities for the rapid capture and iterative development of vague geometric models. This paper presents a pilot system being developed based on this vision. The system adopts minimum commitment modelling and incremental refinement as its guiding principles. The representation of geometric configuration is based on a parametric, constraint-based geometric design model, and provides a uniform representation of approximate and precise size and location parameters. A constraint-based mechanism has been developed for processing geometric information. The use of the system in assisting the development of a geometric configuration is also demonstrated. Finally, features and limitations of the system, as well as its relation to related work, are discussed, and on this basis a number of key research directions are established.
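One way to picture a uniform representation of approximate and precise parameters is an interval-valued parameter that narrows as constraints accumulate. The sketch below only illustrates the minimum-commitment idea under that assumption; IntervalParam and its methods are hypothetical, not the pilot system's actual model.

```python
# Illustrative sketch only: an interval-valued parameter supporting
# minimum-commitment modelling, narrowing as constraints are added.
class IntervalParam:
    def __init__(self, lo: float, hi: float):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def refine(self, lo: float, hi: float) -> None:
        """Intersect with a new constraint interval (incremental refinement)."""
        self.lo, self.hi = max(self.lo, lo), min(self.hi, hi)
        if self.lo > self.hi:
            raise ValueError("inconsistent constraints")

    @property
    def precise(self) -> bool:
        # A degenerate interval represents a fully committed (exact) value.
        return self.lo == self.hi

width = IntervalParam(0.0, 100.0)   # vague initial commitment
width.refine(20.0, 60.0)            # a spatial constraint narrows it
width.refine(40.0, 40.0)            # a final decision makes it precise
print(width.precise, width.lo)      # True 40.0
```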
A new approach for the navigation of mobile robots in dynamic environments, using Linear Algebra Theory, Numerical Methods, and a modification of the Force Field Method, is presented in this paper. The controller design is based on the dynamic model of a unicycle-like nonholonomic mobile robot. Previous studies very often ignore the dynamics of mobile robots and suffer from algorithmic singularities. Simulation and experimental results confirm the feasibility and effectiveness of the proposed controller and the advantages of using the dynamic model. With this new strategy, the robot is able to adapt its behavior to the available level of knowledge about its environment and can navigate safely while minimizing the tracking error.
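As a rough illustration of the force-field idea the navigation strategy builds on (a generic sketch, not the authors' specific modification, and with made-up gains), obstacles contribute repulsive vectors and the goal an attractive one:

```python
# Rough illustration of a force-field navigation step; the gains and the
# inverse-square falloff are assumptions, not the paper's modified method.
import numpy as np

def force_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5):
    """Return a steering vector: attraction to the goal plus
    repulsion from each obstacle, decaying with squared distance."""
    force = k_att * (goal - pos)                  # attractive component
    for obs in obstacles:
        d = pos - obs
        dist = np.linalg.norm(d)
        if dist > 1e-9:
            force += k_rep * d / dist**3          # repulsive component
    return force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.0, 2.0])]
print(force_field_step(pos, goal, obstacles))
```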
A robot has been developed to maintain boiler water-cooling tubes. The robot has a double-tracked moving mechanism, an ash-cleaning device, a slag-purging device, a tube-thickness measurement device, a marking device, and a control system, and it can climb up and down the tube wall. A method for the robot to complete many boiler-maintenance tasks in one process cycle is presented, and the mechanism of the robot is described. In particular, a special magnetic block structure is designed to obtain a strong adhering force from permanent magnets. Experiments in the laboratory and in a real thermal power station have verified that the robot can not only climb smoothly on the surface of the tube wall, but also carry heavy payloads for boiler maintenance operations.
This chapter deals with the subject of quantum error correction and the related codes (QECC), which can be applied to noisy quantum channels and quantum memories for the purpose of preserving or protecting information integrity. I first describe the basics of quantum repetition codes, as applicable to bit-flip and phase-flip quantum channels. Then I consider the 9-qubit Shor code, which has the capability of diagnosing and correcting any combination of bit-flip and phase-flip errors, up to one error of each type. Furthermore, it is shown that the Shor code is, in fact, capable of fully restoring qubit integrity under a continuum of bit or phase errors, a property that has no counterpart in the classical world of error-correction codes. But the exploration of QECC does not stop here! We shall discover the elegant Calderbank–Shor–Steane (CSS) codes, which have the capability of correcting any number t of errors, both bit-flip and phase-flip. As an application of the CSS codes, I then describe the 7-qubit Hadamard–Steane code, which can correct up to one error on a single qubit. A corresponding quantum circuit, based on an original generator-matrix example, is presented.
Quantum repetition code
In Chapter 11, we saw that the simplest form of error-correction code (ECC) is the repetition code, based on the principle of majority logic. The background assumption is that in a given message sequence, or bit string, the probability of a bit error is sufficiently small for the majority of bits to be correctly transmitted through the channel.
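As a classical warm-up, here is a minimal Python sketch of the 3-bit repetition code with majority-vote decoding; the channel model and error probability are illustrative choices, not the book's worked example.

```python
# Minimal sketch of the classical 3-bit repetition code with
# majority-logic decoding; the error probability p is illustrative.
import random

def encode(bit: int) -> list[int]:
    return [bit] * 3                      # repeat each bit three times

def noisy_channel(bits, p=0.1):
    # Flip each bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits) -> int:
    return int(sum(bits) >= 2)            # majority vote

random.seed(0)
errors = sum(decode(noisy_channel(encode(0))) != 0 for _ in range(10_000))
# Decoding fails only if two or three copies flip: rate 3p^2 - 2p^3 = 0.028.
print(f"residual error rate ~ {errors / 10_000:.4f}")
```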
Because of the reader's interest in information theory, it is assumed that he or she is, to some extent, familiar with probability theory, its main concepts, theorems, and practical tools. Whether a graduate student or a seasoned professional, it is possible, however, that a good fraction, if not all, of this background knowledge has been somewhat forgotten over time, has become a bit rusty, or, even worse, has been completely obliterated by one's academic or professional specialization!
This is why this book includes a couple of chapters on probability basics. Should such basics be crystal clear in the reader's mind, these two chapters can be skipped at once. They can always be revisited later for backup, should some of the associated concepts and tools present any hurdles in the following chapters. This being stated, some expert readers may yet dare to test their knowledge by considering some of this chapter's (easy) problems, for starters. Finally, any parent or teacher might find the first chapter useful for introducing children and teens to probability.
I have sought to make this review of probability basics as simple, informal, and practical as it could be. Just like the rest of this book, it is definitely not intended to be a math course following the canonical theorem–proof–lemma–example suite. There exist scores of rigorous books on probability theory at all levels, as well as many Internet sites providing elementary tutorials on the subject.
This relatively short but mathematically intense chapter brings us to the core of Shannon's information theory, with the definition of channel capacity and the subsequent, most famous channel coding theorem (CCT), the second most important theorem from Shannon (next to the source coding theorem, described in Chapter 8). The formal proof of the channel coding theorem is a bit tedious and, therefore, does not lend itself to much oversimplification. I have sought, however, to guide the reader through as many steps as are necessary to reach the proof without hurdles. After defining channel capacity, we will consider the notion of typical sequences and typical sets (of such sequences) in codebooks, which will make it possible to tackle the said CCT. We will first proceed through a formal proof, inspired by the original Shannon paper (but consistent with our notation, and with more explanation where warranted), and then through different, more intuitive or less formal approaches.
Channel capacity
In Chapter 12, I showed that, in a noisy channel, the mutual information, H(X;Y) = H(Y) − H(Y|X), represents the measure of the true information content in the output or recipient source Y, given the equivocation H(Y|X), which measures the informationless channel noise. We also showed that mutual information depends on the input probability distribution, p(x).
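For reference, and in the book's notation where H(X;Y) denotes the mutual information, the channel capacity defined in this chapter is the maximum of H(X;Y) over all input distributions p(x):

```latex
% Channel capacity as the maximum mutual information over input
% distributions p(x); H(X;Y) is the book's notation for mutual information.
\[
  C \;=\; \max_{p(x)} \, H(X;Y) \;=\; \max_{p(x)} \bigl[ H(Y) - H(Y|X) \bigr]
\]
```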
This chapter represents our first step into quantum information theory (QIT). The key to making this transition is to become familiar with the concept of the quantum bit, or qubit, which is a probabilistic superposition of the classical 0 and 1 bits. In the quantum world, the classical 0 and 1 bits become the pure states |0〉 and |1〉, respectively. It is as if a coin, which classically must be in either the heads or the tails state, were now allowed to exist in a superposition of both! Then I show that qubits can be physically transformed by the action of unitary matrices, also called operators. Such qubit transformations, resulting from any qubit manipulation, can be described as rotations on a spherical surface, referred to as the Bloch sphere. The Pauli matrices are shown to generate all such unitary transformations. These transformations are reversible, because they are characterized by unitary matrices; this property always makes it possible to trace back the input information carried by qubits. I will then describe different types of elementary quantum computations performed by elementary quantum gates, forming a veritable "zoo" of unitary operators, called I, X, Y, Z, H, CNOT, CCNOT, CROSSOVER or SWAP, controlled-U, and controlled-controlled-U. These gates can be used to form quantum circuits involving any number of qubits, of which several examples and tools for analysis are provided. Finally, the concept of the tensor product, progressively introduced through the above description, is formalized.
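As a concrete illustration (a minimal numpy sketch of my own, not the book's code), here are a few of the gates named above acting on qubit state vectors:

```python
# Minimal numpy sketch: a qubit as a 2-component state vector,
# transformed by unitary gate matrices (Pauli X, Z and Hadamard H).
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

X = np.array([[0, 1], [1, 0]], dtype=complex)                # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)               # phase flip
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

psi = H @ ket0                     # equal superposition (|0> + |1>)/sqrt(2)
print(psi)                         # [0.707..., 0.707...]
print(X @ ket0)                    # |1>: X flips the basis states

# Unitarity (hence reversibility): U^dagger U = I for every gate.
for U in (X, Z, H):
    assert np.allclose(U.conj().T @ U, np.eye(2))

# Two-qubit states and gates arise via the tensor (Kronecker) product.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
print(CNOT @ np.kron(ket1, ket0))  # control = 1 flips target: |11>
```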
This relatively short chapter on channel entropy describes the entropy properties of communication channels, of which I gave a generic description in Chapter 11, concerning error-correction coding. It will also serve to pave the way towards probably the most important of all Shannon's theorems, which concerns channel coding, as described in the more extensive Chapter 13. Here, we shall consider the different basic communication channels, starting with the binary symmetric channel and continuing with nonbinary and asymmetric channel types. In each case, we analyze the channel's entropy characteristics and mutual information, given a discrete source transmitting its symbols (and the information they carry) through the channel. This will lead us to define the symbol error rate (SER), which corresponds to the probability that symbols will be wrongly received, or mistaken for others, upon reception and decoding.
Binary symmetric channel
The concept of the communication channel was introduced in Chapter 11. To recall briefly, a communication channel is a means of transmitting encoded information. Its constituents are an originator source (generating message symbols), an encoder, a transmitter, a physical transmission pipe, a receiver, a decoder, and a recipient source (restoring the message symbols). The two sources (originator and recipient) may be discrete or continuous. The encoding and decoding scheme may include not only symbol-to-codeword conversion and the reverse, but also data compression and error correction, with which we will not be concerned in this chapter. Here, we shall consider binary channels.
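A small numerical sketch (mine, not the book's) of the binary symmetric channel: for crossover probability p and a uniform input, the mutual information reaches the well-known capacity C = 1 − H2(p):

```python
# Minimal sketch: capacity of the binary symmetric channel (BSC)
# with crossover probability p, i.e. C = 1 - H2(p) bits per symbol.
import math

def h2(p: float) -> float:
    """Binary entropy function H2(p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    return 1.0 - h2(p)

for p in (0.0, 0.05, 0.11, 0.5):
    print(f"p = {p:4.2f}  ->  C = {bsc_capacity(p):.4f} bit/symbol")
# p = 0.5 gives C = 0: the channel output is pure noise.
```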
The concept of entropy is central to information theory (IT). The name, of Greek origin (entropia, tropos), means turning point or transformation. It was coined in 1864 by the physicist R. Clausius, who postulated the second law of thermodynamics. Among other implications, this law establishes the impossibility of perpetual motion, and also that the entropy of a thermally isolated system (such as our Universe) can only increase. Because of its universal implications and its conceptual subtlety, the word entropy has always been enshrouded in some mystery, even today, to large and educated audiences.
The subsequent works of L. Boltzmann, which laid the grounds of statistical mechanics, made it possible to further clarify the definition of entropy as a natural measure of disorder. The precursors and founders of the later information theory (L. Szilárd, H. Nyquist, R. Hartley, J. von Neumann, C. Shannon, E. Jaynes, and L. Brillouin) drew many parallels between the measure of information (the uncertainty in communication-source messages) and physical entropy (the disorder or chaos within material systems). Comparing information with disorder is not at all intuitive, because information (as we conceive it) is pretty much the conceptual opposite of disorder! Even more striking is the fact that the respective formulations of entropy successively made in physics and IT happen to match exactly. Legend has it that Shannon chose the word "entropy" on the following advice of his colleague von Neumann: "Call it entropy. Nobody knows what entropy really is, so in a debate you will always have the advantage."
This chapter considers the continuous-channel case represented by the Gaussian channel, namely, a continuous communication channel with additive Gaussian noise. This will lead to a fundamental application of Shannon's coding theorem, referred to as the Shannon–Hartley theorem (SHT), another famous result of information theory, which also credits the earlier 1920 contribution of Ralph Hartley, who derived what has remained known as Hartley's law of communication channels. This theorem relates channel capacity to the signal and noise powers in a most elegant and simple formula. As a recent and little-noticed development in this field, I will describe the nonlinear channel, where the noise is also a function of the transmitted signal power, owing to channel nonlinearities (an exclusive feature of certain physical transmission pipes, such as optical fibers). As we shall see, the modified SHT accounting for nonlinearity represents a major conceptual advance in information theory and its applications to optical communications, although its existence and consequences have, so far, been overlooked in textbooks. This chapter completes our description of classical information theory, as resting on Shannon's works and founding theorems. Upon its completion, we will be equipped to approach the field of quantum information theory, which represents the second part of this series of chapters.
Gaussian channel
Referring to Chapter 6, a continuous communications channel assumes a continuous originator source, X, whose symbol alphabet x_1, …, x_i can be viewed as representing time samples of a continuous real variable x, which is associated with a continuous probability distribution function (PDF), p(x).
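For reference, the Shannon–Hartley theorem discussed above takes the standard form below, relating the capacity C of a bandlimited additive-Gaussian-noise channel to the bandwidth B and the signal-to-noise power ratio S/N:

```latex
% Shannon–Hartley theorem: capacity of a bandlimited channel with
% additive white Gaussian noise, bandwidth B, signal power S, noise power N.
\[
  C \;=\; B \,\log_2\!\left( 1 + \frac{S}{N} \right)
  \quad \text{(bits per second)}
\]
```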
This final chapter concerns cryptography, the principle of securing information against access or tampering by third parties. Classical cryptography refers to the manipulation of classical bits for this purpose, while quantum cryptography can be viewed as doing the same with qubits. I describe these two approaches in the same chapter because, in my view, the field of cryptography should be understood as a whole and appreciated within such a broader framework, as opposed to focusing on the specific applications offered by the quantum approach. I thus begin by introducing the notions of message encryption, message decryption, and code breaking, the last being the action of retrieving the message information content without knowledge of the code's secret algorithm or secret key. I then consider the basic algorithms for achieving encryption and decryption with binary numbers, which leads to the early IBM concept of the Lucifer cryptosystem, the ancestor of the first data encryption standard (DES). The principle of double-key encryption, which alleviates the problem of key exchange, is considered next: an elegant solution, but one that is unsafe against code breaking. Then the revolutionary principles of cryptography without key exchange and public-key cryptography (PKC) are considered, the latter also being known as RSA. The PKC–RSA cryptosystem is based on the extreme difficulty of factorizing large numbers, which is the reason for the description given earlier, in Chapter 20, of Shor's factorization algorithm.
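To give a feel for the PKC–RSA principle, here is a toy Python sketch with tiny textbook primes (useless for real security, and not the book's own example): encryption uses the public pair (e, n), while decryption requires the private exponent d, whose recovery by an attacker amounts to factorizing n.

```python
# Toy RSA sketch with tiny primes; real RSA uses primes hundreds of
# digits long, which is what makes factorizing n infeasible.
p, q = 61, 53                 # two (secret) primes
n = p * q                     # public modulus, 3233
phi = (p - 1) * (q - 1)       # Euler totient, 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi)

def encrypt(m: int) -> int:
    return pow(m, e, n)       # anyone can do this with (e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)       # only the holder of d can undo it

m = 65                        # a message, encoded as a number < n
c = encrypt(m)
print(c, decrypt(c))          # 2790 65: decryption recovers m
```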
The previous chapter introduced the concept of coding optimality, based on variable-length codewords. As we have learnt, an optimal code is one for which the mean codeword length closely approaches, or is equal to, the source entropy. There exist several families of codes that can be called optimal, based on various types of algorithms. This chapter and the following one will provide an overview of this rich subject, which finds many applications in communications, in particular in the domain of data compression. In this chapter, I will introduce Huffman codes and describe how they can be used to perform data compression to the limits predicted by Shannon. I will then introduce the principle of block codes, which also enable data compression.
Huffman codes
As we have learnt earlier, variable-length codes are, in the general case, more efficient than fixed-length ones: the most frequent source symbols are assigned the shortest codewords, and conversely for the less frequent ones. The coding-tree method makes it possible to find some heuristic codeword assignment according to this rule. Despite the lack of further guidance, the result proved effective, considering that we obtained η = 96.23% with a ternary coding of the English-character source (see Fig. 8.3, Table 8.3). But we have no clue as to whether other coding trees with greater coding efficiencies may exist, unless we try out all the possibilities, which is impractical.
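Anticipating the construction described in this chapter, here is a compact Python sketch of binary Huffman coding (the standard greedy merging of the two least probable nodes; the frequencies and variable names are mine, for illustration):

```python
# Compact sketch of binary Huffman coding: repeatedly merge the two
# least probable nodes; codeword length grows for rare symbols.
import heapq
from itertools import count

def huffman(freqs: dict[str, float]) -> dict[str, str]:
    tiebreak = count()  # avoids comparing dicts when frequencies tie
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # least probable node
        f2, _, c2 = heapq.heappop(heap)   # second least probable node
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman({"e": 0.40, "t": 0.25, "a": 0.20, "q": 0.15})
print(codes)  # frequent symbols get the shortest codewords, e.g. e -> '0'
```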
This mathematically intensive chapter takes us through our first steps in the domain of quantum computation (QC) algorithms. The simplest of them is the Deutsch algorithm, which makes it possible to determine whether or not a Boolean function is constant for any input. The key result is that this QC algorithm provides the answer at once, whereas in the classical case it would take two independent calculations. I next describe the generalization of this algorithm to n qubits, referred to as the Deutsch–Jozsa algorithm. Although they have no specific or useful applications in quantum computing, both algorithms represent a most elegant means of introducing the concept of quantum computation parallelism. I then describe two of the most important QC algorithms, which nicely exploit quantum parallelism. The first is the quantum Fourier transform (QFT), for which a detailed analysis of QFT circuits and quantum-gate requirements is also provided. As will be shown in the next chapter, a key application of the QFT concerns the famous Shor's algorithm, which makes it possible to factor numbers into primes in polynomial time. The second algorithm, no less famous than Shor's, is the Grover quantum database search, whose application is the identification of database items with a quadratic gain in speed.
Deutsch algorithm
Our exploration of quantum algorithms shall begin with the solution of a very basic problem: finding whether or not a Boolean function f(x) is a constant.
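As a preview of the answer, here is a minimal numpy simulation of the Deutsch circuit (my own sketch, not the book's derivation): a single oracle query distinguishes a constant f from a balanced one.

```python
# Minimal numpy simulation of the Deutsch algorithm: one oracle query
# decides whether f: {0,1} -> {0,1} is constant or balanced.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)>, as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f) -> str:
    state = np.kron([1, 0], [0, 1])          # prepare |0>|1>
    state = np.kron(H, H) @ state            # Hadamard on both qubits
    state = oracle(f) @ state                # the single oracle call
    state = np.kron(H, np.eye(2)) @ state    # Hadamard on the first qubit
    p0 = abs(state[0])**2 + abs(state[1])**2 # probability first qubit reads 0
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```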