In Chapter 9, we presented the quantum rule for combining subsystems through the tensor product. In this chapter, we will discuss a key elaboration of this rule that applies to composite systems with a specific symmetry, namely, invariance under the exchange of identical particles.
Spin was introduced as part of the effort to understand the structure of atoms prior to the development of mature quantum theory. Variations of Bohr’s model described atoms in terms of three quantum numbers, roughly similar to those that appear when solving the Schrödinger equation in central potentials. In this context, Pauli proposed that, in each atom, there exists at most one electron for each triplet of quantum numbers. This proposal is Pauli’s famous “exclusion principle,” which we will analyze in Chapter 15.
The first step towards quantum theory was a response to a problem that could not be addressed by the concepts and methods of classical physics: the radiation from black bodies.
One of the most important concepts of classical mechanics is that of a closed system. A closed system is loosely defined as a system whose components interact only with each other, and it is characterized by phase space volume conservation and energy conservation – see Section 1.2.
We saw in Chapter 2 that Born’s statistical interpretation of the wave function was one of the building blocks of quantum theory. According to Born’s interpretation, the wave function of a particle at a given moment of time defines a probability density with respect to position. This result is generalized to state vectors of a Hilbert space and to general observables through the following procedure.
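For reference, the generalization alluded to here is usually written as follows; this is the standard textbook form of Born’s rule, stated for a normalized state vector and a projector onto the eigenspace of an observable, and the chapter’s own notation may differ:

\mathrm{Prob}(x \le X \le x + dx) = |\psi(x)|^{2}\, dx,
\qquad
\mathrm{Prob}(A = a_n) = \langle \psi | \hat{P}_n | \psi \rangle = \lVert \hat{P}_n |\psi\rangle \rVert^{2},

where \hat{P}_n projects onto the eigenspace of the observable A with eigenvalue a_n and |\psi\rangle is normalized.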
In Chapter 1, we presented the fundamental principles of classical physics, and then we motivated and presented the fundamental principles of quantum physics. The two sets of principles are summarized and compared in Table 10.1.
Through information theory, problems of communication and compression can be precisely modeled, formulated, and analyzed, and this information can be transformed by means of algorithms. Also, learning can be viewed as compression with side information. Aimed at students and researchers, this book addresses data compression and redundancy within existing methods and central topics in theoretical data compression, demonstrating how to use tools from analytic combinatorics to discover and analyze precise behavior of source codes. It shows that to present better learnable or extractable information in its shortest description, one must understand what the information is, and then algorithmically extract it in its most compact form via an efficient compression algorithm. Part I covers fixed-to-variable codes such as Shannon and Huffman codes, variable-to-fixed codes such as Tunstall and Khodak codes, and variable-to-variable Khodak codes for known sources. Part II discusses universal source coding for memoryless, Markov, and renewal sources.
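As a small, self-contained illustration of a fixed-to-variable code of the kind covered in Part I, the following Python sketch builds a Huffman code for a given text; the function name and the string-based codewords are illustrative choices, not taken from the book.

# A minimal sketch of Huffman coding, one of the fixed-to-variable codes
# mentioned above. Interface and codeword representation are illustrative.
import heapq
from collections import Counter

def huffman_code(text):
    """Return a dict mapping each symbol of `text` to its Huffman codeword."""
    freq = Counter(text)
    # Each heap entry: (frequency, unique tie-breaker, {symbol: codeword-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol input
        return {sym: "0" for sym in freq}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Example: more frequent symbols receive shorter codewords.
print(huffman_code("abracadabra"))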
This chapter addresses a basic integer encoding problem whose impact on the total memory footprint and speed of the underlying application is too easily underestimated or neglected. The problem consists of squeezing the space (in bits) required to store an increasing sequence of integers, while supporting efficient query operations such as decompressing the sequence from the beginning or from some other position, checking whether an integer occurs in the sequence, or finding the smallest integer larger than the queried one. This problem occurs in several common applications, such as the storage of the posting lists of search engines, of the adjacency lists of trees and graphs, or of the encoding of sequences of offsets (pointers). The integer coders discussed here, analyzed and illustrated with many running examples, are Elias’ γ- and δ-codes, Rice’s code, the PForDelta code, the variable-byte code, (s, c)-dense codes, the interpolative code, and, finally, the very elegant and powerful Elias–Fano code.
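To give a concrete feel for these coders, here is a minimal Python sketch of Elias’ γ-code, the first in the list above; the bit-string representation and the function names are illustrative choices, not the chapter’s own code.

# A minimal sketch of Elias' gamma-code: a positive integer x is written as
# floor(log2 x) zeros followed by the binary representation of x.

def gamma_encode(x: int) -> str:
    """Encode a positive integer x as an Elias gamma bit string."""
    assert x >= 1
    b = bin(x)[2:]                 # binary representation, starts with '1'
    return "0" * (len(b) - 1) + b  # unary length prefix + binary of x

def gamma_decode(bits: str, pos: int = 0):
    """Decode one gamma-coded integer starting at `pos`; return (value, next_pos)."""
    n = 0
    while bits[pos + n] == "0":    # count leading zeros = |binary(x)| - 1
        n += 1
    value = int(bits[pos + n: pos + 2 * n + 1], 2)
    return value, pos + 2 * n + 1

# Example: 9 = 1001 in binary, so gamma(9) = 000 followed by 1001.
code = "".join(gamma_encode(x) for x in [1, 9, 5])
print(code)                        # "1" + "0001001" + "00101"
print(gamma_decode(code, 0))       # (1, 1)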
This chapter addresses a problem related to lists, the basic data structure underlying the design of many algorithms that manage interconnected items. It starts with an easy-to-state but I/O-inefficient solution derived from the optimal one designed for the classic RAM model; it then discusses increasingly sophisticated solutions that are elegant and efficient in the two-level memory model, and are still simple enough to be implemented with a few lines of code. The treatment of this problem will also allow us to highlight a subtle relation between parallel computation and I/O-efficient computation, which can be deployed to derive efficient disk-aware algorithms from efficient parallel algorithms.
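The chapter’s precise problem statement is not reproduced here, but the flavour of the RAM-optimal yet I/O-inefficient starting point can be sketched as plain pointer chasing over a linked list stored in an array of successors; the representation and names below are illustrative assumptions, not the chapter’s own code.

# A sketch of the straightforward RAM-model approach: follow successor
# pointers one by one. Each step is O(1) in the RAM model, but in the
# two-level memory model it may cost one I/O per item when successors lie
# on different disk pages.

def traverse(succ, head):
    """Visit the items of a linked list given as an array of successors.

    succ[i] is the index of the item following item i, or -1 at the tail.
    Returns the items in list order.
    """
    order = []
    i = head
    while i != -1:
        order.append(i)   # each access may touch a different disk page
        i = succ[i]
    return order

# Example: list 3 -> 0 -> 2 -> 1
print(traverse([2, -1, 1, 0], 3))   # [3, 0, 2, 1]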
This chapter deals with a classic educational problem, called the subarray sum. The appeal of this problem is that it has a simple formulation, admits several useful variations and applications, and allows a sequence of algorithmic solutions of increasing sophistication and elegance, each bringing a significant reduction in time and I/O complexity. The ultimate result is a linear-time and linear-I/O algorithm, which will allow the reader to enter into the “game” of time and I/O complexity evaluations. The chapter concludes with a discussion of some interesting variations of this problem which arise from computational biology applications and admit no immediate algorithmic solutions, thus stressing the fact that “five minutes thinking” is not enough for designing efficient algorithms.
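Assuming the problem meant here is the classic maximum subarray sum, its linear-time solution can be sketched as a single left-to-right scan in the spirit of Kadane’s algorithm, which also makes it I/O-optimal in the two-level memory model; this is a generic illustration, not the chapter’s own pseudocode.

# A linear-time, single-scan solution for the classic maximum subarray sum
# problem (Kadane's algorithm); since it reads the input once, left to
# right, it also performs a linear number of I/Os.

def max_subarray_sum(a):
    """Return the maximum sum over all non-empty contiguous subarrays of a."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # best sum of a subarray ending at x
        best = max(best, cur)
    return best

print(max_subarray_sum([2, -5, 3, 4, -1, 2, -6, 1]))   # 8, from [3, 4, -1, 2]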
This chapter introduces the concepts of algorithm, model of computation, and computational resources, starting from the classic ones (time and space) and then moving to I/O complexity by introducing the two-level memory model, which constitutes a simple, yet very effective, approximation of modern hierarchical memories. Examples are given in order to motivate the importance of counting I/Os when estimating the real performance of algorithms on modern computers.
This chapter describes another family of compressors that do not derive statistics about the text to be compressed; rather, they derive a dictionary of strings and substitute their occurrences in the input text via proper references to that dictionary. The choice of the dictionary is of course crucial in determining how well the file is compressed, and it is sometimes ineffective to transmit the full dictionary along with each compressed file. Starting in 1977, Abraham Lempel and Jacob Ziv introduced a family of compressors that successfully addressed these problems by designing two algorithms, named LZ77 and LZ78, that process the input text to incrementally construct a proper dictionary that is, thus, not transmitted. The Lempel–Ziv compressors are very popular because of their gzip instantiation, and constitute the base of more sophisticated compressors in use today, such as 7zip, LZMA, Brotli, and LZ4. The chapter discusses LZ77, LZ78, and an interesting variant, known as LZW, and at the end presents some theoretical considerations about the effectiveness of those compressors in terms of the empirical entropy of the compressed text.
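As a taste of the dictionary-based approach, the following Python sketch implements the compression step of LZW, the variant mentioned above; initializing the dictionary with the distinct characters of the input (rather than all 256 byte values) is an illustrative simplification.

# A minimal sketch of LZW compression, the LZ78 variant mentioned above.
# The dictionary is built incrementally while scanning the text, so it is
# never transmitted; the decoder can rebuild it symmetrically.

def lzw_compress(text: str):
    """Return the list of dictionary indices emitted by LZW on `text`."""
    dictionary = {c: i for i, c in enumerate(sorted(set(text)))}
    phrase, output = "", []
    for c in text:
        if phrase + c in dictionary:
            phrase += c                                # extend current phrase
        else:
            output.append(dictionary[phrase])          # emit longest match
            dictionary[phrase + c] = len(dictionary)   # add new phrase
            phrase = c
    if phrase:
        output.append(dictionary[phrase])
    return output

print(lzw_compress("abababab"))   # [0, 1, 2, 4, 1] with a=0, b=1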
This is one of the most studied problems in computer science, yet it is only touched upon in the basic algorithm and data structure courses offered at the undergraduate level. This chapter takes a step forward by first introducing universal hashing, which overcomes some of the weaknesses of basic hash functions, and then moves on to describe several advanced approaches to hashing, such as perfect hashing, Cuckoo hashing, minimal ordered perfect hashing, and, finally, Bloom filters. The theoretical analysis and algorithmic descriptions are enriched with figures and pseudocode, plus several running examples that guide the reader to a better understanding of these important and advanced algorithmic concepts and tools.
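To give a taste of the last tool in that list, here is a compact Python sketch of a Bloom filter; the bit-array size, the number of hash functions, and the double-hashing scheme used to derive them are illustrative choices, not the chapter’s construction.

# A minimal Bloom filter sketch: membership queries may return false
# positives but never false negatives. Sizes and the double-hashing scheme
# below are illustrative choices.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)       # one byte per bit, for clarity

    def _positions(self, item: str):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: str):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("cuckoo")
print("cuckoo" in bf, "perfect" in bf)   # True, False (with high probability)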