Chapter 2 serves as a primer on quantum mechanics tailored for quantum computing. It reviews essential concepts such as quantum states, operators, superposition, entanglement, and the probabilistic nature of quantum measurements. This chapter focuses on two-level quantum systems (i.e. qubits). Mathematical formulations that are specific to quantum mechanics are introduced, such as Dirac (bra–ket) notation, the Bloch sphere, density matrices, and Kraus operators. This provides the reader with the necessary tools to understand quantum algorithms and the behaviour of quantum systems. The chapter concludes with a review of the quantum harmonic oscillator, a model to describe quantum systems that are complementary to qubits and used in some quantum computer implementations.
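To make the notation concrete, the following is the standard textbook form of a single-qubit state in Dirac notation together with its Bloch-sphere parameterization; the symbols α, β, θ, and φ are generic and not tied to the chapter's specific examples.

```latex
% A general pure qubit state in Dirac (bra-ket) notation; normalization ensures
% the two computational-basis measurement probabilities sum to one.
\[
  |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1 .
\]
% Up to an unobservable global phase, the same state is fixed by two angles,
% which place it on the surface of the Bloch sphere.
\[
  |\psi\rangle = \cos\!\frac{\theta}{2}\,|0\rangle
               + e^{i\varphi}\sin\!\frac{\theta}{2}\,|1\rangle,
  \qquad 0 \le \theta \le \pi,\ 0 \le \varphi < 2\pi .
\]
```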
This chapter explores the origin, key components, and essential concepts of quantum computing. It begins by charting the series of discoveries by various scientists that crystallized into the idea of quantum computing. The text then examines how certain applications have driven the evolution of quantum computing from a theoretical concept to an international endeavour. Additionally, the text clarifies the distinctions between quantum and classical computers, highlighting the DiVincenzo criteria, the five requirements a physical system must satisfy to function as a quantum computer. It also introduces the circuit model as the foundational paradigm for quantum computation. Lastly, the chapter sheds light on the reasons for believing that quantum computers are more powerful than classical ones (touching on quantum computational complexity) and physically realizable (touching on quantum error correction).
The third chapter examines the capabilities of liquid-state NMR systems for quantum computing. It begins by grounding the reader in the basics of spin dynamics and NMR spectroscopy, followed by a discussion of how qubits are encoded in the spin states of atomic nuclei within molecules. The narrative progresses to describe the implementation of single-qubit gates via external magnetic fields, weaving in key concepts such as the rotating-wave approximation, the Rabi cycle, and pulse shaping. The technique for orchestrating two-qubit gates, which leverages the intrinsic couplings between nuclear spins within a molecule, is subsequently detailed. Additionally, the chapter explains how qubit states are detected through the collective nuclear magnetization of the NMR sample and outlines the steps for qubit initialization. Attention then shifts to the types of noise that affect NMR quantum computers, shedding light on decoherence and the critical T1 and T2 times. The chapter wraps up with a synopsis, an evaluation of the strengths and weaknesses of liquid-state NMR for quantum applications, and a note on the role of entanglement in quantum computing.
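As a rough guide to two of the quantities named above, the following are the standard textbook expressions for an on-resonance Rabi cycle and for the T1/T2 decay envelopes; Ω denotes the Rabi frequency set by the drive amplitude, and the notation may differ from the chapter's.

```latex
% On resonance, a driven qubit cycles between |0> and |1> with Rabi frequency Omega.
\[
  P_{|1\rangle}(t) = \sin^{2}\!\left(\tfrac{1}{2}\Omega t\right).
\]
% Phenomenological decoherence envelopes: T1 governs relaxation of the population
% (longitudinal) component toward equilibrium, T2 the decay of coherences.
\[
  \langle\sigma_{z}\rangle(t) - \langle\sigma_{z}\rangle_{\mathrm{eq}} \propto e^{-t/T_{1}},
  \qquad
  \langle\sigma_{x,y}\rangle(t) \propto e^{-t/T_{2}}.
\]
```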
The final chapter details methods for evaluating the performance of quantum computers. It begins by delineating the essential features of quantum benchmarks and organizes them into a three-tiered framework. First, it discusses early-stage benchmarks that provide a detailed analysis of basic operations on a few qubits, emphasizing fidelity tests and tomography. It then progresses to intermediate-stage benchmarks that give a more general appraisal of gate quality and of circuit depth and length. Concluding the benchmarking spectrum, later-stage benchmarks are introduced, aimed at evaluating the overall reliability and efficiency of quantum computers operating with a large number of qubits (e.g. 1000 or more).
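As one concrete example of the fidelity tests mentioned above, the standard definition of the fidelity of a prepared (generally mixed) state ρ against an ideal pure target |ψ⟩ is shown below; the specific benchmarks in the chapter may use related but different figures of merit.

```latex
% State fidelity of a prepared state rho against a pure target |psi>:
% F = 1 exactly when rho = |psi><psi|, and F decreases as the states differ.
\[
  F\bigl(|\psi\rangle,\rho\bigr) = \langle\psi|\,\rho\,|\psi\rangle,
  \qquad 0 \le F \le 1 .
\]
```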
The dramatic increase in computer performance has been extraordinary, but not for all computations: it has key limits and structure. Software architects, developers, and even data scientists need to understand how to exploit the fundamental structure of computer performance to harness it for future applications. Ideal for upper-level undergraduates, Computer Architecture for Scientists covers four key pillars of computer performance and imparts a high-level basis for reasoning with and understanding these concepts: Small is fast – how size scaling drives performance; Implicit parallelism – how a sequential program can be executed faster with parallelism; Dynamic locality – skirting physical limits by arranging data in a smaller space; Parallelism – increasing performance with teams of workers. These principles and models provide approachable high-level insights and quantitative modelling without distracting low-level detail. Finally, the text covers the GPU and machine-learning accelerators that have become increasingly important for mainstream applications.
In Chapter 2 we saw that a computer performs computation by processing instructions. A computer instruction set must include a variety of features to achieve flexible programmability, including varied arithmetic and logic operations, conditional computation, and application-defined data structures. As a result, the execution of each instruction requires a number of steps: instruction fetch and decode, arithmetic or logic computation, read or write memory, and determination of the next instruction. The instruction set definition is a contract between software and hardware, the fundamental software–hardware interface, that enables software to be portable. After portability, the next critical attribute is performance, so computer hardware is designed to execute instructions as fast as possible.
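A minimal sketch of those per-instruction steps is shown below, using a hypothetical three-field instruction format rather than the book's instruction set; it only illustrates the fetch/decode, compute, memory-access, and next-instruction stages named above.

```python
# Toy interpreter illustrating the per-instruction steps: fetch and decode,
# arithmetic computation, optional memory read/write, and determination of the
# next instruction. The (op, a, b) format is hypothetical, not the book's ISA.

def run(program, memory, max_steps=10_000):
    regs = {"r0": 0, "r1": 0, "r2": 0}
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, a, b = program[pc]     # fetch and decode
        next_pc = pc + 1           # default: fall through to the next instruction
        if op == "addi":           # arithmetic: regs[a] += b
            regs[a] += b
        elif op == "load":         # memory read: regs[a] = memory[b]
            regs[a] = memory[b]
        elif op == "store":        # memory write: memory[b] = regs[a]
            memory[b] = regs[a]
        elif op == "bnez":         # conditional branch: if regs[a] != 0, jump to b
            if regs[a] != 0:
                next_pc = b
        pc = next_pc               # determination of the next instruction
    return regs, memory

# Example: loop three times, adding 2 to r1 each pass, then store the result.
prog = [
    ("addi", "r0", 3),   # r0 = 3 (loop counter)
    ("addi", "r1", 2),   # r1 += 2
    ("addi", "r0", -1),  # r0 -= 1
    ("bnez", "r0", 1),   # repeat while r0 != 0
    ("store", "r1", 4),  # memory[4] = r1 (6)
]
print(run(prog, [0, 0, 0, 0, 0]))
```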
Memory is a critical part of computing systems. In the organization of computers and the programming model, memory was first separated logically from the computing (CPU) part, and then later physically. This separation of CPU and memory in a structure known as the von Neumann architecture was covered in Chapter 2 and is illustrated in Figure 5.1.
Sequential abstraction has enabled software to manage the complex demands of constructing computing applications, debugging software and hardware, and program composition. However, with the end of Dennard scaling (see Section 3.3.4), we have been unable to create sequential computers with sufficient speed and capacity to meet the needs of ever-larger computing applications. As a result, computer hardware systems were forced to adopt explicit parallelism, both within a single chip (multicore CPUs) and at datacenter scale (supercomputers and cloud computing). In this chapter, we describe this shift to parallelism. In single-chip CPUs, the shift has produced multicore processors, first with 2 or 4 cores but growing rapidly to 64 cores (2020) and beyond. Understanding multicore chips, the parallel building blocks used in even larger parallel computers, provides an invaluable perspective on how to understand and increase performance.
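The sketch below is a small, hedged illustration of that explicit parallelism using only Python's standard library: independent, CPU-bound work items are spread across a pool of worker processes, roughly one per core; the workload and sizes are illustrative, not drawn from the book.

```python
# Explicit multicore parallelism: farm independent work items out to a pool of
# worker processes (about one per core) instead of running them one after another.
from concurrent.futures import ProcessPoolExecutor
import os

def work(n):
    # Stand-in for an independent, CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [200_000] * 8  # eight independent work items (sizes are illustrative)
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(work, tasks))
    print(f"{len(results)} tasks completed on up to {os.cpu_count()} cores")
```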
A computer instruction set defines the correct execution of a program as the instructions processed one after another – that is, sequentially (see Chapter 2). This sequential abstraction enables composition of arithmetic operations (add, xor) and operations on memory (state), and grants extraordinary power to branch instructions, which compose blocks of instructions conditionally. In this chapter, we explore the central importance of the sequential abstraction for managing the complexity of large-scale software and hardware systems. Subsequently, we consider creative techniques that both preserve the illusion of sequence and allow the processor implementation to increase the speed of program progress. These techniques are known as instruction-level parallelism (ILP), and they accelerate program execution by processing a program's instructions in pipelined (overlapped), out-of-order, and even speculative fashion. Understanding ILP provides a perspective on how commercial processors really execute programs – far different from the step-by-step recipe of the sequential abstraction.
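To give a rough sense of why overlapping helps, the back-of-the-envelope model below compares cycle counts for strictly sequential versus ideally pipelined execution; the 5-stage pipeline and instruction count are illustrative assumptions, and real pipelines lose some of this gain to stalls on branches and other hazards.

```python
# Idealized pipelining model: with k equal-length stages, n instructions take
# n*k stage-times when executed strictly one after another, but only k + (n - 1)
# stage-times when their stages are overlapped in a pipeline (no stalls assumed).

def cycles_sequential(n, k):
    return n * k

def cycles_pipelined(n, k):
    return k + (n - 1)

n, k = 1_000, 5  # e.g. 1000 instructions through a 5-stage pipeline (illustrative)
print(cycles_sequential(n, k))   # 5000
print(cycles_pipelined(n, k))    # 1004
print(cycles_sequential(n, k) / cycles_pipelined(n, k))  # ~5x ideal speedup
```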
This book is for the growing community of scientists and even engineers who use computing and need a scientific understanding of computer architecture – those who view computation as an intellectual multiplier, and consequently are interested in capabilities, scaling, and limits, not mechanisms. That is, the scientific principles behind computer architecture, and how to reason about hardware performance for higher-level ends. With the dramatic rise of both data analytics and artificial intelligence, there has been a rapid growth in interest and progress in data science. There has also been a shift in the center of mass of computer science upward and outward, into a wide variety of sciences (physical, biological, and social), as well as nearly every aspect of society.
The end of Dennard scaling forced a shift to explicit parallelism, and the adoption of multicore parallelism as a vehicle for performance scaling (see Chapter 3, specifically Section 3.3.4). Even with multicore, the continued demand for both higher performance and energy efficiency has driven a growing interest in accelerators. In fact, their use has become so widespread that in many applications effective use of accelerators is a requirement. We discuss why accelerators are attractive and when they can deliver large performance benefits. Specifically, we discuss both graphics processing units (GPUs), which aspire to be general parallel accelerators, and other emerging focused opportunities, such as machine-learning accelerators. We close with a broader discussion of where acceleration is most effective, and where it is not. Software architects designing applications will find this perspective on the benefits and challenges of acceleration essential. These criteria will shape the design, evolution, and use of customized accelerator architectures in the future.