This chapter examines quantum decoherence, a process by which quantum information is lost due to environmental interactions. Various noise channels, such as bit-flip, phase-flip, and depolarizing channels, are discussed to illustrate common errors in qubit states. The Kraus representation and Lindblad equation offer frameworks for modeling these interactions. Metrics such as T1 (relaxation time) and T2 (decoherence time) are introduced to measure qubit stability. Understanding decoherence mechanisms is critical for developing strategies to preserve quantum information, laying the groundwork for quantum error correction techniques and highlighting the challenges in creating reliable quantum systems.
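As an illustration of the Kraus representation mentioned above, the following sketch applies a bit-flip channel to a single-qubit density matrix using NumPy; the flip probability p = 0.1 and the input state are arbitrary choices for this example, not taken from the chapter.

```python
import numpy as np

def bit_flip_channel(rho, p):
    """Apply a bit-flip channel with flip probability p to density matrix rho.

    Kraus operators: K0 = sqrt(1-p) * I, K1 = sqrt(p) * X, so that
    rho -> K0 rho K0^dag + K1 rho K1^dag (trace-preserving).
    """
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    K0 = np.sqrt(1 - p) * I
    K1 = np.sqrt(p) * X
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Send |0><0| through the channel with p = 0.1
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_out = bit_flip_channel(rho0, 0.1)
# Diagonal becomes (0.9, 0.1): the qubit is flipped with probability 0.1
```

The same pattern extends to the phase-flip channel (replace X with Z) and the depolarizing channel (a mixture of all three Paulis).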
This chapter covers quantum error correction, essential for preserving quantum information in the presence of noise. It introduces the bit-flip and phase-flip codes as foundational error-correction methods, building toward Shor’s code, which corrects general single-qubit errors. Logical qubits are formed by encoding physical qubits to maintain stability. Stabilizer codes are presented as a systematic framework for error correction, enabling fault-tolerant quantum computing. These principles are crucial for creating scalable quantum systems that can perform reliable computations, even in noisy environments, addressing a central challenge in quantum computing’s practical implementation.
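To convey the intuition behind the bit-flip code, here is a minimal classical simulation of the three-qubit repetition idea with majority-vote decoding; the noise rate and trial count are illustrative, and the actual quantum code recovers errors via syndrome measurements rather than direct readout.

```python
import random

def encode(bit):
    # Three-qubit repetition: |0> -> |000>, |1> -> |111> (classical shadow of the code)
    return [bit, bit, bit]

def noisy(codeword, p):
    # Independent bit-flip noise on each physical qubit
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    # Majority vote recovers the logical bit if at most one flip occurred
    return int(sum(codeword) >= 2)

random.seed(0)
trials, p = 10000, 0.1
errors = sum(decode(noisy(encode(0), p)) != 0 for _ in range(trials))
# Logical error rate ~ 3p^2 - 2p^3 = 0.028, well below the physical rate p = 0.1
```

The suppression from p to roughly 3p² is the basic payoff of encoding; Shor's code layers the same idea to handle phase flips as well.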
This chapter explores classical computation fundamentals, starting with Turing machines as a foundation for defining computability. The universal Turing machine is introduced, emphasizing the theoretical basis for all computable functions. Computational complexity is discussed, differentiating between tractable and intractable problems and explaining complexity classes as a framework for problem-solving. The chapter also covers the circuit model, providing a bridge between theoretical constructs and modern computer architecture. Finally, the concept of reversible computation is introduced, which has implications for energy-efficient processing. Through these topics, the chapter delineates classical computation’s limitations, setting up the motivation to transition into quantum approaches in subsequent chapters.
In recent years, speech recognition devices have become central to our everyday lives. Systems such as Siri, Alexa, speech-to-text, and automated telephone services are built by people applying expertise in sound structure and natural language processing to create computer programmes that can recognise and understand speech. This exciting advance has led to rapid growth in speech technology courses being added to linguistics programmes; however, there has so far been a lack of material serving the needs of students with limited or no background in computer science or mathematics. This textbook addresses that need by providing an accessible introduction to the fundamentals of computer speech synthesis and automatic speech recognition technology, covering both neural and non-neural approaches. It explains the basic concepts in non-technical language and provides step-by-step explanations of each formula, practical activities, and ready-made code for students to use, which is also available on an accompanying website.
The algorithm based on gradient descent in the previous chapter is simple and computationally efficient, at least provided the projection can be computed. There are two limitations, however.
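A minimal sketch of the projected gradient descent scheme alluded to above, assuming the feasible set is a Euclidean ball (one of the cases where the projection has a closed form); the objective, step size, and iteration count are illustrative choices, not the chapter's.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball of the given radius
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def projected_gd(grad, x0, eta, steps, radius=1.0):
    # Projected gradient descent: step along -grad, then project back onto the set
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project_ball(x - eta * grad(x), radius)
    return x

# Minimise f(x) = ||x - c||^2 over the unit ball, with c outside the ball
c = np.array([2.0, 0.0])
x_star = projected_gd(lambda x: 2 * (x - c), np.zeros(2), eta=0.1, steps=200)
# The constrained minimiser is the projection of c onto the ball: (1, 0)
```

For sets without a closed-form projection, each iteration requires solving a non-trivial optimisation problem, which is one source of the computational caveat above.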
The purpose of this chapter is to introduce the necessary tools from optimisation, convex geometry and convex analysis. You can safely skip this chapter, referring back as needed. The main concepts introduced are as follows:
We already saw an application of exponential weights to linear and quadratic bandits in Chapter 7. The same abstract algorithm can also be used for convex bandits but the situation is more complicated. Throughout this chapter we assume the losses are bounded and there is no noise:
Function classes like F_b are non-parametric. In this chapter we shift gears by studying two important parametric classes: F_{b,lin} and F_{b,quad}. The main purpose of this chapter is to use the machinery designed for linear bandits to prove an upper bound on the minimax regret for quadratic bandits. On the positive side, the approach is both elementary and instructive; on the negative side, the resulting algorithm is not computationally efficient. Before the algorithms and regret analysis we need three tools: covering numbers, optimal experimental design and the exponential weights algorithm.
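The exponential weights algorithm mentioned above can be sketched for a finite action set, such as one obtained from a covering argument; the learning rate and loss sequence below are illustrative.

```python
import numpy as np

def exp_weights(losses, eta):
    """Exponential weights over a finite set of K actions.

    losses: (T, K) array of per-round losses in [0, 1].
    Returns the (T, K) array of probability vectors played.
    """
    T, K = losses.shape
    w = np.zeros(K)               # log-weights
    dists = []
    for t in range(T):
        p = np.exp(w - w.max())   # subtract max for numerical stability
        p /= p.sum()
        dists.append(p)
        w -= eta * losses[t]      # downweight actions with high loss
    return np.array(dists)

# Two actions; action 0 always suffers loss 0, action 1 always loss 1
losses = np.tile([0.0, 1.0], (50, 1))
dists = exp_weights(losses, eta=0.5)
# The played distribution concentrates on the better action over time
```

In the bandit setting only the played action's loss is observed, so the update uses an importance-weighted loss estimate instead of the full loss vector.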
This chapter introduces quantum computation by comparing classical and quantum computers. Core concepts including qubits, superposition, and entanglement are introduced, setting the stage for deeper exploration. Various quantum computing models are summarized, with a focus on the circuit and topological models. The chapter explains why quantum computing is necessary, especially for tasks beyond classical computing’s limits. It discusses existing quantum platforms and provides an overview of their capabilities and limitations. The chapter also offers a brief historical perspective, touches on computational energy efficiency, and forecasts a quantum future where quantum and classical computing work in tandem. This groundwork provides essential insights into quantum computation’s potential and upcoming chapters’ explorations of algorithmic and theoretical principles.
Like the bisection method (Chapter 4), cutting plane methods are most naturally suited to the stochastic setting. For the remainder of the chapter we assume the setting is stochastic and the loss function is bounded:
Over the past few decades, graph theory has developed into one of the central areas of modern mathematics, with close (and growing) connections to areas of pure mathematics such as number theory, probability theory, algebra and geometry, as well as to applied areas such as the theory of networks, machine learning, statistical physics, and biology. It is a young and vibrant area, with several major breakthroughs having occurred in just the past few years. This book offers the reader a gentle introduction to the fundamental concepts and techniques of graph theory, covering classical topics such as matchings, colourings and connectivity, alongside the modern and vibrant areas of extremal graph theory, Ramsey theory, and random graphs. The focus throughout is on beautiful questions, ideas and proofs, and on illustrating simple but powerful techniques, such as the probabilistic method, that should be part of every young mathematician's toolkit.
Learn to program more effectively, faster, with better results… and enjoy both the learning experience and the benefits it ultimately brings. While this undergraduate-level textbook is motivated by formal methods, encouraging habits that lead to correct and concise computer programs, its informal presentation sidesteps any rigid reliance on the formal logic that programmers are sometimes led to believe is required. Instead, a straightforward and intuitive use of simple 'What's true here?' comments encourages precision of thought without prescription of notation. Drawing on the author's decades of experience in teaching and industry, the text's careful presentation concentrates on key principles of structuring and reasoning about programs, applying them first to small, understandable algorithms. Students can then concentrate on turning those reliably into their corresponding – and correct – program source codes. The text includes over 200 exercises, for many of which full solutions are provided. A set of all solutions is available for instructors' use.
Point clouds derived from UAV photogrammetry are a cost-effective alternative to LiDAR for infrastructure inspections, but they often include both structural and non-structural elements that complicate analysis. Traditional denoising filters remove outliers indiscriminately and frequently erode edges, making it difficult to preserve the curved tunnel lining while distinguishing bolts, access gates, or pipelines. In contrast, segmentation-based approaches leverage geometric context to explicitly separate lining surfaces from ancillary components, thereby enabling more accurate deformation analysis and structural assessment. To that end, this paper presents a novel approach for denoising image point clouds that uses a synthetic training dataset to address the scarcity of labeled public data. Unlike other denoising approaches that rely on projections or assume points lie on a predefined surface shape, this segmentation-based denoising method retains only meaningful points in their original locations, allowing for more accurate analysis of deformation. Applied to a road tunnel image point cloud and a subway tunnel terrestrial laser scanning point cloud, the proposed method demonstrates its potential to enhance point cloud quality in tunnels with diverse geometries and data sources, even when data are limited. The method achieves an 80% mean intersection over union, evaluated against manual annotations, for both the road tunnel and the subway tunnel, enabling structural deformation analysis at the millimeter level.