Synthetic Aperture Radar Interferometry (InSAR) is an active remote sensing method that uses repeated radar scans of the Earth's solid surface to measure relative deformation at centimeter precision over a wide swath. It has revolutionized our understanding of the earthquake cycle, volcanic eruptions, landslides, glacier flow, ice grounding lines, ground fluid injection/withdrawal, underground nuclear tests, and other applications requiring high spatial resolution measurements of ground deformation. This book examines the theory behind and the applications of InSAR for measuring surface deformation. The most recent generation of InSAR satellites has transformed the method from investigating tens to hundreds of SAR images to processing thousands to tens of thousands of images using a wide range of computer facilities. This book is intended for students and researchers in the physical sciences, particularly for those working in geophysics, natural hazards, space geodesy, and remote sensing. This title is also available as Open Access on Cambridge Core.
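As a concrete illustration of the measurement principle, the sketch below converts an unwrapped interferometric phase change into line-of-sight displacement. The C-band wavelength default and the function name are assumptions for illustration, not taken from the book.

```python
import math

def los_displacement(dphi_rad, wavelength_m=0.056):
    """Convert an unwrapped interferometric phase change (radians) into
    line-of-sight displacement (meters): d = wavelength * dphi / (4*pi).
    The default wavelength of ~5.6 cm corresponds to a C-band sensor
    such as Sentinel-1 (an illustrative assumption). Sign conventions
    vary between processing packages."""
    return wavelength_m * dphi_rad / (4 * math.pi)

# One full fringe (2*pi of phase) corresponds to half a wavelength of
# line-of-sight motion, i.e. about 2.8 cm at C-band.
print(los_displacement(2 * math.pi))
```

The factor of 4π (rather than 2π) reflects the two-way travel path of the radar signal in repeat-pass interferometry.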
This study explored mental workload recognition methods for carrier-based aircraft pilots utilising multi-sensor physiological signal fusion and portable devices. A simulated carrier-based aircraft flight experiment was designed, and subjective mental workload scores and electroencephalogram (EEG) and photoplethysmogram (PPG) signals from six pilot cadets were collected using the NASA Task Load Index (NASA-TLX) and portable devices. The subjective scores of the pilots in three flight phases were used to label the data into three mental workload levels. Features were extracted from the physiological signals, and the interrelations between mental workload and physiological indicators were evaluated. Machine learning and deep learning algorithms were used to classify the pilots’ mental workload, and the performance of single-modal and multimodal fusion methods was investigated. The results showed that the multimodal fusion methods outperformed the single-modal methods, achieving higher accuracy, precision, recall and F1 score. Among all the classifiers, the random forest classifier with feature-level fusion obtained the best results, with an accuracy of 97.69%, precision of 98.08%, recall of 96.98% and F1 score of 97.44%. The findings of this study demonstrate the effectiveness and feasibility of the proposed method, offering insights into mental workload management and the enhancement of flight safety for carrier-based aircraft pilots.
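A minimal sketch of the feature-level fusion step described above, using synthetic data and hypothetical EEG/PPG feature names; the study's actual features, data, and hyperparameters are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 120
# Hypothetical per-window features (names illustrative, not from the study):
eeg_features = rng.normal(size=(n, 4))   # e.g. theta/alpha/beta band powers
ppg_features = rng.normal(size=(n, 2))   # e.g. mean heart rate, HRV
labels = rng.integers(0, 3, size=n)      # three mental workload levels

# Feature-level fusion: concatenate the modality feature vectors
# before training a single classifier on the joint representation.
X = np.hstack([eeg_features, ppg_features])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```

By contrast, decision-level fusion would train one classifier per modality and combine their predictions, e.g. by voting.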
Temporal variability and methodological differences in data normalization, among other factors, complicate effective trend analysis of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) wastewater surveillance data and its alignment with coronavirus disease 2019 (COVID-19) clinical outcomes. As there is no consensus approach for these analyses yet, this study explored the use of piecewise linear trend analysis (joinpoint regression) to identify significant trends and trend turning points in SARS-CoV-2 RNA wastewater concentrations (normalized and non-normalized) and corresponding COVID-19 case rates in the greater Las Vegas metropolitan area (Nevada, USA) from mid-2020 to April 2023. The analysis period was stratified into three distinct phases based on temporal changes in testing protocols, vaccination availability, SARS-CoV-2 variant prevalence, and public health interventions. While other statistical methodologies may require fewer parameter specifications, joinpoint regression provided an interpretable framework for characterization and comparison of trends and trend turning points, revealing sewershed-specific variations in trend magnitude and timing that also aligned with known variant-driven waves. Week-level trend agreement corroborated previous findings demonstrating a close relationship between SARS-CoV-2 wastewater surveillance data and COVID-19 outcomes. These findings guide future applications of advanced statistical methodologies and support the continued integration of wastewater-based epidemiology as a complementary approach to traditional COVID-19 surveillance systems.
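Joinpoint regression fits piecewise linear segments and identifies where a trend changes. A much-simplified single-joinpoint version (omitting the significance testing and multi-joinpoint search of the actual methodology) can be sketched as a grid search over candidate breakpoints:

```python
import numpy as np

def best_joinpoint(x, y, min_seg=5):
    """Grid-search a single joinpoint: fit separate least-squares lines
    to each side of every candidate split and return the split index
    minimizing total squared error. A simplified stand-in for joinpoint
    regression, which also tests joinpoint significance and allows
    multiple joinpoints."""
    best_k, best_sse = None, np.inf
    for k in range(min_seg, len(x) - min_seg):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            sse += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Synthetic series with an abrupt trend change at x = 60.
x = np.arange(100, dtype=float)
y = np.where(x < 60, 0.5 * x, 100.0 + 2.0 * (x - 60))
print(best_joinpoint(x, y))  # 60
```

Applied to normalized wastewater concentrations or case rates, the recovered breakpoints correspond to the trend turning points discussed above.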
The 1994 discovery of Shor's quantum algorithm for integer factorization—an important practical problem in the area of cryptography—demonstrated quantum computing's potential for real-world impact. Since then, researchers have worked intensively to expand the list of practical problems that quantum algorithms can solve effectively. This book surveys the fruits of this effort, covering proposed quantum algorithms for concrete problems in many application areas, including quantum chemistry, optimization, finance, and machine learning. For each quantum algorithm considered, the book clearly states the problem being solved and the full computational complexity of the procedure, making sure to account for the contribution from all the underlying primitive ingredients. Separately, the book provides a detailed, independent summary of the most common algorithmic primitives. It has a modular, encyclopedic format to facilitate navigation of the material and to provide a quick reference for designers of quantum algorithms and quantum computing researchers.
Guideline-based tobacco treatment is infrequently offered. Electronic health record-enabled patient-generated health data (PGHD) has the potential to increase patient treatment engagement and satisfaction.
Methods:
We evaluated outcomes of a strategy to enable PGHD in a medical oncology clinic from July 1, 2021 to December 31, 2022. Among 12,777 patients, 82.1% received a tobacco screener, asking about use and interest in treatment, as part of eCheck-in via the patient portal.
Results:
We attained a broad reach (82.1%) and a moderate response rate (30.9%) for this low-burden PGHD strategy. Patients reporting current smoking (n = 240) expressed interest in smoking cessation medication (47.9%) and counseling (35.8%). Most tobacco treatment requests made by patients via PGHD were addressed by their providers (40.6–80.3%). Among patients with active smoking, those who received and answered the screener (n = 309) were more likely to receive tobacco treatment than usual-care patients who did not have the patient portal (n = 323) (OR = 2.72, 95% CI = 1.93–3.82, P < 0.0001), using propensity scores to adjust for the effects of age, sex, race, insurance, and comorbidity. Patients who received but ignored the screener (n = 1024) were also more likely to receive tobacco treatment than usual-care patients, though to a lesser extent (OR = 2.20, 95% CI = 1.68–2.86, P < 0.0001). We mapped observed and potential benefits to the Translational Science Benefits Model (TSBM).
Discussion:
PGHD via the patient portal appears to be a feasible, acceptable, scalable, and cost-effective approach to promote patient-centered care and tobacco treatment in cancer patients. Importantly, the PGHD approach serves as a real-world example of cancer prevention leveraging the TSBM.
This chapter covers quantum algorithmic primitives for loading classical data into a quantum algorithm. These primitives are important in many quantum algorithms, and they are especially essential for algorithms for big-data problems in the area of machine learning. We cover quantum random access memory (QRAM), an operation that allows a quantum algorithm to query a classical database in superposition. We carefully detail caveats and nuances that appear for realizing fast large-scale QRAM and what this means for algorithms that rely upon QRAM. We also cover primitives for preparing arbitrary quantum states given a list of the amplitudes stored in a classical database, and for performing a block-encoding of a matrix, given a list of its entries stored in a classical database.
This chapter covers the multiplicative weights update method, a quantum algorithmic primitive for certain continuous optimization problems. This method is a framework for classical algorithms, but it can be made quantum by incorporating the quantum algorithmic primitive of Gibbs sampling and amplitude amplification. The framework can be applied to solve linear programs and related convex problems, or generalized to handle matrix-valued weights and used to solve semidefinite programs.
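The classical multiplicative weights framework underlying the chapter can be sketched in a few lines for the basic experts setting: each expert's weight shrinks exponentially with its accumulated loss. (In the quantum variant, maintaining and sampling from these weights is replaced by Gibbs sampling with amplitude amplification, which is not shown here.)

```python
import math

def mwu(loss_rounds, eta=0.5):
    """Multiplicative weights update over n experts: after each round,
    multiply each expert's weight by exp(-eta * loss) and renormalize
    at the end. loss_rounds is a list of per-round loss vectors."""
    n = len(loss_rounds[0])
    w = [1.0] * n
    for losses in loss_rounds:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 always incurs zero loss; its weight comes to dominate.
rounds = [[0.0, 1.0, 1.0]] * 10
weights = mwu(rounds)
print(weights)
```

Solving linear or semidefinite programs with this framework amounts to treating constraints as experts and penalizing violated constraints each round; the matrix-valued generalization replaces the weight vector with a Gibbs state.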
This chapter covers quantum algorithmic primitives related to linear algebra. We discuss block-encodings, a versatile and abstract access model that features in many quantum algorithms. We explain how block-encodings can be manipulated, for example by taking products or linear combinations. We discuss the techniques of quantum signal processing, qubitization, and quantum singular value transformation, which unify many quantum algorithms into a common framework.
In the Preface, we motivate the book by discussing the history of quantum computing and the development of the field of quantum algorithms over the past several decades. We argue that the present moment calls for adopting an end-to-end lens in how we study quantum algorithms, and we discuss the contents of the book and how to use it.
This chapter covers the quantum adiabatic algorithm, a quantum algorithmic primitive for preparing the ground state of a Hamiltonian. The quantum adiabatic algorithm is a prominent ingredient in quantum algorithms for end-to-end problems in combinatorial optimization and simulation of physical systems. For example, it can be used to prepare the electronic ground state of a molecule, which is used as an input to quantum phase estimation to estimate the ground state energy.
This chapter covers quantum linear system solvers, which are quantum algorithmic primitives for solving a linear system of equations. The linear system problem is encountered in many real-world situations, and quantum linear system solvers are a prominent ingredient in quantum algorithms in the areas of machine learning and continuous optimization. Quantum linear system solvers do not themselves solve end-to-end problems, because their output is a quantum state, which is one of their major caveats.
This chapter presents an introduction to the theory of quantum fault tolerance and quantum error correction, which provide a collection of techniques to deal with imperfect operations and unavoidable noise afflicting the physical hardware, at the expense of moderately increased resource overheads.
This chapter covers the quantum algorithmic primitive called quantum gradient estimation, where the goal is to output an estimate for the gradient of a multivariate function. This primitive features in other primitives, for example, quantum tomography. It also features in several quantum algorithms for end-to-end problems in continuous optimization, finance, and machine learning, among other areas. The size of the speedup it provides depends on how the algorithm can access the function, and how difficult the gradient is to estimate classically.
This chapter covers quantum algorithms for numerically solving differential equations and the areas of application where such capabilities might be useful, such as computational fluid dynamics, semiconductor chip design, and many engineering workflows. We focus mainly on algorithms for linear differential equations (covering both partial and ordinary linear differential equations), but we also mention the additional nuances that arise for nonlinear differential equations. We discuss important caveats related to both the data input and output aspects of an end-to-end differential equation solver, and we place these quantum methods in the context of existing classical methods currently in use for these problems.
This chapter covers the quantum algorithmic primitive of approximate tensor network contraction. Tensor networks are a powerful classical method for representing complex classical data as a network of individual tensor objects. To evaluate the tensor network, it must be contracted, which can be computationally challenging. A quantum algorithm for approximate tensor network contraction can provide a quantum speedup for contracting tensor networks that satisfy certain conditions.
This chapter provides an overview of how to perform quantum error correction using the surface code, which is the most well-studied quantum error correcting code for practical quantum computation. We provide formulas for the code distance—which determines the resource overhead when using the surface code—as a function of the desired logical error rate and underlying physical error rate. We discuss several decoders for the surface code and the possibility of experiencing the backlog problem if the decoder is too slow.
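A commonly quoted heuristic for the surface code's logical error rate is p_L ≈ A(p/p_th)^((d+1)/2), where p is the physical error rate, p_th the threshold, and d the code distance. The sketch below inverts it to estimate the required distance; A ≈ 0.1 and p_th ≈ 1% are illustrative ballpark constants, not exact values from the text.

```python
def required_distance(p_phys, p_target, p_th=0.01, A=0.1):
    """Smallest odd code distance d such that the heuristic logical
    error rate A * (p_phys / p_th) ** ((d + 1) / 2) falls below
    p_target. A and p_th are ballpark illustrative constants."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

# e.g. a 0.2% physical error rate targeting a 1e-12 logical error rate
print(required_distance(2e-3, 1e-12))  # 31
```

Since a distance-d surface code patch uses on the order of 2d² physical qubits, such estimates translate directly into the resource overheads mentioned above.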
This chapter covers quantum tomography, a quantum algorithmic primitive that enables a quantum algorithm to learn a full classical description of a quantum state. Generally, the goal of a quantum tomography procedure is to obtain this description using as few copies of the state as possible. The optimal number of copies may depend on what kind of measurements are allowed and what error metric is being used, and in most cases, quantum tomography procedures have been developed with provably optimal complexity.