A model is proposed for the one-dimensional spectrum and streamwise Reynolds stress in pipe flow for arbitrarily large Reynolds numbers. Constructed in wavenumber space, the model comprises four principal contributions to the spectrum: streaks, large-scale motions, very-large-scale motions and incoherent turbulence. It accounts for the broad and overlapping spectral content of these contributions from different eddy types. The model reproduces well the broad structure of the premultiplied one-dimensional spectrum of the streamwise velocity, although the bimodal shape that has been observed at certain wall-normal locations, and the $-5/3$ slope of the inertial subrange, are not captured effectively because of the simplifications made within the model. Regardless, the Reynolds stress distribution is well reproduced, even within the near-wall region, including key features of wall-bounded flows such as the Reynolds number dependence of the inner peak, the formation of a logarithmic region, and the formation of an outer peak. These findings suggest that many of these features arise from the overlap of energy content produced by both inner- and outer-scaled eddy structures combined with the viscous-scaled influence of the wall. The model is also compared with canonical turbulent boundary layer and channel flows; despite some apparent differences, we speculate that, with only minor modifications to its coefficients, the model can be adapted to these flows as well.
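As a schematic sketch of this construction only (the bump shapes, centres, and amplitudes below are illustrative placeholders, not the paper's functional forms or coefficients), one can sum a few contributions in wavenumber space and integrate the spectrum to recover a Reynolds stress:

```python
import numpy as np

# Illustrative sketch: the premultiplied spectrum k*E_uu(k) is modelled as a
# sum of contributions from distinct eddy types, each a log-normal bump here.
# All shapes and values below are placeholders, not the paper's coefficients.
def lognormal_bump(k, k_center, width, amplitude):
    """Premultiplied spectral contribution of one eddy type."""
    return amplitude * np.exp(-0.5 * (np.log(k / k_center) / width) ** 2)

k = np.logspace(-3, 2, 2000)  # streamwise wavenumber (arbitrary units)

# Four principal contributions: streaks, LSMs, VLSMs, incoherent turbulence.
contributions = {
    "streaks":    lognormal_bump(k, k_center=10.0, width=0.8, amplitude=2.0),
    "LSM":        lognormal_bump(k, k_center=1.0,  width=0.7, amplitude=1.5),
    "VLSM":       lognormal_bump(k, k_center=0.1,  width=0.9, amplitude=1.0),
    "incoherent": lognormal_bump(k, k_center=5.0,  width=1.5, amplitude=0.5),
}

premultiplied = sum(contributions.values())  # overlapping spectral content

# The streamwise Reynolds stress is the integral of E_uu over k, i.e. the
# integral of the premultiplied spectrum over ln(k).
uu = np.trapz(premultiplied, np.log(k))
print(f"u'u' (arbitrary units): {uu:.3f}")
```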
Next-generation radio astronomy instruments are providing a massive increase in sensitivity and coverage, largely through increasing the number of stations in the array and the frequency span sampled. The two primary problems encountered when processing the resultant avalanche of data are the need for abundant storage and the constraints imposed by I/O, as I/O bandwidths drop significantly on cold storage. An example of this is the data deluge expected from the SKA Telescopes, more than 60 PB per day, all to be stored on the buffer filesystem. While compressing the data is an obvious solution, the impacts on the final data products are hard to predict. In this paper, we chose an error-controlled compressor, MGARD, and applied it to simulated SKA-Mid and real pathfinder visibility data, in noise-free and noise-dominated regimes. As the data have an implicit error level in the system temperature, using an error bound in compression provides a natural metric for compression. MGARD ensures that the errors incurred by compression adhere to the user-prescribed tolerance. To measure the degradation of images reconstructed from the lossy compressed data, we proposed a list of diagnostic measures and, through a series of experiments, explored the trade-off between error bounds and the corresponding compression ratios, as well as the impact on the science quality of the resulting data products. We studied the global and local impacts on the output images for continuum and spectral line examples. We found that relative error bounds of as much as 10%, which provide compression ratios of about 20, have a limited impact on continuum imaging, as the increased noise is less than the image RMS, whereas a 1% error bound (compression ratio of 8) introduces an increase in noise about an order of magnitude less than the image RMS. For extremely sensitive observations and for very precious data, we would recommend a 0.1% error bound, with compression ratios of about 4; the noise impacts are then two orders of magnitude less than the image RMS levels. At these levels, the limits are due to instabilities in the deconvolution methods. We compared the results to the alternative compression tool DYSCO, both in the impacts on the images and in relative flexibility. MGARD provides better compression for similar error bounds and has a host of potentially powerful additional features.
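MGARD's own API is not shown here. As a minimal sketch of the error-bounded idea, the snippet below uses uniform scalar quantization as a stand-in lossy compressor (quantizing with step 2*tol bounds the pointwise error by tol) and applies the round-trip check one would use to verify any error-controlled compressor; defining the relative bound against the data range is an assumption made for illustration.

```python
import numpy as np

# Sketch of error-bounded lossy compression on visibility-like data.
# MGARD is not used here; uniform quantization stands in as the simplest
# compressor with a hard pointwise error guarantee.
rng = np.random.default_rng(0)
data = rng.normal(size=100_000) + 0.05 * np.sin(np.linspace(0, 40, 100_000))

rel_bound = 0.01                      # 1% relative error bound, as in the text
tol = rel_bound * np.ptp(data)        # absolute tolerance from the data range

step = 2.0 * tol                      # rounding to multiples of 2*tol keeps
q = np.round(data / step).astype(np.int32)   # each pointwise error <= tol
decompressed = q * step

# Verify that the error bound actually holds after the round trip.
max_err = np.abs(data - decompressed).max()
assert max_err <= tol
print(f"max abs error {max_err:.3e} <= tolerance {tol:.3e}")

# Crude proxy for the compression ratio: entropy of the quantized symbols
# versus the 64 bits per sample of the raw float64 data.
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
bits_per_sample = -(p * np.log2(p)).sum()
print(f"approx. compression ratio: {64 / bits_per_sample:.1f}x")
```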
The Mental Health Bill 2025 proposes to remove autism and learning disability from the scope of Section 3 of the Mental Health Act 1983 (MHA). The present article represents a professional and carer consensus statement that raises concerns and identifies probable unintended consequences if this proposal becomes law. Our concerns relate to the lack of a clear mandate for such proposals, the conceptual inconsistency when considering other conditions that might give rise to a need for detention, and the inconsistency in applying such changes to Part II of the MHA but not Part III. If the proposed changes become law, we anticipate that detentions would instead occur under the less safeguarded Deprivation of Liberty Safeguards framework, and that unmanaged risks would result in behavioural consequences leading to more autistic people, or those with a learning disability, being sent to prison. Additionally, there is a concern that the proposed definitional breadth of autism and learning disability creates a risk that people with other conditions may unintentionally be excluded from detention. We strongly urge the UK Parliament to amend this portion of the Bill before it becomes law.
Quality improvement programmes (QIPs) are designed to enhance patient outcomes by systematically introducing evidence-based clinical practices. The CONQUEST QIP focuses on improving the identification and management of patients with COPD in primary care. The process of developing CONQUEST, recruiting three integrated healthcare systems (IHSs), preparing their systems for participation, and implementing the QIP is examined to identify and share lessons learned.
Approach and development:
This review is organized into three stages: 1) development, 2) preparation of IHSs for implementation, and 3) implementation. In each stage, key steps are described along with the lessons learned and how they can inform others interested in developing QIPs to improve the care of patients with chronic conditions in primary care.
Stage 1 was establishing and working with steering committees to develop the QIP Quality Standards, define the target patient population, assess current management practices, and create a global operational protocol. Additionally, potential IHSs were assessed for the feasibility of QIP integration into primary care practices. Factors assessed included technological infrastructure, QI experience, and capacity for effective implementation.
Stage 2 was preparation for implementation. Key to this stage was enlisting clinical champions to advocate for the QIP, secure participation in primary care, and establish effective communication channels. Preparation also required obtaining IHS approvals, ensuring Health Insurance Portability and Accountability Act compliance, and devising operational strategies for patient outreach and clinical decision support delivery.
Stage 3 was developing three IHS implementation models. Drawing on local clinicians' insight into the local context, the implementation models were adapted to the resources and capacity of each IHS while ensuring delivery of the essential elements of the programme.
Conclusion:
Developing and launching a QIP across primary care practices requires extensive groundwork, preparation, and committed local champions to assist in building an adaptable environment that encourages open communication and is receptive to feedback.
Artificial intelligence is dramatically reshaping scientific research and is coming to play an essential role in scientific and technological development by enhancing and accelerating discovery across multiple fields. This book, the outcome of a collaborative effort by world-leading experts, dives into the interplay between artificial intelligence and the quantum sciences. After presenting the key concepts and foundations of machine learning, a subfield of artificial intelligence, its applications in quantum chemistry and physics are presented in an accessible way, enabling readers to engage with the emerging literature on machine learning in science. By examining its state-of-the-art applications, readers will discover how machine learning is being applied within their own field and appreciate its broader impact on science and technology. This book is accessible to undergraduates and more advanced readers from physics, chemistry, engineering, and computer science. Online resources include Jupyter notebooks that expand and develop upon key topics introduced in the book.
The theory of kernels offers a rich mathematical framework for the archetypical tasks of classification and regression. Its core insight is the representer theorem, which asserts that an unknown target function underlying a dataset can be represented by a finite sum of evaluations of a single function, the so-called kernel function. Together with the well-known kernel trick, which provides a practical way of incorporating such a kernel function into a machine learning method, a plethora of algorithms can be made more versatile. This chapter first introduces the mathematical foundations required for understanding the distinguished role of the kernel function and its consequences in terms of the representer theorem. Afterwards, we show how selected popular algorithms, including Gaussian processes, can be promoted to their kernel variants. In addition, several ideas on how to construct suitable kernel functions are provided, before demonstrating the power of kernel methods in the context of quantum (chemistry) problems.
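A minimal sketch of the representer theorem in action, using kernel ridge regression with an RBF kernel on synthetic data (an illustrative choice, not the chapter's worked example): the learned function is exactly a finite sum of kernel evaluations at the training points.

```python
import numpy as np

# Kernel ridge regression: per the representer theorem, the learned function
# is f(x) = sum_i alpha_i * k(x_i, x) over the training points.
def rbf_kernel(X, Y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2), evaluated for all pairs."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(1)
X_train = rng.uniform(-3, 3, size=(50, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=50)

lam = 1e-3                                    # ridge regularization strength
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)  # dual coefficients

X_test = np.linspace(-3, 3, 200)[:, None]
y_pred = rbf_kernel(X_test, X_train) @ alpha  # finite sum of kernel evaluations

print(f"train RMSE: {np.sqrt(np.mean((K @ alpha - y_train) ** 2)):.3f}")
```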
In this chapter, we change our viewpoint and focus on how physics can influence machine learning research. In the first part, we review how tools of statistical physics can help to understand key concepts in machine learning such as capacity, generalization, and the dynamics of the learning process. In the second part, we explore another direction and try to understand how quantum mechanics and quantum technologies could be used to solve data-driven tasks. We provide an overview of the field, ranging from quantum machine learning algorithms that can be run on ideal quantum computers to kernel-based and variational approaches that can be run on current noisy intermediate-scale quantum devices.
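As a toy illustration of the kernel-based quantum approaches mentioned (a deliberately minimal example, not the chapter's own), a single-qubit angle-encoding feature map |phi(x)> = RY(x)|0> yields the fidelity kernel k(x, x') = |<phi(x)|phi(x')>|^2 = cos^2((x - x')/2), small enough to simulate classically:

```python
import numpy as np

# Sketch of a quantum (fidelity) kernel, simulated classically.
# Single-qubit angle encoding: |phi(x)> = RY(x)|0> = cos(x/2)|0> + sin(x/2)|1>.
def feature_state(x):
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x, y):
    """k(x, y) = |<phi(x)|phi(y)>|^2; analytically cos^2((x - y)/2)."""
    return abs(feature_state(x) @ feature_state(y)) ** 2

# The simulated overlap matches the closed form.
x, y = 0.7, 2.1
assert np.isclose(quantum_kernel(x, y), np.cos((x - y) / 2) ** 2)

# The resulting Gram matrix can be fed to any classical kernel method.
xs = np.linspace(0, np.pi, 5)
K = np.array([[quantum_kernel(a, b) for b in xs] for a in xs])
print(np.round(K, 3))
```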
In this chapter, we introduce the field of reinforcement learning and some of its most prominent applications in quantum physics and computing. First, we provide an intuitive description of the main concepts, which we then formalize mathematically. We introduce some of the most widely used reinforcement learning algorithms, starting with temporal-difference algorithms and Q-learning, followed by policy gradient methods and REINFORCE, and the interplay of both approaches in actor-critic algorithms. Furthermore, we introduce the projective simulation algorithm, which deviates from the aforementioned prototypical approaches and has multiple applications in the field of physics. Then, we showcase some prominent reinforcement learning applications, featuring examples in games; quantum feedback control; quantum computing, error correction, and information; and the design of quantum experiments. Finally, we discuss potential applications and limitations of reinforcement learning in the field of quantum physics.
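A minimal sketch of the temporal-difference update at the heart of Q-learning, on a hypothetical toy chain environment (the environment, hyperparameters, and reward are all illustrative, not drawn from the chapter):

```python
import numpy as np

# Tabular Q-learning on a toy 1D chain: the agent starts in the middle and
# is rewarded only upon reaching the right end.
n_states, goal = 7, 6
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration
Q = np.zeros((n_states, 2))            # actions: 0 = left, 1 = right
rng = np.random.default_rng(0)

for episode in range(500):
    s = 3                              # start in the middle of the chain
    while s != goal:
        # Epsilon-greedy action selection.
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else min(s + 1, goal)
        r = 1.0 if s_next == goal else 0.0
        # Temporal-difference (Q-learning) update:
        # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))
```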
This chapter discusses more specialized examples of how machine learning can be used to solve problems in the quantum sciences. We start by explaining the concept of differentiable programming and its use cases in the quantum sciences. Next, we describe deep generative models, which have proven to be an extremely appealing tool for sampling from unknown target distributions in domains ranging from high-energy physics to quantum chemistry. Finally, we describe selected machine learning applications for experimental setups such as ultracold systems or quantum dots. In particular, we show how machine learning can help with tedious and repetitive experimental tasks in quantum devices or in validating quantum simulators with Hamiltonian learning.
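As a self-contained taste of differentiable programming (frameworks such as JAX provide this at scale; the tiny dual-number class below is purely illustrative), forward-mode automatic differentiation can be implemented in a few lines:

```python
import numpy as np

# Minimal forward-mode automatic differentiation via dual numbers: each value
# carries its derivative, and arithmetic propagates both by the chain rule.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: d/dx sin(u) = cos(u) * u'.
    return Dual(np.sin(x.val), np.cos(x.val) * x.der)

# Differentiate f(x) = x^2 + sin(x) at x = 1.2 by seeding der = 1.
x = Dual(1.2, 1.0)
f = x * x + sin(x)
print(f"f(1.2)  = {f.val:.4f}")        # 1.2^2 + sin(1.2)
print(f"f'(1.2) = {f.der:.4f}")        # 2*1.2 + cos(1.2)
```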
In this chapter, we describe basic machine learning concepts connected to optimization and generalization. Moreover, we present a probabilistic view on machine learning that enables us to deal with uncertainty in the predictions we make. Finally, we discuss various basic machine learning models such as support vector machines, neural networks, autoencoders, and autoregressive neural networks. Together, these topics form the machine learning preliminaries needed for understanding the contents of the rest of the book.
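A minimal sketch of the optimization-and-generalization theme: gradient descent on least-squares linear regression over synthetic data, with a held-out split as a first look at generalization (all data and hyperparameters below are illustrative):

```python
import numpy as np

# Gradient descent on mean-squared-error linear regression.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Train/validation split to compare fit against generalization.
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(3)
lr = 0.05
for step in range(500):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # gradient of the MSE
    w -= lr * grad

print("recovered weights:", np.round(w, 3))
print(f"train MSE: {np.mean((X_tr @ w - y_tr) ** 2):.4f}")
print(f"valid MSE: {np.mean((X_va @ w - y_va) ** 2):.4f}")
```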
In this chapter, we review the growing field of research aiming to represent quantum states with machine learning models, known as neural quantum states. We introduce the key ideas and methods and review results about the capacity of such representations. We discuss in detail many applications of neural quantum states, including but not limited to finding the ground state of a quantum system, solving its time evolution equation, quantum tomography, open quantum system dynamics and steady-state solutions, and quantum chemistry. Finally, we discuss the challenges to be solved to fully unleash the potential of neural quantum states.
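As a minimal sketch of the neural-quantum-state idea (an RBM-like ansatz with illustrative random parameters, evaluated by exact enumeration on three spins; real applications optimize the parameters and sample configurations instead), the variational energy of a transverse-field Ising chain can be computed as follows:

```python
import numpy as np
from itertools import product

# RBM-like neural quantum state for N = 3 spins; energy of the
# transverse-field Ising chain H = -J sum s_i s_{i+1} - h sum sigma^x_i
# evaluated by exact enumeration over all 2^N configurations.
N, J, h = 3, 1.0, 0.5
rng = np.random.default_rng(3)
a = 0.01 * rng.normal(size=N)            # visible biases
b = 0.01 * rng.normal(size=2)            # hidden biases
W = 0.01 * rng.normal(size=(N, 2))       # visible-hidden couplings

def psi(s):
    """Unnormalized amplitude of spin configuration s (entries +/-1)."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

configs = [np.array(s) for s in product([-1, 1], repeat=N)]
amps = np.array([psi(s) for s in configs])
norm = (amps ** 2).sum()

E = 0.0
for idx, s in enumerate(configs):
    # Diagonal part: -J * sum_i s_i s_{i+1} (open chain).
    E += amps[idx] ** 2 * (-J * np.sum(s[:-1] * s[1:]))
    # Off-diagonal part: -h * sigma^x_i connects s to s with spin i flipped.
    for i in range(N):
        s_flip = s.copy()
        s_flip[i] *= -1
        E += amps[idx] * psi(s_flip) * (-h)

print(f"variational energy: {E / norm:.4f}")
```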
In this chapter, we introduce the reader to basic concepts in machine learning. We start by defining artificial intelligence, machine learning, and deep learning. We give a historical viewpoint on the field, also from the perspective of statistical physics. Then, we give a very basic introduction to different tasks that are amenable to machine learning, such as regression or classification, and explain various types of learning. We end the chapter by explaining how to read the book and how the chapters depend on each other.