Multicenter clinical trials are essential for evaluating interventions but often face significant challenges in study design, site coordination, participant recruitment, and regulatory compliance. To address these issues, the National Institutes of Health’s National Center for Advancing Translational Sciences established the Trial Innovation Network (TIN). The TIN offers a scientific consultation process, providing access to clinical trial and disease experts who offer input and recommendations throughout the trial’s duration, at no cost to investigators. This approach aims to improve trial design, accelerate implementation, foster interdisciplinary teamwork, and spur innovations that enhance multicenter trial quality and efficiency. The TIN leverages resources of the Clinical and Translational Science Awards (CTSA) program, complementing local capabilities at the investigator’s institution. The Initial Consultation process focuses on the study’s scientific premise, design, site development, recruitment and retention strategies, funding feasibility, and other support areas. As of June 1, 2024, the TIN has provided 431 Initial Consultations to increase efficiency and accelerate trial implementation by delivering customized support and tailored recommendations. Across a range of clinical trials, the TIN has developed standardized, streamlined, and adaptable processes. We describe these processes, provide operational metrics, and include a set of lessons learned for consideration by other trial support and innovation networks.
The stars of the Milky Way carry the chemical history of our Galaxy in their atmospheres as they journey through its vast expanse. Like barcodes, we can extract the chemical fingerprints of stars from high-resolution spectroscopy. The fourth data release (DR4) of the Galactic Archaeology with HERMES (GALAH) Survey, based on a decade of observations, provides the chemical abundances of up to 32 elements for 917 588 stars that also have exquisite astrometric data from the Gaia satellite. For the first time, these elements include life-essential nitrogen to complement carbon and oxygen, as well as additional measurements of rare-earth elements critical to modern electronics, offering unparalleled insights into the chemical composition of the Milky Way. For this release, we use neural networks to simultaneously fit stellar parameters and abundances across the whole wavelength range, leveraging synthetic grids computed with Spectroscopy Made Easy. These grids account for atomic line formation in non-local thermodynamic equilibrium for 14 elements. In a two-iteration process, we first fit stellar labels to all 1 085 520 spectra, then co-add repeated observations and refine these labels using astrometric data from Gaia and 2MASS photometry, improving the accuracy and precision of stellar parameters and abundances. Our validation thoroughly assesses the reliability of spectroscopic measurements and highlights key caveats. GALAH DR4 represents yet another milestone in Galactic archaeology, combining detailed chemical compositions from multiple nucleosynthetic channels with kinematic information and age estimates. The resulting dataset, covering nearly a million stars, opens new avenues for understanding not only the chemical and dynamical history of the Milky Way but also the broader questions of the origin of elements and the evolution of planets, stars, and galaxies.
Indirect calorimetry (IC) is regarded as the benchmark for measuring resting energy expenditure (REE)(1) but its validity and reliability in adults with overweight or obesity have not been systematically appraised(2). The aim of our research was to evaluate the diagnostic accuracy of IC for REE in adults with overweight or obesity. A rapid systematic review was conducted. PubMed and Web of Science were searched to December 2023. Eligible studies measured REE by IC in adults with overweight or obesity (BMI ≥ 25 kg/m² or mean BMI > 30 kg/m²) reporting validity and/or reliability. Studies were selected using Covidence and critically appraised using the CASP diagnostic study checklist. From n = 4022 records, n = 21 studies utilising n = 13 different IC devices were included (n = 10 reported concurrent validity, n = 7 reported predictive validity, n = 7 reported reliability). A hand-held IC had poor validity and inconsistent reliability (n = 6 studies). Standard desktop-based ICs (n = 9 devices) were examined across n = 18 studies; most demonstrated high validity, predictive ability, and good to excellent reliability. An IC accelerometer showed weak validity (n = 1 study); a body composition-based IC showed strong validity (n = 1 study); and a whole-room IC demonstrated excellent reliability (n = 1 study). Standard desktop-based IC demonstrated the most consistent validity, predictive ability, and reliability for REE in adults with overweight or obesity. Hand-held IC may have limited validity and reliability. Accelerometer, body composition-based, and whole-room IC devices require further evaluation. Inconsistent findings are attributed to differing methodologies and reference standards. Further research is needed to examine the diagnostic accuracy of IC in adults with overweight and obesity.
Background
Posttraumatic stress disorder (PTSD) has been associated with advanced epigenetic age cross-sectionally, but the association between these variables over time is unclear. This study conducted meta-analyses to test whether new-onset PTSD diagnosis and changes in PTSD symptom severity over time were associated with changes in two metrics of epigenetic aging over two time points.
Methods
We conducted meta-analyses of the association between change in PTSD diagnosis and symptom severity and change in epigenetic age acceleration/deceleration (age-adjusted DNA methylation age residuals as per the Horvath and GrimAge metrics) using data from 7 military and civilian cohorts participating in the Psychiatric Genomics Consortium PTSD Epigenetics Workgroup (total N = 1,367).
Results
Meta-analysis revealed that the interaction between Time 1 (T1) Horvath age residuals and new-onset PTSD over time was significantly associated with Horvath age residuals at T2 (meta β = 0.16, meta p = 0.02, p-adj = 0.03). The interaction between T1 Horvath age residuals and changes in PTSD symptom severity over time was significantly related to Horvath age residuals at T2 (meta β = 0.24, meta p = 0.05). No associations were observed for GrimAge residuals.
Conclusions
Results indicated that individuals who developed new-onset PTSD or showed increased PTSD symptom severity over time evidenced greater epigenetic age acceleration at follow-up than would be expected based on baseline age acceleration. This suggests that PTSD may accelerate biological aging over time and highlights the need for intervention studies to determine if PTSD treatment has a beneficial effect on the aging methylome.
A new fossil of Lycidae, Domipteron gaoi n. gen. n. sp., is described from Miocene Dominican amber. The fossil exhibits a combination of characteristics found in both Calopterini and Eurrhacini. To determine its systematic placement, we conducted phylogenetic analyses based on adult morphological features. Our analyses indicate that the new fossil belongs to Calopterini.
The 1994 discovery of Shor's quantum algorithm for integer factorization—an important practical problem in the area of cryptography—demonstrated quantum computing's potential for real-world impact. Since then, researchers have worked intensively to expand the list of practical problems that quantum algorithms can solve effectively. This book surveys the fruits of this effort, covering proposed quantum algorithms for concrete problems in many application areas, including quantum chemistry, optimization, finance, and machine learning. For each quantum algorithm considered, the book clearly states the problem being solved and the full computational complexity of the procedure, making sure to account for the contribution from all the underlying primitive ingredients. Separately, the book provides a detailed, independent summary of the most common algorithmic primitives. It has a modular, encyclopedic format to facilitate navigation of the material and to provide a quick reference for designers of quantum algorithms and quantum computing researchers.
This chapter covers quantum algorithmic primitives for loading classical data into a quantum algorithm. These primitives are important in many quantum algorithms, and they are especially essential for algorithms for big-data problems in the area of machine learning. We cover quantum random access memory (QRAM), an operation that allows a quantum algorithm to query a classical database in superposition. We carefully detail caveats and nuances that appear for realizing fast large-scale QRAM and what this means for algorithms that rely upon QRAM. We also cover primitives for preparing arbitrary quantum states given a list of the amplitudes stored in a classical database, and for performing a block-encoding of a matrix, given a list of its entries stored in a classical database.
This chapter covers the multiplicative weights update method, a quantum algorithmic primitive for certain continuous optimization problems. This method is a framework for classical algorithms, but it can be made quantum by incorporating the quantum algorithmic primitive of Gibbs sampling and amplitude amplification. The framework can be applied to solve linear programs and related convex problems, or generalized to handle matrix-valued weights and used to solve semidefinite programs.
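As an illustration of the classical framework the chapter builds on (not taken from the book; function names and parameters are ours), a minimal multiplicative weights update in the "experts" setting maintains a weight per expert and multiplies each weight by an exponential penalty in its loss. The quantum variants replace this bookkeeping with Gibbs sampling over matrix-valued weights.

```python
import numpy as np

def multiplicative_weights(loss_rounds, eta=0.1):
    """Classical multiplicative weights update over a set of 'experts'.

    loss_rounds: sequence of loss vectors (one entry per expert, in [0, 1]).
    Returns the final probability distribution over experts.
    """
    loss_rounds = list(loss_rounds)
    w = np.ones(len(loss_rounds[0]))        # uniform initial weights
    for losses in loss_rounds:
        w *= np.exp(-eta * np.asarray(losses))  # penalize lossy experts
    return w / w.sum()

# Expert 0 always incurs the lowest loss, so its weight comes to dominate.
rounds = [[0.0, 1.0, 0.8] for _ in range(50)]
final = multiplicative_weights(rounds)
```

With these illustrative losses, nearly all of the probability mass concentrates on the consistently best expert, which is the regret guarantee the framework exploits when solving linear and semidefinite programs.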
This chapter covers quantum algorithmic primitives related to linear algebra. We discuss block-encodings, a versatile and abstract access model that features in many quantum algorithms. We explain how block-encodings can be manipulated, for example by taking products or linear combinations. We discuss the techniques of quantum signal processing, qubitization, and quantum singular value transformation, which unify many quantum algorithms into a common framework.
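To make the access model concrete, here is a small numerical sketch (our illustration, not the book's construction) of the standard unitary-dilation picture behind block-encodings: any real symmetric contraction A can be placed in the top-left block of a strictly larger unitary.

```python
import numpy as np

def block_encode(A):
    """Embed a real symmetric contraction A (spectral norm <= 1) as the
    top-left block of the unitary U = [[A, S], [S, -A]], S = sqrt(I - A^2).

    Since S is a function of A, the two blocks commute and U is unitary.
    """
    vals, vecs = np.linalg.eigh(A)
    assert np.all(np.abs(vals) <= 1 + 1e-12), "A must be a contraction"
    S = vecs @ np.diag(np.sqrt(np.clip(1 - vals**2, 0, None))) @ vecs.T
    return np.block([[A, S], [S, -A]])

A = np.array([[0.3, 0.2],
              [0.2, 0.1]])
U = block_encode(A)
# U is unitary, and its top-left block reproduces A exactly; techniques
# like qubitization manipulate A purely through such unitaries.
```

Products and linear combinations of block-encodings, as discussed in the chapter, correspond to composing such dilations while tracking the accumulated normalization factors.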
In the Preface, we motivate the book by discussing the history of quantum computing and the development of the field of quantum algorithms over the past several decades. We argue that the present moment calls for adopting an end-to-end lens in how we study quantum algorithms, and we discuss the contents of the book and how to use it.
This chapter covers the quantum adiabatic algorithm, a quantum algorithmic primitive for preparing the ground state of a Hamiltonian. The quantum adiabatic algorithm is a prominent ingredient in quantum algorithms for end-to-end problems in combinatorial optimization and simulation of physical systems. For example, it can be used to prepare the electronic ground state of a molecule, which is used as an input to quantum phase estimation to estimate the ground state energy.
This chapter covers quantum linear system solvers, which are quantum algorithmic primitives for solving a linear system of equations. The linear system problem is encountered in many real-world situations, and quantum linear system solvers are a prominent ingredient in quantum algorithms in the areas of machine learning and continuous optimization. Quantum linear system solvers do not themselves solve end-to-end problems because their output is a quantum state, which is one of their major caveats.
This chapter presents an introduction to the theory of quantum fault tolerance and quantum error correction, which provide a collection of techniques to deal with imperfect operations and unavoidable noise afflicting the physical hardware, at the expense of moderately increased resource overheads.
This chapter covers the quantum algorithmic primitive called quantum gradient estimation, where the goal is to output an estimate for the gradient of a multivariate function. This primitive features in other primitives, for example, quantum tomography. It also features in several quantum algorithms for end-to-end problems in continuous optimization, finance, and machine learning, among other areas. The size of the speedup it provides depends on how the algorithm can access the function, and how difficult the gradient is to estimate classically.
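For context on the classical baseline (our sketch, not the book's algorithm): a central-difference estimator needs roughly two function evaluations per coordinate, and it is this per-coordinate query cost that quantum gradient estimation can improve upon under suitable access assumptions.

```python
import numpy as np

def central_difference_gradient(f, x, h=1e-5):
    """Classical gradient estimate via central differences.

    Uses 2*d evaluations of f for a d-dimensional input; the quantum
    primitive targets the same gradient with fewer (superposed) queries.
    """
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

f = lambda v: v[0]**2 + 3 * v[1]    # analytic gradient at (1, 2): (2, 3)
g = central_difference_gradient(f, [1.0, 2.0])
```

The chapter's point about the speedup depending on the access model shows up even here: the classical cost is dominated by how many times f must be evaluated.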
This chapter covers quantum algorithms for numerically solving differential equations and the areas of application where such capabilities might be useful, such as computational fluid dynamics, semiconductor chip design, and many engineering workflows. We focus mainly on algorithms for linear differential equations (covering both partial and ordinary linear differential equations), but we also mention the additional nuances that arise for nonlinear differential equations. We discuss important caveats related to both the data input and output aspects of an end-to-end differential equation solver, and we place these quantum methods in the context of existing classical methods currently in use for these problems.
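As a minimal reference point for the linear case (our illustration, with an arbitrarily chosen system), the problem the quantum solvers target is computing x(t) for dx/dt = Ax; for symmetric A this has the closed form x(t) = V exp(Dt) Vᵀ x₀, which a quantum algorithm would deliver only as an amplitude-encoded state.

```python
import numpy as np

def solve_linear_ode(A, x0, t):
    """Exact solution of dx/dt = A x for symmetric A via eigendecomposition:
    x(t) = V exp(D t) V^T x0."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ (np.exp(vals * t) * (vecs.T @ x0))

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])        # symmetric, stable (negative definite)
x0 = np.array([1.0, 0.0])
x1 = solve_linear_ode(A, x0, 1.0)  # state after one unit of time
```

The input/output caveats the chapter raises are visible even in this toy: classically we read out x(t) entry by entry, whereas a quantum solver yields a normalized state from which such entries must be extracted at extra cost.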
This chapter covers the quantum algorithmic primitive of approximate tensor network contraction. Tensor networks are a powerful classical method for representing complex classical data as a network of individual tensor objects. To evaluate the tensor network, it must be contracted, which can be computationally challenging. A quantum algorithm for approximate tensor network contraction can provide a quantum speedup for contracting tensor networks that satisfy certain conditions.
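To fix intuition for what "contracting" means (a toy example of ours, far below the scale where a quantum speedup is relevant): contraction sums over the shared bond indices of the network, leaving a tensor over the open indices.

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny tensor network: a chain of three tensors sharing "bond" indices.
A = rng.standard_normal((2, 3))   # indices (i, a)
B = rng.standard_normal((3, 3))   # indices (a, b)
C = rng.standard_normal((3, 2))   # indices (b, j)

# Contracting sums over the shared bonds a and b, leaving open indices (i, j).
result = np.einsum('ia,ab,bj->ij', A, B, C)

# For a chain this is just matrix multiplication; for general network
# topologies the cost depends heavily on the contraction order.
same = A @ B @ C
```

The computational hardness the chapter addresses comes from networks whose best contraction order still produces exponentially large intermediate tensors, which is where the approximate quantum contraction algorithm applies.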