Emotion plays a critical role in every human interaction and permeates all social activity. Displaying, responding to, and talking about emotions are thus central to human language, communication, and social interaction. However, emotions are multidimensional, indeterminate, and inherently situated phenomena, which makes studying them in contextualised settings challenging for researchers. This groundbreaking book illustrates what a sociopragmatic perspective brings to the broader scholarly understanding of emotion and its role in social life, and sets out to lay the necessary foundations for a sociopragmatic theorisation of emotion. It brings together a renowned team of multidisciplinary scholars to demonstrate how evaluation, relationships, and morality are central to any account of emotions in discourse and interaction. It also exemplifies how a sociopragmatic approach to emotions pays close attention to the role that different discourse systems play in how emotions are expressed, interpreted, responded to, and talked about across different languages and cultures.
Mountainous regions host globally unique biodiversity, but face growing threats from climate and land-use change. The Alps stand out as a key mountain range in Europe, where the ski industry is extensive and impacts ecosystems and their associated biodiversity. However, climate change is projected to reduce natural snowfall, so understanding snow dynamics and the ski industry’s role is crucial for developing effective conservation strategies. Ski-piste creation generally has detrimental consequences for mountain biodiversity, yet pistes often retain substantial snow throughout spring that, when melting, may create favourable foraging conditions for mountain birds. This study investigates whether ski-pistes provide suitable foraging habitat and explores their broader importance for mountain avifauna. Field surveys in spring 2023 in the western Italian Alps recorded 17 bird species using the melting snow on ski-pistes as a foraging habitat. Snow presence was a significant factor influencing bird presence. Birds systematically selected areas with intermediate snow cover interspersed with muddy patches, a microhabitat likely to offer a high availability of invertebrate prey emerging from the soil. Given that snow is retained on ski-pistes for longer than in the surrounding habitat, the pistes may represent a useful source of food for mountain birds in spring. However, this benefit needs to be weighed against the negative impacts of skiing on alpine biodiversity, including a likely increased reliance on artificial snow in response to the projected decline in natural snowfall under climate change. Understanding these effects is essential to ensure that future conservation strategies support mountain bird communities without exacerbating the environmental costs associated with artificial snow production.
This article contends that the humanitarianism that developed in Europe in the 1930s and 1940s and, in particular, because of the Spanish Civil War, was shaped by a transnational network that was fundamentally female. Within this network, women with diverse political experiences converged; however, suffragism, pacifism and anti-fascism occupied a central place. Humanitarianism became for them a favourable space from which to intervene politically. To demonstrate this, we focus on the CAEERF, an aid organisation formed in 1939 in response to the arrival of Spanish refugees in France. It was created, led by and composed mainly of women from different backgrounds. The first part of this article concerns anti-fascist and humanitarian women’s networks that emerged during the Spanish Civil War. The second traces the journey of the British Quaker Edith Mary Pye, the driving force behind the CAEERF. The third and fourth parts discuss its creation and the work that it carried out on the ground.
The Romans were among the first societies to extensively exploit fish resources, establishing large-scale salting and preservation plants where small pelagic fish were fermented to produce sauces such as garum. Here, the authors demonstrate that, despite being crushed and exposed to acidic conditions, usable DNA can be recovered from ichthyological residues at the bottom of fish-salting vats. At third-century AD Adro Vello (O Grove), Galicia, they confirm the use of European sardines (Sardina pilchardus) and move beyond morphology to explore population range and admixture and reveal the potential of this overlooked archaeological resource.
Artificial intelligence is dramatically reshaping scientific research and is coming to play an essential role in scientific and technological development by enhancing and accelerating discovery across multiple fields. This book dives into the interplay between artificial intelligence and the quantum sciences and is the outcome of a collaborative effort by world-leading experts. After presenting the key concepts and foundations of machine learning, a subfield of artificial intelligence, the book introduces its applications in quantum chemistry and physics in an accessible way, enabling readers to engage with the emerging literature on machine learning in science. By examining its state-of-the-art applications, readers will discover how machine learning is being applied within their own field and appreciate its broader impact on science and technology. The book is accessible to undergraduates and more advanced readers from physics, chemistry, engineering, and computer science. Online resources include Jupyter notebooks that expand and develop upon key topics introduced in the book.
The theory of kernels offers a rich mathematical framework for the archetypical tasks of classification and regression. Its core insight is the representer theorem, which asserts that an unknown target function underlying a dataset can be represented by a finite sum of evaluations of a single function, the so-called kernel function. Together with the well-known kernel trick, which provides a practical way of incorporating such a kernel function into a machine learning method, a plethora of algorithms can be made more versatile. This chapter first introduces the mathematical foundations required for understanding the distinguished role of the kernel function and its consequences in terms of the representer theorem. Afterwards, we show how selected popular algorithms, including Gaussian processes, can be promoted to their kernel variants. In addition, several ideas on how to construct suitable kernel functions are provided, before demonstrating the power of kernel methods in the context of quantum (chemistry) problems.
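The representer theorem described above can be made concrete in a few lines of code. The following sketch (illustrative, not taken from the chapter) implements kernel ridge regression with a Gaussian kernel in plain NumPy: the fitted function is exactly a finite sum of kernel evaluations over the training points, with dual coefficients obtained in closed form. The data, hyperparameters, and function names are assumptions made for this example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def fit(X, y, lam=1e-3, gamma=1.0):
    """Closed-form dual coefficients: alpha = (K + lam*I)^{-1} y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    # Representer theorem: f(x) = sum_i alpha_i * k(x_i, x)
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Fit a noisy sine curve from 50 samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = fit(X, y)
y_hat = predict(X, alpha, X)
print("training MSE:", np.mean((y_hat - y) ** 2))
```

Swapping in a different kernel function (polynomial, Matérn, or a quantum-inspired similarity measure) requires changing only `rbf_kernel`, which is the versatility the kernel trick provides.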
In this chapter, we change our viewpoint and focus on how physics can influence machine learning research. In the first part, we review how tools of statistical physics can help to understand key concepts in machine learning such as capacity, generalization, and the dynamics of the learning process. In the second part, we explore yet another direction and examine how quantum mechanics and quantum technologies could be used to solve data-driven tasks. We provide an overview of the field, ranging from quantum machine learning algorithms that can be run on ideal quantum computers to kernel-based and variational approaches that can be run on current noisy intermediate-scale quantum devices.
In this chapter, we introduce the field of reinforcement learning and some of its most prominent applications in quantum physics and computing. First, we provide an intuitive description of the main concepts, which we then formalize mathematically. We introduce some of the most widely used reinforcement learning algorithms, starting with temporal-difference algorithms and Q-learning, followed by policy gradient methods and REINFORCE, and the interplay of both approaches in actor-critic algorithms. Furthermore, we introduce the projective simulation algorithm, which deviates from the aforementioned prototypical approaches and has multiple applications in the field of physics. Then, we showcase some prominent reinforcement learning applications, featuring examples in games; quantum feedback control; quantum computing, error correction and information; and the design of quantum experiments. Finally, we discuss some potential applications and limitations of reinforcement learning in the field of quantum physics.
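As a minimal illustration of the temporal-difference ideas mentioned above, the following sketch (illustrative, not taken from the chapter) runs tabular Q-learning on a toy five-state chain environment. The environment, hyperparameters, and variable names are assumptions made for this example.

```python
import numpy as np

# Toy environment: a 5-state chain. Action 1 moves right, action 0 moves
# left (reflecting boundary at state 0). Reaching state 4 yields reward 1
# and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
lr, gamma, eps = 0.5, 0.9, 0.3
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # temporal-difference (Q-learning) update:
        # Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += lr * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# The greedy policy should move right in every non-terminal state
print(np.argmax(Q[:-1], axis=1))
```

The same update rule underlies deep Q-learning, where the table `Q` is replaced by a neural network, for example when learning feedback control policies for quantum systems.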
This chapter discusses more specialized examples on how machine learning can be used to solve problems in quantum sciences. We start by explaining the concept of differentiable programming and its use cases in quantum sciences. Next, we describe deep generative models, which have proven to be an extremely appealing tool for sampling from unknown target distributions in domains ranging from high-energy physics to quantum chemistry. Finally, we describe selected machine learning applications for experimental setups such as ultracold systems or quantum dots. In particular, we show how machine learning can help in tedious and repetitive experimental tasks in quantum devices or in validating quantum simulators with Hamiltonian learning.
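The core mechanism of differentiable programming mentioned above can be illustrated with forward-mode automatic differentiation via dual numbers. The sketch below is a minimal stand-in, not the chapter's implementation; the `Dual` class and `deriv` helper are hypothetical names introduced for this example.

```python
# Forward-mode automatic differentiation with dual numbers: a quantity
# a + a'*eps with eps^2 = 0 carries a value and its derivative through
# ordinary arithmetic, which is the essence of differentiable programming.
class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (a + a'eps)(b + b'eps) = ab + (ab' + a'b)eps
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)

    __rmul__ = __mul__

def deriv(f, x):
    """Evaluate df/dx at x by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).eps

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(deriv(f, 2.0))                   # → 14.0
```

Production frameworks use reverse-mode differentiation for efficiency with many parameters, but the principle of propagating derivatives through program execution is the same.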
In this chapter, we describe basic machine learning concepts connected to optimization and generalization. Moreover, we present a probabilistic view on machine learning that enables us to deal with uncertainty in the predictions we make. Finally, we discuss various basic machine learning models such as support vector machines, neural networks, autoencoders, and autoregressive neural networks. Together, these topics form the machine learning preliminaries needed for understanding the contents of the rest of the book.
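To make the neural-network preliminaries concrete, here is a minimal sketch (not from the book) of a two-layer network trained with hand-coded backpropagation on the XOR problem; the architecture, learning rate, and random seed are arbitrary illustrative choices.

```python
import numpy as np

# Two-layer network (2 -> 8 tanh -> 1 sigmoid) trained with hand-coded
# backpropagation and binary cross-entropy loss on the XOR function.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: for sigmoid + cross-entropy, dL/dlogits = p - y
    d2 = (p - y) / len(X)
    dW2, db2 = h.T @ d2, d2.sum(axis=0)
    d1 = (d2 @ W2.T) * (1 - h**2)      # tanh'(z) = 1 - tanh(z)^2
    dW1, db1 = X.T @ d1, d1.sum(axis=0)
    # gradient-descent parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (p > 0.5).astype(int).ravel()
print(pred)   # predictions for inputs (0,0), (0,1), (1,0), (1,1)
```

XOR is the classic example of a task that a linear model cannot solve but a single hidden layer can, motivating the move from support vector machines with linear kernels to nonlinear models.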
In this chapter, we review the growing field of research aiming to represent quantum states with machine learning models, known as neural quantum states. We introduce the key ideas and methods and review results about the capacity of such representations. We discuss in detail many applications of neural quantum states, including but not limited to finding the ground state of a quantum system, solving its time-evolution equation, quantum tomography, open quantum system dynamics and steady-state solutions, and quantum chemistry. Finally, we discuss the challenges that must be solved to fully unleash the potential of neural quantum states.
In this chapter, we introduce the reader to basic concepts in machine learning. We start by defining artificial intelligence, machine learning, and deep learning. We give a historical viewpoint on the field, also from the perspective of statistical physics. Then, we give a very basic introduction to the different tasks that are amenable to machine learning, such as regression or classification, and explain various types of learning. We end the chapter by explaining how to read the book and how the chapters depend on each other.
Distinguishing between different phases of matter and detecting phase transitions are among the most central tasks in many-body physics. Traditionally, these tasks are accomplished by searching for a small set of low-dimensional quantities capturing the macroscopic properties of each phase of the system, so-called order parameters. Because of the large state space underlying many-body systems, success generally requires a great deal of human intuition and understanding. In particular, it can be challenging to define an appropriate order parameter if the symmetry-breaking pattern is unknown or the phase is of topological nature and thus exhibits nonlocal order. In this chapter, we explore the use of machine learning to automate the task of classifying phases of matter and detecting phase transitions. We discuss the application of various machine learning techniques, ranging from clustering to supervised learning and anomaly detection, to different physical systems, including the prototypical Ising model that features a symmetry-breaking phase transition and the Ising gauge theory, which hosts a topological phase of matter.
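The supervised route sketched above can be illustrated in a highly simplified form. The snippet below is an illustrative stand-in, not the chapter's method: instead of Monte Carlo samples of the Ising model at various temperatures, it mimics the two phases with synthetic spin configurations (mostly aligned spins for the ordered phase, independent random spins for the disordered phase) and classifies them with a physics-motivated nearest-neighbour correlation feature.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 10, 200   # 10x10 lattices, 200 samples per phase

# "Ordered" phase stand-in: a uniform sign with 10% of spins flipped.
sign = rng.choice([-1, 1], size=(n, 1, 1))
ordered = sign * np.where(rng.random((n, L, L)) < 0.1, -1, 1)
# "Disordered" phase stand-in: independent random spins.
disordered = rng.choice([-1, 1], size=(n, L, L))

S = np.concatenate([ordered, disordered]).astype(float)
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = ordered

# Physics-motivated feature: mean nearest-neighbour spin correlation,
# large in the ordered phase and near zero in the disordered phase.
f = 0.5 * ((S * np.roll(S, 1, axis=1)).mean(axis=(1, 2))
           + (S * np.roll(S, 1, axis=2)).mean(axis=(1, 2)))

# One-feature logistic regression trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * f + b)))
    w -= 0.5 * np.mean((p - y) * f)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((1 / (1 + np.exp(-(w * f + b)))) > 0.5) == y)
print("training accuracy:", acc)
```

In an actual study the feature would not be hand-crafted: a neural network trained on raw configurations can discover such an order-parameter-like quantity on its own, which is precisely what makes the approach attractive for phases with unknown or nonlocal order.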