The main ideas are introduced in a historical context. Beginning with phase retrieval and ending with neural networks, the chapter gives the reader a sense of the book's broad scope.
An isolated system is described by classical Hamiltonian dynamics. In the long-time limit, the trajectory of such a system yields a histogram, i.e., a distribution for any observable. With one plausible assumption, introduced here as a fundamental principle, this histogram is shown to lead to the microcanonical distribution. Pressure, temperature, and chemical potential can then be identified microscopically. This dynamical approach thus recovers the results that are often obtained for equilibrium by maximizing a postulated entropy function.
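In standard notation (assumed here rather than taken from the chapter, with \Gamma a phase-space point, H the Hamiltonian, and E the energy), the claim can be summarised in one line: the long-time average of an observable A equals its microcanonical average,

    \bar{A} \;=\; \lim_{T\to\infty} \frac{1}{T}\int_0^T A\bigl(\Gamma(t)\bigr)\,dt
    \;=\; \frac{\int d\Gamma\, A(\Gamma)\,\delta\bigl(E - H(\Gamma)\bigr)}{\int d\Gamma\, \delta\bigl(E - H(\Gamma)\bigr)}
    \;\equiv\; \langle A \rangle_{\mathrm{mc}} .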
In this chapter, we introduce the reader to basic concepts in machine learning. We start by defining artificial intelligence, machine learning, and deep learning. We give a historical viewpoint on the field, including the perspective of statistical physics. Then, we give a very basic introduction to the different tasks that are amenable to machine learning, such as regression and classification, and explain the various types of learning. We end the chapter by explaining how to read the book and how the chapters depend on each other.
The chapter is an introduction to basic equilibrium aspects of phase transitions. It starts by reviewing thermodynamics and the thermodynamic description of phase transitions. Next, lattice models, such as the paradigmatic Ising model, are introduced as simple physical models that permit a statistical-mechanical study of phase transitions from a more microscopic point of view. It is shown that the Ising model can quite faithfully describe many different systems after suitable interpretation of the lattice variables. Special emphasis is placed on the mean-field concept and mean-field approximations. The deformable Ising model is then studied as an example that illustrates the interplay of different degrees of freedom. Subsequently, the Landau theory of phase transitions is introduced for both continuous and first-order transitions, and critical and tricritical behaviour are analysed. Finally, scaling theories and the notion of universality within the framework of the renormalization group are briefly discussed.
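For reference, in standard notation (which may differ from the chapter's conventions), the nearest-neighbour Ising Hamiltonian and the mean-field self-consistency equation on a lattice of coordination number z read

    H = -J \sum_{\langle i,j\rangle} s_i s_j - h \sum_i s_i , \qquad s_i = \pm 1 ,
    m = \tanh\bigl(\beta\,(zJm + h)\bigr) ,

where m = \langle s_i \rangle and \beta = 1/k_B T; a nonzero solution of the second equation at h = 0, which exists below the mean-field critical temperature k_B T_c = zJ, signals the ordered phase.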
Chapter 1 begins by re-examining the textbook quantum postulates. It concludes with the realization that some of them are inconsistent with quantum mathematics, but also that they may not have to be postulated. Indeed, in the following two chapters it is shown that their consequences follow from the other, consistent postulates. This simplification of the quantum foundations provides a consistent, convenient, and solid starting point. The emergence of the classical from the quantum substrate is based on this foundation of “core quantum postulates”—the “quantum credo”. Discussion of the postulates is accompanied by a brief summary of their implications for the interpretation of quantum theory. This discussion touches on questions of interpretation that are implicit throughout the book, but will be addressed more fully in Chapter 9. Chapter 1 ends with a “decoherence primer” that provides a quick introduction to decoherence (discussed in detail in Part II). Its aim is to provide the reader with an overview of the process that will play an important role throughout the book, and to motivate Chapters 2 and 3 that lay the foundations for the physics of decoherence (Part II) as well as for quantum Darwinism, the subject of Chapters 7 and 8.
The Green’s function method is among the most powerful and versatile formalisms in physics, and its nonequilibrium version has proved invaluable in many research fields. With entirely new chapters and updated example problems, the second edition of this popular text continues to provide an ideal introduction to nonequilibrium many-body quantum systems and ultrafast phenomena in modern science. Retaining the unique and self-contained style of the original, this new edition has been thoroughly revised to address interacting systems of fermions and bosons, simplified many-body approaches like the GKBA, the Bloch equations, and the Boltzmann equations, and the connection between Green’s functions and newly developed time-resolved spectroscopy techniques. Small gaps in the theory have been filled, and frequently overlooked subtleties have been systematically highlighted and clarified. With an abundance of illustrative examples, insightful discussions, and modern applications, this book remains the definitive guide for students and researchers alike.
Network science has exploded in popularity since the late 1990s. But it flows from a long and rich tradition of mathematical and scientific understanding of complex systems. We can no longer imagine the world without evoking networks. And network data is at the heart of it. In this chapter, we set the stage by highlighting network science's ancestry and the exciting scientific approaches that networks have enabled, followed by a tour of the basic concepts and properties of networks.
I introduce the problem of “dry active matter” more precisely, describing the symmetries (both underlying and broken) of the state I wish to consider, and discussing how shocking it is that such systems can exhibit long-ranged order – that is, all move together – even in d = 2.
This chapter explains what we mean by “fields” and “waves” in physics, and argues that quantum waves are just as “real” as other waves we experience in daily life, such as water waves and sound waves.
In this chapter we provide an overview of data modeling and describe the formulation of probabilistic models. We introduce random variables, their probability distributions, associated probability densities, examples of common densities, and the fundamental theorem of simulation to draw samples from discrete or continuous probability distributions. We then present the mathematical machinery required to describe and handle probabilistic models, including models with complex variable dependencies. In doing so, we introduce the concepts of joint, conditional, and marginal probability distributions, marginalization, and ancestral sampling.
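As an illustration of the fundamental theorem of simulation mentioned above (if U is uniform on (0, 1) and F is a cumulative distribution function, then F^{-1}(U) is distributed according to F), here is a minimal Python sketch; the function names are illustrative, not the book's:

    import numpy as np

    rng = np.random.default_rng(0)

    # Continuous case: invert the CDF analytically.
    # Exponential with rate lam: F(x) = 1 - exp(-lam x), so
    # F^{-1}(u) = -log(1 - u) / lam.
    def sample_exponential(lam, size):
        u = rng.uniform(size=size)
        return -np.log1p(-u) / lam

    # Discrete case: invert the cumulative sum of the probability vector.
    def sample_discrete(probs, size):
        cdf = np.cumsum(probs)
        u = rng.uniform(size=size)
        return np.searchsorted(cdf, u)

    print(sample_exponential(2.0, 100_000).mean())        # close to 1/2
    print(np.bincount(sample_discrete([0.2, 0.5, 0.3], 100_000)) / 100_000)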
This chapter starts with an introductory survey of the physical background and historical events that led to the emergence of the density matrix renormalization group (DMRG) and its tensor network generalization. We then briefly review the major progress in renormalization group methods for tensor networks and their applications over the past three decades. Tensor network renormalization was initially developed to solve quantum many-body problems, but its range of applications has grown steadily. It has now become an indispensable tool for investigating strongly correlated systems, statistical physics, quantum information, quantum chemistry, and artificial intelligence.
After a discussion of best programming practices and a brief summary of basic features of the Python programming language, chapter 1 discusses several modern idioms. These include the use of list comprehensions, dictionaries, the for-else idiom, as well as other ways to iterate Pythonically. Throughout, the focus is on programming in a way which feels natural, i.e., working with the language (as opposed to working against the language). The chapter also includes basic information on how to make figures using Matplotlib, as well as advice on how to effectively use the NumPy library, with an emphasis on slicing, vectorization, and broadcasting. The chapter is rounded out by a physics project, which studies the visualization of electric fields, and a problem set.
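To give a flavour of the idioms listed above, here is a short illustrative sketch (mine, not an excerpt from the book):

    import numpy as np

    # List comprehension: squares of the even integers below 10.
    squares = [n**2 for n in range(10) if n % 2 == 0]

    # for-else: the else clause runs only if the loop never hit break.
    for n in squares:
        if n > 50:
            print("first square above 50:", n)
            break
    else:
        print("no square above 50")

    # NumPy slicing, vectorization, and broadcasting: centre each column
    # of a matrix without writing an explicit loop.
    data = np.arange(12.0).reshape(3, 4)
    centered = data - data.mean(axis=0)    # shape (3, 4) minus shape (4,)
    print(centered[:, ::2])                # every other column, via slicing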
The Introduction opens with a brief sketch of the evolution of research evaluation, followed by a description of the publication-oriented nature of academia today. It provides the necessary contextual information for investigating research evaluation systems and then identifies two critical blind spots in the contemporary literature on them. The first is the absence, within histories of the science of measuring and evaluating research, of the Soviet Union and post-socialist countries, despite the fact that these countries have played a key part in this history from its very inception. The second relates to thinking about global differences in studies of the transformations in scholarly communication. It is stressed that the contexts in which countries confront the challenges of publish-or-perish culture and questionable journals and conferences should be taken into account in discussions about them. Through its overview of diverse histories of evaluation and its identification of core issues in the literature, the chapter introduces readers to the book's core arguments.
At the macroscale, thermodynamics rules the balances of energy and entropy. In nonisolated systems, the entropy changes due to the contributions from the internal entropy production, which is always nonnegative according to the second law, and the exchange of entropy with the environment. The entropy production is equal to zero at equilibrium and positive out of equilibrium. Thermodynamics can be formulated either locally for continuous media or globally for systems in contact with several reservoirs. Accordingly, the entropy production is expressed in terms of either the local or the global affinities and currents, the affinities being the thermodynamic forces driving the system away from equilibrium. Depending on the boundary and initial conditions, the system can undergo relaxation towards equilibrium or nonequilibrium stationary or time-dependent macrostates. As examples, thermodynamics is applied to diffusion, electric circuits, reaction networks, and engines.
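In standard notation (assumed here, not quoted from the chapter), the entropy balance just described reads

    \frac{dS}{dt} \;=\; \frac{d_{\mathrm{e}}S}{dt} + \frac{d_{\mathrm{i}}S}{dt},
    \qquad
    \frac{d_{\mathrm{i}}S}{dt} \;=\; \sum_{\alpha} A_\alpha J_\alpha \;\ge\; 0 ,

where d_eS is the entropy exchanged with the environment, d_iS the internal entropy production, and A_α and J_α the affinities and their conjugate currents; the equality holds at equilibrium.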
Leonhard Euler’s ingenious approach to the conundrum that surrounded the seven bridges of Königsberg not only provided us with the definitive solution to this intriguing problem, but also planted the seed from which the mathematical field of graph theory germinated. Although Euler’s now-historic negative resolution ended the tedious explorative search for a viable path through the city by inspired inhabitants and visitors of this Prussian town, this brute-force approach certainly merits further investigation in light of the many modern-day problems that rely on such an approach for lack of better options. Is it possible to formulate this active exploration of the network of Königsberg’s bridges in mathematical terms? The affirmative answer to this question leads us to another field of mathematics: operator theory. This chapter provides a coarse introduction to the very basics of operator calculus, the algebraic tool utilised to describe operations on and mappings between finite-dimensional vector spaces. The application of this formalism to graph-theoretical objects will then establish the conceptual framework for Operator Graph Theory, the central objective of this book.
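As a minimal computational sketch of this viewpoint (the node ordering and labels are mine, not the book's), one can encode the bridges of Königsberg as a multigraph adjacency matrix and let it act as an operator:

    import numpy as np

    # Multigraph adjacency matrix of the seven bridges of Königsberg.
    # Land masses: 0 = Kneiphof island, 1 = north bank, 2 = south bank,
    # 3 = eastern bank; each entry counts the bridges joining a pair.
    A = np.array([
        [0, 2, 2, 1],
        [2, 0, 0, 1],
        [2, 0, 0, 1],
        [1, 1, 1, 0],
    ])

    degrees = A.sum(axis=0)
    print(degrees)                        # [5 3 3 3]
    # Euler: a connected graph admits a path traversing every edge once
    # only if it has 0 or 2 odd-degree vertices; here all 4 are odd.
    print(np.count_nonzero(degrees % 2))  # 4, hence no such path exists

    # Viewed as an operator, the nth power of A counts walks: entry
    # (i, j) of A^n is the number of length-n walks from i to j.
    print(np.linalg.matrix_power(A, 3))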