This chapter follows a logic of exposition initiated by Gibbs in 1902. On the one hand, theoretical results in statistical mechanics were derived in Chapter 3; on the other hand, theoretical and experimental results are expressed within thermodynamics, and parallels are drawn between the two approaches. To this end, the theory of thermodynamics and its laws are presented. The chapter takes an approach in which each stated law is attached to a readable source text and a particular author’s writing. The exposition of the second law follows the axiomatics of Carathéodory, for example. This has the advantage of decoupling the physics from the mathematics. The structure of thermodynamic theory, with the scaling behaviour of thermodynamic variables, Massieu potentials and Legendre transformations, is also developed. Finally, correspondence relations are postulated between thermodynamics and statistical mechanics, allowing one to interpret thermodynamic variables as observational states associated with certain probability laws. Applications are given, including the Gibbs paradox. The equivalence between the canonical and the microcanonical ensembles is analysed in detail.
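As a point of reference for the Legendre-transform structure just mentioned, a standard illustration (not tied to the chapter's notation, and with $k_B = 1$) passes from the entropy $S(U,V,N)$ to the Massieu potential conjugate to the inverse temperature $\beta = 1/T$:

$$\Phi(\beta, V, N) = S - \beta U = -\beta F, \qquad F(T, V, N) = U - TS.$$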
Exposure to multiple languages may support the development of Theory of Mind (ToM) in neurotypical (NT) and autistic children. However, previous research mainly applied group comparisons between monolingual and bilingual children, and the underlying mechanism of the observed difference remains unclear. The present study, therefore, sheds light on the effect of bilingualism on ToM in both NT and autistic children by measuring language experiences with a continuous operationalization. We measure ToM with a behavioral, linguistically simple tablet-based task, allowing inclusive assessment in autistic children. Analyses revealed no difference between monolingual and bilingual NT and autistic children. However, more balanced exposure to different languages within contexts positively predicted first-order false belief understanding in NT children but not autistic children. Mediation analysis showed that the impact in NT children was a direct effect and not mediated via other cognitive skills.
Based on the long-running Probability Theory course at the Sapienza University of Rome, this book offers a fresh and in-depth approach to probability and statistics, while remaining intuitive and accessible in style. The fundamentals of probability theory are elegantly presented, supported by numerous examples and illustrations, and modern applications are later introduced, giving readers an appreciation of current research topics. The text covers distribution functions, statistical inference and data analysis, and more advanced methods including Markov chains and Poisson processes, widely used in dynamical systems and data science research. The concluding section, 'Entropy, Probability and Statistical Mechanics', unites key concepts from the text with the authors' impressive research experience to provide a clear illustration of these powerful statistical tools in action. Ideal for students and researchers in the quantitative sciences, this book provides an authoritative account of probability theory, written by leading researchers in the field.
This chapter traces the development of an economic sublime associated with modern neoclassical economics. A precursor to Fredric Jameson’s postmodern “hysterical sublime,” the sublimity of neoclassical economics derived from the thrilling sense that, through math, economics could access vast and terrifying universal forces, and connect individuals directly to them. However, the most popular expression of the economic sublime was not mathematical but literary, and consisted primarily of naturalist novels that chronicled human encounters with economic laws allegedly so regular, universal, and inexorable that they amounted to a new branch of physics. Through readings of novels by Theodore Dreiser, Jack London, and Frank Norris, this chapter shows how literature helped train readers to understand neoclassical economics not just as natural, but also as an indispensable source of romantic pleasure.
One of life’s most fundamental revelations is change. Presenting the fascinating view that pattern is the manifestation of change, this unique book explores the science, mathematics, and philosophy of change and the ways in which they have come to inform our understanding of the world. Through discussions on chance and determinism, symmetry and invariance, information and entropy, quantum theory and paradox, the authors trace the history of science and bridge the gaps between mathematical, physical, and philosophical perspectives. Change as a foundational concept is deeply rooted in ancient Chinese thought, and this perspective is integrated into the narrative throughout, providing philosophical counterpoints to customary Western thought. Ultimately, this is a book about ideas. Intended for a wide audience, it is not so much a book of answers as an introduction to new ways of viewing the world.
For a system in contact with a heat bath, it is shown how the distribution of any observable follows from a microcanonical description for the isolated system consisting of the system of interest and heat bath. The weak coupling approximation then leads to the standard expression for the canonical distribution. Free energy, canonical entropy, and pressure are introduced. For large systems, the equivalence of this canonical description with the microcanonical one is shown. For systems in contact with a particle reservoir, the grand-canonical distribution is derived. If the weak coupling approximation does not hold, the corrections due to strong coupling are determined. In particular, internal energy, free energy, and entropy are identified such that the usual relations for these thermodynamic potentials hold true even in strong coupling.
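For orientation, the standard weak-coupling expression referred to above, for a system with microstate energies $E_s$ in contact with a bath at temperature $T$, is (in generic notation that may differ from the chapter's):

$$p_s = \frac{e^{-\beta E_s}}{Z}, \qquad Z = \sum_s e^{-\beta E_s}, \qquad F = -k_B T \ln Z, \qquad \beta = \frac{1}{k_B T}.$$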
The chapter begins with the basic thermodynamic concepts that form the basis of high-speed flow theory, including a basic physical understanding of the second law of thermodynamics. This enables the reader to use the isentropic flow relationships to analyze the properties of a compressible flow field, to analyze flow in a stream tube, and to understand how a converging–diverging nozzle works. The basic relations for determining the change in flow properties across shock waves and expansion fans are developed, which make it possible to analyze flow fields using shock and expansion calculation methods. The basic relations for viscous flow are developed, leading to the relations for calculating the local skin-friction coefficient for a compressible boundary layer. The reader will then be able to understand the cause and effect of shock–boundary layer and shock–shock interactions. Finally, concepts for how flight vehicles are tested in wind tunnels are developed, which explains why it is difficult to fully model full-scale flight characteristics.
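As one standard instance of the isentropic flow relationships mentioned above, for a calorically perfect gas with specific-heat ratio $\gamma$ and Mach number $M$ (the chapter's own notation may differ), the stagnation temperature $T_0$ and stagnation pressure $p_0$ satisfy

$$\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2} M^2, \qquad \frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{\gamma/(\gamma - 1)}.$$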
In this original and modern book, the complexities of quantum phenomena and quantum resource theories are meticulously unravelled, from foundational entanglement and thermodynamics to the nuanced realms of asymmetry and beyond. Ideal for those aspiring to grasp the full scope of quantum resources, the text integrates advanced mathematical methods and physical principles within a comprehensive, accessible framework. With over 760 exercises throughout to develop and expand key concepts, readers will gain an unrivalled understanding of the topic. With its unique blend of pedagogical depth and cutting-edge research, it not only paves the way for a deep understanding of quantum resource theories but also illuminates the path toward innovative research directions. Providing the latest developments in the field as well as established knowledge within a unified framework, this book will be indispensable to students, educators, and researchers interested in quantum science's profound mysteries and applications.
Chapter 6 builds upon the foundation of divergences from Chapter 5, advancing into entropies and relative entropies with an axiomatic approach and the inclusion of the additivity axiom. The chapter delves into the classical and quantum relative entropies, establishing their core properties and revealing the significance of the KL-divergence introduced in Chapter 5, notably characterized by asymptotic continuity. Quantum relative entropies are addressed as generalizations of classical ones, with a focus on the conditions necessary for these measures in the quantum framework. Several variants of relative entropies are discussed, including Rényi relative entropies and their extensions to the quantum domain, such as the Petz quantum Rényi divergence, the minimal quantum Rényi divergence, and the maximal quantum Rényi divergence. This discourse underlines the relevance of continuity and its relation to faithfulness in relative entropies. The concept of entropy is portrayed as a measure with a broad spectrum of interpretations and applications across fields, from thermodynamics and information theory to cosmology and economics.
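For reference, the classical relative entropy (KL-divergence) and its standard quantum counterpart take the familiar forms (stated generically, not necessarily in the book's conventions)

$$D(p\|q) = \sum_x p(x)\log\frac{p(x)}{q(x)}, \qquad D(\rho\|\sigma) = \operatorname{Tr}\bigl[\rho(\log\rho - \log\sigma)\bigr],$$

while the Petz quantum Rényi divergence is $D_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\log\operatorname{Tr}\bigl[\rho^\alpha\sigma^{1-\alpha}\bigr]$.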
In this study, we use a novel design to test for directional behavioral spillover and cognitive load effects in a set of multiple repeated games. Specifically, in our experiment, each subject plays a common historical game with two different matches for 100 rounds. After 100 rounds, the subject switches to a new game with one match and continues playing the historical game with the other match. This design allows us to identify the direction of any behavioral spillover. Our results show that participants exhibit both behavioral spillover and cognitive load effects. First, for pairs of Prisoners’ Dilemma and Alternation games, we find that subjects apply strategies from the historical game when playing the new game. Second, we find that those who participate in a Self Interest game as either their historical or new game achieve Pareto efficient outcomes more often in the Prisoners’ Dilemma and Alternation games compared to their control counterparts. Overall, our results show that, when faced with a new game, participants use strategies that reflect both behavioral spillover and cognitive load effects.
Information processing is a process of uncertainty resolution. Information-theoretic constructs such as surprisal and entropy reflect the fine-grained probabilistic knowledge which people have accumulated over time. These information-theoretic constructs explain the extent of processing difficulty that people encounter, for example when comprehending language. Processing difficulty and cognitive effort in turn are a direct reflection of predictability.
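Concretely, surprisal and entropy are usually defined as follows (a standard formulation rather than the chapter's own):

$$\mathrm{surprisal}(w_t) = -\log_2 P(w_t \mid w_1, \dots, w_{t-1}), \qquad H(X) = -\sum_x P(x)\log_2 P(x).$$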
The extensive thermodynamic variables of a fluid are introduced as the internal energy, volume, and number of molecules. The entropy is defined and also shown to be extensive. Taking the total derivative of the internal energy produces the first law of thermodynamics and defines the intensive parameters of temperature, pressure, and chemical potential. Changing variables from extensive variables to intensive variables is accomplished with the Legendre transform and defines alternative energies such as the Helmholtz free energy, enthalpy, and Gibbs free energy. Thermodynamic equilibrium requires that each element of a system have the same temperature, pressure, and chemical potential. For equilibrium to be stable, the material properties of each element must satisfy certain derived constraints. First-order phase transitions are treated for a single-species system. Multispecies systems are treated, and a widely used expression for how the chemical potentials of each species depend on the concentration of the species is derived. Chemical reactions are treated, as is osmosis. The thermodynamics of solid systems is addressed, along with mineral solubility in liquid solutions.
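In standard notation (which may differ in detail from the chapter's), the total derivative of the internal energy and the Legendre-transformed potentials named above are

$$dU = T\,dS - p\,dV + \mu\,dN, \qquad F = U - TS, \qquad H = U + pV, \qquad G = U + pV - TS.$$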
This chapter covers digital information sources in some depth. It provides intuition on the information content of a digital source and introduces the notion of redundancy. As a simple but important example, discrete memoryless sources are described. The concept of entropy is defined as a measure of the information content of a digital information source. The properties of entropy are studied, and the source-coding theorem for a discrete memoryless source is given. In the second part of the chapter, practical data compression algorithms are studied. Specifically, Huffman coding, which is an optimal data-compression algorithm when the source statistics are known, and Lempel–Ziv (LZ) and Lempel–Ziv–Welch (LZW) coding schemes, which are universal compression algorithms (not requiring the source statistics), are detailed.
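As a minimal illustration of the source entropy and of Huffman coding as discussed here, the Python sketch below computes the entropy of a discrete memoryless source and builds a Huffman code with a heap; the symbol probabilities are invented for the example and nothing here is taken from the book's own material.

import heapq
import math

# Illustrative (assumed) symbol probabilities of a discrete memoryless source.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Source entropy H(X) = -sum p log2 p: the lower bound on average code length.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Huffman coding: repeatedly merge the two least probable nodes,
# prefixing '0'/'1' to the codewords of the merged subtrees.
heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, codes1 = heapq.heappop(heap)
    p2, _, codes2 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in codes1.items()}
    merged.update({s: "1" + c for s, c in codes2.items()})
    heapq.heappush(heap, (p1 + p2, counter, merged))
    counter += 1
codebook = heap[0][2]

avg_len = sum(probs[s] * len(c) for s, c in codebook.items())
print(f"H(X) = {entropy:.3f} bits, average Huffman length = {avg_len:.3f} bits")

For this dyadic example the average Huffman codeword length equals the source entropy exactly, illustrating the source-coding bound the chapter states.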
This paper studies the problem of scaling ordinal categorical data observed over two or more sets of categories measuring a single characteristic. Scaling is obtained by solving a constrained entropy model which finds the most probable values of the scales given the data. A Kullback-Leibler statistic is generated which operationalizes a measure of the strength of consistency among the sets of categories. A variety of data sets with two and three sets of categories are analyzed using the entropy approach.
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust $\chi^2$ and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
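A minimal sketch of the maximum-entropy idea behind such procedures is given below; it is not the authors' algorithm, the category count and target moments are invented, and only mean and variance constraints are imposed for brevity (skewness and kurtosis would enter analogously through cubic and quartic terms in the exponent).

import numpy as np
from scipy.optimize import fsolve

K = 5                                   # assumed number of ordinal categories
x = np.arange(1, K + 1, dtype=float)    # category scores 1..K
target_mean, target_var = 3.2, 1.1      # assumed target moments

def moment_gap(lams):
    # Exponential-family form of the maximum-entropy pmf under two moment constraints.
    l1, l2 = lams
    w = np.exp(l1 * x + l2 * x**2)
    p = w / w.sum()
    mean = (p * x).sum()
    var = (p * (x - mean) ** 2).sum()
    return [mean - target_mean, var - target_var]

# Solve for the Lagrange multipliers that reproduce the target moments.
l1, l2 = fsolve(moment_gap, x0=[0.0, 0.0])
w = np.exp(l1 * x + l2 * x**2)
p = w / w.sum()
print("maximum-entropy pmf:", np.round(p, 4))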
When a simple random sample of size n is employed to establish a classification rule for prediction of a polytomous variable by an independent variable, the best achievable rate of misclassification is higher than the corresponding best achievable rate when the conditional probability distribution of the predicted variable given the independent variable is known. In typical cases, this increased misclassification rate due to sampling is remarkably small relative to other sampling-related increases in expected measures of prediction accuracy that are typically encountered in statistical analysis.
This issue is particularly striking if a polytomous variable predicts a polytomous variable, for the excess misclassification rate due to estimation approaches 0 at an exponential rate as n increases. Even with a continuous real predictor and with simple nonparametric methods, it is typically not difficult to achieve an excess misclassification rate on the order of $n^{-1}$. Although reduced excess error is normally desirable, it may reasonably be argued that, in the case of classification, the reduction in bias is related to a more fundamental lack of sensitivity of misclassification error to the quality of the prediction. This lack of sensitivity is not an issue if criteria based on probability prediction such as logarithmic penalty or least squares are employed, but the latter measures typically involve more substantial issues of bias. With polytomous predictors, excess expected errors due to sampling are typically of order $n^{-1}$. For a continuous real predictor, the increase in expected error is typically of order $n^{-2/3}$.
An information-theoretic framework is used to analyze the knowledge content in multivariate cross classified data. Several related measures based directly on the information concept are proposed: the knowledge content ($S$) of a cross classification, its terseness ($\zeta$), and the separability ($\Gamma_X$) of one variable, given all others. Exemplary applications are presented which illustrate the solutions obtained where classical analysis is unsatisfactory, such as optimal grouping, the analysis of very skew tables, or the interpretation of well-known paradoxes. Further, the separability suggests a solution for the classic problem of inductive inference which is independent of sample size.
Let $(X,\mathcal {B},\mu ,T)$ be a probability-preserving system with X compact and T a homeomorphism. We show that if every point in $X\times X$ is two-sided recurrent, then $h_{\mu }(T)=0$, resolving a problem of Benjamin Weiss, and that if $h_{\mu }(T)=\infty $, then every full-measure set in X contains mean-asymptotic pairs (that is, the associated process is not tight), resolving a problem of Ornstein and Weiss.
Conspiracy theories explain anomalous events as the outcome of secret plots by small groups of people with malevolent aims. Is every conspiracy unique, or do they all share a common thread? That is, might conspiracy explanations stem from a higher-order belief that binds together a wide variety of overtly independent phenomena under a common umbrella? We can call this belief the conspiracy frame. Network science allows us to examine this frame at two different levels: by examining the structural coherence of individual conspiracies and by examining the higher-level interconnectivity of the conspiracy beliefs as a whole.