Drag reduction induced by a polydisperse solution of polyethylene oxide is investigated by direct numerical simulations of the Navier–Stokes equations coupled with the Lagrangian evolution of the polymers, modelled as dumbbells. Simulation parameters are chosen to match the experimental conditions of Berman (1977), who measured the polymer molecular weight distribution. Drag reduction is induced only by the few high molecular weight polymers fully stretched by the turbulent flow, whilst the hundreds of parts per million of low molecular weight chains are ineffective.
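The dumbbell model mentioned above can be sketched in a few lines. Below is a minimal, illustrative integration of a single finitely extensible (FENE-type) dumbbell advected by a prescribed velocity gradient; this is not the authors' code, and the shear rate, relaxation time and maximum extension are placeholder assumptions, not Berman's (1977) parameters.

```python
import numpy as np

def fene_factor(q, q_max):
    """Nonlinear spring factor, diverging as the extension approaches q_max."""
    return 1.0 / (1.0 - np.dot(q, q) / q_max**2)

def step_dumbbell(q, kappa, tau, q_max, dt, rng):
    """One Euler-Maruyama step for the end-to-end vector q:
    dq = (kappa @ q - f(q) q / (2 tau)) dt + sqrt(dt / tau) dW."""
    drift = kappa @ q - fene_factor(q, q_max) * q / (2.0 * tau)
    q_new = q + drift * dt + np.sqrt(dt / tau) * rng.standard_normal(3)
    # Guard against overshooting q_max (a standard regularisation
    # for explicit schemes, not part of the model itself).
    n = np.linalg.norm(q_new)
    if n >= q_max:
        q_new *= 0.99 * q_max / n
    return q_new

rng = np.random.default_rng(0)
kappa = np.array([[0.0, 1.0, 0.0],   # simple shear, du_x/dy = 1 (illustrative)
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
q = np.array([0.1, 0.1, 0.1])        # initial end-to-end vector
tau, q_max, dt = 1.0, 10.0, 1e-3     # placeholder parameters
for _ in range(10_000):
    q = step_dumbbell(q, kappa, tau, q_max, dt, rng)
print(np.linalg.norm(q))
```

In a DNS, `kappa` would be the local velocity gradient interpolated to each Lagrangian dumbbell at every step, and the polydispersity enters through a distribution of `tau` and `q_max` across the ensemble.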
Morphodynamic descriptions of fluid deformable surfaces are relevant for a range of biological and soft matter phenomena, spanning materials that can be passive or active, as well as ordered or topological. However, a principled, geometric formulation of the correct hydrodynamic equations has remained opaque, with objective rates proving a central, contentious issue. We argue that this is due to a conflation of several important notions that must be disambiguated when describing fluid deformable surfaces. These are the Eulerian and Lagrangian perspectives on fluid motion, and three different types of gauge freedom: in the ambient space; in the parameterisation of the surface; and in the choice of frame field on the surface. We clarify these ideas, and show that objective rates in fluid deformable surfaces are time derivatives that are invariant under the first of these gauge freedoms, and which also preserve the structure of the ambient metric. The latter condition reduces a potentially infinite number of possible objective rates to only two: the material derivative and the Jaumann rate. The material derivative is invariant under the Galilean group, and therefore applies to velocities, whose rate captures the conservation of momentum. The Jaumann derivative is invariant under all time-dependent isometries, and therefore applies to local order parameters, or symmetry-broken variables, such as the nematic $Q$-tensor. We provide examples of material and Jaumann rates in two different frame fields that are pertinent to the current applications of the fluid mechanics of deformable surfaces.
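The two admissible rates have familiar flat-space counterparts, which may help orient the reader; note that the covariant surface versions are the subject of the paper itself, and sign conventions for the spin tensor vary between texts:

```latex
% Flat-space (Euclidean) sketches of the two admissible objective rates.
\begin{align}
  \frac{D\mathbf{v}}{Dt} &= \partial_t \mathbf{v}
      + (\mathbf{u}\cdot\nabla)\mathbf{v}
  && \text{material derivative (Galilean-invariant)} \\
  \overset{\circ}{Q} &= \partial_t Q + (\mathbf{u}\cdot\nabla) Q
      - \Omega Q + Q \Omega,
  \quad
  \Omega = \tfrac{1}{2}\!\left(\nabla\mathbf{u} - (\nabla\mathbf{u})^{\mathsf{T}}\right)
  && \text{Jaumann rate (isometry-invariant)}
\end{align}
```

The Jaumann rate co-rotates $Q$ with the local spin $\Omega$, which is why it is invariant under time-dependent isometries and hence suited to symmetry-broken order parameters such as the nematic $Q$-tensor.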
The study of the quantum–classical correspondence has been focused on the quantum measurement problem. However, most of the discussion in the preceding chapters is motivated by a broader question: Why do we perceive our quantum Universe as classical? Therefore, the emergence of the classical phase space and Newtonian dynamics from the quantum Hilbert space must be addressed. Chapter 6 starts by re-deriving the decoherence rate for non-local superpositions using the Wigner representation of quantum states. We then discuss the circumstances under which classical points become a useful idealization of the quantum states of many-body systems. This classical structure of phase space emerges along with the (at least approximately reversible) Newtonian equations of motion. Approximate reversibility is a non-trivial desideratum given that the quantum evolution of the corresponding open system is typically irreversible. We show when such approximately reversible evolution is possible. We also discuss quantum counterparts of classically chaotic systems and show that, as a consequence of decoherence, their evolution tends to be fundamentally irreversible: They produce entropy at the rate determined by the Lyapunov exponents that characterize classical chaos. Thus, quantum decoherence provides a rigorous rationale for the approximations that led to Boltzmann’s H-theorem.
In this work, we investigate the mixing of active scalars in two dimensions by the stirring action of stochastically generated weak shock waves. We use Fourier pseudospectral direct numerical simulations of the interaction of shock waves with two non-reacting species to analyse the mixing dynamics for different Atwood numbers (At). Unlike passive scalars, the presence of density gradients in active scalars alters the molecular diffusion term and makes the species diffusion nonlinear, introducing a concentration gradient-driven term and a density gradient-driven nonlinear dissipation term in the concentration evolution equation. We show that the direction of the concentration gradient causes the interface across which molecular diffusion occurs to expand outwards or inwards, even without any stirring action. Shock waves enhance the mixing process by increasing the perimeter of the interface and by sustaining concentration gradients. Negative Atwood number mixtures sustain concentration gradients for a longer time than positive Atwood number mixtures due to the so-called nonlinear dissipation terms. We estimate the time up to which the action of stirring dominates over molecular mixing. We also highlight the role of baroclinicity in increasing the interface perimeter in the stirring-dominant regime. We compare the stirring effect of shock waves on the mixing of passive scalars with that of active scalars and show that the vorticity generated by baroclinicity is responsible for the folding and stretching of the interface in the case of active scalars. We conclude by showing that lighter mixtures with denser inhomogeneities ($At\lt 0$) take a longer time to homogenise than denser mixtures with lighter inhomogeneities ($At\gt 0$).
Chapter 5 explores the consequences of decoherence. We live in a Universe that is fundamentally quantum. Yet, our everyday world appears to be resolutely classical. The aim of Chapter 5 is to discuss how preferred classical states, and, more generally, classical physics, arise, as an excellent approximation, on a macroscopic level of a quantum Universe. We show why quantum theory results in the familiar “classical reality” in open quantum systems, that is, systems interacting with their environments. We shall see how and why, and to what extent, quantum theory accounts for our classical perceptions. We shall not complete this task here—a more detailed analysis of how the information is acquired by observers is needed for that, and this task will be taken up in Part III of the book. Moreover, Chapter 5 shows that not just Newtonian physics but also equilibrium thermodynamics follows from the same symmetries of entanglement that led to Born’s rule (in Chapter 3).
Elastoinertial turbulence (EIT) is a chaotic state that emerges in the flows of dilute polymer solutions. Direct numerical simulation (DNS) of EIT is highly computationally expensive due to the need to resolve the multiscale nature of the system. While DNS of two-dimensional (2-D) EIT typically requires $O(10^6)$ degrees of freedom, we demonstrate here that a data-driven modelling framework allows for the construction of an accurate model with 50 degrees of freedom. We achieve a low-dimensional representation of the full state by first applying a viscoelastic variant of proper orthogonal decomposition to DNS results, and then using an autoencoder. The dynamics of this low-dimensional representation is learned using the neural ordinary differential equation (NODE) method, which approximates the vector field for the reduced dynamics as a neural network. The resulting low-dimensional data-driven model effectively captures short-time dynamics over the span of one correlation time, as well as long-time dynamics, particularly the self-similar, nested travelling wave structure of 2-D EIT in the parameter range considered.
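The core of the NODE step described above (a learned vector field for the reduced state, integrated in time) can be sketched as follows. This is only an illustration of the idea, not the authors' code: the weights are randomly initialised stand-ins for trained parameters, and the architecture and RK4 integrator are assumptions; only the 50-dimensional state matches the model size quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, HIDDEN = 50, 64   # DIM matches the paper's 50 degrees of freedom; HIDDEN is assumed

# One-hidden-layer MLP vector field f_theta; weights are random placeholders.
W1, b1 = 0.1 * rng.standard_normal((HIDDEN, DIM)), np.zeros(HIDDEN)
W2, b2 = 0.1 * rng.standard_normal((DIM, HIDDEN)), np.zeros(DIM)

def f_theta(h):
    """Neural-network approximation of the reduced dynamics dh/dt."""
    return W2 @ np.tanh(W1 @ h + b1) + b2

def rk4_step(h, dt):
    """Classical fourth-order Runge-Kutta step for dh/dt = f_theta(h)."""
    k1 = f_theta(h)
    k2 = f_theta(h + 0.5 * dt * k1)
    k3 = f_theta(h + 0.5 * dt * k2)
    k4 = f_theta(h + dt * k3)
    return h + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Roll the latent state forward; in the NODE method the weights would be
# trained so that these trajectories match the encoded DNS data.
h = rng.standard_normal(DIM)
traj = [h]
for _ in range(100):
    h = rk4_step(h, 0.01)
    traj.append(h)
print(len(traj))
```

Training adjusts `W1, b1, W2, b2` by differentiating through the integrator so that predicted latent trajectories match those produced by the POD-plus-autoencoder reduction.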
When a droplet impacts onto a superheated liquid pool, vapour generation and drainage within the gas cushion play a crucial role in postponing or even preventing contact between the droplet and the pool surface. Through direct numerical simulations, we closely examine the transient dynamics of vapour flow confined within the thin film, with a particular focus on the minimum thickness of this film under a range of impact conditions. Our numerical results demonstrate the significant influence of evaporation on the vertical motion of the liquid–vapour interface, revealing how the minimum film thickness evolves in response to variations in impact velocity and degree of superheat. In our numerical simulations, we have identified two distinct evolution laws for the minimum film thickness, corresponding to moderate and high superheat regimes, respectively. These regimes are differentiated by the dominance of evaporation effects within the vapour film during the early falling stage. Subsequently, we establish scaling relations to characterize these regimes by carefully balancing inertial, pressure and evaporation effects within the thin vapour film. Furthermore, we observe that the vapour pressure eventually reaches equilibrium with the rapid increase in capillary pressure at the spreading front, thereby controlling the minimum thickness of the vapour layer in both moderate and high superheat regimes. We derive self-similar solutions based on this equilibrium, and the predicted minimum film thickness aligns remarkably well with our numerical results. This provides compelling evidence that evaporation alone is insufficient to prevent droplet–pool coalescence.
Quantum Darwinism demonstrates not only that preferred states are selected for their stability but also that information about them is broadcast by the same environment that causes decoherence and einselection. That environment acts both as a censor and as an advertising agent that disseminates information about pointer states while suppressing complementary information. Chapter 8 explores the implications and limitations of quantum Darwinism using models inspired by the structure of the Universe we inhabit. We perceive our Universe using light and other means of information transmission. We explore models that have a well-defined relation with our everyday reality, and where one can also selectively relax some of the idealized assumptions and investigate the consequences. Light is the communication channel through which we obtain most of our information. Fortunately, it is an ideal channel in the sense of quantum Darwinism, and simple but realistic cases are exactly solvable. The solution presented herein demonstrates the inevitability of the consensus between observers who rely on scattered photons: The emergence of classical objective reality (classical because pointer states are einselected, and objective because redundancy imposes consensus) is inevitable. This is how the classical world we perceive emerges from within the quantum Universe we inhabit.
The aim in Chapter 7 is to take into account the role of the means of information transmission on the nature of the states that can be perceived. Our point of departure is the recognition that the information we obtain is acquired by observers who monitor fragments of the same environment that decohered the system, einselecting preferred pointer states in the process. Moreover, we only intercept a fraction of the environment. The only information about the system that can be transmitted by its fraction must have been reproduced in many copies in that environment. This process of amplification limits what can be found out to the states einselected by decoherence. Quantum Darwinism provides a simple and natural explanation of this restriction, and, hence, of the objective existence—the essence of classicality—for the einselected states. This chapter introduces and develops information-theoretic tools and concepts (including, e.g., redundancy) that allow one to explore and characterize correlations and information flows between systems, environments, and observers, and illustrates them on an exactly solvable yet non-trivial model.
Chapter 4 begins to discuss decoherence, and, thus, to address the overarching question: How does the classical world—classical states that are responsible for the objective reality of our everyday experience—emerge from within the Universe that is, as we know from compelling experimental evidence, made out of quantum stuff? The short answer to this question is that decoherence selects, in the process of environment-induced superselection (also known as einselection), from the vast number of superpositions that populate Hilbert space, the few states that are—in contrast to all the other alternatives—stable in spite of their immersion in the environment. Decoherence is illustrated with a detailed discussion of two models. A spin decohered by an environment of spins as well as quantum Brownian motion have become paradigmatic models of decoherence for good reason: They are exactly solvable and yet they capture (albeit in an idealized manner) the emergence of the preferred classical states in settings that are relevant for quantum measurements and for Newtonian dynamics in effectively classical phase space.
Dust storms are a unique form of high-Reynolds-number particle-laden turbulence associated with intense electrical activity. Using a wavelet-based analysis method on field measurement data, Zhang et al. (2023, J. Fluid Mech., 963, A15) found that wind velocity intermittency intensifies during dust storms, though it remains weaker than that of dust concentration and the electric field. However, the linear and nonlinear multifield coupling characteristics, which significantly influence particle transport and turbulence modulation, remain poorly understood. To address this issue, we obtained high-fidelity datasets of wind velocity, dust concentration, and electric field at the Qingtu Lake Observation Array. By extending the wavelet-based data analysis method, we investigated localised linear and quadratic nonlinear coupling characteristics in strong turbulence–particle–electrostatics coupling regimes. Our findings reveal that linear coupling behaviour is largely dominated by the multifield intermittent components. At small scales, due to very high intermittency, no strong phase synchronisation can be formed, and the interphase linear coupling is weak and notably intermittent. At larger scales, however, perfect phase synchronisation emerges, and dust concentration and electric field exhibit strong, non-intermittent linear coupling, suggesting that large-scale coherent structures play a dominant role in driving the coupling. Importantly, the multifield spectra show well-developed $-1$ and $-5/3$ power-law regions, but the spectral breakpoints for dust concentration and electric field are two decades lower than that for streamwise wind velocity. This difference is due to the broader range and stronger intensity of quadratic nonlinear coupling in dust concentration and electric field, which leads to the broadening of Kolmogorov’s $-5/3$ power-law spectrum.
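A scale-local linear coupling measure of the kind described above can be sketched with a Morlet continuous wavelet transform. This is a generic illustration, not the authors' method (their quadratic, bicoherence-type extension is not reproduced), and the wavelet parameters and test signals are assumptions.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform via FFT with an analytic Morlet wavelet."""
    n = len(x)
    freqs = np.fft.fftfreq(n)          # cycles per sample
    xf = np.fft.fft(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Morlet wavelet in Fourier space, positive frequencies only (analytic)
        psi = np.pi ** -0.25 * np.exp(-0.5 * (2 * np.pi * s * freqs - w0) ** 2)
        psi = psi * (freqs > 0)
        out[i] = np.fft.ifft(xf * psi * np.sqrt(s))
    return out

# Two synthetic signals sharing a large-scale component (stand-ins for,
# e.g., dust concentration and electric field records).
rng = np.random.default_rng(3)
n = 1024
common = np.sin(2 * np.pi * 0.05 * np.arange(n))
a = common + 0.5 * rng.standard_normal(n)
b = common + 0.5 * rng.standard_normal(n)

scales = np.array([4.0, 8.0, 16.0, 32.0])
Wa, Wb = morlet_cwt(a, scales), morlet_cwt(b, scales)

# Normalised cross-wavelet modulus: a scale-by-scale linear coupling index
# in [0, 1] (by the Cauchy-Schwarz inequality).
coupling = np.abs(np.mean(Wa * np.conj(Wb), axis=1)) / np.sqrt(
    np.mean(np.abs(Wa) ** 2, axis=1) * np.mean(np.abs(Wb) ** 2, axis=1))
print(coupling)
```

A time-localised (rather than time-averaged) version of the same quantity is what makes the coupling's intermittency at small scales visible.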
The dynamics of small-scale structures in free-surface turbulence is crucial to large-scale phenomena in natural and industrial environments. Here, we conduct experiments on the quasi-flat free surface of a zero-mean-flow turbulent water tank over the Reynolds number range $Re_{\lambda } = 207$–312. By seeding microscopic floating particles at high concentrations, the fine scales of the flow and the velocity-gradient tensor are resolved. A kinematic relation is derived expressing the contribution of surface divergence and vorticity to the dissipation rate. The probability density functions of divergence, vorticity and strain rate collapse once normalised by the Kolmogorov scales. Their magnitude displays strong intermittency and follows chi-square distributions with power-law tails at small values. The topology of high-intensity events and two-point statistics indicate that the surface divergence is characterised by dissipative spatial and temporal scales, while the high-vorticity and high-strain-rate regions are larger, long-lived, concurrent and elongated. The second-order velocity structure functions obey the classic Kolmogorov scaling in the inertial range when the dissipation rate on the surface is considered, with a different numerical constant than in three-dimensional turbulence. The cross-correlation among divergence, vorticity and strain rate indicates that the surface-attached vortices are strengthened during downwellings and diffuse when those dissipate. Sources (sinks) in the surface velocity fields are associated with strong (weak) surface-parallel stretching and compression along perpendicular directions. The floating particles cluster over spatial and temporal scales larger than those of the sinks. These results demonstrate that, compared with three-dimensional turbulence, in free-surface turbulence the energetic scales leave a stronger imprint on the small-scale quantities.
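The second-order structure function invoked above can be computed directly from a velocity record. The signal below is a synthetic stand-in, not the experimental data; Kolmogorov's inertial-range prediction $S_2(r) = C(\varepsilon r)^{2/3}$ is quoted only as orientation, with $C$ an empirical constant that, per the abstract, differs on the free surface from its three-dimensional value.

```python
import numpy as np

def structure_function_2(u, dx, separations):
    """Second-order structure function S2(r) = <(u(x+r) - u(x))^2>
    for integer separations (in grid points); returns r in units of dx."""
    r = np.asarray(list(separations))
    s2 = np.array([np.mean((u[n:] - u[:-n]) ** 2) for n in r])
    return r * dx, s2

# Synthetic stand-in record (Brownian-like, so S2 grows with r).
rng = np.random.default_rng(2)
u = np.cumsum(rng.standard_normal(4096))

r, s2 = structure_function_2(u, dx=1.0, separations=range(1, 50))
print(r[0], s2[0], r[-1], s2[-1])
```

With surface PIV data, `u` would be the longitudinal velocity component sampled along a line on the surface, and the measured surface dissipation rate would set $\varepsilon$ when compensating $S_2$ by $(\varepsilon r)^{2/3}$.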
Chapter 2 shows how the discreteness that sets the stage for discontinuous quantum jumps between a restricted set of states is a consequence of the symmetry breaking that resolves the tension between the unitarity of quantum evolutions, and repeatable information transfer (the essence of quantum Darwinism, the subject of Chapters 7 and 8). Chapter 2 shows that, while the quantum superposition principle declares that every superposition is an equally legal quantum state, repeatability restricts states that can be recorded (found out) multiple times to an orthogonal set determined by the unitary dynamics of the process responsible for the repeated information transfer (i.e., for amplification). Such states persist and can imprint the evidence of their continued presence in other systems, e.g., on the subsystems of the environment. They become the elements of objective reality—e.g., outcomes of the measurements we perceive. Moreover, Chapter 2 motivates the need for the derivation of the probabilities of measurements (to be carried out in Chapter 3).