In the first chapters of this book we have seen methods suitable for a first-principles simulation of the interaction between a fluid and solid objects immersed in it. The associated computational burden is considerable and it is evident that those methods cannot handle large numbers of particles. In this chapter we develop an alternative approach which, while approximate, permits the simulation of thousands, or even millions, of particles immersed in a flow. The key feature which renders this possible is that the exchanges of momentum (and also possibly mass and energy) between the particle and the surrounding fluid are modeled, rather than directly resolved. This implies an approximate representation that is based on incorporating assumptions into the development of the mathematical model.
One of the most common approaches used today to model particle-laden flows is based on the “point-particle approximation,” i.e. the treatment of individual particles as mathematical point sources of mass, momentum, and energy. This approximation requires a careful examination of its underlying assumptions and limitations, both of which are considered in this chapter. Point-particle methods have relatively wide application and have proven a useful tool for modeling many complex systems, especially those comprising very large ensembles of particles. Details of the numerical aspects inherent to point-particle treatments are highlighted.
We start by putting point-particle methods into the context established earlier in this text and, in particular, in the previous chapter.
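To make the point-particle idea concrete, the sketch below advances an ensemble of particles under Stokes drag alone, with one-way coupling (the particles do not disturb the flow). All names, parameter values, and the uniform carrier flow are illustrative assumptions, not taken from the text; a real point-particle solver would include further force contributions and two-way coupling.

```python
import numpy as np

# Illustrative point-particle update: each particle feels only Stokes drag,
# with response time tau_p = rho_p * d^2 / (18 * mu).  Parameter values are
# hypothetical (a small dense particle in air).
rho_p, d, mu = 2500.0, 1e-4, 1.8e-5      # particle density, diameter, fluid viscosity
tau_p = rho_p * d**2 / (18.0 * mu)       # particle relaxation time

def advance(x, v, u_fluid, dt):
    """One explicit step for positions x and velocities v of N point particles.

    u_fluid(x) returns the undisturbed fluid velocity at each particle
    location; the particles do not modify the flow (one-way coupling).
    """
    a = (u_fluid(x) - v) / tau_p         # Stokes drag acceleration
    v_new = v + dt * a
    x_new = x + dt * v_new
    return x_new, v_new

# Example: 1000 particles released at rest in a uniform flow of 1 m/s.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(1000, 3))
v = np.zeros((1000, 3))
u = lambda x: np.tile([1.0, 0.0, 0.0], (x.shape[0], 1))

dt = 0.1 * tau_p
for _ in range(100):                     # integrate for about ten relaxation times
    x, v = advance(x, v, u, dt)
# After many tau_p, the particle velocities approach the fluid velocity.
```

The cost per step scales linearly with the number of particles, which is what makes simulations with millions of particles feasible once the fluid-particle exchange is modeled rather than resolved.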
Boundary integral methods are powerful numerical techniques for solving multiphase hydrodynamic and aerodynamic problems in conditions where the Stokes or potential-flow approximations are applicable. Stokes flows correspond to the low Reynolds number limit, and potential flows to the high Reynolds number regime where fluid vorticity can be neglected. For both Stokes and potential flows, the velocity field in the system satisfies linear governing equations. The total flow can thus be represented as a superposition of flows produced by appropriate point sources and dipoles at the fluid interfaces.
In the boundary integral approach the flow equations are solved directly for the velocity field at the fluid interfaces, rather than in the bulk fluid. Thus, these methods are well suited for describing multiphase systems. Examples of systems for which boundary integral algorithms are especially useful include suspensions of rigid particles or deformable drops under Stokes-flow conditions. Applications of boundary integral methods in fluid dynamics, however, cover a broader range. At one end of this range are investigations of the hydrodynamic mobility of macromolecules; at the other end are calculations of the flow field around an airplane wing in a potential flow approximation. Here we will not address the potential flow case, limiting ourselves to Stokes flow.
Introduction
In the present chapter we discuss boundary integral methods for multiphase flows in the Stokes-flow regime. We review the governing differential equations, derive their integral form, and show how to use the resulting boundary integral equations to determine the motion of particles and drops. Specific issues that are relevant for the numerical implementation of these equations are also described.
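The superposition idea behind the boundary integral representation can be sketched with the classical Stokeslet: the velocity field of a point force in unbounded Stokes flow, given by the Oseen-Burgers tensor. The snippet below is a generic textbook building block under assumed unit viscosity, not code from this chapter.

```python
import numpy as np

def stokeslet(x, x0, F, mu=1.0):
    """Velocity at x induced by a point force F located at x0 in unbounded Stokes flow.

    Uses the Oseen-Burgers tensor G_ij = delta_ij / r + r_i r_j / r^3,
    with u_i = G_ij F_j / (8 pi mu).
    """
    r = np.asarray(x, float) - np.asarray(x0, float)
    rn = np.linalg.norm(r)
    G = np.eye(3) / rn + np.outer(r, r) / rn**3
    return G @ F / (8.0 * np.pi * mu)

# Because the Stokes equations are linear, flows from several point forces
# simply superpose -- the principle underlying boundary integral methods.
u = stokeslet([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]) \
    + stokeslet([1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 1.0, 0.0])
```

In an actual boundary integral method, distributions of such singularities (and dipoles) are placed on the fluid interfaces and their densities are determined from the boundary conditions.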
The previous chapters have been devoted to methods capable of delivering “numerically exact” solutions of the Navier–Stokes equations as applied to various multiphase flow problems. In spite of their efficiency, these methods still require a substantial amount of computation even for relatively simple cases. It is therefore evident that the simulation of more complex flows approaching those encountered in most natural situations or technological contexts (sediment transport, fluidized beds, electric power generation, and many others) cannot be pursued by those means but must be based on a different approach. Furthermore, even if we did have detailed knowledge, e.g., of the motion of all the particles and of the interstitial fluid, most often, for practical purposes, we would be interested in quantities obtained by applying some sort of averaging to this immense amount of information. This observation suggests that it might be advantageous to attempt to formulate equations governing the time evolution of these averages directly. In this approach, rather than aiming at a detailed solution of the Navier–Stokes equations, we would be satisfied with a reduced description based on simplified mathematical models. While one may try to base such models on intuition, a more reliable way is perhaps to start from the exact equations and carry out a process of averaging which would filter out the inessential details retaining the basic physical processes which determine the behavior of the system.
Introduction
The issue of averaging in multiphase flow is a long-standing one with a history which stretches nearly as far back as for single-phase turbulence.
This book deals with multiphase flows, i.e. systems in which different fluid phases, or fluid and solid phases, are simultaneously present. The fluids may be different phases of the same substance, such as a liquid and its vapor, or different substances, such as a liquid and a permanent gas, or two liquids. In fluid-solid systems, the fluid may be a gas or a liquid, or gases, liquids, and solids may all coexist in the flow domain.
Without further specification, nearly all of fluid mechanics would be included in the previous paragraph. For example, a fluid flowing in a duct would be an instance of a fluid-solid system. The age-old problem of the fluid-dynamic force on a body (e.g. a leaf in the wind) would be another such instance, while the action of wind on ocean waves would be a situation involving a gas and a liquid.
In the sense in which the term is normally understood, however, multiphase flow denotes a subset of this very large class of problems. A precise definition is difficult to formulate as, often, whether a certain situation should be considered as a multiphase flow problem depends more on the point of view – or even the motivation – of the investigator than on its intrinsic nature. For example, wind waves would not fall under the purview of multiphase flow, even though some of the physical processes responsible for their behavior may be quite similar to those affecting gas–liquid stratified flows, e.g. in a pipe – a prime example of a multiphase system. The wall of a duct or a tree leaf may be considered as boundaries of the flow domain of interest, which would not qualify these as multiphase flow problems.
Nearly half a century of computational fluid dynamics has shown that it is very hard to beat uniform structured grids in terms of ease of implementation and computational efficiency. It is therefore not surprising that a large fraction of the most popular methods for finite Reynolds number multiphase flows today are methods where the governing equations are solved on such grids. The possibility of writing one set of governing equations for the whole flow field, frequently referred to as the “one-fluid” formulation, has been known since the beginning of large-scale computational studies of multiphase flows. It was, in particular, used by researchers at the Los Alamos National Laboratory in the early 1960s for the marker-and-cell (MAC) method, which permitted the first successful simulation of the finite Reynolds number motion of free surfaces and fluid interfaces. This approach was based on using marker particles distributed uniformly in each fluid to identify the different fluids. The material properties were reconstructed from the marker particles and sometimes separate surface markers were also introduced to facilitate the computation of the surface tension. While the historical importance of the MAC method for multiphase flow simulations cannot be overstated, it is now obsolete. In current usage, the term “MAC method” usually refers to a projection method using a staggered grid.
When the governing equations are solved on a fixed grid, the different fluids must be identified by a marker function that is advected by the flow. Several methods have been developed for that purpose. The volume-of-fluid (VOF) method is the oldest and, after many improvements and innovations, continues to be widely used. Other marker function methods include the level-set method, the phase-field method, and the constrained interpolated propagation (CIP) method.
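The marker-function idea can be illustrated in one dimension: a color function tags the two fluids on a fixed grid, is advected with the flow, and the material properties are reconstructed from it. The first-order upwind scheme and all parameter values below are chosen purely for brevity; actual VOF and level-set schemes use far more accurate interface treatments.

```python
import numpy as np

# Minimal 1D illustration of the one-fluid idea: a marker function c in [0, 1]
# tags fluid 1 (c = 1) versus fluid 2 (c = 0); material properties are then
# reconstructed from c on the fixed grid.
n, L, u = 200, 1.0, 1.0                    # cells, domain length, uniform velocity
dx = L / n
x = (np.arange(n) + 0.5) * dx
c = ((x > 0.2) & (x < 0.4)).astype(float)  # initial slug of fluid 1

rho1, rho2 = 1000.0, 1.0                   # densities of the two fluids

dt = 0.5 * dx / u                          # CFL number 0.5
for _ in range(100):
    # first-order upwind flux for u > 0, periodic boundaries
    c = c - u * dt / dx * (c - np.roll(c, 1))

rho = rho1 * c + rho2 * (1.0 - c)          # reconstructed one-fluid density field
```

Note that the upwind update conserves the integral of the marker function exactly (up to round-off) and keeps it bounded in [0, 1], two properties that practical advection schemes for marker functions must also preserve.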
In the previous chapter we have presented segregated solution methods for multifluid models. When the interaction among the phases is very strong, or the processes to be simulated have short time scales, the methods that we now describe, in which the equations are more tightly coupled in the solution procedure, are preferable.
Work on methods of this type received a strong impetus from the development of nuclear reactor thermohydraulic safety codes in the 1970s and 1980s. This activity led to well-known codes such as RELAP, TRAC, SIMMER, and several others. The latest developments of these codes focus on the refinement of models, the inclusion of three-dimensional capabilities, better data structures, and vectorization, rather than fundamental changes in the basic algorithms. By and large, the numerical methods they employ are an outgrowth of the ICE (Implicit Continuous-fluid Eulerian) approach developed by Harlow and Amsden (1971) in the late 1960s. While very robust and stable, these methods, described in Section 11.2, are only first-order accurate in space and time and have other shortcomings. The more recent work, some of which is outlined in the second part of this chapter, is based on newer developments in computational fluid dynamics, which are summarized in Section 11.3.
A tendency toward more strongly coupled solution methods is also evident in contemporary work springing from the segregated approach described in the previous chapter (see, e.g. Kunz et al., 1998, 1999, 2000). These developments lead to a gradual blurring of the distinction between the two approaches.
The ground-state energy and properties of any many-electron atom or molecule may be rigorously computed by variationally computing the two-electron reduced density matrix (2-RDM) rather than the many-electron wavefunction. While early attempts fifty years ago to compute the ground-state 2-RDM directly were stymied because the 2-RDM must be constrained to represent an N-electron wavefunction, recent advances in theory and optimization have made direct computation of the 2-RDM possible. The constraints in the variational calculation of the 2-RDM require a special optimization known as semidefinite programming. Development of first-order semidefinite programming for the 2-RDM method has reduced the computational costs of the calculation by orders of magnitude [Mazziotti, Phys. Rev. Lett. 93 (2004) 213001]. The variational 2-RDM approach is effective at capturing multi-reference correlation effects that are especially important at non-equilibrium molecular geometries. Recent work on 2-RDM methods will be reviewed and illustrated with particular emphasis on the importance of advances in large-scale semidefinite programming.
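The defining constraint of such a semidefinite program is that the trial matrix remain positive semidefinite. As a toy illustration of that constraint only (not the first-order SDP solver of the abstract), the snippet below projects a symmetric matrix onto the cone of positive-semidefinite matrices by zeroing its negative eigenvalues, which yields the nearest PSD matrix in the Frobenius norm.

```python
import numpy as np

def project_psd(A):
    """Nearest positive-semidefinite matrix to symmetric A (Frobenius norm).

    Diagonalize, clip negative eigenvalues to zero, and reassemble -- a
    generic building block of projection-based semidefinite programming.
    """
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)) @ V.T

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])          # eigenvalues 3 and -1: not PSD
B = project_psd(A)                  # eigenvalues clipped to 3 and 0
```

Repeated projections of this kind, alternated with the linear constraints, are one simple way to enforce semidefinite conditions numerically.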
We present a sparse grid/hyperbolic cross discretization for many-particle problems. It involves the tensor product of a one-particle multilevel basis. Subsequent truncation of the associated series expansion then results in a sparse grid discretization. Here, depending on the norms involved, different variants of sparse grid techniques for many-particle spaces can be derived that, in the best case, result in complexities and error estimates which are independent of the number of particles. Furthermore, we introduce an additional constraint which gives antisymmetric sparse grids that are suited to fermionic systems. We apply the antisymmetric sparse grid discretization to the electronic Schrödinger equation and compare costs, accuracy, convergence rates, and scalability with respect to the number of electrons present in the system.
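The truncation behind a hyperbolic cross can be stated very compactly: keep only those multi-indices whose coordinate-wise product stays below a threshold. The snippet below builds such an index set and compares its size with the full tensor-product grid; the specific notation and cutoff are illustrative, not the paper's.

```python
import math
from itertools import product

def hyperbolic_cross(dim, n):
    """Multi-indices k in {0,...,n}^dim with prod_i (k_i + 1) <= n + 1.

    The product condition replaces the full tensor-product box and is what
    makes sparse grid bases so much smaller in higher dimensions.
    """
    return [k for k in product(range(n + 1), repeat=dim)
            if math.prod(ki + 1 for ki in k) <= n + 1]

full = (8 + 1) ** 3                   # full tensor-product grid: 729 indices for d=3, n=8
sparse = len(hyperbolic_cross(3, 8))  # the hyperbolic cross keeps only 44
```

Even in three dimensions the reduction is more than an order of magnitude, and the gap widens rapidly with the number of particles.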
The Diffusion Monte Carlo method is devoted to the computation of electronic ground-state energies of molecules. In this paper, we focus on implementations of this method which consist in exploring the configuration space with a fixed number of random walkers evolving according to a stochastic differential equation discretized in time. We allow stochastic reconfigurations of the walkers to reduce the discrepancy between the weights that they carry. On a simple one-dimensional example, we prove the convergence of the method for a fixed number of reconfigurations when the number of walkers tends to +∞ while the timestep tends to 0. We confirm our theoretical rates of convergence by numerical experiments. Various resampling algorithms are investigated, both theoretically and numerically.
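A minimal walker-based Diffusion Monte Carlo run can be sketched on the 1D harmonic oscillator, whose exact ground-state energy is 0.5 (in units hbar = m = omega = 1). This is a generic textbook scheme with simple birth/death branching and population control, not the reconfiguration algorithm analysed in the paper; all parameter values are illustrative.

```python
import numpy as np

# Toy diffusion Monte Carlo for V(x) = x^2 / 2.  Walkers diffuse by Gaussian
# steps and branch with weight exp(-dt * (V - E_ref)); E_ref is adjusted each
# step to hold the population near its target.
rng = np.random.default_rng(1)
V = lambda x: 0.5 * x**2

n_target, dt = 2000, 0.01
walkers = rng.normal(size=n_target)       # initial guess for the walker cloud
E_ref = 0.5                               # initial guess for the trial energy
energies = []

for step in range(4000):
    walkers = walkers + np.sqrt(dt) * rng.normal(size=walkers.size)  # diffusion
    w = np.exp(-dt * (V(walkers) - E_ref))                           # branching weights
    copies = (w + rng.uniform(size=w.size)).astype(int)              # stochastic rounding
    walkers = np.repeat(walkers, copies)                             # birth / death
    E_ref += 0.5 * np.log(n_target / max(walkers.size, 1))           # population control
    if step >= 1000:
        energies.append(E_ref)                                       # record after burn-in

E0 = float(np.mean(energies))   # estimate of the ground-state energy, near 0.5
```

The population-control term is exactly the kind of weight-discrepancy correction whose effect on convergence the paper studies rigorously.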
We present a novel application of best N-term approximation theory in the framework of electronic structure calculations. The paper focusses on the description of electron correlations within a Jastrow-type ansatz for the wavefunction. As a starting point we discuss certain natural assumptions on the asymptotic behaviour of two-particle correlation functions $\mathcal{F}^{(2)}$ near electron-electron and electron-nuclear cusps. Based on Nitsche's characterization of best N-term approximation spaces $A_{q}^{\alpha}(H^{1})$, we prove that $\mathcal{F}^{(2)}\in A_{q}^{\alpha}(H^{1})$ for $q>1$ and $\alpha=\frac{1}{q}-\frac{1}{2}$ with respect to a certain class of anisotropic wavelet tensor product bases. Computational arguments are given in favour of this specific class compared to other possible tensor product bases. Finally, we compare the approximation properties of wavelet bases with standard Gaussian-type basis sets frequently used in quantum chemistry.
In this paper, we discuss advanced thermostatting techniques for sampling molecular systems in the canonical ensemble. We first survey work on dynamical thermostatting methods, including the Nosé-Poincaré method, and generalized bath methods which introduce a more complicated extended model to obtain better ergodicity. We describe a general controlled temperature model, projective thermostatting molecular dynamics (PTMD), and demonstrate that it flexibly accommodates existing alternative thermostatting methods, such as Nosé-Poincaré, Nosé-Hoover (with or without chains), Bulgac-Kusnezov, or recursive Nosé-Poincaré chains. These schemes offer possible advantages for use in computing thermodynamic quantities, and facilitate the development of multiple time-scale modelling and simulation techniques. In addition, PTMD advances a preliminary step toward the realization of true nonequilibrium motion for selected degrees of freedom, by shielding the variables of interest from the artificial effect of thermostats. We discuss extension of the PTMD method for constant temperature and pressure models. Finally, we demonstrate schemes for simulating systems with an artificial temperature gradient, by enabling the use of two temperature baths within the PTMD framework.
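The extended-system idea common to these thermostats can be sketched with a single Nosé-Hoover bath coupled to a 1D harmonic oscillator (units with k_B T = 1 and thermostat mass Q = 1, both illustrative). The extended dynamics conserves H' = p^2/2 + q^2/2 + Q ξ^2/2 + kT η, which serves as a built-in correctness check; note that for this very system a single Nosé-Hoover thermostat is known not to be ergodic, one motivation for the generalized baths the paper surveys.

```python
import numpy as np

kT, Q = 1.0, 1.0                          # bath temperature and thermostat mass

def rhs(y):
    """Nose-Hoover vector field for state y = (q, p, xi, eta)."""
    q, p, xi, eta = y
    return np.array([p,                   # dq/dt
                     -q - xi * p,         # dp/dt: spring force plus bath friction
                     (p**2 - kT) / Q,     # dxi/dt: thermostat feedback
                     xi])                 # deta/dt: bookkeeping for H'

def rk4(y, dt):
    """Classical fourth-order Runge-Kutta step (illustrative integrator)."""
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def H_ext(y):
    q, p, xi, eta = y
    return 0.5 * p**2 + 0.5 * q**2 + 0.5 * Q * xi**2 + kT * eta

y = np.array([1.0, 0.0, 0.0, 0.0])
H0 = H_ext(y)
for _ in range(10000):                    # integrate to t = 10
    y = rk4(y, 1e-3)
# The conserved quantity of the extended system barely drifts.
```

Monitoring H' in this way is a standard diagnostic for implementations of any of the extended-bath schemes mentioned above.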
The present article is an overview of some mathematical results, which provide elements of a rigorous basis for some multiscale computations in materials science. The emphasis is laid upon atomistic-to-continuum limits for crystalline materials. Various mathematical approaches are addressed. The setting is stationary. The relation to existing techniques used in the engineering literature is investigated.
The purpose of the present article is to compare different phase-space sampling methods, such as purely stochastic methods (Rejection method, Metropolized independence sampler, Importance Sampling), stochastically perturbed Molecular Dynamics methods (Hybrid Monte Carlo, Langevin Dynamics, Biased Random Walk), and purely deterministic methods (Nosé-Hoover chains, Nosé-Poincaré and Recursive Multiple Thermostats (RMT) methods). After recalling some theoretical convergence properties for the various methods, we provide some new convergence results for the Hybrid Monte Carlo scheme, requiring weaker (and easier to check) conditions than previously known conditions. We then turn to the numerical efficiency of the sampling schemes for a benchmark model of linear alkane molecules. In particular, the numerical distributions that are generated are compared in a systematic way, on the basis of some quantitative convergence indicators.
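A minimal Hybrid Monte Carlo step illustrates the scheme the new convergence results concern: leapfrog-integrate Hamiltonian dynamics with a freshly resampled momentum, then accept or reject with a Metropolis test on the total energy. The 1D standard Gaussian target and the step-size/path-length parameters below are illustrative choices, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(2)
U  = lambda q: 0.5 * q**2          # potential = -log of the target density
dU = lambda q: q

def hmc_step(q, eps=0.2, n_leap=10):
    """One Hybrid Monte Carlo update of the position q."""
    p = rng.normal()                # fresh Gaussian momentum
    q_new, p_new = q, p
    for _ in range(n_leap):         # leapfrog integration of Hamilton's equations
        p_new -= 0.5 * eps * dU(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * dU(q_new)
    dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
    return q_new if rng.uniform() < np.exp(-dH) else q   # Metropolis test

q, samples = 0.0, []
for _ in range(20000):
    q = hmc_step(q)
    samples.append(q)
samples = np.asarray(samples)
# Sample mean and variance approach 0 and 1, the moments of the target.
```

The Metropolis correction is what removes the time-discretization bias of the leapfrog trajectories, so the chain samples the target exactly in the long run.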
This paper reviews popular acceleration techniques to converge the non-linear self-consistent field equations appearing in quantum chemistry calculations with localized basis sets. The different methodologies, as well as their advantages and limitations, are discussed within the same framework. Several illustrative examples of calculations are presented. This paper attempts to describe recent achievements and remaining challenges in this field.
Although the intellectual merits of computational modelling across various length and time scales are generally well accepted, good illustrative examples are often lacking. One way to begin appreciating the benefits of the multiscale approach is to first gain experience in probing complex physical phenomena at one scale at a time. Here we discuss materials modelling at two characteristic scales separately: the atomistic level, where interactions are specified through classical potentials, and the electronic level, where interactions are treated quantum mechanically. The former is generally sufficient for dealing with mechanical deformation at large strain, whereas the latter is necessary for treating chemical reactions or electronic transport. We will discuss simulations of defect nucleation using molecular dynamics, the study of water-silica reactions using a tight-binding approach, the design of a semiconductor-oxide interface using density functional theory, and the analysis of a conjugated polymer in molecular actuation using Hartree-Fock calculations. The diversity of the problems discussed notwithstanding, our intent is to lay the groundwork for future problems in materials research, a few of which will be mentioned, where modelling at the electronic and atomistic scales is needed in an integrated fashion. It is in these problems that the full potential of multiscale modelling can be realized.