Offering a comprehensive treatment of adhesive particle flows, this book adopts a particle-level approach oriented toward directly simulating the various fluid, electric field, collision, and adhesion forces and torques acting on the particles, within the framework of a discrete-element model. It is ideal for professionals and graduate students working in engineering, atmospheric and condensed matter physics, materials science, environmental science, and other disciplines where particulate flows play a significant role. The presentation is applicable to a wide range of flow fields, including aerosols, colloids, fluidized beds, and granular flows. It describes both physical models of the various forces and torques on the particles and the practical aspects necessary for efficient implementation of these models in a computational framework.
The mathematical theory of elasticity based on the idealization presented in this chapter is a remarkable classical branch of the mechanics of continua, one that has advanced far in the more than 200 years it has been studied. This chapter gives a concise presentation of the fundamentals of the theory, written with readers who have not encountered it before in mind; it also serves as preparation for the next chapter, where we discuss the mathematical modeling of fracture phenomena, nowadays the principal area of attention.
The fundamental idealization
A crucially important property of a deformable solid continuum is that it can support non-trivial stress distributions even when the body is at rest, i.e. when the velocity is everywhere equal to zero.
The theory of elasticity as a science is older than fluid mechanics. Its basic law, later developed into a fundamental model, was formulated by Robert Hooke more than 300 years ago in the article Hooke (1678).
Readers already know the formulation of Hooke's law for an elastic rod. A rod is an elastic body whose length l is substantially larger than its lateral dimensions and which has a constant cross-sectional area S (see Figure 5.1, taken from the book of Galileo Galilei (1638)). Let us take the longitudinal direction of the rod as the x1 axis of a system of orthonormal Cartesian coordinates.
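Hooke's law for such a rod states that the elongation is proportional to the applied axial force and the length, and inversely proportional to the cross-sectional area S and the elastic (Young's) modulus of the material. A minimal numerical sketch, where the steel-rod values are illustrative assumptions and not taken from the text:

```python
# Hooke's law for an elastic rod in one dimension:
#   elongation = F * l / (E * S)
# where F is the axial force, l the rod length, S the cross-sectional
# area, and E the Young's modulus of the material.

def rod_elongation(force, length, area, youngs_modulus):
    """Elongation of a uniform rod predicted by linear elasticity."""
    return force * length / (youngs_modulus * area)

# Illustrative values: a 1 m steel rod, 1 cm^2 cross-section, 10 kN load.
E_steel = 200e9   # Pa, a typical Young's modulus for steel (assumed)
dL = rod_elongation(force=10e3, length=1.0, area=1e-4, youngs_modulus=E_steel)
print(f"elongation: {dL * 1e3:.3f} mm")   # prints "elongation: 0.500 mm"
```

The linearity in F is the defining feature of the Hookean idealization; it holds only for small deformations.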
In his preface to this book, Professor G. I. Barenblatt recounts the saga of the course of mechanics of continua on which the book is based. This saga originated at the Moscow State University under the aegis of the renowned Rector I. G. Petrovsky and moved with the author first to the Moscow Institute for Physics and Technology, then to Cambridge University in England, then to Stanford University, until it reached its final home as a much loved and appreciated course at the mathematics department of the University of California, Berkeley. Those not fortunate enough to have been able to attend the course now have the opportunity to see what has made it so special.
The present book is a masterful exposition of fluid and solid mechanics, informed by the ideas of scaling and intermediate asymptotics, a methodology and point of view of which Professor Barenblatt is one of the originators. Most physical theories are intermediate, in the sense that they describe the behavior of physical systems on spatial and temporal scales intermediate between much smaller scales and much larger scales; for example, the Navier–Stokes equations describe fluid motion on spatial scales larger than molecular scales but not so large that relativity must be taken into account and on time scales larger than the time scale of molecular collisions but not so large that the vessel that contains the fluid collapses through aging.
The term “error” is going to appear throughout this book in different contexts. The varieties of error we will be concerned with are:
Experimental error. We may wish to calculate some function y(x1, …, xn), where the quantities xi are measured. Any such measurement has associated errors, and they will affect the accuracy of the calculated y.
Roundoff error. Even if x were measured exactly, odds are it cannot be represented exactly in a digital computer. Consider π, which cannot be represented exactly in decimal form. We can write π ≈ 3.1416, rounding the exact number to five significant figures. Some roundoff error occurs in almost every calculation with real numbers, and controlling how strongly it impacts the final result of a calculation is always an important numerical consideration.
Approximation error. Sometimes we want one thing but calculate another, intentionally, because the other is easier or has more favorable properties. For example, one might choose to represent a complicated function by its Taylor series. When substituting expressions that are not mathematically identical we introduce approximation error.
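The last two varieties are easy to observe directly. A short sketch (the choices of 0.1 and of a five-term Taylor series are illustrative, not from the text):

```python
import math

# Roundoff error: 0.1 has no exact binary representation, so a sum of
# ten copies drifts slightly away from the mathematically exact 1.0.
s = sum(0.1 for _ in range(10))
print(s == 1.0)        # False
print(abs(s - 1.0))    # tiny but nonzero

# Approximation error: truncating the Taylor series of exp(x) replaces
# the function by a polynomial that is merely close to it.
def exp_taylor(x, n_terms):
    """Partial sum of the Taylor series of exp(x) about 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

approx = exp_taylor(1.0, 5)           # 1 + 1 + 1/2 + 1/6 + 1/24
print(abs(math.exp(1.0) - approx))    # ~0.0099, dominated by truncation
```

Note the difference in scale: the roundoff residue is near machine precision, while the truncation error is set entirely by how many terms we chose to keep.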
Experimental error is largely outside the scope of numerical treatment, and we'll assume here, with few exceptions, that it's just something we have to live with. Experimental error plays an important role in data fitting, which will be described at length in Chapter 8.
Data fitting can be viewed as a generalization of polynomial interpolation to the case where we have more data than is needed to construct a polynomial of specified degree.
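To make this concrete, here is a small sketch of that generalization: five data points over-determine the three coefficients of a quadratic, so instead of interpolating we minimize the sum of squared residuals. The data values are illustrative assumptions, roughly following 2x² + 1 with small perturbations:

```python
import numpy as np

# Five points, but only three quadratic coefficients: interpolation is
# impossible in general, so least squares finds the quadratic that
# minimizes the sum of squared residuals.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 9.2, 19.1, 33.0])   # roughly 2*x**2 + 1 (assumed data)

coeffs = np.polyfit(x, y, deg=2)             # 3 coefficients from 5 points
residuals = y - np.polyval(coeffs, x)
print(coeffs)                                # close to [2, 0, 1]
print(np.sum(residuals**2))                  # small, but not exactly zero
```

With exactly three points the residual would vanish and we would recover ordinary polynomial interpolation; the extra data is what forces the "best fit" compromise.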
C. F. Gauss claimed to have been the first to develop solutions to the least squares problem, and both Gaussian elimination and the Gauss–Seidel iterative method were developed to solve these problems [52, 79]. In fact, interest in least squares by Galileo predates Gauss by over 200 years; a comprehensive history and analysis is given by Harter [97]. In addition to Gauss' contributions, the Jacobi iterative method [118] and the Cholesky decomposition method [13] were developed to solve least squares problems. Clearly, the least squares problem was (and continues to be) a problem of considerable importance. All these methods were applied to the normal equations, which recast an overdetermined system as a square system with a symmetric positive definite matrix. Despite the astounding historical importance of the normal equations, the argument will be made that you should never use them. Extensions of least squares to nonlinear problems, and to linear problems with normal error, are described.
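The core of the argument against the normal equations is that forming AᵀA squares the condition number: cond(AᵀA) = cond(A)². A sketch of the effect, using an ill-conditioned Vandermonde matrix with assumed data (not from the book):

```python
import numpy as np

# Forming A^T A squares the condition number, so the normal equations
# lose roughly twice as many digits as an orthogonalization-based solver.
x = np.linspace(0.0, 1.0, 50)
A = np.vander(x, 10)                  # monomial basis: ill-conditioned
b = A @ np.ones(10)                   # exact right-hand side; solution is all ones

x_normal = np.linalg.solve(A.T @ A, A.T @ b)    # via the normal equations
x_lsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based least squares

print(np.linalg.cond(A))               # large
print(np.linalg.cond(A.T @ A))         # roughly its square
print(np.linalg.norm(x_normal - 1.0))  # typically noticeably worse than:
print(np.linalg.norm(x_lsq - 1.0))
```

QR factorization or the SVD works with A directly and therefore pays only cond(A), not its square, which is why those routes are preferred in practice.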
Least squares refers to a best fit in the L2 norm, which is by far the most commonly used. However, other norms are important for certain applications. Covariance weighting leads to minimization in the Mahalanobis norm; L1 is commonly used in financial modeling; and L∞ may be most suitable when the underlying error distribution is uniform rather than normal.
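Covariance weighting reduces to an ordinary L2 problem after "whitening" the residual: if M = C⁻¹ is the inverse covariance and M = LLᵀ is its Cholesky factorization, then rᵀMr = ‖Lᵀr‖². A minimal sketch, with assumed data and a diagonal covariance chosen so the last observation is far less trusted:

```python
import numpy as np

# Minimizing the Mahalanobis norm of the residual r = y - A x, i.e.
# r^T C^{-1} r, is the same as ordinary least squares on the whitened
# system (L^T A) x ~ (L^T y), where C^{-1} = L L^T (Cholesky).
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.9, 2.1, 3.0, 4.2])
C = np.diag([0.01, 0.04, 0.04, 1.0])   # last observation is much noisier

L = np.linalg.cholesky(np.linalg.inv(C))        # whitening transform
x_w, *_ = np.linalg.lstsq(L.T @ A, L.T @ y, rcond=None)
x_plain, *_ = np.linalg.lstsq(A, y, rcond=None)
print(x_w)       # weighted fit: the noisy last point is downweighted
print(x_plain)   # unweighted L2 fit treats all points equally
```

The same pattern handles any symmetric positive definite covariance, not just a diagonal one.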
The ab initio determination of molecular structure is an example that will use:
• singular value decomposition (Section 3.4), in particular the Löwdin decomposition (8.13) encountered in data fitting;
• the QR method for eigenvalues and eigenvectors (Section 3.3);
• interpolation (Chapter 5);
• fixed point iteration (Chapter 6) with stabilization by damping (Section 8.4);
• data fitting (Chapter 8), which involves solutions of linear systems (Chapter 2);
• integration: generally (Chapter 9), and Gaussian integration in particular (Section 9.4, Problem 9.6, which requires numerical root finding (Section 6.6) to set up), and the use of recursions, e.g., (10.35); and
• optimization with the variable metric method (Section 7.4).
In addition, concerns about numerical error (Chapter 1) are omnipresent. We will encounter the error function (Problems 5.7 and 9.7), and lots of Gaussian integrals (equations (8.10), (11.8), and (11.29)). We will use a paradigm called the variational principle that is essentially what motivated the conjugate gradient algorithm (Section 4.1).
The chemical physics problem is described in Section 12.1, and the Hartree–Fock–Roothaan equations are derived. These determine the energy of a particular molecular configuration. The HFR equations rely on a number of integrals of Gaussian functions that are introduced in their simplest form in Section 12.2. With these simple formulas we determine the energy of the H2 molecule for prescribed geometry, demonstrating in Section 12.3 reasonable results compared to far more elaborate calculations. In Section 12.4 the more complex Gaussian integrals are addressed.