We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods based on a splitting of the vector-valued piecewise-linear discontinuous finite element space that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, mixed, and traction-free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess their performance.
There is a resurgence of applications in which the calculus of variations has direct relevance. In addition to applications in solid mechanics and dynamics, it is now being applied in a variety of numerical methods, numerical grid generation, modern physics, various optimization settings, and fluid dynamics. Many applications, such as nonlinear optimal control theory applied to continuous systems, have only recently become tractable computationally, with the advent of advanced algorithms and large computer systems. This book reflects the strong connection between the calculus of variations and the applications for which variational methods form the fundamental foundation. The mathematical fundamentals of the calculus of variations (at least those necessary to pursue applications) are rather compact and are contained in a single chapter of the book. The majority of the text consists of applications of variational calculus to a variety of fields.
We first prove existence and uniqueness of optimal transportation maps for Monge's problem associated with a cost function with a strictly convex constraint in the Euclidean plane ℝ². The cost function coincides with the Euclidean distance if the displacement y − x belongs to a given strictly convex set, and it is infinite otherwise. Secondly, we give a sufficient condition for existence and uniqueness of optimal transportation maps for the original Monge problem in ℝ². Finally, we obtain existence of optimal transportation maps for a cost function with a convex constraint, i.e. y − x belongs to a given convex set with at most countably many flat parts.
Any two-input left-invariant control affine system of full rank, evolving on the Euclidean group SE(2), is (detached) feedback equivalent to one of three typical cases. In each case, we consider an optimal control problem which is then lifted, via the Pontryagin Maximum Principle, to a Hamiltonian system on the dual space 𝔰𝔢(2)*. These reduced Hamilton–Poisson systems are the main topic of this paper. A qualitative analysis of each reduced system is performed. This analysis includes a study of the stability nature of all equilibrium states, as well as qualitative descriptions of all integral curves. Finally, the reduced Hamilton equations are explicitly integrated by Jacobi elliptic functions. Parametrisations for all integral curves are exhibited.
The parabolic equations driven by linearly multiplicative Gaussian noise are stabilizable in probability by linear feedback controllers with support in a suitably chosen open subset of the domain. This procedure extends to Navier–Stokes equations with multiplicative noise. The exact controllability is also discussed.
In this paper, we consider two-scale limits obtained with increasing homogenization periods, each period being an integer multiple of the previous one. We establish that, up to a measure-preserving rearrangement, these two-scale limits form a bounded martingale: the rearranged two-scale limits themselves converge both strongly in L² and almost everywhere as the period tends to +∞. This limit, called the Two-Scale Shuffle limit, contains all the information present in all the two-scale limits in the sequence.
In this chapter, we work with objects that possess a magnitude and a direction. These objects are known as physical vectors or simply vectors. There are two types of vectors: bound vectors, which are fixed to a specified point in space, and free vectors, which are allowed to move around in space. Ironically, free vectors are often used when working in rigid domains, whereas bound vectors are often used when working in flowing or flexible domains. We mostly deal with bound vectors.
We denote vectors with underlined bold letters, such as v, and we denote scalars with nonunderlined letters, such as a, unless otherwise noted. Familiar examples of vectors are velocity, acceleration, and force. For these vectors, the concepts of direction and magnitude are natural and easy to grasp. However, it is important to note that vectors can be constructed according to the user's interpretation and objectives. As long as a magnitude and a direction can be attached to a physical property, vector analysis can be used. For instance, for the angular velocity of a rigid body, one needs to describe how fast the rotation is, whether the rotation is counterclockwise or clockwise, and where the axis of rotation is. By attaching an arrow whose direction lies along the axis of rotation, whose length indicates how fast the rotation is, and whose orientation is consistent with a counterclockwise or clockwise convention, the angular velocity becomes a vector. In our case, we adopt the right-hand screw convention to represent the counterclockwise direction as the positive direction (see Figure 4.1).
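As a small illustration of this construction (a sketch with made-up numbers, not an example from the text): a counterclockwise rotation about the z-axis is represented, by the right-hand screw convention, as an angular velocity vector along +z, and the linear velocity of a point on the body follows from the cross product v = ω × r.

```python
import numpy as np

# A rigid body rotating counterclockwise about the z-axis at 2 rad/s.
# By the right-hand screw convention, the angular velocity points along +z.
omega = np.array([0.0, 0.0, 2.0])   # angular velocity (rad/s)

# Position of a material point relative to a point on the rotation axis.
r = np.array([1.0, 0.0, 0.0])

# Linear velocity of the point: v = omega x r.
v = np.cross(omega, r)
print(v)   # [0. 2. 0.] -- tangential, consistent with counterclockwise rotation
```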
This book was written as a textbook on applied mathematics for engineers and scientists, with the express goal of merging analytical and numerical methods more tightly than other textbooks. The role of applied mathematics has continued to grow in importance with the advancement of science and technology, ranging from the modeling and analysis of natural phenomena to the simulation and optimization of man-made systems. With the huge and rapid advances in computing technology, larger and more complex problems can now be tackled and analyzed in a very timely fashion. In several cases, what used to require supercomputers can now be solved using personal computers. Nonetheless, as the technological tools continue to progress, it has become even more imperative that the results be understood and interpreted clearly and correctly, and that the strengths and limitations of the numerical methods used be more deeply understood. This means that we cannot forgo the analytical techniques, because they continue to provide indispensable insights into the veracity and meaning of the results. The analytical tools remain of prime importance for basic understanding, for building mathematical models, and for data analysis. Still, when it comes to solving large and complex problems, numerical methods are needed.
In several cases, the analytical solution of ordinary differential equations, including high-order, multiple, and nonlinear types, may not be easy to obtain or evaluate. In some cases, it requires truncation of an infinite series, whereas in other cases, it may require numerical integration via quadratures.
An alternative approach is to determine the solution directly by numerical methods. This means that the solution to a differential equation will not be given as a function of the independent variables. Instead, the numerical solution is a set of points discretized over the chosen range of the independent variables. These points can then be plotted and processed further for subsequent analysis. Thus, unlike analytical solutions, numerical solutions do not yield compact formulas. Nonetheless, numerical methods are able to handle a much larger class of ordinary differential equations.
We begin the chapter with problems in which all the fixed conditions are set at the initial point, for example, t = 0 or x = x0, depending on which variable is the independent one. These problems are known as initial value problems, or IVPs for short. We discuss some of the better-known methods for solving initial value problems, such as one-step methods (e.g., the Euler and Runge-Kutta methods) and multistep methods (e.g., the Adams-Bashforth and Adams-Moulton methods).
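As a minimal sketch of the simplest one-step method (the test problem y′ = −2y and the step size are illustrative, not taken from the chapter), the forward Euler method advances the solution by repeatedly stepping along the local slope:

```python
import numpy as np

def euler(f, t0, y0, h, n_steps):
    """Forward Euler: advance y' = f(t, y) from (t0, y0) with fixed step h."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)   # one-step update using the slope at the left endpoint
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Illustrative IVP: y' = -2y, y(0) = 1, whose exact solution is exp(-2t).
ts, ys = euler(lambda t, y: -2.0 * y, t0=0.0, y0=1.0, h=0.1, n_steps=10)
print(ys[-1], np.exp(-2.0 * ts[-1]))   # numerical vs. exact value at t = 1
```

Halving the step size h roughly halves the error, reflecting the first-order accuracy of the method.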
One of the most basic applications of matrices is the solution of systems of equations. Generally, problems involving multiple equations can be categorized as either linear or nonlinear. If a problem involves only linear equations, it can be readily formulated as Ax = b, and different matrix approaches can be used to find the vector of unknowns x. When the problem is nonlinear, more complex approaches are needed. Numerical approaches to the solution of nonlinear equations, such as Newton's method and its variants, also take advantage of matrix equations.
In this chapter, we first discuss the solution of the linear equation Ax = b. This includes direct and indirect methods; the indirect methods are also known as iterative methods. The distinguishing feature between these two types of approaches is that direct (noniterative) methods obtain the solution using techniques such as reduction by elimination, factorization, forward or backward substitution, matrix splitting, or direct inversion. Conversely, the indirect (iterative) methods require an initial guess for the solution, which is then improved by iterative algorithms until it meets some specified criterion, such as a maximum number of iterations or a tolerance on the error.
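The contrast can be sketched on a small hypothetical system (the matrix, tolerance, and iteration cap below are illustrative): a direct, factorization-based solve versus Jacobi iteration, one of the simplest matrix-splitting iterative methods.

```python
import numpy as np

# A small diagonally dominant system, for which Jacobi iteration converges.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

# Direct method: factorization-based solve.
x_direct = np.linalg.solve(A, b)

# Indirect (iterative) method: Jacobi iteration from an initial guess,
# stopped by a tolerance or a maximum number of iterations.
x = np.zeros_like(b)                       # initial guess
D = np.diag(A)                             # diagonal of A
R = A - np.diagflat(D)                     # off-diagonal part
for _ in range(100):                       # maximum number of iterations
    x_new = (b - R @ x) / D                # x_{k+1} = D^{-1}(b - R x_k)
    if np.linalg.norm(x_new - x) < 1e-10:  # tolerance on the update
        x = x_new
        break
    x = x_new

print(x_direct, x)   # the two solutions agree to within the tolerance
```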
In this chapter, we discuss the solution of first-order partial differential equations. The main technique used is known as the method of characteristics, in which various paths in the domain of the independent variables, known as characteristics, are obtained. The solutions are then propagated along these paths, yielding the characteristic solution curves. (Note that the two terms are different: characteristics vs. characteristic curves.) A solution surface can then be constructed by combining (or “bundling”) these characteristic curves.
The method of characteristics is applicable to partial differential equations that take on particular forms known as quasilinear equations. Special cases are the semilinear, linear, and strictly linear forms. It can be shown that for the semilinear cases, the characteristics do not intersect each other. However, for general quasilinear equations, the characteristics can intersect. When they do, the solution becomes discontinuous, and the discontinuities are known as shocks. Moreover, if a discontinuity is present in the initial conditions, rarefaction can occur, in which a fan of characteristics is filled in to complete the solution surface. A brief treatment of shocks and rarefaction is given in Section J.1 as an appendix.
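A minimal sketch of how characteristics behave (using the inviscid Burgers equation u_t + u·u_x = 0 as a standard quasilinear example; the initial profile is made up, not from the chapter): each characteristic is a straight line carrying a constant solution value, and crossing characteristics signal a shock.

```python
import numpy as np

# Inviscid Burgers equation u_t + u*u_x = 0 (a quasilinear example).
# Along each characteristic x(t) = x0 + u0(x0)*t, u stays equal to u0(x0).
def u0(x0):
    return np.exp(-x0**2)           # illustrative smooth initial profile

x0 = np.linspace(-3.0, 3.0, 13)     # feet of the characteristics at t = 0
for t in (0.0, 0.5, 1.0):
    x = x0 + u0(x0) * t             # characteristic positions at time t
    print(f"t = {t}: x =", np.round(x, 2))

# Where faster characteristics (larger u0) overtake slower ones ahead of
# them, the lines cross and the solution develops a shock.
```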
In Section 10.3, we discuss a set of conditions known as the Lagrange-Charpit conditions. These conditions are used to generate the solutions of some important classes of nonlinear first-order PDEs.
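For reference, the standard statement of these conditions from the general theory (not quoted from Section 10.3): for a first-order PDE F(x, y, u, p, q) = 0, with p = ∂u/∂x and q = ∂u/∂y, the characteristic (Lagrange-Charpit) system is

\[
\frac{dx}{F_p} \;=\; \frac{dy}{F_q} \;=\; \frac{du}{p\,F_p + q\,F_q}
\;=\; \frac{dp}{-(F_x + p\,F_u)} \;=\; \frac{dq}{-(F_y + q\,F_u)}.
\]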
The next two chapters contain a detailed discussion of vector and tensor analysis.
Chapter 4 contains the basic concepts of vectors and tensors, including vector and tensor algebra. We begin with a description of vectors as abstract objects having a magnitude and a direction; tensors are then defined as operators on vectors. Several algebraic operations are summarized together with their matrix representations. The differential calculus of vectors and tensors is then introduced with the aid of gradient operators, resulting in operations such as gradients, divergences, and curls. Next, we discuss the transformations from rectangular coordinates to curvilinear coordinates, such as cylindrical, spherical, and other general orthogonal coordinate systems.
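For concreteness, the standard Cartesian forms of these three operations (stated here for reference; the chapter's own notation may differ): for a scalar field f and a vector field F = (F₁, F₂, F₃),

\[
\nabla f = \left(\frac{\partial f}{\partial x},\, \frac{\partial f}{\partial y},\, \frac{\partial f}{\partial z}\right), \qquad
\nabla \cdot \mathbf{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z},
\]
\[
\nabla \times \mathbf{F} = \left(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z},\;
\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x},\;
\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\right).
\]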
Chapter 5 then focuses on the integral calculus of vectors. Detailed discussions of line, surface, and volume integration, including the mechanics of the calculations, are included in the appendix. The chapter itself discusses various important integral theorems, such as the divergence theorem, Stokes' theorem, and the general Leibniz formula. An application section is included to show how several physical models, especially those based on conservation laws, can be cast in terms of tensor calculus, which is independent of coordinate systems. The models generated are generally in the form of partial differential equations that are applicable to problems in mechanics, fluid dynamics, general physico-chemical processes, and electromagnetics. The solutions of these models are the subject of Parts III and IV of the book.
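For reference, the first two theorems named above in their standard forms (the notation here is generic, not necessarily the chapter's): for a vector field F on a volume V with boundary surface ∂V, and on an oriented surface S with boundary curve ∂S,

\[
\int_V \nabla \cdot \mathbf{F}\, dV = \oint_{\partial V} \mathbf{F} \cdot \mathbf{n}\, dS, \qquad
\int_S (\nabla \times \mathbf{F}) \cdot \mathbf{n}\, dS = \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}.
\]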
Several models of physical systems come in the form of differential equations. The main advantage of these models lies in their flexibility through the specification of initial and/or boundary conditions or forcing functions. Although several physical models result in partial differential equations, there are also several important cases in which the models can be reduced to ordinary differential equations. One major class involves dynamic models (i.e., time-varying systems) in which the only independent variable is time; these are known as initial value problems. Another case arises when a single spatial dimension is the only independent variable. In this case, boundary conditions may be specified at different points, resulting in multiple-point boundary value problems.
There are four chapters in this part of the book, covering the analytical solutions, numerical solutions, qualitative analysis, and series solutions of ordinary differential equations. Chapter 6 discusses the analytical approaches to solving first- and second-order differential equations, including similarity transformation methods. For higher-order linear differential equations, we apply matrix methods to obtain the solutions in terms of matrix exponentials and matrizants. The chapter also includes the use of Laplace transforms for solving high-order linear ordinary differential equations.
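A minimal sketch of the matrix-exponential approach (the system matrix and the SciPy call below are illustrative, not taken from Chapter 6): for a constant-coefficient linear system x′ = Ax with x(0) = x₀, the solution is x(t) = e^{At} x₀.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Illustrative constant-coefficient linear system x'(t) = A x(t), x(0) = x0.
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

# The solution at time t is x(t) = expm(A*t) @ x0.
t = 1.5
x_t = expm(A * t) @ x0
print(x_t)
```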