In this chapter, we focus on obtaining analytical solutions of linear differential equations whose coefficients are not constant. These solutions are generally not as simple as those of constant-coefficient equations. One general approach is to use a power series formulation.
In Section 9.1, we describe the main approaches to power series solutions. Depending on the equation, one can choose to expand the solution around an ordinary point or a singular point, and this choice determines the structure of the series. For an ordinary point, the expansion is simply a Taylor series, whereas for a singular point, we need a series known as a Frobenius series.
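Schematically (a generic illustration, not tied to any particular equation; here $x_0$ is the expansion point and $r$ is a root of the indicial equation), the two series take the forms
\[
y(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n \quad \text{(ordinary point: Taylor series)},
\]
\[
y(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^{n+r} \quad \text{(regular singular point: Frobenius series)},
\]
where the coefficients $a_n$ and, in the second case, the exponent $r$ are determined by substituting the series into the differential equation.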
Although the power series method is straightforward, power series solutions can be quite lengthy and complicated. Nonetheless, for certain equations, solutions can be found based on the parameters of the equation, thus yielding direct solutions. This is the case for two important classes of second-order equations that have several applications: the Legendre equations and Bessel equations, which we describe in Sections 9.2 and 9.3, respectively.
We have also included other important equations in the exercises, such as the hypergeometric equations, Jacobi equations, Laguerre equations, Hermite equations, and so forth, where the same techniques given in this chapter can be used to generate the corresponding functions and polynomials. Fortunately, the special functions and polynomials that solve these equations, including Legendre polynomials, Legendre functions, and Bessel functions, are included in several computer software programs such as MATLAB.
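The text cites MATLAB; as a minimal sketch of the same idea in Python (assuming NumPy and SciPy are installed), the special functions are available in SciPy's scipy.special module:

```python
# Evaluating Legendre polynomials and Bessel functions numerically
import numpy as np
from scipy.special import eval_legendre, jv

x = np.linspace(-1.0, 1.0, 5)
print(eval_legendre(2, x))   # Legendre polynomial P_2(x) = (3x^2 - 1)/2

# Bessel function J_0: equals 1 at x = 0 and ~0 near its first root 2.4048
print(jv(0, np.array([0.0, 1.0, 2.4048])))
```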
In this chapter, we discuss integral transform methods for solving linear partial differential equations. Although several types of transforms are available, the methods most often used are the Laplace transform and Fourier transform methods. Basically, an integral transform maps the differential equation from its original domain to another domain in which derivative operations along one of the dimensions become algebraic operations. This means that if we begin with an ordinary differential equation, the integral transform will map the equation to an algebraic equation (cf. Section 6.7). For a 2D problem, the integral transform should map the partial differential equation to an ordinary differential equation, and so on.
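As a simple illustration of this reduction (our own example, not one worked in the text), applying the Laplace transform to the first-order equation $y' + ay = f(t)$ with initial value $y(0)$ gives
\[
sY(s) - y(0) + aY(s) = F(s) \quad\Longrightarrow\quad Y(s) = \frac{F(s) + y(0)}{s + a},
\]
an algebraic equation in $Y(s)$; inverting the transform then recovers $y(t)$.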
We begin in Section 12.1 with a very brief introduction to general integral transforms. Then, in Section 12.2, we discuss the Fourier transform in detail: its definition, some particular examples, and its properties. Surprisingly, the classical Fourier transform cannot be applied to some of the most useful functions, including the step function and sinusoidal functions. Although several ad hoc approaches were devised to overcome these problems, it was not until the introduction of the theory of distributions that the various ad hoc approaches were unified and given a solid mathematical grounding. This theory extends the classical Fourier transform to handle the problematic functions.
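For instance, under the convention $\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt$ (sign and scaling conventions vary), the distributional Fourier transform of the unit step $H(t)$ is
\[
\hat{H}(\omega) = \pi\,\delta(\omega) + \frac{1}{i\omega},
\]
where $\delta$ is the Dirac delta and the second term is understood in the principal-value sense; neither term makes sense as a classical integral, which is precisely why the distributional extension is needed.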
Matrix theory is a powerful field of mathematics that has found applications in the solution of several real-world problems, ranging from the solution of algebraic equations to the solution of differential equations. Its importance has also been enhanced by the rapid development of several computer programs that have improved the efficiency of matrix analysis and the solution of matrix equations.
We have allotted three chapters to discussing matrix theory. Chapter 1 contains the basic notations and operations. These include conventions and notations for the various structural, algebraic, differential, and integral operations. As such, this chapter focuses on how to formulate problems in terms of matrix equations, the various approaches of matrix algebraic manipulations, and matrix partitions.
Chapter 2 then focuses on the solution of the linear equation given by Ax = b, and it includes both direct and indirect methods. The most direct method is to find the inverse of A and then evaluate x = A⁻¹b. However, the major practical issue is that matrix inverses become unwieldy when the matrices are large. This chapter is concerned with finding the solutions by reformulating the problem to take advantage of available matrix properties. Direct methods use various factorizations of A based on matrices that are more easily invertible, whereas indirect methods use an iterative process starting with an initial guess of the solution. The methods can then be applied to linear least-squares problems, as well as to the solution of multivariable nonlinear equations.
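A minimal sketch of the two routes (the matrix here is hypothetical, chosen diagonally dominant so that the illustrative indirect method, Jacobi iteration, converges):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

x_direct = np.linalg.solve(A, b)   # direct: factorization-based solve

x = np.zeros(3)                    # indirect: iterate from an initial guess
D = np.diag(A)                     # diagonal entries of A
R = A - np.diag(D)                 # off-diagonal remainder
for _ in range(50):
    x = (b - R @ x) / D            # Jacobi update: x_{k+1} = D^{-1}(b - R x_k)

print(np.allclose(x, x_direct))    # True: both reach the same solution
```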
In some applications, the qualitative behavior of the solution, rather than the explicit solution, is of interest. For instance, one may be interested in determining whether operation at an equilibrium point is stable. In most cases, we may want to see how the different solutions together form a portrait of the behavior around particular neighborhoods of interest. The portraits can show how different points, such as sources, sinks, or saddles, interact to affect neighboring solutions. For most scientific applications, a better understanding of a process requires the larger portrait, including how the portraits change with variations in critical parameters.
We begin this chapter with a brief summary of the existence and uniqueness of solutions to differential equations. Then we define and discuss the equilibrium points of autonomous sets of differential equations, because these points determine the sinks, sources, or saddles in the solution domain. Next, we explain some of the technical terms, such as integral curves, flows, and trajectories, which are used to define different types of stability around equilibrium points. Specifically, we have Lyapunov stability, quasi-asymptotic stability, and asymptotic stability.
We then briefly investigate the various types of behavior available for a linear second-order system, dx/dt = Ax, where A is a 2 × 2 matrix, for example, nodes, foci, and centers. Using the tools provided in previous chapters, we end up with a convenient map that relates the different types of behavior, stable or unstable, to the trace and determinant of A.
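A minimal sketch of that trace-determinant map as code (the function name and cases are ours, for illustration; degenerate cases with det A = 0 are omitted):

```python
import numpy as np

def classify(A):
    """Classify the equilibrium of dx/dt = A x for a 2x2 matrix A."""
    tr, det = np.trace(A), np.linalg.det(A)
    if det < 0:
        return "saddle"
    disc = tr**2 - 4*det          # sign decides node vs. focus/center
    if disc < 0 and tr == 0:
        return "center"
    kind = "node" if disc >= 0 else "focus"
    return ("stable " if tr < 0 else "unstable ") + kind

print(classify(np.array([[-2.0, 0.0], [0.0, -1.0]])))  # stable node
print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # center
```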
In this chapter, we review some definitions and operations of matrices. Matrices play very important roles in the computation and analysis of several mathematical problems. They allow for compact notation of large sets of linear algebraic equations. Various matrix operations, such as addition, multiplication, and inversion, can be combined to find the required solutions in a more tractable manner. The existence of several software tools, such as MATLAB, has also made it very efficient to approach the solution by posing several problems in the form of matrix equations. Moreover, matrices possess internal properties, such as the determinant, rank, trace, eigenvalues, and eigenvectors, which can help characterize the systems under consideration.
We begin with the basic notation and definitions in Section 1.1. The matrix notations introduced in this chapter are used throughout the book. Then in Section 1.2, we discuss the various matrix operations. Several of these should be familiar to most readers, but some, such as Kronecker products, may be less so. We have classified the operations as either structural or algebraic. Structural operations involve only the collection and arrangement of the elements, whereas algebraic operations are those implemented among the elements of a matrix or group of matrices. The properties of the different matrix operations, such as associativity, commutativity, and distributivity, are summarized in Section 1.3.
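As a minimal sketch of one of the less familiar operations, the Kronecker product and its mixed-product property, (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), can be checked numerically on small random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))

lhs = np.kron(A, B) @ np.kron(C, D)   # (A ⊗ B)(C ⊗ D)
rhs = np.kron(A @ C, B @ D)           # (AC) ⊗ (BD)
print(np.allclose(lhs, rhs))          # True
```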
In this chapter, we discuss the major integral theorems that are used to develop physical laws based on integrals of vector differential operations. The general theorems include the divergence theorem, Stokes' theorem, and various lemmas such as Green's lemma.
The divergence theorem is a very powerful tool in the development of several physical laws, especially those that involve conservation of physical properties. It connects volume integrals with surface integrals of fluxes of the property under consideration. In addition, the divergence theorem is key to deriving several other integral theorems, including Green's identities, some of which are used extensively in the development of finite element methods.
Stokes' theorem involves surface integrals and contour integrals. In particular, it relates curls of velocity fields to circulation integrals. In addition to its usefulness in developing physical laws, Stokes' theorem also offers a key criterion for path independence of line integrals within a region, provided the region is simply connected. We discuss how to determine whether a region is simply connected in Section 5.3.
In Section 5.5, we discuss the Leibnitz theorems for the derivative of volume integrals, in both 1D and 3D space, with respect to a parameter α on which both the boundaries and the integrands depend. These are important when dealing with time-dependent volume integrals.
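In its one-dimensional form, with boundaries $a(\alpha)$, $b(\alpha)$ and integrand $f(x,\alpha)$, the rule reads
\[
\frac{d}{d\alpha}\int_{a(\alpha)}^{b(\alpha)} f(x,\alpha)\,dx
= f\big(b(\alpha),\alpha\big)\,\frac{db}{d\alpha}
- f\big(a(\alpha),\alpha\big)\,\frac{da}{d\alpha}
+ \int_{a(\alpha)}^{b(\alpha)} \frac{\partial f}{\partial \alpha}\,dx,
\]
with the first two terms accounting for the moving boundaries and the last for the parameter dependence of the integrand.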
In this chapter, we discuss the major approaches to obtaining analytical solutions of ordinary differential equations. We begin with the solution of first-order differential equations. Many first-order differential equations can be reduced to one of two major solution approaches: the separation of variables approach and the exact differential approach. We start with a brief review of both approaches, and then we follow with two sections on how to reduce other problems to either of these methods. First, we discuss the use of similarity transformations to render differential equations separable. We show that these transformations cover other well-known approaches, such as homogeneous-type differential equations and isobaric differential equations, as special cases. The next section continues with the search for integrating factors that transform a given differential equation into an exact one. Important special cases of this approach include first-order linear differential equations and the Bernoulli equations (after some additional variable transformation).
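For example, the first-order linear equation $y' + p(x)\,y = q(x)$ becomes exact after multiplication by the integrating factor $\mu(x) = e^{\int p(x)\,dx}$, since then $(\mu y)' = \mu q$, giving
\[
y(x) = \frac{1}{\mu(x)}\left(\int \mu(x)\,q(x)\,dx + C\right).
\]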
Next, we discuss the solution of second-order differential equations. We opted to focus first on the nonlinear types, leaving the solution of linear second-order differential equations to the later sections that handle higher-order linear differential equations. The approaches we consider are those that reduce the order of the differential equations, with the expectation that, once they are first-order equations, the techniques of the previous sections can be used to continue the solution process. Specifically, we use a change of variables to handle the cases in which either the independent variable or the dependent variable is explicitly absent from the differential equation.
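For instance, when the independent variable $x$ is absent, say $y'' = f(y, y')$, the substitution $p = y'$ together with $y'' = p\,\dfrac{dp}{dy}$ reduces the equation to the first-order equation
\[
p\,\frac{dp}{dy} = f(y, p),
\]
which can then be attacked with the first-order techniques above.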
This part of the book focuses on partial differential equations (PDEs), including analytical and numerical solution methods as well as some classification methods. Because the general topic of PDEs is very large, we have chosen to cover only some general methods mainly applicable to linear PDEs, with the exception of nonlinear first-order PDEs.
In Chapter 10, we focus on the solution of first-order PDEs, including the method of characteristics and the Lagrange-Charpit method. The second half of the chapter is devoted to the classification of higher-order PDEs, based on factorization of the principal parts to determine whether the equations are hyperbolic, parabolic, or elliptic.
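For a second-order PDE in two variables with principal part $A u_{xx} + B u_{xy} + C u_{yy}$, this classification reduces to the sign of the discriminant:
\[
B^2 - 4AC \;\begin{cases} > 0 & \text{hyperbolic (e.g., the wave equation),} \\ = 0 & \text{parabolic (e.g., the diffusion equation),} \\ < 0 & \text{elliptic (e.g., Laplace's equation).} \end{cases}
\]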
In Chapter 11, we discuss the analytical solutions of linear PDEs. We begin with reducible PDEs that allow for the method of separation of variables. To satisfy various types of initial and boundary conditions, Sturm-Liouville equations are used to obtain orthogonal functions. The techniques can then be extended to nonhomogeneous PDEs and nonhomogeneous boundary conditions based on eigenfunction expansions.
In Chapter 12, we discuss integral transform methods such as the Fourier and Laplace transform methods. For the Fourier transforms, we cover the important concepts of the classical transforms, including the use of distribution theory and tempered distributions to find the generalized Fourier transforms of step functions and of sine and cosine functions. A brief but substantial introduction to distribution theory is included in the appendix. For numerical implementation purposes, we have also included a discussion of the fast Fourier transform in the appendix.
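A minimal sketch of the fast Fourier transform in practice (the sampling parameters are illustrative): sample a sine wave and locate its frequency peak.

```python
import numpy as np

fs = 64.0                                # sampling frequency, Hz
t = np.arange(0, 1, 1/fs)                # one second of samples
u = np.sin(2*np.pi*5*t)                  # 5 Hz sine wave

U = np.fft.fft(u)                        # discrete Fourier transform via FFT
freqs = np.fft.fftfreq(len(u), d=1/fs)   # frequency bin for each entry of U
print(freqs[np.argmax(np.abs(U))])       # 5.0 (the mirror peak sits at -5.0)
```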
In this chapter, we discuss the finite element method for the solution of partial differential equations. It is an important solution approach when the shape of the domain (including possible holes inside the domain) cannot be conveniently transformed to a single rectangular domain. This includes domains whose boundaries cannot be described easily in existing coordinate systems.
In contrast to finite difference methods, which are based on replacing derivatives with discrete approximations, finite element (FE) methods approach the problem through piecewise interpolation. The FE method first partitions the whole domain Ω into several small pieces Ωn, known as finite elements, each represented by a set of nodes in the domain. The sizes and shapes of the finite elements do not have to be uniform, and often the sizes may need to be varied to balance accuracy with computational efficiency.
Instead of tackling the differential equation directly, the method first recasts it as a set of integral equations known as the weak form of the partial differential equation. There are several ways to formulate these integrals, including least squares, collocation, and weighted residuals. We focus on a particular weighted residual method known as the Galerkin method. These integrals are then imposed on each of the finite elements. The finite elements attached to the boundaries of Ω carry the additional requirement of satisfying the boundary conditions.
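As a minimal illustration (a standard textbook case, not necessarily the one treated in this chapter), the weak form of $-u'' = f$ on $(0,1)$ with $u(0) = u(1) = 0$ is obtained by multiplying by a test function $v$ with $v(0) = v(1) = 0$ and integrating by parts:
\[
\int_0^1 u'(x)\,v'(x)\,dx = \int_0^1 f(x)\,v(x)\,dx \quad \text{for all admissible } v.
\]
In the Galerkin method, both $u$ and $v$ are drawn from the same finite-dimensional space of piecewise polynomials defined on the elements, which turns the weak form into a matrix equation.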
In this chapter, we discuss one powerful approach to obtaining a numerical solution of partial differential equations. The basic approach is to replace derivatives with discrete formulas called finite difference approximations. After these approximations are applied to the given differential equations, the boundary conditions are incorporated by modifying the equations that involve the boundary points. This often results in a large, sparse matrix equation, in which the desired values at the chosen grid points are collected as a vector or a matrix, depending on whether the problem involves one or two dimensions. For steady-state cases, the unknown vector can be obtained by solving an algebraic equation. For non-steady-state cases, on the other hand, algebraic iteration is used to obtain a time-marching solution.
Throughout the chapter, we limit our discussion to a discretization based on uniform grids. Under this assumption, different finite difference approximations can be formulated using Taylor series expansions, as discussed in Section 13.1. Several formulations are possible depending on the choice of neighboring points for the different orders of derivatives. Formulas for various approximations of first-order and second-order derivatives, including their order of accuracy, are given in Tables 13.1 and 13.2, respectively.
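Two of the standard central difference formulas of this kind, obtainable directly from Taylor expansions about $x$ on a uniform grid of spacing $h$, are
\[
u'(x) \approx \frac{u(x+h) - u(x-h)}{2h}, \qquad
u''(x) \approx \frac{u(x+h) - 2u(x) + u(x-h)}{h^2},
\]
both with truncation error of order $O(h^2)$.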
Once the derivatives in the differential equations are replaced with their finite difference approximations, the resulting formulas can be recast as matrix algebraic equations. We limit our applications to second-order linear partial differential equations. We first discuss the time-independent cases before moving on to time-dependent cases.
In this chapter, we focus on the case of linear partial differential equations. In general, we consider a partial differential equation to be linear if the partial derivatives, together with their coefficients, can be represented by an operator L that satisfies L(αu + βv) = αLu + βLv, where α and β are constants and u and v are two functions of the same set of independent variables. This linearity property allows for the superposition of basis solutions to fit the boundary and initial conditions.
We limit our discussion in this chapter to three approaches: operator factorization of reducible linear operators, separation of variables, and similarity transformations. Another set of solution methods, known as the integral transform approach, is discussed in Chapter 12.
The method using operator factorization is described in Section 11.2, and it is applicable to a class of partial differential equations known as reducible differential equations. We show how this approach can be applied to the one-dimensional wave equation to yield the well-known d'Alembert solution. In Section K.1 of the appendix, we consider how the d'Alembert solution can be applied and modified to fit different boundary conditions, including infinite, semi-infinite, and finite spatial domains.
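For reference, the one-dimensional wave equation $u_{tt} = c^2 u_{xx}$ factors as
\[
\left(\frac{\partial}{\partial t} - c\,\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} + c\,\frac{\partial}{\partial x}\right) u = 0,
\]
which yields the d'Alembert solution $u(x,t) = F(x + ct) + G(x - ct)$, a superposition of left- and right-traveling waves with $F$ and $G$ fixed by the initial data.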
The separation of variables method is described in Section 11.3. Although it may appear less general at first glance, it is an important and powerful approach because it has yielded useful solutions to several important linear problems in science and engineering, including linear diffusion, wave, and elliptic problems.
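Schematically, for the one-dimensional diffusion equation $u_t = k\,u_{xx}$, substituting $u(x,t) = X(x)\,T(t)$ and dividing by $k\,X\,T$ gives
\[
\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda,
\]
so the PDE splits into two ordinary differential equations coupled only through the separation constant $\lambda$, whose admissible values are determined by the boundary conditions.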