The $p$th ($p\geq 1$) moment exponential stability, almost sure exponential stability, and stability in distribution for the stochastic McKean–Vlasov equation are derived using distribution-dependent Lyapunov function techniques.
We prove an effective estimate, with a power-saving error term, for the number of square-tiled surfaces in a connected component of a stratum of quadratic differentials whose vertical and horizontal foliations belong to prescribed mapping class group orbits and which have at most $L$ squares. This result strengthens asymptotic counting formulas in the work of Delecroix, Goujard, Zograf, Zorich, and the author.
This article studies the dynamical behaviour of classical solutions of a hyperbolic system of balance laws, derived from a chemotaxis model with logarithmic sensitivity, with time-dependent boundary conditions. It is shown that under suitable assumptions on the boundary data, solutions starting in the $H^2$-space exist globally in time and the differences between the solutions and their corresponding boundary data converge to zero as time goes to infinity. There is no smallness restriction on the magnitude of the initial perturbations. Moreover, numerical simulations show that the assumptions on the boundary data are necessary for the above-mentioned results to hold true. In addition, numerical results indicate that the solutions converge asymptotically to time-periodic states if the boundary data are time-periodic.
This comprehensive introduction to global spectral methods for fractional differential equations, from leaders of this emerging field, is designed to be accessible to graduate students and researchers across mathematics, science, and engineering. The book begins by covering the foundational fractional calculus concepts needed to understand and model anomalous transport phenomena. The authors proceed to introduce a series of new spectral theories and new families of orthogonal and log-orthogonal functions, then present corresponding spectral and spectral element methods for fractional differential equations. The book also covers the fractional Laplacian in unbounded and bounded domains and major developments in the time integration of fractional models. It ends by sampling the wide variety of real-world applications of fractional modeling, including concentration transport in surface/subsurface dynamics, complex rheology and material damage, and fluid turbulence and geostrophic transport.
In this chapter, we consider the central issue of minimality of the state-space system representation, as well as equivalences of representations. The question introduces important new basic operators and spaces related to the state-space description. In our time-variant context, what we call the Hankel operator plays the central role, via a minimal composition (i.e., product) of a reachability operator and an observability operator. Corresponding results for LTI systems (a special case) follow readily from the LTV case. In a later starred section, and for deeper insight, the theory is extended to infinitely indexed systems; this entails some extra complications, which are not essential to the main, finite-dimensional treatment and can be skipped by students interested only in finite-dimensional cases.
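For the LTI special case, the factorization of the Hankel operator into an observability factor times a reachability factor can be sketched numerically. The realization (A, B, C) below is an assumed toy example, not one taken from the text; it is a minimal illustration of the identity H = O·R and of the fact that the rank of H equals the minimal state dimension.

```python
import numpy as np

# Assumed toy LTI realization (state dimension 2), chosen for illustration.
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

n = 4  # number of block rows/columns of the (truncated) Hankel matrix

# Observability factor O = [C; CA; CA^2; ...] and reachability factor
# R = [B, AB, A^2 B, ...]; their product is the Hankel matrix of Markov
# parameters H[i, j] = C A^{i+j} B.
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
R = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(n)])
H = O @ R

# The rank of the Hankel matrix reveals the minimal state dimension.
assert np.linalg.matrix_rank(H) == 2
```

The same product structure carries over to the time-variant setting treated in the chapter, with diagonal-indexed operators replacing the LTI power series.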
The set of basic topics then continues with a major application domain of our theory: linear least-squares estimation (LLSE) of the state of an evolving system (aka Kalman filtering), which turns out to be an immediate application of the outer–inner factorization theory developed in Chapter 8. To complete this discussion, we also show how the theory extends naturally to cover the smoothing case (which is often considered “difficult”).
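The flavor of the recursive estimator can be conveyed with a minimal scalar Kalman filter sketch. The model and its parameters below are hypothetical, chosen only to show the predict/update recursion, not the book's factorization-based derivation.

```python
# Assumed scalar model: x_{k+1} = a x_k + w_k, y_k = x_k + v_k,
# with process noise variance q and measurement noise variance r.
a, q, r = 0.9, 0.1, 0.5
x_est, p = 0.0, 1.0                  # initial estimate and its variance

measurements = [1.0, 0.8, 1.1, 0.9]  # hypothetical data
for y in measurements:
    # Predict step: propagate estimate and variance through the dynamics.
    x_pred = a * x_est
    p_pred = a * a * p + q
    # Update step: blend prediction and measurement via the Kalman gain.
    k_gain = p_pred / (p_pred + r)
    x_est = x_pred + k_gain * (y - x_pred)
    p = (1.0 - k_gain) * p_pred
```

After the four updates, the estimate settles between the prediction and the data, and the variance p shrinks below its initial value, as expected of the filter.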
Several types of factorizations solve the main problems of system theory (e.g., identification, estimation, system inversion, system approximation, and optimal control). The factorization type depends on what kind of operator is factorized and what form the factors should have. This and the following chapter are, therefore, devoted to the two main types of factorization: this chapter treats what is traditionally called coprime factorization, while the next is devoted to inner–outer factorization. Coprime factorization, here called “external factorization” for more generality, characterizes the system’s dynamics and plays a central role in system characterization and control issues. A remarkable result of our approach is the derivation of Bezout equations for time-variant and quasi-separable systems, obtained without the use of Euclidean divisibility theory. From a numerical point of view, all these factorizations reduce to recursively applied QR or LQ factorizations, applied to appropriately chosen operators.
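The numerical building block referred to at the end, a single QR or LQ step, can be sketched as follows; the matrix T is arbitrary illustrative data, and the recursive chaining of such steps over the state-space representation is what the chapter develops.

```python
import numpy as np

# Arbitrary illustrative operator (a tall matrix).
rng = np.random.default_rng(1)
T = rng.standard_normal((5, 3))

# QR step: T = Q R with Q having orthonormal columns, R upper triangular.
Q, R = np.linalg.qr(T)
assert np.allclose(Q @ R, T)
assert np.allclose(Q.T @ Q, np.eye(3))

# LQ step is the transposed counterpart: T = L Q' with L lower triangular
# and Q' having orthonormal rows.
Q2, R2 = np.linalg.qr(T.T)
Lfac, Qrow = R2.T, Q2.T
assert np.allclose(Lfac @ Qrow, T)
assert np.allclose(np.triu(Lfac, k=1), 0)  # strictly upper part vanishes
```

Using orthogonal transformations for every elimination step is what gives these factorizations their numerical stability.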
This chapter starts developing our central linear time-variant (LTV) prototype environment, a class that coincides perfectly with linear algebra and matrix algebra, making the correspondence between system and matrix computations a mutually productive reality. People familiar with the classical approach, in which the z-transform or other types of transforms are used, will easily recognize the notational or graphic resemblance, but there is a major difference: everything stays in the context of elementary matrix algebra, no complex function calculus is involved, and only the simplest matrix operations, namely addition and multiplication of matrices, are needed. Appealing expressions for the state-space realization of a system appear, as well as the global representation of the input–output operator in terms of four block diagonal matrices and the (now noncommutative but elementary) causal shift Z. The consequences for and relation to linear time-invariant (LTI) systems and infinitely indexed systems are fully documented in *-sections, which can be skipped by students or readers more interested in numerical linear algebra than in LTI system control or estimation.
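The diagonal representation with the causal shift Z can be sketched in the simplest, scalar-block case; the matrix T below is arbitrary illustrative data. A lower triangular input–output operator is written as a sum of shifted diagonals, T = Σ_k Z^k D_k, using only matrix addition and multiplication.

```python
import numpy as np

n = 4
Z = np.eye(n, k=-1)   # causal (down) shift: (Z x)_i = x_{i-1}

# Arbitrary lower triangular (causal) input-output operator.
T = np.array([[1.0, 0.0, 0.0, 0.0],
              [2.0, 3.0, 0.0, 0.0],
              [4.0, 5.0, 6.0, 0.0],
              [7.0, 8.0, 9.0, 10.0]])

# Rebuild T as a sum of powers of Z times diagonal matrices D_k, where
# D_k carries the k-th subdiagonal of T (padded with zeros).
recon = np.zeros_like(T)
for k in range(n):
    sub = np.diag(T, k=-k)
    D_k = np.diag(np.concatenate([sub, np.zeros(k)]))
    recon += np.linalg.matrix_power(Z, k) @ D_k

assert np.allclose(recon, T)
```

In the book's notation the D_k are block diagonal and Z is the (noncommutative) causal shift; the scalar sketch above only illustrates the bookkeeping.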
From this point on, main issues in system theory are tackled. The very first, considered in this chapter, is the all-important question of system identification. This is perhaps the most basic question in system theory and related linear algebra, with a long pedigree stretching from Kronecker's characterization of rational functions to the elegant solution for time-variant systems presented here. Identification, often also called realization, is the problem of deriving a system's internal equations (called state-space equations) from input–output data. In this chapter, we only consider the causal, or block-lower triangular, case, although the theory applies just as well to an anti-causal system, for which one lets time run backward and applies the same theory in dual form.
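For the LTI special case, the realization problem can be sketched with a Ho-Kalman-style construction: build a Hankel matrix from impulse-response (Markov) parameters, factor it by SVD, and read off a state-space model. The data below come from an assumed scalar system (A=0.5, B=1, C=2), chosen for illustration; the chapter's time-variant solution generalizes this idea.

```python
import numpy as np

# Hypothetical noiseless Markov parameters h_k = C A^k B of a scalar system.
h = [2 * 0.5 ** k for k in range(8)]

m = 4
H = np.array([[h[i + j] for j in range(m)] for i in range(m)])  # Hankel matrix

# Rank-revealing factorization H = O @ Rm via the SVD.
U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-10))              # numerical rank = state dimension
O = U[:, :r] * np.sqrt(s[:r])           # observability factor
Rm = np.sqrt(s[:r])[:, None] * Vt[:r]   # reachability factor

# Shift-invariance of O yields A; the first column/row give B and C.
A_hat = np.linalg.pinv(O[:-1]) @ O[1:]
B_hat = Rm[:, :1]
C_hat = O[:1, :]

# The identified model reproduces the Markov parameters.
for k in range(8):
    hk = (C_hat @ np.linalg.matrix_power(A_hat, k) @ B_hat)[0, 0]
    assert abs(hk - h[k]) < 1e-8
```

The numerical rank of the Hankel matrix determines the minimal state dimension, exactly as in Kronecker's classical characterization.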
What is a system? What is a dynamical system? Systems are characterized by a few central notions: their state and their behavior foremost, and then some derived notions such as reachability and observability. These notions pop up in many fields, so it is important to understand them in nontechnical terms. This chapter therefore presents a narrative that aims to describe the central ideas. In the remainder of the book, the ideas presented here are made mathematically precise in concrete numerical situations. It turns out that a sharp understanding of just the notion of state suffices to develop most, if not all, of the mathematical machinery needed to solve the main engineering problems related to systems and their dynamics.
This chapter considers the Moore–Penrose inversion of full matrices with quasi-separable specifications, that is, matrices that decompose into the sum of a block-lower triangular and a block-upper triangular matrix, whereby each has a state-space realization given. We show that the Moore–Penrose inverse of such a system has, again, a quasi-separable specification of the same order of complexity as the original and show how this representation can be recursively computed with three intertwined recursions. The procedure is illustrated on a 4 × 4 (block) example.
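The object being computed can be sketched densely: the snippet below builds a matrix as the sum of a lower and an upper triangular part (arbitrary illustrative values), forms its Moore–Penrose inverse with a standard dense routine, and checks the four Penrose conditions. The chapter's contribution, not shown here, is computing this inverse recursively from the state-space realizations at quasi-separable cost.

```python
import numpy as np

# A 4 x 4 matrix split into lower plus (strictly) upper triangular parts,
# mimicking the quasi-separable decomposition; values are arbitrary.
rng = np.random.default_rng(0)
Lo = np.tril(rng.standard_normal((4, 4)))
Up = np.triu(rng.standard_normal((4, 4)), k=1)
A = Lo + Up

# Dense reference computation of the Moore-Penrose inverse.
A_pinv = np.linalg.pinv(A)

# The four Penrose conditions characterize the pseudo-inverse uniquely.
assert np.allclose(A @ A_pinv @ A, A)
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)
assert np.allclose((A @ A_pinv).T, A @ A_pinv)
assert np.allclose((A_pinv @ A).T, A_pinv @ A)
```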
The following five chapters exhibit further contributions of the theory of time-variant and quasi-separable systems to matrix algebra. This chapter treats LU factorization, or, equivalently, spectral factorization, another frequently occurring type of factorization of a quasi-separable system. This type of factorization does not necessarily exist and, when it does, traditionally could not be computed in a numerically stable way (via Gaussian elimination). Here we present necessary and sufficient existence conditions and a stable numerical algorithm that computes the factorization using orthogonal transformations applied to the quasi-separable representation.
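The existence caveat can be sketched with the classical dense criterion (not the book's state-space conditions): an LU factorization without pivoting exists iff all leading principal minors are nonzero. The matrices below are arbitrary illustrative examples.

```python
import numpy as np

def lu_exists(A: np.ndarray, tol: float = 1e-12) -> bool:
    """Classical dense criterion: all leading principal minors nonzero."""
    n = A.shape[0]
    return all(abs(np.linalg.det(A[:k, :k])) > tol for k in range(1, n + 1))

def doolittle_lu(A: np.ndarray):
    """Plain Doolittle LU without pivoting; valid only when lu_exists(A)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
assert lu_exists(A)
L, U = doolittle_lu(A)
assert np.allclose(L @ U, A)

B = np.array([[0.0, 1.0], [1.0, 0.0]])  # leading 1x1 minor vanishes
assert not lu_exists(B)                 # no LU factorization without pivoting
```

Unpivoted elimination like this can also be unstable even when the factorization exists, which is why the chapter replaces it with orthogonal transformations on the quasi-separable representation.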
The book starts out with a motivating chapter that answers the question: Why is it worthwhile to develop system theory? To do so, we jump fearlessly into the very center of our methods, using a simple and straightforward example in optimization: optimal tracking. Although optimization is not our leading subject (which is system theory), it provides one of the main application areas, namely optimizing the performance of a dynamical system in a time-variant environment (for example, driving a car or sending a rocket to the moon). The chapter presents a recursive matrix algebra approach to the optimization problem, based on a powerful principle called “dynamic programming,” which lies at the very basis of what “dynamical” means.
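The backward recursion at the heart of dynamic programming can be sketched on a toy tracking problem; the integer-state model and reference below are hypothetical, chosen only to show the Bellman recursion, not the chapter's matrix formulation.

```python
# Toy tracking problem: integer state x, control u in {-1, 0, 1},
# dynamics x_{k+1} = x_k + u, stage cost |x_k - r_k|.
ref = [0, 2, 2, 0, 0]            # hypothetical reference trajectory
states = range(-3, 4)
controls = (-1, 0, 1)

# V[k][x] = minimal cost-to-go from state x at stage k (Bellman recursion,
# computed backward from the final stage).
V = {len(ref): {x: 0 for x in states}}
for k in reversed(range(len(ref))):
    V[k] = {}
    for x in states:
        stage = abs(x - ref[k])
        V[k][x] = stage + min(
            V[k + 1].get(x + u, float("inf")) for u in controls
        )

# Starting from x = 0, the tracker pays only where the reference jumps
# faster than one step per stage (here, twice).
print(V[0][0])  # prints 2
```

The same principle, applied with matrix-valued states and quadratic costs, yields the recursive optimal tracking solution developed in the chapter.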