This chapter starts developing our central linear time-variant (LTV) prototype environment, a class that coincides perfectly with linear algebra and matrix algebra, making the correspondence between system and matrix computations a mutually productive reality. People familiar with the classical approach, in which the z-transform or other types of transforms are used, will easily recognize the notational or graphic resemblance, but there is a major difference: everything stays in the context of elementary matrix algebra, no complex function calculus is involved, and only the simplest matrix operations, namely addition and multiplication of matrices, are needed. Appealing expressions for the state-space realization of a system appear, as well as the global representation of the input–output operator in terms of four block diagonal matrices and the (now noncommutative but elementary) causal shift Z. The consequences for and relation to linear time-invariant (LTI) systems and infinitely indexed systems are fully documented in *-sections, which can be skipped by students or readers more interested in numerical linear algebra than in LTI system control or estimation.
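The global representation alluded to above can be sketched as follows (a sketch only; the precise sign and ordering conventions are those of the chapter): with time-indexed state-space equations and block diagonal operators collecting the coefficients, the input–output operator is an elementary algebraic combination of four block diagonals and the causal shift Z.

```latex
% Time-variant state-space recursions (coefficients indexed by time k):
%   x_{k+1} = A_k x_k + B_k u_k, \qquad y_k = C_k x_k + D_k u_k.
% Collect A = \mathrm{diag}(A_k), B = \mathrm{diag}(B_k),
% C = \mathrm{diag}(C_k), D = \mathrm{diag}(D_k), and let Z be the causal
% shift. The input--output operator then takes the form
T \;=\; D + C\,(I - ZA)^{-1} Z\,B,
% built from block diagonal matrices and Z using only sums and products.
```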
From this point on, main issues in system theory are tackled. The very first, considered in this chapter, is the all-important question of system identification. This is perhaps the most basic question in system theory and the related linear algebra, with a pedigree extending from Kronecker's characterization of rational functions to the elegant solution for time-variant systems presented here. Identification, often also called realization, is the problem of deriving the system's internal equations (called state-space equations) from input–output data. In this chapter, we consider only the causal, or block-lower triangular, case, although the theory applies just as well to an anti-causal system, for which one lets time run backward and applies the same theory in dual form.
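As a minimal sketch of the object involved, the following snippet (with illustrative scalar coefficients, not taken from the text) builds the block-lower triangular input–output operator generated by a time-variant state-space realization and checks it against a direct simulation of the state equations; realization is the inverse problem of recovering such a state-space description from this operator.

```python
import numpy as np

# Hypothetical scalar time-variant realization (the numbers are
# illustrative): x_{k+1} = A_k x_k + B_k u_k, y_k = C_k x_k + D_k u_k,
# with zero initial state.
A = [0.5, -0.2, 0.8, 0.0]
B = [1.0, 0.5, 2.0, 1.0]
C = [1.0, -1.0, 0.3, 2.0]
D = [1.0, 1.0, 0.5, 0.0]
n = len(A)

# The causal input-output map is block lower triangular:
# T[i][j] = D_i if i == j, and C_i A_{i-1} ... A_{j+1} B_j if i > j.
T = np.zeros((n, n))
for i in range(n):
    T[i, i] = D[i]
    for j in range(i):
        prod = 1.0
        for k in range(j + 1, i):
            prod *= A[k]
        T[i, j] = C[i] * prod * B[j]

# Simulate the state equations and compare with y = T u.
u = np.array([1.0, -2.0, 0.5, 3.0])
x, y_sim = 0.0, []
for k in range(n):
    y_sim.append(C[k] * x + D[k] * u[k])
    x = A[k] * x + B[k] * u[k]

assert np.allclose(T @ u, y_sim)
```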
What is a system? What is a dynamical system? Systems are characterized by a few central notions: their state and their behavior foremost, and then some derived notions such as reachability and observability. These notions pop up in many fields, so it is important to understand them in nontechnical terms. This chapter therefore offers a narrative that describes the central ideas. In the remainder of the book, the ideas presented here are made mathematically precise in concrete numerical situations. It turns out that a sharp understanding of just the notion of state suffices to develop most, if not all, of the mathematical machinery needed to solve the main engineering problems related to systems and their dynamics.
This chapter considers the Moore–Penrose inversion of full matrices with quasi-separable specifications, that is, matrices that decompose into the sum of a block-lower triangular and a block-upper triangular matrix, each given by a state-space realization. We show that the Moore–Penrose inverse of such a matrix again has a quasi-separable specification of the same order of complexity as the original, and we show how this representation can be computed recursively with three intertwined recursions. The procedure is illustrated on a 4 × 4 (block) example.
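A small numerical illustration of the object in question, using a dense reference computation rather than the chapter's recursive algorithm (the matrix and its rank-1 generators are hypothetical):

```python
import numpy as np

# A small quasi-separable matrix: the sum of a strictly lower and a
# strictly upper triangular part, each of low "state" complexity
# (rank-1 generators here; all numbers are illustrative).
rng = np.random.default_rng(0)
p, q = rng.standard_normal(4), rng.standard_normal(4)
L = np.tril(np.outer(p, q), -1)            # strictly lower part
U = np.triu(np.outer(q, p), 1)             # strictly upper part
T = L + U + np.diag([1.0, 0.0, 2.0, 1.0])  # T need not be invertible

# Dense Moore-Penrose inverse as a reference; the chapter's point is
# that Tp can instead be computed recursively from the generators.
Tp = np.linalg.pinv(T)

# The four Penrose conditions characterize Tp uniquely.
assert np.allclose(T @ Tp @ T, T)
assert np.allclose(Tp @ T @ Tp, Tp)
assert np.allclose((T @ Tp).T, T @ Tp)
assert np.allclose((Tp @ T).T, Tp @ T)
```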
The following five chapters exhibit further contributions of the theory of time-variant and quasi-separable systems to matrix algebra. This chapter treats LU factorization or, equivalently, spectral factorization, another frequently occurring type of factorization of a quasi-separable system. This type of factorization does not necessarily exist and, when it does, traditionally could not be computed in a numerically stable way (Gaussian elimination). Here we present necessary and sufficient existence conditions and a numerically stable algorithm that computes the factorization using orthogonal transformations applied to the quasi-separable representation.
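As a point of comparison, the classical dense computation can be sketched as follows; this is plain Gaussian elimination without pivoting (hence not the chapter's stable, representation-based algorithm), and it exhibits the standard existence condition. The matrix is illustrative.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU without pivoting; it exists iff all leading
    principal minors of A are nonzero."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):
        if U[k, k] == 0:
            raise ValueError("LU without pivoting does not exist")
        # Eliminate below the pivot, storing the multipliers in L.
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return L, np.triu(U)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
```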
The book starts out with a motivating chapter to answer the question: Why is it worthwhile to develop system theory? To do so, we jump fearlessly into the very center of our methods, using a simple and straightforward example in optimization: optimal tracking. Although optimization is not our leading subject (that is system theory), it provides one of the main application areas, namely the optimization of the performance of a dynamical system in a time-variant environment (for example, driving a car or sending a rocket to the moon). The chapter presents a recursive matrix algebra approach to the optimization problem, based on a powerful principle called “dynamic programming,” which lies at the very basis of what “dynamical” means.
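The principle can be illustrated on a toy tracking problem (all numbers hypothetical): a backward dynamic-programming recursion computes the optimal cost-to-go over the horizon, and a brute-force enumeration of all control sequences confirms optimality.

```python
import itertools
import numpy as np

# Toy tracking problem: integer state x_{k+1} = x_k + u_k with
# u_k in {-1, 0, 1}; track the reference r_k, paying a quadratic
# tracking error per step plus a small control cost.
r = [0, 1, 2, 2, 1]          # reference trajectory over the horizon
controls = (-1, 0, 1)
states = range(-3, 6)        # bounded integer state space
N = len(r) - 1               # number of control decisions

def stage_cost(x, k, u=0):
    return (x - r[k]) ** 2 + 0.1 * u * u

# Backward recursion: V[x] = optimal cost-to-go from state x at time k.
V = {x: stage_cost(x, N) for x in states}   # terminal cost
policy = []
for k in reversed(range(N)):
    Vk, pk = {}, {}
    for x in states:
        Vk[x], pk[x] = min(
            (stage_cost(x, k, u) + V[x + u], u)
            for u in controls if (x + u) in states
        )
    V, policy = Vk, [pk] + policy

# Roll the optimal policy forward from x_0 = 0.
x, dp_cost = 0, 0.0
for k in range(N):
    u = policy[k][x]
    dp_cost += stage_cost(x, k, u)
    x += u
dp_cost += stage_cost(x, N)

# Cross-check against brute-force enumeration of all control sequences.
def total_cost(us):
    x, c = 0, 0.0
    for k, u in enumerate(us):
        c += stage_cost(x, k, u)
        x += u
        if x not in states:
            return float("inf")
    return c + stage_cost(x, N)

brute = min(total_cost(us) for us in itertools.product(controls, repeat=N))
assert abs(dp_cost - brute) < 1e-9
```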
This chapter provides a further extension of constrained interpolation, one capable of solving the constrained model reduction problem, namely the generalization of Schur–Takagi-type interpolation to the time-variant setting. This remarkable result demonstrates the full power of the time-variant system theory developed in this book.
The final chapter completes the scattering theory with an elementary approach to inner embedding of a contractive, quasi-separable causal system (in engineering terms: the embedding of a lossy or passive system in a lossless system, often called Darlington synthesis). Such an embedding is always possible in the finitely indexed case but does not generalize to infinitely indexed matrices. (This last issue requires more advanced mathematical methods and lies beyond the subject matter of the book.)
This chapter introduces and develops the scattering formalism, whose usefulness for interpolation has been demonstrated in Chapter 13, for the case of systems described by state-space realizations. This is in preparation for the next three chapters that use it to solve various further interpolation and embedding problems.
The appendix defines the data model used throughout the book and describes what can best be called an algorithmic design specification, that is, the functional and graphical characterization of an algorithm, chosen so that it can be translated to a computer architecture (be it in software or in hardware). Here we follow a powerful “data flow model” that generalizes the classical signal flow graphs and can be further formalized to generate the information necessary for the subsequent computer system design at the architectural level (i.e., the assignment of operations, data transfers and memory usage). The model provides a natural link between mathematical operations and architectural representations. It is, by the same token, well adapted to the generation of parallel processing architectures.
The chapter shows how classical interpolation problems of various types (Schur, Nevanlinna–Pick, Hermite–Fejer) carry over and generalize to the time-variant and/or matrix situation. We show that they all reduce to a single generalized constrained interpolation problem, elegantly solved by time-variant scattering theory. An essential ingredient is the definition of the notion of valuation for time-variant systems, thereby generalizing the notion of valuation in the complex plane provided by the classical z-transform.
This chapter introduces a different kind of problem, namely direct constrained matrix approximation via interpolation, the constraint being positive definiteness. It is the problem of completing a positive definite matrix for which only a well-ordered partial set of entries is given (and also giving necessary and sufficient conditions for the existence of the completion) or, alternatively, the problem of parametrizing positive definite matrices. This problem can be solved elegantly when the specified entries contain the main diagonal and further entries crowded along the main diagonal with a staircase boundary. This problem turns out to be equivalent to a constrained interpolation problem defined for a causal contractive matrix, with staircase entries again specified as before. The recursive solution calls for the development of a machinery known as scattering theory, which involves the introduction of nonpositive metrics and the use of J-unitary transformations where J is a sign matrix.
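The flavor of the result can be sketched on a 3 × 3 example with a tridiagonal set of specified entries (the numbers are illustrative): the central choice for the missing entry yields a positive definite completion whose inverse vanishes exactly on the unspecified positions.

```python
import numpy as np

# Partial positive definite matrix: the main diagonal and the first
# off-diagonals are specified, the corner entries are not.
a, b = 0.5, 0.3
c = a * b                      # the "central" choice for the missing entry
M = np.array([[1.0, a, c],
              [a, 1.0, b],
              [c, b, 1.0]])

# The completion is positive definite ...
assert np.all(np.linalg.eigvalsh(M) > 0)
# ... and its inverse is zero on the unspecified (corner) positions,
# a hallmark of the central (maximum-entropy) completion.
assert abs(np.linalg.inv(M)[0, 2]) < 1e-12
```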
This chapter presents an alternative theory of external and coprime factorization, using polynomial denominators in the noncommutative time-variant shift Z rather than inner denominators as done in the chapter on inner–outer theory. “Polynomials in the shift Z” are equivalent to block-lower matrices with a support defined by a (block) staircase, and are essentially different from the classical matrix polynomials of module theory, although the net effect on system analysis is remarkably similar. The polynomial method differs substantially and in a complementary way from the inner method: it is computationally simpler but does not use orthogonal transformations, and it offers the possibility of treating highly unstable systems using unilateral series. This approach also leads to the famous Bezout equations which, as mentioned in the abstract of Chapter 7, can be derived without the benefit of Euclidean divisibility methods.