The following five chapters exhibit further contributions of the theory of time-variant and quasi-separable systems to matrix algebra. This chapter treats LU factorization or, equivalently, spectral factorization, another frequently occurring type of factorization of a quasi-separable system. This type of factorization does not necessarily exist and, when it does exist, traditionally could not be computed in a numerically stable way (Gaussian elimination). Here we present necessary and sufficient existence conditions, together with a numerically stable algorithm that computes the factorization using orthogonal transformations applied to the quasi-separable representation.
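As a point of reference (and not the book's quasi-separable algorithm), the existence issue can already be seen for an ordinary dense matrix: an LU factorization without pivoting exists precisely when the leading principal minors are nonsingular. The following minimal Python sketch uses plain Gaussian elimination to illustrate how the factorization can fail to exist even for a perfectly well-conditioned matrix; all names in it are illustrative.

```python
import numpy as np

def lu_no_pivoting(T, tol=1e-12):
    """Plain Gaussian elimination (no pivoting): T = L @ U.

    Returns (L, U), or raises if a leading principal minor is
    (numerically) singular, in which case the factorization
    does not exist.
    """
    n = T.shape[0]
    L = np.eye(n)
    U = T.astype(float).copy()
    for k in range(n - 1):
        if abs(U[k, k]) < tol:
            raise ValueError(f"leading minor of order {k + 1} is singular")
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return L, U

# A matrix whose 1x1 leading minor vanishes: LU without pivoting fails,
# even though the matrix itself is well conditioned (it is a permutation).
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])
try:
    lu_no_pivoting(T)
except ValueError as e:
    print("no LU factorization:", e)
```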
The book starts out with a motivating chapter to answer the question: why is it worthwhile to develop system theory? To do so, we jump fearlessly into the very center of our methods, using a simple and straightforward example in optimization: optimal tracking. Although optimization is not our leading subject (which is system theory), it provides one of the main application areas, namely the optimization of the performance of a dynamical system in a time-variant environment (for example, driving a car or sending a rocket to the moon). The chapter presents a recursive matrix algebra approach to the optimization problem, based on a powerful principle called “dynamic programming,” which lies at the very basis of what “dynamical” means.
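As a hedged illustration of the dynamic-programming principle invoked here (not the book's own derivation), the sketch below solves a finite-horizon, time-variant LQ regulator, that is, the tracking problem with a zero reference, by the standard backward cost-to-go recursion; the matrix names and the toy data are assumptions made purely for this example.

```python
import numpy as np

def lq_backward_recursion(A, B, Q, R, Qf):
    """Finite-horizon LQ regulator solved by dynamic programming.

    A, B are lists of (possibly time-variant) system matrices; the
    backward recursion computes the cost-to-go matrices P_k and the
    feedback gains K_k such that u_k = -K_k @ x_k is optimal.
    """
    N = len(A)
    P = Qf
    gains = [None] * N
    for k in reversed(range(N)):
        # one dynamic-programming step: optimize u_k given the cost-to-go P
        S = R + B[k].T @ P @ B[k]
        K = np.linalg.solve(S, B[k].T @ P @ A[k])
        P = Q + A[k].T @ P @ (A[k] - B[k] @ K)
        gains[k] = K
    return gains

# toy time-variant example: a double integrator whose coupling drifts in time
N = 5
A = [np.array([[1.0, 0.1 * (k + 1)], [0.0, 1.0]]) for k in range(N)]
B = [np.array([[0.0], [0.1]]) for _ in range(N)]
Q, R, Qf = np.eye(2), np.array([[1.0]]), 10 * np.eye(2)
for k, K in enumerate(lq_backward_recursion(A, B, Q, R, Qf)):
    print(f"gain at step {k}: {K}")
```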
This chapter provides a further extension of constrained interpolation that is capable of solving the constrained model reduction problem, namely the generalization of Schur–Takagi-type interpolation to the time-variant setting. This remarkable result demonstrates the full power of time-variant system theory as developed in this book.
The final chapter completes the scattering theory with an elementary approach to inner embedding of a contractive, quasi-separable causal system (in engineering terms: the embedding of a lossy or passive system in a lossless system, often called Darlington synthesis). Such an embedding is always possible in the finitely indexed case but does not generalize to infinitely indexed matrices. (This last issue requires more advanced mathematical methods and lies beyond the subject matter of the book.)
This chapter introduces and develops the scattering formalism, whose usefulness for interpolation has been demonstrated in Chapter 13, for the case of systems described by state-space realizations. This is in preparation for the next three chapters, which use it to solve various further interpolation and embedding problems.
The appendix defines the data model used throughout the book and describes what can best be called an algorithmic design specification, that is, the functional and graphical characterization of an algorithm, chosen so that it can be translated to a computer architecture (be it in software or in hardware). We hereby follow a powerful “data flow model” that generalizes the classical signal flow graphs and that can be further formalized to generate the information necessary for the subsequent computer system design at the architectural level (i.e., the assignment of operations, data transfers and memory usage). The model provides a natural link between mathematical operations and architectural representations. It is, by the same token, well adapted to the generation of parallel processing architectures.
The chapter shows how classical interpolation problems of various types (Schur, Nevanlinna–Pick, Hermite–Fejer) carry over and generalize to the time-variant and/or matrix situation. We show that they all reduce to a single generalized constrained interpolation problem, elegantly solved by time-variant scattering theory. An essential ingredient is the definition of the notion of valuation for time-variant systems, thereby generalizing the notion of valuation in the complex plane provided by the classical z-transform.
This chapter introduces a different kind of problem, namely direct constrained matrix approximation via interpolation, the constraint being positive definiteness. It is the problem of completing a positive definite matrix for which only a well-ordered partial set of entries is given (and also giving necessary and sufficient conditions for the existence of the completion) or, alternatively, the problem of parametrizing positive definite matrices. This problem can be solved elegantly when the specified entries contain the main diagonal and further entries crowded along the main diagonal with a staircase boundary. This problem turns out to be equivalent to a constrained interpolation problem defined for a causal contractive matrix, with staircase entries again specified as before. The recursive solution calls for the development of a machinery known as scattering theory, which involves the introduction of nonpositive metrics and the use of J-unitary transformations where J is a sign matrix.
This chapter presents an alternative theory of external and coprime factorization, using polynomial denominators in the noncommutative time-variant shift Z rather than inner denominators as done in the chapter on inner–outer theory. “Polynomials in the shift Z” are equivalent to block-lower matrices with a support defined by a (block) staircase, and are essentially different from the classical matrix polynomials of module theory, although the net effect on system analysis is remarkably similar. The polynomial method differs substantially and in a complementary way from the inner method. It is computationally simpler but does not use orthogonal transformations. It offers the possibility of treating highly unstable systems using unilateral series. Also, this approach leads to the famous Bezout equations that, as mentioned in the abstract of Chapter 7, can be derived without the benefit of Euclidean divisibility methods.
This chapter is on elementary matrix operations using a state-space or, equivalently, quasi-separable representation. It is a straightforward but unavoidable chapter. It shows how the recursive structure of the state-space representations is exploited to make matrix addition, multiplication and elementary inversion numerically efficient. The notions of outer operator and inner operator are introduced as basic types of matrices playing a central role in various specific matrix decompositions and factorizations to be treated in further chapters.
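To make the flavor of these state-space manipulations concrete, here is a minimal sketch, in generic notation rather than the book's, of how the product of two causal systems is realized by combining their state-space matrices; in the time-variant case the same combination is applied at every time index, which is what keeps multiplication linear in the time horizon. The dictionary layout and names are assumptions of this example.

```python
import numpy as np

def series_realization(sys1, sys2):
    """Realization of the product T2*T1 (apply sys1 first, then sys2).

    Each system is a dict of state-space matrices {A, B, C, D}; the product
    realization carries the stacked state [x1; x2].
    """
    A1, B1, C1, D1 = (sys1[k] for k in "ABCD")
    A2, B2, C2, D2 = (sys2[k] for k in "ABCD")
    A = np.block([[A1, np.zeros((A1.shape[0], A2.shape[1]))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    D = D2 @ D1
    return {"A": A, "B": B, "C": C, "D": D}

# usage on two one-state systems (per-step matrices); in the time-variant
# case this combination step is repeated for each time index k
sys1 = {"A": np.array([[0.5]]), "B": np.array([[1.0]]),
        "C": np.array([[1.0]]), "D": np.array([[0.0]])}
sys2 = {"A": np.array([[0.3]]), "B": np.array([[2.0]]),
        "C": np.array([[1.0]]), "D": np.array([[1.0]])}
print(series_realization(sys1, sys2))
```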
This chapter considers what is likely the most important operation in system theory: inner–outer and its dual, outer–inner factorization. These factorizations play a different role than the previously treated external or coprime factorizations, in that they characterize properties of the inverse or pseudo-inverse of the system under consideration, rather than the system itself. Importantly, such factorizations are computed on the state-space representation of the original, that is, on the original data. Inner–outer (or outer–inner) factorization is nothing but recursive “QR factorization,” as was already observed in our motivational Chapter 2, and outer–inner is recursive “LQ factorization,” in the somewhat unorthodox terminology used in this book for consistency reasons: QR for “orthogonal Q with a right factor R” and LQ for a “left factor L with orthogonal Q.” These types of factorizations play a central role in a variety of applications (e.g., optimal tracking, state estimation, system pseudo-inversion and spectral factorization) to be treated in the following chapters. We conclude the chapter by showing how the time-variant, linear results generalize to the nonlinear case.
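Since the QR/LQ terminology used here is slightly unorthodox, a small dense-matrix illustration may help; note that the book's inner–outer factorization is the recursive, state-space analogue of this idea, not the dense computation shown below.

```python
import numpy as np

T = np.random.default_rng(0).normal(size=(4, 4))

# "QR": orthogonal Q times a right (upper-triangular) factor R
Q, R = np.linalg.qr(T)

# "LQ": a left (lower-triangular) factor L times an orthogonal Q,
# obtained here from the QR factorization of the transpose
Qt, Rt = np.linalg.qr(T.T)
L, Q2 = Rt.T, Qt.T

assert np.allclose(Q @ R, T) and np.allclose(L @ Q2, T)
```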
Let $(A,\mathfrak{m})$ be a Cohen–Macaulay local ring. The notion of a $T$-split sequence was introduced in part 1 of this paper for the $\mathfrak{m}$-adic filtration with the help of the numerical function $e^T_A$. In this article, we explore the relation between Auslander–Reiten (AR) sequences and $T$-split sequences. For a Gorenstein ring $(A,\mathfrak{m})$, we define a Hom-finite Krull–Remak–Schmidt category $\mathcal{D}_A$ as a quotient of the stable category $\underline{\mathrm{CM}}(A)$. This category preserves isomorphisms, that is, $M\cong N$ in $\mathcal{D}_A$ if and only if $M\cong N$ in $\underline{\mathrm{CM}}(A)$. This article has two objectives: the first is to extend the notion of $T$-split sequences, and the second is to explore the function $e^T_A$ and $T$-split sequences. When $(A,\mathfrak{m})$ is an analytically unramified Cohen–Macaulay local ring and $I$ is an $\mathfrak{m}$-primary ideal, we extend the techniques from part 1 of this paper to the integral closure filtration with respect to $I$ and prove a version of Brauer–Thrall II for a class of such rings.
Using the special value at $u=1$ of Artin–Ihara L-functions, we associate to every $\mathbb {Z}$-cover of a finite connected graph a polynomial, which we call the Ihara polynomial. We show that the number of spanning trees for the finite intermediate graphs of such a cover can be expressed in terms of the Pierce–Lehmer sequence associated to a factor of the Ihara polynomial. This allows us to express the asymptotic growth of the number of spanning trees in terms of the Mahler measure of this polynomial. Specialising to the situation where the base graph is a bouquet or the dumbbell graph gives us back previous results in the literature for circulant and I-graphs (including the generalised Petersen graphs). We also express the $p$-adic valuation of the number of spanning trees of the finite intermediate graphs in terms of the $p$-adic Mahler measure of the Ihara polynomial. When applied to a particular $\mathbb {Z}$-cover, our result gives us back Lengyel’s calculation of the $p$-adic valuations of Fibonacci numbers.
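For readers who want to experiment, the Mahler measure that governs the asymptotic growth rate can be evaluated numerically from the polynomial's roots; the short sketch below is generic and not tied to a particular Ihara polynomial or cover, and the sample polynomial is chosen only for illustration.

```python
import numpy as np

def mahler_measure(coeffs):
    """Numerical Mahler measure of a polynomial.

    `coeffs` lists the coefficients from the leading term down
    (numpy.roots convention); M(p) = |lead| * prod(max(1, |root|)).
    """
    roots = np.roots(coeffs)
    return abs(coeffs[0]) * np.prod(np.maximum(1.0, np.abs(roots)))

# x^2 - 3x + 1 has roots (3 ± sqrt(5))/2, so M = (3 + sqrt(5))/2 ≈ 2.618
print(mahler_measure([1, -3, 1]))
```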
This paper studies the spatio-temporal dynamics of a diffusive plant-sulphide model with toxicity delay. More specifically, the effects of discrete delay and of distributed delay on the dynamics are explored in turn. A detailed analysis of the eigenvalues indicates that both diffusion and delay can induce Hopf bifurcations. Normal form theory is used to derive an explicit formula that determines the properties of the Hopf bifurcation in the diffusive plant-sulphide model. A sufficiently small discrete delay does not affect the stability, whereas a sufficiently large discrete delay destabilizes the system. In contrast, a distributed delay does not affect the stability whether it is sufficiently small or sufficiently large. Both delays cause instability by inducing a Hopf bifurcation rather than a Turing bifurcation.