Dynamical systems are physical systems evolving in time. Typically, these systems are modeled mathematically by a system of ordinary differential equations, in which command and disturbance signals act as inputs that, together with pre-existing initial conditions, generate the evolution in time of internal variables (the states) and of output signals. Models of this kind are ubiquitous in engineering, and are used to describe, for instance, the behavior of an airplane, the functioning of a combustion engine, the dynamics of a robot manipulator, or the trajectory of a missile.
Broadly speaking, the problem of control of a dynamical system amounts to determining suitable input signals so as to make the system behave in a desired way, e.g., to follow a desired output trajectory or to be resilient to disturbances. Even an elementary treatment of control of dynamical systems would require an entire textbook. Here, we simply focus on a few specific aspects related to a restricted class of dynamical systems, namely finite-dimensional, linear, and time-invariant systems.
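As a point of reference (standard state-space notation, assumed here rather than quoted from the chapter), a finite-dimensional linear time-invariant system in continuous time is typically written as
$$
\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t),
$$
where $u$ collects the command and disturbance inputs, $x$ is the state (with a given initial condition $x(0)$), and $y$ is the output signal.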
We start our discussion by introducing continuous-time models and their discrete-time counterparts. For discrete-time models, we highlight the connections between the input-output behavior over a finite horizon and static linear maps described by systems of linear equations. We shall show how certain optimization problems arise naturally in this context, and discuss their interpretation in a control setting.
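To make the connection between finite-horizon input-output behavior and static linear maps concrete, the following is a minimal sketch (our own illustration, not the chapter's example): for a discrete-time LTI system the stacked outputs over a horizon N depend linearly on the stacked inputs and the initial state, so input design can be posed as a linear least-squares problem. All system matrices below are hypothetical and chosen only for illustration.

```python
import numpy as np

# Discrete-time LTI system: x_{k+1} = A x_k + B u_k,  y_k = C x_k.
# Hypothetical matrices (a sampled double-integrator-like system).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, 0.0])
N = 20  # finite horizon length

n, m = B.shape
p = C.shape[0]

# Build the static linear map y = G u + H x0, where
# u = (u_0, ..., u_{N-1}) and y = (y_1, ..., y_N) are stacked vectors.
G = np.zeros((p * N, m * N))
H = np.zeros((p * N, n))
for k in range(N):
    # Row block k corresponds to y_{k+1} = C A^{k+1} x0 + sum_j C A^{k-j} B u_j.
    H[p * k:p * (k + 1), :] = C @ np.linalg.matrix_power(A, k + 1)
    for j in range(k + 1):
        G[p * k:p * (k + 1), m * j:m * (j + 1)] = (
            C @ np.linalg.matrix_power(A, k - j) @ B)

# Desired output trajectory: drive the measured output to zero.
y_des = np.zeros(p * N)

# Input design as least squares: minimize ||G u + H x0 - y_des||_2.
u_opt, *_ = np.linalg.lstsq(G, y_des - H @ x0, rcond=None)
print("optimal input sequence:", u_opt.round(3))
```

The matrix G is block lower-triangular (Toeplitz), built from the Markov parameters C A^{k} B; once the dynamics are flattened into this static map, the control problem reduces to solving a system of linear equations, which is the viewpoint the chapter develops.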