The limitations of the transfer function representation become obvious as we tackle more complex problems. For complex systems with multiple inputs and outputs, transfer function matrices can become very clumsy. In so-called modern control, the method of choice is the state-space, or state-variable, representation in the time domain – essentially a matrix representation of the model equations. The formulation allows us to make use of theories in linear algebra and differential equations. It is always a mistake to tackle modern control without a firm background in these mathematical topics. For this reason, we will not overreach by trying to cover both the mathematical background and the control material together. Without a formal mathematical framework, the explanation is made by means of examples as much as possible. Actual state-space control has to be delayed until after we tackle classical transfer function feedback systems.
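For reference, a minimal sketch of the form being described, using generic matrices A, B, C, D for a linear model with state vector x, input u, and output y (the notation here is the standard one, not tied to a particular example in the text):

\[
\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}, \qquad \mathbf{y} = \mathbf{C}\mathbf{x} + \mathbf{D}\mathbf{u},
\]

and, taking the Laplace transform with zero initial conditions, the corresponding transfer function matrix is

\[
\mathbf{G}(s) = \mathbf{C}\,(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D}.
\]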
What Are We Up to?
Learning how to write the state-space representation of a model.
Understanding how a state-space representation is related to the transfer function representation.
State-Space Models
Just as we are feeling comfortable with transfer functions, we now switch gears totally. Nevertheless, we are still working with linearized differential equation models in this chapter. Whether we have a high-order differential equation or multiple equations, we can always rearrange them into a set of first-order differential equations. Bold statements indeed! We will see this when we go over the examples.
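As a small illustration of the rearrangement (the second-order model, the numerical values, and the use of scipy.signal are my own choices for demonstration, not taken from the text): the model τ²y″ + 2ζτy′ + y = Ku becomes two first-order equations once we define x₁ = y and x₂ = y′.

```python
import numpy as np
from scipy import signal

# Illustrative second-order model:  tau^2 y'' + 2*zeta*tau y' + y = K u
K, tau, zeta = 2.0, 3.0, 0.7

# Define the states x1 = y and x2 = dy/dt, so that
#   dx1/dt = x2
#   dx2/dt = (K*u - 2*zeta*tau*x2 - x1) / tau^2
A = np.array([[0.0, 1.0],
              [-1.0 / tau**2, -2.0 * zeta / tau]])
B = np.array([[0.0],
              [K / tau**2]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Check: G(s) = C (sI - A)^-1 B + D should recover K / (tau^2 s^2 + 2 zeta tau s + 1)
num, den = signal.ss2tf(A, B, C, D)
print(num, den)   # numerator ~ K/tau^2, denominator ~ [1, 2*zeta/tau, 1/tau^2]
```

The ss2tf call at the end simply confirms that the matrices reproduce the original second-order transfer function.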
We now finally launch into the material on controllers. State-space representation is more abstract, and it helps to understand controllers in the classical sense first. We will come back to state-space controller design later. The introduction stays with the basics. Our primary focus is to learn how to design and tune a classical proportional–integral–derivative (PID) controller. Before that, we first need to know how to set up a problem and derive the closed-loop characteristic equation.
What Are We Up to?
Introducing the basic PID control schemes
Deriving the closed-loop transfer function of a system and understanding its properties
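For orientation, a minimal sketch of the relations involved, written for a unity-feedback loop with generic symbols G_c for the controller and G_p for the process (the notation is illustrative, not necessarily the book's):

\[
G_c(s) = K_c\left(1 + \frac{1}{\tau_I s} + \tau_D s\right), \qquad \frac{Y(s)}{R(s)} = \frac{G_c(s)\,G_p(s)}{1 + G_c(s)\,G_p(s)},
\]

so the closed-loop characteristic equation is 1 + G_c(s)G_p(s) = 0, and its roots set the closed-loop dynamics that the controller tuning must shape.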
PID controllers
We use a simple liquid-level controller to illustrate the concept of a classic feedback control system. In this example (Fig. 5.1), we monitor the liquid level in a vessel and use the information to adjust the opening of an effluent valve to keep the liquid level at some user-specified value (the set point or reference). In this case, the liquid level is both the measured variable and the controlled variable – they are the same in a single-input single-output (SISO) system. In this respect, the controlled variable is also the output variable of the SISO system. A system refers to the process that we need to control plus the controller and accompanying accessories, such as sensors and actuators.
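A rough sketch of the idea in simulation (the tank area, valve law, gain, and Euler integration below are made up for demonstration and are not taken from Fig. 5.1): a proportional controller opens the effluent valve further when the level rises above the set point and closes it when the level falls below.

```python
import numpy as np

# Illustrative liquid-level loop; all numbers are made up for demonstration.
A_tank = 1.0        # tank cross-sectional area [m^2]
q_in = 0.5          # inlet flow [m^3/min]
h_sp = 2.0          # level set point [m]
Kc = 2.0            # proportional gain of the controller
cv = 0.4            # effluent valve: q_out = cv * opening * sqrt(h)

h, dt = 1.0, 0.01   # initial level [m] and Euler time step [min]
for _ in range(int(20 / dt)):                      # simulate 20 minutes
    error = h_sp - h                               # set point minus measured level
    opening = np.clip(1.0 - Kc * error, 0.0, 1.0)  # open the valve more when the level is high
    q_out = cv * opening * np.sqrt(h)
    h += dt * (q_in - q_out) / A_tank              # volume balance on the vessel
print(round(h, 3))  # settles near the set point, with the small offset typical of P-only control
```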
This is an introductory text written from the perspective of a student. The major concern is not how much material is covered, but rather how the most important and basic concepts that one should grasp in a first course are presented. If your instructor is using some other text that you are struggling to understand, I hope that I can help you too. The material here is the result of a process of elimination. The writing and the examples are succinct and self-explanatory, and the style is purposely unorthodox and conversational. To a great extent, the style, content, and the extensive use of footnotes are molded heavily by questions raised in class. Very few derivation steps are left out; where they are, the missing steps are provided as hints in the Review Problems at the back of each chapter. I strive to eliminate those “easily obtained” results that baffle many of us. Most of you should be able to read the material on your own. You just need basic knowledge in differential equations, and it helps if you have taken a course on writing material balances. It also helps if you proceed chapter by chapter, with the exception of Chapters 4, 9, and 10, which should be skipped in a quarter-long course. The presentation of material is not intended for someone to just jump right into the middle of the text. A very strong emphasis is placed on developing analytical skills.
Classical process control builds on linear ordinary differential equations (ODEs) and the technique of the Laplace transform. This is a topic that we no doubt have come across in an introductory course on differential equations – like two years ago? Yes, we have easily forgotten the details. Therefore, an attempt is made here to refresh the material necessary to solve control problems; other details and steps will be skipped. We can always refer back to our old textbook if we want to answer long-forgotten but not urgent questions.
What Are We Up to?
The properties of the Laplace transform and the transforms of some common functions. We need them to construct a table for doing an inverse transform.
Because we are doing an inverse transform by means of a look-up table, we need to break down any given transfer function into smaller parts that match what the table has – what are called partial fractions. The time-domain function is the sum of the inverse transform of the individual terms, making use of the fact that the Laplace transform is a linear operator.
The time-response characteristics of a model can be inferred from the poles, i.e., the roots of the characteristic polynomial. This observation is independent of the input function and is singularly the most important point that we must master before moving on to control analysis.
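A minimal worked sketch tying these three points together (the specific function is my own illustration, not an example from the text):

\[
F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty f(t)\,e^{-st}\,dt, \qquad
\frac{1}{(s+1)(s+3)} = \frac{1/2}{s+1} - \frac{1/2}{s+3}
\;\Longrightarrow\;
f(t) = \tfrac{1}{2}e^{-t} - \tfrac{1}{2}e^{-3t}.
\]

The poles at s = −1 and s = −3 dictate the two exponential decay modes of the time response, which is why so much can be said about a model before any particular input is specified.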
Designing is a complex human process that has resisted comprehensive description and understanding. All artifacts surrounding us are the results of designing. Creating these artifacts involves making a great many decisions, which suggests that designing can be viewed as a decision-making process. In the decision-making paradigm of the design process we examine the intended artifact in order to identify possible alternatives and select the most suitable one. An abstract description of the artifact using mathematical expressions of relevant natural laws, experience, and geometry is the mathematical model of the artifact. This mathematical model may contain many alternative designs, and so criteria for comparing these alternatives must be introduced in the model. Within the limitations of such a model, the best, or optimum, design can be identified with the aid of mathematical methods.
In this first chapter we define the design optimization problem and describe most of the properties and issues that occupy the rest of the book. We outline the limitations of our approach and caution that an “optimum” design should be perceived as such only within the scope of the mathematical model describing it and the inevitable subjective judgment of the modeler.
Mathematical Modeling
Although this book is concerned with design, almost all the concepts and results described can be generalized by replacing the word design by the word system.
Im Anfang war die Tat. (In the beginning was the Act.)
J. W. von Goethe (1749–1832)
In designing, as in other endeavors, one learns by doing. In this sense the present chapter, although at the end of the book, is the beginning of the action. The principles and techniques of the previous chapters will be summarized and organized into a problem-solving strategy that can provide guidance in practical design applications. Students in a design optimization course should fix these ideas by applying them to a term project. For the practicing designer, actual problems at the workplace can serve as first trials for this new knowledge, particularly if sufficient experience exists for verifying the first results.
The chapter begins with a review of some modeling implications derived from the discussion in previous chapters about how numerical algorithms work. Although the subject is quite extensive, our goal here is to highlight again the intimacy between modeling and computation that was explored first in Chapters 1 and 2. The reader should be convinced by now of the validity of this approach and experience a sense of closure on the subject.
The next two sections deal with two extremely important practical issues: the computation of derivatives and model scaling. Local computation requires knowledge of derivatives, and the accuracy with which derivatives are computed can have a profound influence on the performance of the algorithm. A closed-form computation would be best, and this has become dramatically easier with the advent of symbolic computation programs.
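A small sketch of the two routes just mentioned (sympy is used here only as one example of a symbolic computation program, and the objective function is a made-up illustration):

```python
import sympy as sp

# Closed-form (symbolic) derivative of an illustrative objective f(x) = x*exp(-x) + x**2
x = sp.symbols('x')
f = x * sp.exp(-x) + x**2
dfdx = sp.diff(f, x)                   # exact: exp(-x) - x*exp(-x) + 2*x
df_exact = sp.lambdify(x, dfdx)
f_num = sp.lambdify(x, f)

# Forward finite difference, the usual fallback when no closed form is available
def finite_diff(fun, x0, h=1e-6):
    return (fun(x0 + h) - fun(x0)) / h

x0 = 1.5
print(df_exact(x0), finite_diff(f_num, x0))   # nearly equal; the gap is truncation/round-off error
```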
We knew that the islands were beautiful, around here somewhere, feeling our way a little lower or a little higher, a least distance.
George Seferis (1900–1971)
Model analysis by itself can lead to the optimum only in limited and rather opportune circumstances. Numerical iterative methods must be employed for problems of larger size and increased complexity. At the same time, the numerical methods available for solving nonlinear programming (NLP) problems can fail for a variety of reasons. Some of these reasons are not well understood or are not easy to remedy without changes in the model. It is safe to say that no single method exists for solving the general NLP problem with complete reliability. This is why it is important to see the design optimization process as an interplay between analysis and computation. Identifying model characteristics such as monotonicity, redundancy, constraint criticality, and decomposition can assist the computational effort substantially and increase the likelihood of finding and verifying the correct optimum. The literature has many examples of wrong solutions found by overconfident numerical treatment.
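A toy sketch of this interplay (the problem and the use of scipy's SLSQP solver are my own illustration, not an example from the text): simple monotonicity reasoning on the little problem below already tells us the single constraint must be active at the optimum, and the numerical run confirms it.

```python
import numpy as np
from scipy.optimize import minimize

# Toy design problem:  minimize  f = x1^2 + x2^2   subject to  x1 + x2 >= 2.
# Monotonicity analysis: f falls as either (positive) variable falls, and only the
# constraint blocks that descent, so the constraint must be active at the optimum.
res = minimize(
    fun=lambda x: x[0]**2 + x[1]**2,
    x0=np.array([3.0, 3.0]),
    method='SLSQP',
    constraints=[{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 2.0}],  # g(x) >= 0 form
)
print(res.x, res.x.sum())   # ~ [1, 1] with x1 + x2 ~ 2, confirming the analysis
```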
Our goal in this chapter is to give an appreciation of what is involved in numerical optimization and to describe a small number of methods that are generally accepted as preferable within our present context. So many methods and variations have been proposed that describing them all would closely resemble an encyclopedia. Workers in the field tend to have their own preferences.
It seems that we reach perfection not when we have nothing more to add, but when we have nothing more to subtract.
Antoine de Saint-Exupéry (Terre des Hommes) (1900–1944)
Building the mathematical model is at least half the work toward realizing an optimum design. The importance of a good model cannot be overemphasized. But what constitutes a “good” model? The ideas presented in the first chapter indicate an important characteristic of a good optimal design model: The model must represent reality in the simplest meaningful manner. An optimization model is “meaningful” if it captures trade-offs that provide rigorous insights to whoever will make decisions in a particular context. One should start with the simplest such model and add complexity (more functions, variables, parameters) only as the need for studying more complicated or extensive trade-offs arises. Such a need is generated by a previous successful (and simpler) optimization study, new analysis models, or changing design requirements. Clearly the process is subjective and benefits from experience and intuition.
Sometimes an optimization study is undertaken after a sophisticated analysis or simulation model has already been constructed and validated. Optimization ideas are then brought in to convert an analysis capability to a design capability. Under these circumstances one should still start with the simplest model possible. One way to reduce complexity is to use metamodels: simpler analysis models extracted from the more sophisticated ones using a variety of data-handling techniques.
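A minimal sketch of the idea (the "expensive simulation" below is a stand-in function, and a simple polynomial fit stands in for the many possible data-handling techniques):

```python
import numpy as np

# Stand-in for an expensive, validated simulation of one design variable
def expensive_simulation(x):
    return 1.0 / (1.0 + x) + 0.2 * x

# Sample the simulation at a handful of design points...
x_samples = np.linspace(0.5, 3.0, 8)
y_samples = expensive_simulation(x_samples)

# ...and extract a cheap metamodel, here a cubic polynomial fit
metamodel = np.poly1d(np.polyfit(x_samples, y_samples, deg=3))

x_check = 1.7
print(expensive_simulation(x_check), metamodel(x_check))  # the metamodel tracks the simulation closely
```

The cheap surrogate can then be evaluated thousands of times inside an optimization loop at negligible cost, with the full simulation reserved for validating the final candidates.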
A dozen years have passed since this book was first published, and computers are becoming ever more powerful, design engineers are tackling ever more complex systems, and the term “optimization” is routinely used to denote a desire for ever increasing speed and quality of the design process. This book was born out of our own desire to put the concept of “optimal design” on a firm, rigorous foundation and to demonstrate the intimate relationship between the mathematical model that describes a design and the solution methods that optimize it.
A basic premise of the first edition was that a good model can make optimization almost trivial, whereas a bad one can make correct optimization difficult or impossible. This is even more true today. New software tools for computer-aided engineering (CAE) provide capabilities for intricate analysis of many difficult performance aspects of a system. These analysis models, often also referred to as simulations, can be coupled with numerical optimization software to generate better designs iteratively. Both the CAE and the optimization software tools have dramatically increased in sophistication, and design engineers are called on to design highly complex systems, with few, if any, hardware prototypes.
The success of such attempts depends strongly on how well the design problem has been formulated for an optimization study, and on how familiar the designer is with the workings and pitfalls of iterative optimization techniques.