Striking a balance between theory and practice, this graduate-level text is perfect for students in the applied sciences. The author provides a clear introduction to the classical methods, how they work and why they sometimes fail. Crucially, he also demonstrates how these simple and classical techniques can be combined to address difficult problems. Many worked examples and sample programs are provided to help the reader make practical use of the subject material. Further mathematical background, if required, is summarized in an appendix. Topics covered include classical methods for linear systems, eigenvalues, interpolation and integration, ODEs and data fitting, and also more modern ideas like adaptivity and stochastic differential equations.
The term “error” is going to appear throughout this book in different contexts. The varieties of error we will be concerned with are:
Experimental error. We may wish to calculate some function y(x1, …, xn), where the quantities xi are measured. Any such measurement has associated errors, and they will affect the accuracy of the calculated y.
Roundoff error. Even if x were measured exactly, odds are it cannot be represented exactly in a digital computer. Consider π, which cannot be represented exactly in decimal form. We can write π ≈ 3.1416, by rounding the exact number to fit 5 decimal figures. Some roundoff error occurs in almost every calculation with real numbers, and controlling how strongly it impacts the final result of a calculation is always an important numerical consideration.
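A short Python sketch of both kinds of roundoff mentioned here; the specific values and variable names are illustrative, not from the text:

```python
# Roundoff error: pi rounded to fit 5 decimal figures, as in the text,
# plus the binary analogue: 0.1 has no exact binary representation,
# so accumulating it ten times does not give exactly 1.0.
import math

pi_rounded = round(math.pi, 4)         # 3.1416
roundoff = abs(math.pi - pi_rounded)   # about 7.3e-6

s = sum(0.1 for _ in range(10))
print(roundoff)       # on the order of 1e-5
print(s == 1.0)       # False: each 0.1 carries a tiny binary roundoff
print(abs(s - 1.0))   # a few units in the last place, ~1e-16
```

Neither error is large, but the point is that it occurs in almost every operation, and errors of this kind can accumulate over a long calculation.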
Approximation error. Sometimes we want one thing but calculate another, intentionally, because the other is easier or has more favorable properties. For example, one might choose to represent a complicated function by its Taylor series. When substituting expressions that are not mathematically identical we introduce approximation error.
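As a concrete sketch, the following compares cos(x) with its fourth-order Taylor polynomial about 0; the choice of function and evaluation point is only illustrative:

```python
# Approximation error: replace cos(x) by the truncated Taylor series
# 1 - x^2/2 + x^4/24 and measure the error this substitution introduces.
import math

def cos_taylor4(x):
    return 1 - x**2 / 2 + x**4 / 24

x = 0.5
approx_err = abs(math.cos(x) - cos_taylor4(x))
print(approx_err)   # small near 0, and grows as |x| increases
```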
Experimental error is largely outside the scope of numerical treatment, and we'll assume here, with few exceptions, that it's just something we have to live with. Experimental error plays an important role in data fitting, which will be described at length in Chapter 8.
Data fitting can be viewed as a generalization of polynomial interpolation to the case where we have more data than is needed to construct a polynomial of specified degree.
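A minimal illustration with NumPy: six data points, but only a degree-2 polynomial, so the fit minimizes the squared residuals rather than interpolating. The data are invented, and numpy.polyfit is just one convenient way to compute the fit:

```python
# Fit a degree-2 polynomial to 6 points (interpolation would need only 3).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1, 30.9])  # roughly 1 + x + x^2 + noise
coeffs = np.polyfit(x, y, deg=2)                 # [a2, a1, a0], least squares
residuals = y - np.polyval(coeffs, x)
print(coeffs)
print(np.sum(residuals**2))  # small but nonzero: the fit does not interpolate
```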
C.F. Gauss claimed to have been the first to develop solutions to the least squares problem, and both Gaussian elimination and the Gauss–Seidel iterative method were developed to solve these problems [52, 79]. In fact, interest in least squares by Galileo predates Gauss by over 200 years – a comprehensive history and analysis is given by Harter [97]. In addition to Gauss' contributions, the Jacobi iterative method [118] and the Cholesky decomposition [13] were developed to solve least squares problems. Clearly, the least squares problem was (and continues to be) a problem of considerable importance. All these methods were applied to the normal equations, which recast an overdetermined system as a square system with a symmetric positive definite matrix. Despite the astounding historical importance of the normal equations, the argument will be made that you should never use them. Extensions of least squares to nonlinear problems, and to linear problems with normally distributed error, are described.
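The argument against the normal equations can be sketched numerically: forming AᵀA squares the condition number of A, so solving the normal equations loses roughly twice as many digits as an SVD- or orthogonalization-based solver such as numpy.linalg.lstsq. The Vandermonde matrix below is merely a convenient ill-conditioned example:

```python
# Conditioning of the normal equations versus a direct least squares solve.
import numpy as np

A = np.vander(np.linspace(0, 1, 20), 10)  # ill-conditioned 20x10 matrix
print(np.linalg.cond(A))                  # large
print(np.linalg.cond(A.T @ A))            # roughly the square of the above

x_true = np.ones(10)
b = A @ x_true
x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations
x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)  # SVD-based solver
print(np.linalg.norm(x_normal - x_true))  # typically much worse than:
print(np.linalg.norm(x_svd - x_true))
```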
Least squares refers to a best fit in the L2 norm, and this is by far the most commonly used norm. However, other norms are important for certain applications. Covariance weighting leads to minimization in the Mahalanobis norm. The L1 norm is commonly used in financial modeling, and L∞ may be most suitable when the underlying error distribution is uniform rather than normal.
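For concreteness, here are three of these norms evaluated on the same (invented) residual vector:

```python
# L1, L2, and L-infinity norms of a residual vector.
import numpy as np

r = np.array([0.5, -2.0, 0.1, 1.0])
l1 = np.sum(np.abs(r))         # 3.6  -- sum of absolute residuals
l2 = np.sqrt(np.sum(r**2))     # ~2.29 -- the least squares norm
linf = np.max(np.abs(r))       # 2.0  -- the worst-case residual
print(l1, l2, linf)
```

Minimizing in L1 penalizes outliers less than least squares does, while minimizing in L∞ is driven entirely by the worst-case residual.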
The ab initio determination of molecular structure is an example that will use:
• singular value decomposition (Section 3.4), in particular the Löwdin decomposition (8.13) encountered in data fitting;
• the QR method for eigenvalues and eigenvectors (Section 3.3);
• interpolation (Chapter 5);
• fixed point iteration (Chapter 6) with stabilization by damping (Section 8.4);
• data fitting (Chapter 8), which involves solutions of linear systems (Chapter 2);
• integration: generally (Chapter 9), and Gaussian integration in particular (Section 9.4, Problem 9.6, which requires numerical root finding (Section 6.6) to set up), and the use of recursions, e.g., (10.35); and
• optimization with the variable metric method (Section 7.4).
In addition, concerns about numerical error (Chapter 1) are omnipresent. We will encounter the error function (Problems 5.7 and 9.7), and lots of Gaussian integrals (equations (8.10), (11.8), and (11.29)). We will use a paradigm called the variational principle that is essentially what motivated the conjugate gradient algorithm (Section 4.1).
The chemical physics problem is described in Section 12.1, where the Hartree–Fock–Roothaan equations are derived; these determine the energy of a particular molecular configuration. The HFR equations rely on a number of integrals of Gaussian functions, which are introduced in their simplest form in Section 12.2. With these simple formulas we determine the energy of the H2 molecule for a prescribed geometry, obtaining in Section 12.3 results that compare reasonably with far more elaborate calculations. The more complex Gaussian integrals are addressed in Section 12.4.