Water shapes the planet and all life upon it. Breaking down traditional disciplinary barriers, this accessible, holistic introduction to the role and importance of water in Earth’s physical and biological environments assumes no prior knowledge. It provides the reader with a clear and coherent explanation of the unique properties of water and how these allow it to affect landscapes and underpin all life on Earth. Contemporary issues surrounding water quality – such as the rise of microplastics and climate change – are highlighted, ensuring readers understand current debates. Giving all of the necessary background and up-to-date references, and including numerous examples and illustrations to explain concepts, worked mathematical calculations, and extensive end-of-chapter questions, this is the ideal introductory textbook for students seeking to understand the inextricable links between water and the environment.
This chapter discusses methods used to study the nervous system at the cellular level. The introduction defines and describes the microanatomy of neurons and populations of glia and gives an overview of organelles. Next is a discussion of microscopy techniques and images, including light microscopy (bright-field and fluorescence) and electron microscopy. Other techniques that rely on microscopy are then described, including unbiased stereology, fluorescence recovery after photobleaching, and flow cytometry. The chapter concludes with a description of a variety of stains, dyes, and anterograde and retrograde tracers, as well as guidance on interpreting Sholl analysis figures and quantifying dendritic spines.
This chapter describes the techniques used in electrophysiology and electrochemistry and explains the figures derived from these methods. The introduction describes how neurons can be modeled as electrical circuits and explains different preparations of electrophysiological samples, the common recording configurations, and the equipment used with these techniques. The techniques are divided into a few major categories: passive neuronal properties, action potential analysis, synaptic events including paired pulse ratios and long-term potentiation, current-voltage plots, and electrochemistry techniques such as fast scan cyclic voltammetry and amperometry.
This chapter introduces state-space descriptions for computational graphs (structures) representing discrete-time LTI systems. They are not only useful in theoretical analysis, but can also be used to derive alternative structures for a transfer function starting from a known structure. The chapter considers systems with possibly multiple inputs and outputs (MIMO systems); systems with a single input and a single output (SISO systems) are special cases. General expressions for the transfer matrix and impulse response matrix are derived in terms of state-space descriptions. The concept of structure minimality is discussed, and related to properties called reachability and observability. It is seen that state-space descriptions give a different perspective on system poles, in terms of the eigenvalues of the state transition matrix. The chapter also revisits IIR digital allpass filters and derives several equivalent structures for them using so-called similarity transformations on state-space descriptions. Specifically, a number of lattice structures are presented for allpass filters. As a practical example of impact, if such a structure is used to implement the second-order allpass filter in a notch filter, then the notch frequency and notch quality can be independently controlled by two separate multipliers.
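The correspondence between a state-space description and a transfer function can be sketched in a few lines. This is a minimal pure-Python illustration with coefficients invented here (not taken from the chapter): it builds the scalar state-space description of a first-order SISO system H(z) = (b0 + b1 z^-1)/(1 + a1 z^-1) and checks that the impulse-response formula h[0] = D, h[n] = C A^(n-1) B agrees with running the difference equation directly.

```python
# Illustrative first-order SISO example; b0, b1, a1 are assumed values.
b0, b1, a1 = 0.5, 0.4, -0.9

# Scalar state-space matrices for a direct-form structure of this system.
# The single eigenvalue of A (= -a1) is the pole of the transfer function.
A, B, C, D = -a1, 1.0, b1 - b0 * a1, b0

# Impulse response from the state-space description:
# h[0] = D, h[n] = C * A**(n-1) * B for n >= 1.
h_ss = [D] + [C * A ** (n - 1) * B for n in range(1, 8)]

# Impulse response by running the difference equation
# y[n] = b0*u[n] + b1*u[n-1] - a1*y[n-1] on a unit impulse.
h_de, u_prev, y_prev = [], 0.0, 0.0
for n in range(8):
    u = 1.0 if n == 0 else 0.0
    y = b0 * u + b1 * u_prev - a1 * y_prev
    h_de.append(y)
    u_prev, y_prev = u, y
```

The same check extends to MIMO systems, where A, B, C, D become matrices and the products above become matrix products.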
The constantly changing nature of digital technologies opens opportunities to improve established approaches and to seek out new approaches. And although these opportunities stem from new technologies, they are translated to action by innovative educators and leaders. Hence all educators need to be innovators.
This chapter begins by explaining why educators need to see themselves as learners and innovators. It then conceptualises the nature of change in education settings in order to understand how best to respond, before exploring a range of professional development and learning models and considering the nature of innovation. It provides insights and tips that you will be able to use to enact your role as an innovator.
This chapter introduces the medium-term hydrothermal planning problem. Two-stage stochastic linear programs are introduced first, and subsequently generalized to multi-stage stochastic linear programs. Various representations of multi-stage stochastic linear programs are presented, including representations on scenario trees, representations on lattices (for Markov processes), as well as representations with stagewise independent uncertainty. These models are applied to a running example of hydrothermal planning, which is used as the basis for introducing the notion of dynamic programming value functions. The value of water is defined, and demonstrated on a hydrothermal planning problem. The chapter then proceeds to focus on the performance of stochastic programs. The wait-and-see and here-and-now values are introduced for two-stage stochastic programs. These are used to define the expected value of perfect information. The expected value solution is also defined, and used to introduce the value of the stochastic solution. Sampling is discussed briefly in the context of sample average approximation and importance sampling.
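The quantities named above (here-and-now value, wait-and-see value, EVPI, VSS) can be made concrete on a toy two-stage problem. The following sketch uses invented numbers, not the chapter's hydrothermal example: a newsvendor-style decision with order cost 1 per unit, revenue 3 per unit sold, and demand of 1 or 3 with probability 1/2 each, optimized by enumeration over a small order grid.

```python
# Toy two-stage example (all numbers assumed for illustration).
scenarios = [(0.5, 1), (0.5, 3)]        # (probability, demand)
orders = [0, 1, 2, 3, 4]                # candidate first-stage decisions

def profit(x, d):
    # Revenue 3 per unit sold, cost 1 per unit ordered.
    return 3 * min(x, d) - 1 * x

def expected_profit(x):
    return sum(p * profit(x, d) for p, d in scenarios)

# Here-and-now: commit to one order before demand is revealed.
hn = max(expected_profit(x) for x in orders)

# Wait-and-see: order after observing demand (perfect information).
ws = sum(p * max(profit(x, d) for x in orders) for p, d in scenarios)

# Expected value of perfect information.
evpi = ws - hn

# Expected-value solution: plan for the mean demand, then evaluate it
# against the true scenarios; the gap to hn is the VSS.
mean_d = sum(p * d for p, d in scenarios)
x_ev = max(orders, key=lambda x: profit(x, mean_d))
vss = hn - expected_profit(x_ev)
```

In this instance the here-and-now value is 3 and the wait-and-see value is 4, so EVPI = 1; the expected-value solution orders 2 units and loses 0.5 in expectation relative to the stochastic solution.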
This is a detailed chapter on digital filter design. Specific digital filters such as notch and antinotch filters, and sharp-cutoff lowpass filters such as Butterworth filters are discussed in detail. Also discussed are allpass filters and some of their applications, including the implementation of notch and antinotch filters. Computational graphs (structures) for allpass filters are presented. It is explained how continuous-time filters can be transformed into discrete time by using the bilinear transformation. A simple method for the design of linear-phase FIR filters, called the window-based method, is also presented. Examples include the Kaiser window and the Hamming window. A comparative discussion of FIR and IIR filters is given. It is demonstrated how nonlinear-phase filters can create visible phase distortion in images. Towards the end, a detailed discussion of steady-state and transient components of filter outputs is given. The dependence of transient duration on pole position is explained. The chapter concludes with a discussion of spectral factorization.
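The bilinear transformation mentioned above can be sketched on the simplest case, a first-order analog lowpass H_a(s) = wc/(s + wc). The sampling rate and cutoff below are illustrative assumptions; the prewarping step compensates for the frequency compression of the bilinear map so the digital cutoff lands where intended.

```python
import math

fs = 8000.0                  # sampling rate in Hz (assumed)
fc = 1000.0                  # desired digital cutoff in Hz (assumed)
T = 1.0 / fs
k = 2.0 / T

# Prewarp: choose the analog cutoff that the bilinear map sends
# exactly to the desired digital cutoff frequency.
wc = k * math.tan(math.pi * fc * T)

# Substitute s = (2/T)(1 - z^-1)/(1 + z^-1) into wc/(s + wc) and
# normalize the leading denominator coefficient to 1:
b = [wc / (k + wc), wc / (k + wc)]       # numerator of H(z)
a = [1.0, (wc - k) / (wc + k)]           # denominator of H(z)

# DC gain: the map sends s = 0 to z = 1, so H(1) should be exactly 1.
dc = sum(b) / sum(a)

# Gain at the cutoff: evaluate H on the unit circle at w = 2*pi*fc/fs;
# with prewarping this is the analog -3 dB point, magnitude 1/sqrt(2).
w = 2 * math.pi * fc / fs
zinv = complex(math.cos(-w), math.sin(-w))     # z^-1 on the unit circle
mag_fc = abs((b[0] + b[1] * zinv) / (a[0] + a[1] * zinv))
```

The same substitution applies coefficient-by-coefficient to higher-order filters such as the Butterworth designs discussed in the chapter.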
This chapter introduces some basic mathematical notions that are used throughout the book. Convex sets and functions, optimization problems, feasible solutions, and optimal solutions are first defined. The chapter then covers duality theory, including the definition of the Lagrangian function and the dual function, which are used to derive the duals of linear programs. Weak and strong duality are then defined and related to certain classes of optimization problems. The Karush–Kuhn–Tucker (KKT) conditions are defined, and their relation to the optimal solution of mathematical programs is discussed. KKT conditions are a fundamental concept used extensively in the book in order to understand the properties and economic interpretations of the various economic models encountered. Subgradients are subsequently defined in order to establish the relation between Lagrange multipliers and the sensitivity of an optimization model with respect to changes in the right-hand side parameters of its constraints. These sensitivity results are also used repeatedly in the book, for instance in order to derive locational marginal prices in chapter 5.
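The KKT conditions can be checked by hand on a one-dimensional toy problem: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0. The optimum x* = 1 and multiplier mu* = 2 used below are worked out analytically for this invented example, not taken from the text.

```python
# Candidate optimum and multiplier for min x^2 s.t. g(x) = 1 - x <= 0.
x_star, mu_star = 1.0, 2.0

grad_f = 2 * x_star      # gradient of the objective x^2 at x*
grad_g = -1.0            # gradient of the constraint g(x) = 1 - x
g_val = 1 - x_star       # constraint value at x*

# The four KKT conditions:
stationarity = abs(grad_f + mu_star * grad_g) < 1e-12
primal_feasible = g_val <= 0
dual_feasible = mu_star >= 0
complementary = abs(mu_star * g_val) < 1e-12

kkt_holds = stationarity and primal_feasible and dual_feasible and complementary
```

The multiplier also illustrates the sensitivity interpretation described above: for min x^2 s.t. x >= b the optimal value is b^2, whose derivative at b = 1 is 2, exactly mu*.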
This chapter provides an overview of matrices. Basic matrix operations are introduced first, such as addition, multiplication, transposition, and so on. Determinants and matrix inverses are then defined. The rank and Kruskal rank of matrices are defined and explained. The connection between rank, determinant, and invertibility is elaborated. Eigenvalues and eigenvectors are then reviewed. Many equivalent characterizations of singularity (non-invertibility) of matrices are summarized. Unitary matrices are reviewed. Finally, linear equations are discussed. The conditions under which a solution exists, and under which that solution is unique, are explained and demonstrated with examples.
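The link between determinant, eigenvalues, and invertibility can be demonstrated on 2x2 matrices in pure Python; the two matrices below are invented examples, one singular (rank 1) and one invertible.

```python
def det2(M):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def eig2(M):
    """Eigenvalues of a 2x2 matrix from the characteristic polynomial
    lambda^2 - tr(M)*lambda + det(M) = 0 (real-eigenvalue case)."""
    tr, d = M[0][0] + M[1][1], det2(M)
    disc = (tr * tr - 4 * d) ** 0.5
    return ((tr - disc) / 2, (tr + disc) / 2)

singular = [[1, 2], [2, 4]]      # second row is twice the first: rank 1
invertible = [[2, 1], [1, 2]]    # nonzero determinant: full rank

# A zero determinant coincides with a zero eigenvalue (singularity),
# while a nonzero determinant means every eigenvalue is nonzero and
# an inverse exists.
```

Here det2(singular) is 0 and eig2(singular) contains the eigenvalue 0, whereas det2(invertible) is 3 with eigenvalues 1 and 3, matching the equivalences summarized in the chapter.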
This concise and rigorous textbook introduces students to the subject of continuum thermodynamics, providing a complete treatment of the subject with practical applications to material modelling.
Presents mathematical prerequisites and the foundations of continuum mechanics, taking the student step-by-step through the subject to allow full understanding of the theory.
Introduces more advanced topics such as theories for the investigation of material models, showing how they relate to real-world practical applications.
Numerous examples and illustrations, alongside end-of-chapter problems with helpful hints, help clarify complex concepts and mathematical derivations.
This is the ideal, accessible introduction to continuum thermodynamics for senior undergraduate and graduate students in mechanical, aeronautical and civil engineering.