In this introductory chapter, we will formally introduce the main variants of the traveling salesman problem, symmetric and asymmetric, explain a very useful graph-theoretic view based on Euler’s theorem, and describe the classical simple approximation algorithms: Christofides’ algorithm and the cycle cover algorithm.
We also introduce basic notation, in particular from graph theory, and some fundamental combinatorial optimization problems.
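As a companion to the algorithms named above, here is a pure-Python sketch of the double-tree heuristic, the simple 2-approximation for metric TSP that is a close relative of Christofides' algorithm (which improves the tree-doubling step with a minimum-weight matching). The instance format, a symmetric distance matrix, and all names are illustrative assumptions, not the book's notation.

```python
# Double-tree 2-approximation for metric TSP: build a minimum spanning
# tree, double its edges (making all degrees even, so an Eulerian walk
# exists by Euler's theorem), and shortcut repeated vertices. Shortcutting
# the Eulerian walk of a doubled tree yields exactly a DFS preorder.
def double_tree_tour(dist):
    n = len(dist)
    # Prim's algorithm for a minimum spanning tree rooted at vertex 0
    in_tree = [False] * n
    parent = [0] * n
    key = [float("inf")] * n
    key[0] = 0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist[u][v] < key[v]:
                key[v], parent[v] = dist[u][v], u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # DFS preorder of the tree = shortcut Eulerian walk of the doubled tree
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [0]  # return to the start to close the cycle
```

By the triangle inequality, the shortcut tour costs no more than the doubled tree, i.e. at most twice the optimum.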
This opening chapter provides a historical perspective on the evolution of computing, tracing its journey from early computational methods to the emergence of networking and the advent of data-centric computing. The chapter sets out to inspire readers to develop a holistic understanding of the intricate interactions among hardware, software, and networking. It introduces the principle of hardware and software codesign as a critical approach in constructing efficient data management systems. The goal is to achieve high throughput and low latency in modern data processing, setting the stage for the detailed exploration that follows in subsequent chapters.
Chapter 1 discusses the motivation for the book and the rationale for its organization into four parts: preliminary considerations, evaluation for classification, evaluation in other settings, and evaluation from a practical perspective. In more detail, the first part provides the statistical tools necessary for evaluation and reviews the main machine learning principles as well as frequently used evaluation practices. The second part discusses the most common setting in which machine learning evaluation has been applied: classification. The third part extends the discussion to other paradigms such as multi-label classification, regression analysis, data stream mining, and unsupervised learning. The fourth part broadens the conversation by moving it from the laboratory setting to the practical setting, specifically discussing issues of robustness and responsible deployment.
We first introduce the standard diffusion model governing the probability density function (PDF) of the Brownian motion/process, which satisfies the normal scaling property. We then generalize this model through a new definition of the process increments, which are no longer drawn from a normal distribution, leading to α-stable Lévy flights at the microscopic level and, correspondingly, an anomalous diffusion model with a fractional Laplacian at the macroscopic scale. Next, we show how the Riemann–Liouville fractional derivatives emerge in another anomalous diffusion model, corresponding to asymmetric α-stable Lévy flights at small scales. Subsequently, we introduce subdiffusive stochastic processes, in which the Caputo time-fractional derivative appears in the anomalous subdiffusion fractional model. We combine the previous two cases and construct continuous-time random walks, whose probability density function evolves according to a space-time fractional diffusion model. Next, we motivate and introduce many other types of fractional derivatives that encode more complexity and variability from microscopic to macroscopic scales, including fractional material derivatives, time-variable diffusivity for fractional Brownian motion, and tempered, variable-order, distributed-order, and vector fractional calculus.
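The progression sketched above can be summarized by the evolution equations for the PDF $p(x,t)$; the normalizations and generic diffusivity constants $K$ below follow one common convention and are not necessarily the book's.

```latex
% classical diffusion (Brownian motion, normal scaling)
\partial_t p(x,t) = K \, \Delta p(x,t),
% symmetric alpha-stable Levy flights: fractional Laplacian in space
\partial_t p(x,t) = -K_\alpha \, (-\Delta)^{\alpha/2} p(x,t), \qquad 0 < \alpha < 2,
% subdiffusion: Caputo time-fractional derivative
{}^{C}\partial_t^{\beta} p(x,t) = K_\beta \, \Delta p(x,t), \qquad 0 < \beta < 1,
% continuous-time random walks: space-time fractional diffusion
{}^{C}\partial_t^{\beta} p(x,t) = -K_{\alpha,\beta} \, (-\Delta)^{\alpha/2} p(x,t).
```

The classical equation is recovered in the limit $\alpha \to 2$, $\beta \to 1$.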
The book starts out with a motivating chapter that answers the question: why is it worthwhile to develop system theory? To do so, we jump fearlessly into the very center of our methods, using a simple and direct example in optimization: optimal tracking. Although optimization is not our leading subject (system theory is), it provides one of the main application areas, namely the optimization of the performance of a dynamical system in a time-variant environment (for example, driving a car or sending a rocket to the moon). The chapter presents a recursive matrix algebra approach to this optimization problem, known as dynamic programming. Optimal tracking rests on this powerful principle of “dynamic programming,” which lies at the very basis of what “dynamical” means.
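To make the backward-recursive flavor of dynamic programming concrete, here is a minimal sketch for a discrete tracking problem: choose unit moves on integer states to follow a reference signal while minimizing squared tracking error. The state space, move set, and cost are illustrative assumptions, not the book's matrix-algebra formulation.

```python
# Bellman backward recursion: V_t(s) = stage cost at t plus the best
# achievable future cost, computed from the final time step backwards.
def track(reference, moves=(-1, 0, 1), states=tuple(range(-5, 6))):
    T = len(reference)
    INF = float("inf")
    # terminal cost at the last time step
    V = {s: (reference[-1] - s) ** 2 for s in states}
    policy = []
    for t in range(T - 2, -1, -1):
        new_V, act = {}, {}
        for s in states:
            cost, m = min(
                ((reference[t] - s) ** 2 + V.get(s + m, INF), m)
                for m in moves
            )
            new_V[s], act[s] = cost, m
        V = new_V
        policy.append(act)
    policy.reverse()
    # roll the optimal policy forward from state 0
    s, path = 0, [0]
    for act in policy:
        s += act[s]
        path.append(s)
    return path
```

The recursion computes an optimal decision for every state and time once, so the forward rollout never needs to re-optimize.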
Ellen Balka, Simon Fraser University, British Columbia; Ina Wagner, Universität Siegen, Germany; Anne Weibert, Universität Siegen, Germany; Volker Wulf, Universität Siegen, Germany
This chapter goes back to the feminist discourse on science/technology and gender, which started in the 1960s and 1970s and was led by women scientists. Feminists criticized the gender binary and other dualisms and brought forward an understanding of ‘scientific objectivity’ as being rooted in the multiplicity of experiences. Feminist criticism of science and technology was later enriched by queer theory and a focus on intersectionality. Of particular influence on a feminist approach to science and technology were feminist standpoint theory and, connected with it, Donna Haraway’s notion of ‘situated knowledge’. In an STS tradition, Cynthia Cockburn analyzed the gendering of technologies – or the mutual shaping of gender and technology. Researchers in the field of cultural studies have followed the STS tradition with empirical studies of how gender plays out in activities such as radio tinkering or in makerspaces. One of the important insights on the way to a gender/intersectional perspective on design is Faulkner’s work on engineers and her understanding that the gendering that occurs in engineering practices is complex and heterogeneous.
In this chapter we draw motivation from real-world networks and formulate random graph models for them. We focus on some of the models that have received the most attention in the literature, namely, Erdős–Rényi random graphs, inhomogeneous random graphs, configuration models, and preferential attachment models. We follow Volume 1, both for the motivation and for the introduction of the random graph models involved. Furthermore, we add some convenient additional results, such as degree truncation for configuration models and switching techniques for uniform random graphs with prescribed degrees. We also discuss preliminaries used in the book, for example concerning power-law distributions.
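Two of the models named above are simple enough to sketch directly; the following generators are illustrative and use none of the book's notation.

```python
import random

def erdos_renyi(n, p, rng):
    """G(n, p): each of the n(n-1)/2 possible edges is present
    independently with probability p."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def configuration_model(degrees, rng):
    """Give vertex v exactly degrees[v] half-edges, then pair all
    half-edges uniformly at random (self-loops and multi-edges can
    occur). Requires sum(degrees) to be even."""
    half_edges = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(half_edges)
    return list(zip(half_edges[::2], half_edges[1::2]))
```

By construction, the configuration model realizes the prescribed degree sequence exactly, which is what makes degree-truncation and switching arguments for it tractable.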
In mathematics, it simply is not true that “you can’t prove a negative.” Many revolutionary impossibility theorems reveal profound properties of logic, computation, fairness, and the universe and form the mathematical background of new technologies and Nobel prizes. But to fully appreciate these theorems and their impact on mathematics and beyond, you must understand their proofs.
This book is the first to present complete proofs of these theorems for a broad, lay audience. It fully develops the simplest rigorous proofs found in the literature, reworked to contain less jargon and notation, and more background, intuition, examples, explanations, and exercises. Amazingly, all of the proofs in this book involve only arithmetic and basic logic – and are elementary, starting only from first principles and definitions.
Very little background knowledge is required, and no specialized mathematical training – all you need is the discipline to follow logical arguments and a pen in your hand.
This concise and self-contained introduction builds up the spectral theory of graphs from scratch, with linear algebra and the theory of polynomials developed in the later parts. The book focuses on properties and bounds for the eigenvalues of the adjacency, Laplacian and effective resistance matrices of a graph. The goal of the book is to collect spectral properties that may help to understand the behavior or main characteristics of real-world networks. The chapter on spectra of complex networks illustrates how the theory may be applied to deduce insights into real-world networks.
The second edition contains new chapters on topics in linear algebra and on the effective resistance matrix, and treats the pseudoinverse of the Laplacian. The latter two matrices and the Laplacian describe linear processes, such as the flow of current, on a graph. The concepts of spectral sparsification and graph neural networks are included.
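The three matrices discussed above, and the pseudoinverse of the Laplacian, can be computed for a small example graph in a few lines; the 4-cycle $C_4$ used here is an illustrative choice, not an example from the book.

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                  # combinatorial Laplacian
adj_eigs = np.sort(np.linalg.eigvalsh(A))  # spectrum of A
lap_eigs = np.sort(np.linalg.eigvalsh(L))  # spectrum of L

# Pseudoinverse of the Laplacian gives effective resistances:
# omega_ij = Q[i,i] + Q[j,j] - 2 Q[i,j]
Q = np.linalg.pinv(L)
omega_01 = Q[0, 0] + Q[1, 1] - 2 * Q[0, 1]
```

For $C_4$ the adjacency spectrum is $\{-2, 0, 0, 2\}$ and the Laplacian spectrum is $\{0, 2, 2, 4\}$; the effective resistance between adjacent nodes is $3/4$ (a unit resistor in parallel with three in series).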
In this chapter we provide an overview of data modeling and describe the formulation of probabilistic models. We introduce random variables, their probability distributions, associated probability densities, examples of common densities, and the fundamental theorem of simulation to draw samples from discrete or continuous probability distributions. We then present the mathematical machinery required in describing and handling probabilistic models, including models with complex variable dependencies. In doing so, we introduce the concepts of joint, conditional, and marginal probability distributions, marginalization, and ancestral sampling.
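Ancestral sampling, mentioned above, simply samples each variable from its conditional distribution given its already-sampled parents. The toy joint below, $p(x, y) = p(x)\,p(y \mid x)$ with made-up Bernoulli probabilities, is an illustrative assumption, not a model from the chapter.

```python
import random

def sample_joint(rng):
    # parent first: x ~ Bernoulli(0.3)
    x = rng.random() < 0.3
    # child next, from its conditional given the sampled parent
    p_heads = 0.9 if x else 0.2
    y = rng.random() < p_heads
    return x, y

rng = random.Random(0)
draws = [sample_joint(rng) for _ in range(100000)]
# Monte Carlo marginalization: p(y=1) = sum_x p(x) p(y=1|x)
p_y = sum(y for _, y in draws) / len(draws)
# analytic value: 0.3 * 0.9 + 0.7 * 0.2 = 0.41
```

The same parent-before-child ordering extends to any directed model with more complex variable dependencies, which is what makes ancestral sampling a generic simulation tool.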
How certain can we be about projections of future climate change from computer models? In 1979, President Jimmy Carter asked the US National Academy of Science to address this question, and the quest for an answer laid the foundation for a new way of comparing and assessing computational models of climate change. My own work on climate models began with a similar question, and led me to investigate how climate scientists build and test their models. My research took me to climate modelling labs in five different countries, where I interviewed dozens of scientists. In this chapter, we will examine the motivating questions for that work, and explore the original benchmark experiment for climate models – known as Charney sensitivity – developed in response to President Carter’s question.
This chapter identifies systems where dispersed multiphase flow is important as well as the key fluid physics via important engineered and natural systems. This includes energy systems and propulsion systems, manufacturing, processing and transport systems, as well as environmental and biological systems. In addition, this chapter sets forth key terminology and assumptions for dispersed multiphase flow, the key velocity reference frames used for multiphase flow, and the assumption of continuum conditions.
In this chapter we introduce the Bayesian approach to inverse problems in which the unknown parameter and the observed data are viewed as random variables. In this probabilistic formulation, the solution of the inverse problem is the posterior distribution on the parameter given the data. We will show that the Bayesian formulation leads to a form of well-posedness: small perturbations of the forward model or the observed data translate into small perturbations of the posterior distribution. Well-posedness requires a notion of distance between probability measures. We introduce the total variation and Hellinger distances, giving characterizations of them, and bounds relating them, that will be used throughout these notes. We prove well-posedness in the Hellinger distance.
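For reference, the two distances named above can be written as follows under one common normalization (conventions vary by a constant factor, and the book's may differ); $\mu$ and $\mu'$ are assumed absolutely continuous with respect to a reference measure $\nu$.

```latex
d_{\mathrm{TV}}(\mu,\mu')
  = \frac{1}{2}\int \left| \frac{d\mu}{d\nu} - \frac{d\mu'}{d\nu} \right| d\nu,
\qquad
d_{\mathrm{H}}(\mu,\mu')
  = \left( \frac{1}{2}\int \left( \sqrt{\frac{d\mu}{d\nu}}
      - \sqrt{\frac{d\mu'}{d\nu}} \right)^{2} d\nu \right)^{1/2},
% standard comparison bounds between the two distances:
d_{\mathrm{H}}^{2}(\mu,\mu') \;\le\; d_{\mathrm{TV}}(\mu,\mu')
  \;\le\; \sqrt{2}\, d_{\mathrm{H}}(\mu,\mu').
```

The comparison bounds are what allow well-posedness proved in one distance to transfer, up to constants, to the other.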