Chapter 1 discusses the motivation for the book and the rationale for its organization into four parts: preliminary considerations, evaluation for classification, evaluation in other settings, and evaluation from a practical perspective. In more detail, the first part provides the statistical tools necessary for evaluation and reviews the main machine learning principles as well as frequently used evaluation practices. The second part discusses the most common setting in which machine learning evaluation has been applied: classification. The third part extends the discussion to other paradigms such as multi-label classification, regression analysis, data stream mining, and unsupervised learning. The fourth part broadens the conversation by moving it from the laboratory setting to the practical setting, specifically discussing issues of robustness and responsible deployment.
Network science has exploded in popularity since the late 1990s. But it flows from a long and rich tradition of mathematical and scientific understanding of complex systems. We can no longer imagine the world without evoking networks. And network data is at the heart of it. In this chapter, we set the stage by highlighting network science's ancestry and the exciting scientific approaches that networks have enabled, followed by a tour of the basic concepts and properties of networks.
This chapter provides a motivation for this book, outlining the interests of economists in artificial intelligence, describing who this book is aimed at, and laying out the structure of the book.
I introduce the problem of “dry active matter” more precisely, describing the symmetries (both underlying, and broken) of the state I wish to consider, and also discuss how shocking it is that such systems can exhibit long-ranged order – that is, all move together – even in d = 2.
In this chapter we draw motivation from real-world networks and formulate random graph models for them. We focus on some of the models that have received the most attention in the literature, namely, Erdős–Rényi random graphs, inhomogeneous random graphs, configuration models, and preferential attachment models. We follow Volume 1, both for the motivation as well as for the introduction of the random graph models involved. Furthermore, we add some convenient additional results, such as degree-truncation for configuration models and switching techniques for uniform random graphs with prescribed degrees. We also discuss preliminaries used in the book, for example concerning power-law distributions.
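As a minimal illustration of the simplest of these models, the following sketch samples an Erdős–Rényi graph G(n, p), in which each of the n(n − 1)/2 possible edges is included independently with probability p. The function name and the edge-list representation are choices made here for illustration, not notation from the book.

```python
import random
from itertools import combinations

def erdos_renyi(n, p, seed=None):
    """Sample G(n, p): include each of the n*(n-1)/2 possible edges
    independently with probability p. Returns an edge list."""
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

n, p = 1000, 0.01
edges = erdos_renyi(n, p, seed=42)

# Empirical degrees; the expected mean degree is (n - 1) * p ≈ 9.99.
degree = [0] * n
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
mean_deg = sum(degree) / n
```

The resulting degree distribution is approximately Poisson with mean (n − 1)p, which is precisely the light-tailed behavior that motivates the heavier-tailed alternatives (configuration and preferential attachment models) for real-world networks.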
Stop. Take a moment to look around. What do you see? No matter where you are, you are likely perceiving a world consisting of things. Maybe you are reading this book in a coffee shop, and if so, you probably see people, cups, books, chairs, and so on. You see a world of objects with properties, yourself included: white cups are on wooden tables, people sitting in chairs are reading books and talking with one another. At the same time, you are a subject, responding to this world and actively bringing yourself and these objects into interrelation. And yet, the world of objects with properties that you are perceiving is but one slice of a complex reality.
This concise and self-contained introduction builds up the spectral theory of graphs from scratch, with linear algebra and the theory of polynomials developed in the later parts. The book focuses on properties and bounds for the eigenvalues of the adjacency, Laplacian and effective resistance matrices of a graph. The goal of the book is to collect spectral properties that may help to understand the behavior or main characteristics of real-world networks. The chapter on spectra of complex networks illustrates how the theory may be applied to deduce insights into real-world networks.
The second edition contains new chapters on topics in linear algebra and on the effective resistance matrix, and treats the pseudoinverse of the Laplacian. The latter two matrices and the Laplacian describe linear processes, such as the flow of current, on a graph. The concepts of spectral sparsification and graph neural networks are included.
In this chapter we provide an overview of data modeling and describe the formulation of probabilistic models. We introduce random variables, their probability distributions, associated probability densities, examples of common densities, and the fundamental theorem of simulation to draw samples from discrete or continuous probability distributions. We then present the mathematical machinery required in describing and handling probabilistic models, including models with complex variable dependencies. In doing so, we introduce the concepts of joint, conditional, and marginal probability distributions, marginalization, and ancestral sampling.
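Ancestral sampling can be sketched in a few lines: to draw from a joint distribution p(x, y) = p(x) p(y | x), sample x from its marginal first, then y from its conditional given the sampled x. The two-state "weather" model below is a hypothetical example chosen here for illustration; the probabilities are not from the text.

```python
import random

rng = random.Random(0)

def sample_joint():
    """Ancestral sampling from p(x, y) = p(x) p(y | x):
    draw x from its marginal, then y from its conditional given x."""
    x = "rain" if rng.random() < 0.3 else "sun"      # p(x = rain) = 0.3
    p_umbrella = 0.9 if x == "rain" else 0.1         # p(y = umbrella | x)
    y = "umbrella" if rng.random() < p_umbrella else "none"
    return x, y

samples = [sample_joint() for _ in range(10_000)]

# Marginalization: p(umbrella) = sum_x p(x) p(umbrella | x)
#                              = 0.3 * 0.9 + 0.7 * 0.1 = 0.34
frac_umbrella = sum(y == "umbrella" for _, y in samples) / len(samples)
```

The empirical fraction of "umbrella" draws approximates the marginal obtained by summing out x, connecting the sampling procedure to the marginalization identity it relies on.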
This chapter starts with an introductory survey of the physical background and historical events that led to the emergence of the density matrix renormalization group (DMRG) and its tensor network generalization. We then briefly overview the major progress on the renormalization group methods of tensor networks and their applications in the past three decades. Tensor network renormalization was initially developed to solve quantum many-body problems, but its application field has grown constantly. It has now become an irreplaceable tool for investigating strongly correlated problems, statistical physics, quantum information, quantum chemistry, and artificial intelligence.
After a discussion of best programming practices and a brief summary of basic features of the Python programming language, chapter 1 discusses several modern idioms. These include the use of list comprehensions, dictionaries, the for-else idiom, as well as other ways to iterate Pythonically. Throughout, the focus is on programming in a way which feels natural, i.e., working with the language (as opposed to working against the language). The chapter also includes basic information on how to make figures using Matplotlib, as well as advice on how to effectively use the NumPy library, with an emphasis on slicing, vectorization, and broadcasting. The chapter is rounded out by a physics project, which studies the visualization of electric fields, and a problem set.
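A short sketch of the idioms the abstract names, assuming nothing beyond the standard library and NumPy (the values here are arbitrary examples, not from the chapter):

```python
import numpy as np

# List comprehension: squares of the even numbers below 10.
squares = [x**2 for x in range(10) if x % 2 == 0]   # [0, 4, 16, 36, 64]

# for-else: the else clause runs only if the loop finished without break.
for x in squares:
    if x > 100:
        break
else:
    largest = max(squares)   # reached here: no square exceeded 100

# NumPy broadcasting: a (3, 1) column against a (4,) row yields a (3, 4) grid
# without any explicit loop, i.e. vectorized code that works *with* the library.
col = np.arange(3).reshape(3, 1)
row = np.arange(4)
grid = col + row             # grid[i, j] == i + j

# Slicing: the same list, reversed, with no index bookkeeping.
squares_reversed = squares[::-1]
```

Each construct replaces an explicit index-driven loop with a form the language (or NumPy) handles natively, which is the "working with the language" theme the chapter develops.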
Hector Zenil, University of Cambridge; Narsis A. Kiani, Karolinska Institutet, Stockholm; Jesper Tegnér, King Abdullah University of Science and Technology, Saudi Arabia
Chapter 1 sets out the conceptual framework through which the book examines research evaluation and names the key players and processes involved. It begins by outlining The Evaluation Game’s key contention that research evaluation is a manifestation of a broader technology which the book refers to as “evaluative power.” Next, it describes how the evaluative power comes to be legitimized and how it introduces one of its main technologies: research evaluation systems. The chapter then defines games as top-down social practices and, on the basis of this conceptual framework, presents the evaluation game as a reaction to or resistance against the evaluative power. Overall, the chapter shows how the evaluation of both institutions and the knowledge produced by researchers working in them has, unavoidably, become an integral element of the research process itself.
In this chapter, we describe the main goal of the book, its organization, the course outline, and suggestions for instruction and self-study. The textbook material is aimed at a one-semester undergraduate/graduate course for mathematics and computer science students. The course might also be recommended for physics students interested in networks and the evolution of large systems, as well as engineering students specializing in telecommunication. Our textbook aims to give a gentle introduction to the mathematical foundations of random graphs and to build a platform for understanding the nature of real-life networks. The text is divided into three parts and presents the basic elements of the theory of random graphs and networks. To help the reader navigate through the text, we begin, in the preliminary part (Part I), by describing the main technical tools used throughout the text. Part II of the text is devoted to the classic Erdős–Rényi–Gilbert uniform and binomial random graphs. Part III concentrates on generalizations of the Erdős–Rényi–Gilbert models of random graphs whose features better reflect some characteristic properties of real-world networks.
Bayesian optimization is a methodology for optimizing expensive objective functions that has proven successful in the sciences, engineering, and beyond. This timely text provides a self-contained and comprehensive introduction to the subject, starting from scratch and carefully developing all the key ideas along the way. This bottom-up approach illuminates unifying themes in the design of Bayesian optimization algorithms and builds a solid theoretical foundation for approaching novel situations.
The core of the book is divided into three main parts, covering theoretical and practical aspects of Gaussian process modeling, the Bayesian approach to sequential decision making, and the realization and computation of practical and effective optimization policies.
Following this foundational material, the book provides an overview of theoretical convergence results, a survey of notable extensions, a comprehensive history of Bayesian optimization, and an extensive annotated bibliography of applications.