Bridge the gap between theoretical concepts and their practical applications with this rigorous introduction to the mathematics underpinning data science. It covers essential topics in linear algebra, calculus and optimization, and probability and statistics, demonstrating their relevance in the context of data analysis. Key application topics include clustering, regression, classification, dimensionality reduction, network analysis, and neural networks. What sets this text apart is its focus on hands-on learning. Each chapter combines mathematical insights with practical examples, using Python to implement algorithms and solve problems. Self-assessment quizzes, warm-up exercises and theoretical problems foster both mathematical understanding and computational skills. Designed for advanced undergraduate students and beginning graduate students, this textbook serves as both an invitation to data science for mathematics majors and as a deeper excursion into mathematics for data science students.
The sixth chapter provides a deeper exploration of probabilistic models, building on concepts encountered earlier in the text. It covers parametric families of probability distributions, maximum likelihood estimation, and the modeling of complex dependencies through conditional independence and marginalization. The chapter also outlines standard methods for estimating parameters and hidden states, as well as techniques for sampling. It concludes by discussing and implementing applications such as linear-Gaussian models, Kalman filtering, and Gibbs sampling.
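As a flavor of the material, the Kalman filter described above can be sketched in a few lines of Python for a one-dimensional linear-Gaussian model. This is a minimal illustrative implementation, not the book's own code; the parameter names (`a`, `q`, `c`, `r`) are assumptions chosen here to denote the state coefficient, process-noise variance, observation coefficient, and observation-noise variance.

```python
import numpy as np

def kalman_filter_1d(ys, a=1.0, q=1.0, c=1.0, r=1.0, m0=0.0, p0=1.0):
    """Minimal sketch of 1-D Kalman filtering for the linear-Gaussian model
    x_t = a * x_{t-1} + process noise (variance q),
    y_t = c * x_t     + observation noise (variance r).
    Parameter names are illustrative. Returns filtered means and variances."""
    means, variances = [], []
    m, p = m0, p0
    for y in ys:
        # Predict step: propagate the previous posterior through the dynamics.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update step: fold in the new observation via the Kalman gain.
        k_gain = p_pred * c / (c * c * p_pred + r)
        m = m_pred + k_gain * (y - c * m_pred)
        p = (1 - k_gain * c) * p_pred
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)
```

With a static state (`q=0`) and repeated noisy observations of the same value, the filtered mean converges to that value and the posterior variance shrinks, matching the Bayesian intuition developed in the chapter.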
This chapter introduces the mathematics of data through the example of clustering, a fundamental technique in data analysis and machine learning. The chapter begins with a review of essential mathematical concepts, including matrix and vector algebra, differential calculus, optimization, and elementary probability, with practical Python examples. The chapter then delves into the k-means clustering algorithm, presenting it as an optimization problem and deriving Lloyd's algorithm for its solution. A rigorous analysis of the algorithm's convergence properties is provided, along with a matrix formulation of the k-means objective. The chapter concludes with an exploration of high-dimensional data, demonstrating through simulations and theoretical arguments how the "curse of dimensionality" can affect clustering outcomes.
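To give a sense of the algorithmic content, Lloyd's algorithm for the k-means objective can be sketched as follows. This is a minimal illustrative implementation, not the text's own code; the function name and its parameters are hypothetical.

```python
import numpy as np

def lloyd_kmeans(X, k, n_iters=50, seed=0):
    """Minimal sketch of Lloyd's algorithm for the k-means objective.
    X: (n, d) data matrix; k: number of clusters.
    Returns cluster assignments and centroids."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its cluster
        # (an empty cluster keeps its old centroid).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # a fixed point: the objective can no longer decrease
        centroids = new_centroids
    return labels, centroids
```

Both steps weakly decrease the k-means objective, which is the essence of the convergence analysis in the chapter.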
This chapter explores the behavior of random walks on graphs, framed within the broader context of Markov chains. It introduces finite-state Markov chains, explaining key concepts such as transition matrices, the Markov property, and the computation of stationary distributions. It then examines the long-term behavior of Markov chains, including convergence to equilibrium under irreducibility and aperiodicity. Turning to applications, the chapter studies random walks on graphs, particularly in the context of PageRank, a method for identifying central nodes in a network. It also covers Markov chain Monte Carlo (MCMC) methods, specifically the Metropolis–Hastings algorithm and Gibbs sampling, which are used to generate samples from complex probability distributions. The chapter concludes by illustrating the application of Gibbs sampling to generate images of handwritten digits using a restricted Boltzmann machine.
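As an illustration of the PageRank idea described above, the stationary distribution of a damped random walk can be computed by power iteration. This is a hedged sketch rather than the chapter's own implementation; the function name, parameters, and the choice to send dangling nodes to a uniform jump are assumptions made here.

```python
import numpy as np

def pagerank(A, damping=0.85, tol=1e-10, max_iter=1000):
    """Sketch of PageRank: the stationary distribution of a damped random
    walk on a directed graph with (n x n) adjacency matrix A."""
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg[:, None] > 0,
                 A / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    # Damped chain: follow an edge with prob. `damping`, else teleport.
    G = damping * P + (1 - damping) / n
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_pi = pi @ G  # one step of the chain (power iteration)
        if np.abs(new_pi - pi).sum() < tol:
            break
        pi = new_pi
    return pi
```

The damping (teleportation) term makes the chain irreducible and aperiodic, which is exactly the condition for convergence to a unique stationary distribution discussed in the chapter.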