This chapter introduces the mathematics of data through the example of clustering, a fundamental technique in data analysis and machine learning. The chapter begins with a review of essential mathematical concepts, including matrix and vector algebra, differential calculus, optimization, and elementary probability, with practical Python examples. The chapter then delves into the k-means clustering algorithm, presenting it as an optimization problem and deriving Lloyd's algorithm for its solution. A rigorous analysis of the algorithm's convergence properties is provided, along with a matrix formulation of the k-means objective. The chapter concludes with an exploration of high-dimensional data, demonstrating through simulations and theoretical arguments how the "curse of dimensionality" can affect clustering outcomes.
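As a rough sketch of the Lloyd-style iteration the chapter derives, the NumPy snippet below alternates the algorithm's two steps: assign each point to its nearest center, then move each center to the mean of its assigned points, which monotonically decreases the sum of squared distances. The function name, the random initialization, and the stopping rule are illustrative choices, not the book's own code.

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=None):
    """Minimal Lloyd's algorithm sketch: alternate assignment and update steps."""
    rng = np.random.default_rng(seed)
    # Initialize centers by sampling k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest center in squared Euclidean distance.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster
        # (empty clusters keep their previous center).
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # the objective can no longer decrease
        centers = new_centers
    return centers, labels
```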
This chapter focuses on the core concepts of optimization theory and their application in data science and AI. It begins with a review of differentiable functions of several variables, including the gradient vector and the Hessian matrix, and key results such as the Chain Rule and the Mean Value Theorem. The chapter then introduces optimality conditions for unconstrained optimization, explaining first-order and second-order conditions and the role of convexity in ensuring global optimality. A detailed discussion of the gradient descent algorithm follows, including its convergence analysis under different assumptions. The chapter concludes with an application to logistic regression, demonstrating how gradient descent is used to minimize the cross-entropy loss in a supervised learning context. Practical Python examples are integrated throughout to illustrate the theoretical concepts.
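To make the gradient-descent discussion concrete, here is a minimal sketch of gradient descent on the mean cross-entropy loss for logistic regression. The fixed step size, the zero initialization, and the absence of an intercept term are simplifying assumptions for illustration, not the chapter's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gd(X, y, lr=0.1, n_steps=1000):
    """Gradient descent on the mean cross-entropy loss for logistic regression.

    X: (n, d) feature matrix; y: (n,) labels in {0, 1}.
    (No intercept term; append a column of ones to X to include one.)
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_steps):
        p = sigmoid(X @ w)         # predicted probabilities
        grad = X.T @ (p - y) / n   # gradient of the mean cross-entropy loss
        w -= lr * grad             # step in the direction of steepest descent
    return w
```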
After a discussion of best programming practices and a brief summary of basic features of the Python programming language, Chapter 1 discusses several modern idioms. These include the use of list comprehensions, dictionaries, the for-else idiom, and other ways to iterate Pythonically. Throughout, the focus is on programming in a way that feels natural, that is, working with the language rather than against it. The chapter also includes basic information on how to make figures using Matplotlib, as well as advice on how to use the NumPy library effectively, with an emphasis on slicing, vectorization, and broadcasting. The chapter is rounded out by a physics project on visualizing electric fields, and a problem set.
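The idioms named above fit in a few toy lines. The snippets below, covering a list comprehension, the for-else idiom, and NumPy broadcasting, are illustrations in the spirit of the chapter, not excerpts from it.

```python
import numpy as np

# List comprehension: squares of the even numbers below 10.
squares = [n * n for n in range(10) if n % 2 == 0]

# for-else: the else clause runs only if the loop never hit `break`.
n_to_test = 101
for d in range(2, 10):
    if n_to_test % d == 0:
        print(f"{n_to_test} is divisible by {d}")
        break
else:
    # Runs here because 101 is prime, so the loop completed without breaking.
    print(f"{n_to_test} has no divisor below 10")

# Broadcasting: subtract each row's mean from that row, with no explicit loop.
a = np.arange(12.0).reshape(3, 4)
centered = a - a.mean(axis=1, keepdims=True)
```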