Learn to program more effectively, faster, with better results… and enjoy both the learning experience and the benefits it ultimately brings. This undergraduate-level textbook is motivated by Formal Methods, encouraging habits that lead to correct and concise computer programs; but its informal approach sidesteps reliance on Formal Logic that programmers are sometimes led to believe is required. Instead, a straightforward and intuitive use of simple 'What's true here' comments encourages precision of thought without prescription of notation. Drawing on decades of the author's experience in teaching/industry, the text's careful presentation concentrates on key principles of structuring and reasoning about programs, applying them first to small, understandable algorithms. Then students can concentrate on turning those reliably into their corresponding –and correct– program source-codes. The text includes over 200 exercises, with full solutions available online for instructors' use, plus mini-projects and automated quizzes to support instructors in building their own courses.
Aimed at advanced undergraduate and graduate-level students, this textbook covers the core topics of quantum computing in a format designed for a single-semester course. It will be accessible to learners from a range of disciplines, with an understanding of linear algebra being the primary prerequisite. The textbook introduces central concepts such as quantum mechanics, the quantum circuit model, and quantum algorithms, and covers advanced subjects such as the surface code and topological quantum computation. These topics are essential for understanding the role of symmetries in error correction and the stability of quantum architectures, which situate quantum computation within the wider realm of theoretical physics. Graphical representations and exercises are included throughout the book and optional expanded materials are summarized within boxed 'Remarks'. Lecture notes have been made freely available for download from the textbook's webpage, with instructors having additional online access to selected exercise solutions.
This book bridges the gap between theoretical machine learning (ML) and its practical application in industry. It serves as a handbook for shipping production-grade ML systems, addressing challenges often overlooked in academic texts. Drawing on their experience at several major corporations and startups, the authors focus on real-world scenarios, guiding practitioners through the ML lifecycle, from planning and data management to model deployment and optimization. They highlight common pitfalls and offer interview-based case studies from companies that illustrate diverse industrial applications and their unique challenges. Multiple pathways through the book allow readers to choose which stage of the ML development process to focus on, as well as the learning strategy ('crawl,' 'walk,' or 'run') that best suits the needs of their project or team.
This tutorial guide introduces online nonstochastic control, an emerging paradigm in control of dynamical systems and differentiable reinforcement learning that applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. In optimal control, robust control, and other control methodologies that assume stochastic noise, the goal is to perform comparably to an offline optimal strategy. In online control, both cost functions and perturbations from the assumed dynamical model are chosen by an adversary. Thus, the optimal policy is not defined a priori and the goal is to attain low regret against the best policy in hindsight from a benchmark class of policies. The resulting methods are based on iterative mathematical optimization algorithms and are accompanied by finite-time regret and computational complexity guarantees. This book is ideal for graduate students and researchers interested in bridging classical control theory and modern machine learning.
The core topics at the intersection of human-computer interaction (HCI) and US law -- privacy, accessibility, telecommunications, intellectual property, artificial intelligence (AI), dark patterns, human subjects research, and voting -- can be hard to understand without a deep foundation in both law and computing. Every member of the author team of this unique book brings expertise in both law and HCI to provide an in-depth yet understandable treatment of each topic area for professionals, researchers, and graduate students in computing and/or law. Two introductory chapters explaining the core concepts of HCI (for readers with a legal background) and U.S. law (for readers with an HCI background) are followed by in-depth discussions of each topic.
What defines a correct program? What education makes a good programmer? The answers to these questions depend on whether programs are seen as mathematical entities, engineered socio-technical systems or media for assisting human thought. Programmers have developed a wide range of concepts and methodologies to construct programs of increasing complexity. This book shows how those concepts and methodologies emerged and developed from the 1940s to the present. It follows several strands in the history of programming and interprets key historical moments as interactions between five different cultures of programming. Rooted in disciplines such as mathematics, electrical engineering, business management or psychology, the different cultures of programming have exchanged ideas and given rise to novel programming concepts and methodologies. They have also clashed about the nature of programming; those clashes remain at the core of many questions about programming today. This title is also available as Open Access on Cambridge Core.
Artificial intelligence is transforming industries and society, but its high energy demands challenge global sustainability goals. Biological intelligence, in contrast, offers both good performance and exceptional energy efficiency. Neuromorphic computing, a growing field inspired by the structure and function of the brain, aims to create energy-efficient algorithms and hardware by integrating insights from biology, physics, computer science, and electrical engineering. This concise and accessible book delves into the principles, mechanisms, and properties of neuromorphic systems. It opens with a primer on biological intelligence, describing learning mechanisms in both simple and complex organisms, then turns to the application of these principles and mechanisms in the development of artificial synapses and neurons, circuits, and architectures. The text also delves into neuromorphic algorithm design, and the unique challenges faced by algorithmic researchers working in this area. The book concludes with a selection of practice problems, with solutions available to instructors online.
When you see a paper crane, what do you think of? A symbol of hope, a delicate craft, The Karate Kid? What you might not see, but is ever present, is the fascinating mathematics underlying it. Origami is increasingly applied to engineering problems, including origami-based stents, deployment of solar arrays in space, architecture, and even furniture design. The topic is actively developing, with recent discoveries at the frontier (e.g., in rigid origami and in curved-crease origami) and an infusion of techniques and algorithms from theoretical computer science. The mathematics is often advanced, but this book instead relies on geometric intuition, making it accessible to readers with only a high school geometry and trigonometry background. Through careful exposition, more than 160 color figures, and 49 exercises all completely solved in an Appendix, the beautiful mathematics leading to stunning origami designs can be appreciated by students, teachers, engineers, and artists alike.
Students will develop a practical understanding of data science with this hands-on textbook for introductory courses. This new edition is fully revised and updated, with numerous exercises and examples in the popular data science tool R, a new chapter on using R for statistical analysis, and a new chapter that demonstrates how to use R within a range of cloud platforms. The many practice examples, drawn from real-life applications, range from small to big data and come to life in a new end-to-end project in Chapter 11. New 'Data Science in Practice' boxes highlight how concepts introduced work within an industry context and many chapters include new sections on AI and Generative AI. A suite of online material for instructors provides a strong supplement to the book, including lecture slides, solutions, additional assessment material and curriculum suggestions. Datasets and code are available for students online. This entry-level textbook is ideal for readers from a range of disciplines wishing to build a practical, working knowledge of data science.
Discover the foundations of classical and quantum information theory in the digital age with this modern introductory textbook. Familiarise yourself with core topics such as uncertainty, correlation, and entanglement before exploring modern techniques and concepts including tensor networks, quantum circuits and quantum discord. Deepen your understanding and extend your skills with over 250 thought-provoking end-of-chapter problems, with solutions for instructors, and explore curated further reading. Understand how abstract concepts connect to real-world scenarios with over 400 examples, including numerical and conceptual illustrations, and emphasising practical applications. Build confidence as chapters progressively increase in complexity, alternating between classical and quantum systems. This is the ideal textbook for senior undergraduate and graduate students in electrical engineering, computer science, and applied mathematics, looking to master the essentials of contemporary information theory.
Students will develop a practical understanding of data science with this hands-on textbook for introductory courses. This new edition is fully revised and updated, with numerous exercises and examples in the popular data science tool Python, a new chapter on using Python for statistical analysis, and a new chapter that demonstrates how to use Python within a range of cloud platforms. The many practice examples, drawn from real-life applications, range from small to big data and come to life in a new end-to-end project in Chapter 11. New 'Data Science in Practice' boxes highlight how concepts introduced work within an industry context and many chapters include new sections on AI and Generative AI. A suite of online material for instructors provides a strong supplement to the book, including lecture slides, solutions, additional assessment material and curriculum suggestions. Datasets and code are available for students online. This entry-level textbook is ideal for readers from a range of disciplines wishing to build a practical, working knowledge of data science.
Emphasizing how and why machine learning algorithms work, this introductory textbook bridges the gap between the theoretical foundations of machine learning and its practical algorithmic and code-level implementation. Over 85 thorough worked examples, in both Matlab and Python, demonstrate how algorithms are implemented and applied whilst illustrating the end result. Over 75 end-of-chapter problems empower students to develop their own code to implement these algorithms, equipping them with hands-on experience. Matlab coding examples demonstrate how a mathematical idea is converted from equations to code, and provide a jumping off point for students, supported by in-depth coverage of essential mathematics including multivariable calculus, linear algebra, probability and statistics, numerical methods, and optimization. Accompanied online by instructor lecture slides, downloadable Python code and additional appendices, this is an excellent introduction to machine learning for senior undergraduate and graduate students in Engineering and Computer Science.
• To understand the working principle of support vector machine (SVM).
• To comprehend the rules for identifying the correct hyperplane.
• To understand the concept of support vectors, maximized margin, positive and negative hyperplanes.
• To apply an SVM classifier to linear and non-linear datasets.
• To understand the process of mapping data points to higher dimensional space.
• To comprehend the working principle of the SVM Kernel.
• To highlight the applications of SVM.
10.1 Support Vector Machines
Support vector machines (SVMs) are supervised machine learning (ML) models used to solve both regression and classification problems; however, they are most widely used for classification. The main goal of an SVM is to segregate the n-dimensional space into labels or classes by defining a decision boundary, or hyperplane. In this chapter, we shall explore SVMs for solving classification problems.
10.1.1 SVM Working Principle
SVM Working Principle | Parteek Bhatia, https://youtu.be/UhzBKrIKPyE
To understand the working principle of the SVM classifier, we will take a standard ML problem where we want a machine to distinguish between a peach and an apple based on their size and color.
Let us suppose the size of the fruit is represented on the X-axis and the color of the fruit on the Y-axis. The distribution of the apple and peach dataset is shown in Figure 10.1.
To classify them, we must provide the machine with a sample stock of fruits in which each fruit is labeled as an “apple” or a “peach”. For example, suppose we have a labeled dataset of some 100 fruits with their corresponding labels. When this data is fed into a machine, it analyzes the fruits and trains itself. Once training is complete, if a new fruit comes into the stock, the machine classifies it as either an “apple” or a “peach”.
Most traditional ML algorithms would learn by observing the perfect apples and perfect peaches in the stock, i.e., they train themselves on the ideal apples (those most apple-like in size and color) and the ideal peaches (those most peach-like in size and color). These standard samples are likely to be found at the heart of the stock, as shown in Figure 10.2.
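As a concrete illustration of this training-and-prediction workflow, the minimal sketch below fits an SVM classifier on a synthetic two-feature “fruit” dataset using scikit-learn. The data, feature scales, and parameter choices are illustrative assumptions, not the chapter's actual dataset or code.

```python
# A minimal sketch (not the book's code): training an SVM classifier on a
# synthetic two-feature "fruit" dataset, assuming scikit-learn is installed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical data: feature 0 is size, feature 1 is a numeric colour score.
apples = rng.normal(loc=[7.0, 0.8], scale=0.4, size=(50, 2))
peaches = rng.normal(loc=[5.0, 0.3], scale=0.4, size=(50, 2))
X = np.vstack([apples, peaches])
y = np.array(["apple"] * 50 + ["peach"] * 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="linear")   # linear hyperplane; try kernel="rbf" for non-linear data
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))
print("New fruit prediction:", clf.predict([[6.8, 0.75]]))  # classify an unseen fruit
```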
• To define machine learning (ML) and discuss its applications.
• To learn the differences between traditional programming and ML.
• To understand the importance of labeled and unlabeled data and their various uses in ML.
• To understand the working principles of supervised, unsupervised, and reinforcement learning.
• To understand key terms such as data science, data mining, artificial intelligence, and deep learning.
1.1 Introduction
In today’s data-driven world, information flows through the digital landscape like an untapped river of potential. Within this vast data stream lies the key to unlocking a new era of discovery and innovation. Machine learning (ML), a revolutionary field, acts as the gateway to this wealth of opportunities. With its ability to uncover patterns, make predictive insights, and adapt to evolving information, ML has transformed industries, redefined technology, and opened the door to limitless possibilities. This book is your gateway to the fascinating realm of ML—a journey that empowers you to harness the power of data, enabling you to build intelligent systems, make informed decisions, and explore the boundless possibilities of the digital age.
ML has emerged as the dominant approach for solving problems in the modern world, and its wide-ranging applications have made it an integral part of our lives. From search engines to social networking sites, everything is powered by ML algorithms. Your favorite search engine uses ML algorithms to return the most relevant results. Smart home assistants like Alexa and Siri use ML to serve us better. The influence of ML on our day-to-day activities is so pervasive that we often do not even notice it. Online shopping sites like Amazon, Flipkart, and Myntra use ML to recommend products. Facebook uses ML to curate our feeds, while Netflix and YouTube use ML to recommend videos based on our interests.
Data is growing exponentially with the Internet and smartphones, and ML has made this data more usable and meaningful. Social media, entertainment, travel, mining, medicine, bioinformatics, or any other field you could name uses ML in some form.
To understand the role of ML in the modern world, let us first discuss the applications of ML.
• To understand the concept of artificial neural network (ANN).
• To comprehend the working of the human brain as an inspiration for the development of neural networks.
• To understand the mapping of human brain neurons to an ANN.
• To understand the working of ANN with case studies.
• To understand the role of weights in building ANN.
• To perform forward and backward propagation to train neural networks.
• To understand different activation functions such as the threshold function, sigmoid function, rectified linear unit (ReLU) function, and hyperbolic tangent function (see the sketch after this list).
• To find the optimal weights that minimize the cost function by using the gradient descent approach and the stochastic gradient descent algorithm.
• To understand the concept of the mini-batch method.
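For readers who like to see definitions in code, the short NumPy sketch below implements the four activation functions named in the objectives above. It is an illustrative aid only; the exact formulations used later in the chapter may differ in detail.

```python
# A minimal NumPy sketch of the activation functions named in the objectives;
# the chapter's own definitions may differ in detail.
import numpy as np

def threshold(x):
    """Step function: 1 if the input is non-negative, else 0."""
    return np.where(x >= 0, 1.0, 0.0)

def sigmoid(x):
    """Squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Rectified linear unit: passes positive values, zeroes out the rest."""
    return np.maximum(0.0, x)

def tanh(x):
    """Hyperbolic tangent: squashes inputs into the range (-1, 1)."""
    return np.tanh(x)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])   # example pre-activation values
for fn in (threshold, sigmoid, relu, tanh):
    print(fn.__name__, fn(z))
```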
16.1 Introduction to Artificial Neural Network
Neural networks and deep learning are the buzzwords of modern-day computer science, and if you think they are recent entrants to the field, that is a misconception. Neural networks have been around for quite some time; they have only now started picking up, making a huge positive impact on computer science.
Artificial neural networks (ANNs) were invented in the 1960s and 1970s. They became a part of common tech talks, and people started thinking that this machine learning (ML) technique would solve all the complex problems that were challenging researchers at the time. But soon, those hopes and expectations died off over the next decade.
The decline could not be attributed to shortcomings in neural networks themselves; the major reason was the technology of the day. The technology back then was not up to the standard needed to support neural networks, which require a lot of data for training and huge computational resources for building a model. During that time, both data and computing power were scarce. Hence, neural networks remained largely on paper rather than taking center stage in solving real-world problems.
Later, at the beginning of the 21st century, storage techniques improved considerably, reducing the cost per gigabyte of storage, and humanity witnessed a huge rise in big data due to the Internet boom and smartphones.
• To implement the k-means clustering algorithm in Python.
• To determine the ideal number of clusters by implementing the corresponding code.
• To understand how to visualize clusters using plots.
• To create the dendrogram and find the optimal number of clusters for agglomerative hierarchical clustering.
• To compare results of k-means clustering with agglomerative hierarchical clustering.
• To implement clustering through various case studies.
13.1 Implementation of k-means Clustering and Hierarchical Clustering
In the previous chapter, we discussed various clustering algorithms. We learned that clustering algorithms are broadly classified into partitioning methods, hierarchical methods, and density-based methods. The k-means clustering algorithm follows the partitioning method; agglomerative and divisive algorithms follow the hierarchical method, while DBSCAN is based on the density-based method.
In this chapter, we will implement each of these algorithms through various case studies, following a step-by-step approach. You are advised to perform all of these steps on your own on the datasets stated in this chapter.
The k-means algorithm is a partitioning method and an unsupervised machine learning (ML) algorithm used to identify clusters of data items in a dataset. It is one of the most prominent ML algorithms, and its implementation in Python is quite straightforward. This chapter will consider three case studies: the mall customers dataset, the U.S. arrests dataset, and the popular Iris dataset. Through these case studies, we will understand the significance of the k-means clustering technique and implement it in Python. Along with clustering the data items, we will also discuss ways to find the optimal number of clusters. To compare the results of the k-means algorithm, we will also implement hierarchical clustering for these problems.
We will kick-start the implementation of the k-means algorithm in Spyder IDE using the following steps.
Step 1: Importing the libraries and the dataset—The dataset for the respective case study will be downloaded, and then the required libraries will be imported.
Step 2: Finding the optimal number of clusters—We will find the optimal number of clusters by the elbow method for the given dataset.
Step 3: Fitting k-means to the dataset—A k-means model will be prepared by training the model over the acquired dataset.
Step 4: Visualizing the clusters—The clusters formed by the k-means model will then be visualized in the form of scatter plots, as sketched in the code below.
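A minimal sketch of these four steps is shown below, using a synthetic dataset generated with scikit-learn in place of the downloaded case-study data; the number of clusters and other parameters are illustrative assumptions, not the chapter's exact code.

```python
# A minimal sketch of Steps 1-4 on synthetic data (the chapter's case-study
# datasets are not reproduced here); assumes scikit-learn and matplotlib.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Step 1: obtain a dataset (here, synthetic blobs instead of a downloaded file).
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=42)

# Step 2: elbow method -- plot within-cluster sum of squares (inertia) vs. k.
wcss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(km.inertia_)
plt.plot(range(1, 11), wcss, marker="o")
plt.xlabel("Number of clusters k")
plt.ylabel("WCSS (inertia)")
plt.title("Elbow method")
plt.show()

# Step 3: fit k-means with the number of clusters suggested by the elbow.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# Step 4: visualize the clusters and their centroids as a scatter plot.
plt.scatter(X[:, 0], X[:, 1], c=labels, s=20)
plt.scatter(*kmeans.cluster_centers_.T, c="red", marker="x", s=100)
plt.title("k-means clusters")
plt.show()
```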
• To comprehend the concept of association mining and its applications.
• To understand the role of support, confidence, and lift.
• To understand the naive algorithm for finding association rules, along with its limitations and improvements.
• To learn about different ways to store a transaction database.
• To understand and apply the Apriori algorithm to identify association rules.
14.1 Introduction to Association Rule Mining
Association rule mining is a rule-based technique to discover relations between the attributes of a dataset, for example, the relation between the sales of item X and item Y. It is often called “market basket” analysis, as shown in Figure 14.1: the market analyst examines the items that consumers often purchase together to find such relations.
In other words, when customers visit a store, they may buy certain items together during a shopping trip. For example, Figure 14.1 shows a database of customer transactions (e.g., shopping baskets), where each transaction consists of a set of items (e.g., products) purchased during a visit. Machine learning (ML) engineers can use association mining to find groups of items that are frequently purchased together; this is also referred to as an analysis of customer purchasing behavior. For example, “IF one buys bread, THEN there is a high probability of buying butter with it”, since people who buy bread often buy butter with it. The store manager can use this information to arrange the items accordingly, increasing sales and the overall efficiency of the store.
Let us consider a situation where the store manager feels that there is a lot of rush and customers always complain about the slow working of his store. Exploring ways to improve the store's efficiency, he performs an association analysis and prepares a list of associated items, such as bread and butter. He may decide to place these associated items on the same shelf or near each other so that customers can find them quickly, reducing their shopping time. This will also improve the overall efficiency of the store and the sales of the products. To further improve his customers' shopping experience, he can create combos of associated items and offer discounts on them.
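To make the bread-and-butter rule concrete, the short sketch below computes support, confidence, and lift for a hypothetical, made-up list of five transactions; the numbers are purely illustrative and are not drawn from the chapter.

```python
# A small illustrative computation (made-up transactions) of support,
# confidence, and lift for the rule "bread -> butter".
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "butter", "jam"},
    {"bread", "jam"},
    {"milk", "jam"},
]
n = len(transactions)

count_bread = sum("bread" in t for t in transactions)
count_butter = sum("butter" in t for t in transactions)
count_both = sum({"bread", "butter"} <= t for t in transactions)

support = count_both / n                 # fraction of baskets with bread and butter
confidence = count_both / count_bread    # P(butter | bread)
lift = confidence / (count_butter / n)   # confidence relative to P(butter); > 1 means positive association

print(f"support={support:.2f}, confidence={confidence:.2f}, lift={lift:.2f}")
```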
• To know the inspiration behind the genetic algorithm.
• To understand the concept of natural selection, recombination, and mutation.
• To understand the correlation between nature and genetic algorithm.
• To formulate the mathematical representation of genes and fitness.
• To implement natural selection through the roulette wheel method.
• To implement recombination or crossover.
• To implement the process of mutation.
• To understand elitism and its implementation.
• To discuss the advantages and disadvantages of genetic algorithms.
22.1 Intuition of Genetic Algorithm
The genetic algorithm (GA) is inspired by nature, and it plays a vital role in the field of machine learning (ML). It selects the best-optimized solution from all available candidate solutions. Just as nature selects the best possible candidates through the theory of evolution, the GA selects the best possible solution from the available solutions.
One application of GAs in ML is to select the global minimum from among all possible (local) minima by using natural selection. In earlier chapters, we learned that during the training of an artificial neural network, the main goal is to obtain the weights with the minimum cost function value. The gradient descent algorithm is commonly used to find a local minimum of the cost function, but we must find the global minimum to reach the optimal weights. A GA can be used to find the global minimum out of all available local minima or possible solutions. In this case, the set of possible local minima becomes the population of candidate solutions.
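As a preview of the mechanics discussed in this chapter, the minimal sketch below uses roulette-wheel selection, crossover, mutation, and elitism to minimize a simple multimodal function with several local minima. The cost function, population size, and rates are illustrative assumptions, not the chapter's implementation.

```python
# A minimal GA sketch (illustrative, not the chapter's implementation):
# roulette-wheel selection, crossover, mutation, and elitism are used to
# minimize a simple multimodal cost function.
import random
import math

def cost(x):
    """A multimodal cost function with several local minima."""
    return x * x + 10.0 * math.sin(3.0 * x)

def fitness(x):
    """Higher fitness for lower cost; cost(x) >= -10, so this stays positive."""
    return 1.0 / (21.0 + cost(x))

def roulette_select(population):
    """Pick one candidate with probability proportional to its fitness."""
    weights = [fitness(x) for x in population]
    return random.choices(population, weights=weights, k=1)[0]

def crossover(a, b):
    """Blend two parent 'genes' (real-valued crossover)."""
    w = random.random()
    return w * a + (1.0 - w) * b

def mutate(x, rate=0.1, scale=0.5):
    """Occasionally perturb a candidate to keep the population diverse."""
    return x + random.gauss(0.0, scale) if random.random() < rate else x

population = [random.uniform(-5.0, 5.0) for _ in range(30)]
for _ in range(100):
    # Elitism: always carry the current best candidate forward unchanged.
    best = min(population, key=cost)
    children = [best]
    while len(children) < len(population):
        child = crossover(roulette_select(population), roulette_select(population))
        children.append(mutate(child))
    population = children

print("Approximate global minimum at x =", round(min(population, key=cost), 3))
```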
In this chapter, we will discuss the inspiration from nature that is the main driving concept behind the working of GAs, along with their implementation. To get a good idea of the GA, we will discuss the basics of natural selection by revisiting the theory of evolution in the next section.
22.2 The Inspiration behind Genetic Algorithm
The concepts discussed in this chapter are also available in the form of the free online Udemy Course, Genetic Algorithm for Machine Learning by Parteek Bhatia,
The GA is one of the earliest and most well-regarded evolutionary algorithms in the computer science literature. John Holland, a researcher at the University of Michigan, proposed the algorithm in the 1970s, but it became popular in the 1990s.