This chapter introduces unsupervised learning, where algorithms analyze data without predefined labels or target outcomes. It covers two main clustering approaches: agglomerative clustering (a bottom-up approach that merges similar data points) and divisive clustering (a top-down approach, exemplified by the k-means algorithm, which partitions data into k groups by minimizing distances to centroids).
The chapter explains the Expectation Maximization (EM) algorithm for handling incomplete data and finding maximum-likelihood parameters in statistical models. It also includes a section on reinforcement learning, where agents learn optimal actions through trial-and-error interactions with an environment to maximize rewards.
Key topics include distance matrices, dendrograms, cluster evaluation metrics (AIC, BIC), and practical applications. The chapter emphasizes the artistic nature of unsupervised learning, requiring careful design decisions about thresholds, cluster numbers, and technique selection. Hands-on R examples demonstrate each method using real datasets.
Regular inspections of civil structures and infrastructure, performed by professional inspectors, are costly and demanding in terms of time and safety requirements. Additionally, the outcome of inspections can be subjective and inaccurate, as it relies on the inspector’s expertise. To address these challenges, autonomous inspection systems offer a promising alternative. However, existing robotic inspection systems often lack adaptive positioning capabilities and integrated crack labeling, limiting detection accuracy and their contribution to long-term dataset improvement. This study introduces a fully autonomous framework that combines real-time crack detection with adaptive pose adjustment, automated recording and labeling of defects, and integration of RGB-D and LiDAR sensing for precise navigation. Damage detection is performed using YOLOv5, a widely used detection model, which analyzes the RGB image stream to detect cracks and generates labels for dataset creation. The robot autonomously adjusts its position based on confidence feedback from the detection algorithm, optimizing its vantage point for improved detection accuracy. Experimental inspections showed an average confidence gain of 18% (exceeding 20% for certain crack types), a reduction in size-estimation error from 23.31% to 10.09%, and a decrease in the detection failure rate from 20% to 6.66%. While quantitative validation during field testing proved challenging due to dynamic environmental conditions, qualitative observations aligned with these trends, suggesting the framework’s potential to reduce manual intervention in inspections. Moreover, the system enables automated recording and labeling of detected cracks, contributing to the continuous improvement of machine learning models for structural health monitoring.
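The confidence-feedback idea can be sketched as a greedy pose search. This is only an illustration, not the study’s implementation: `detect` below is a hypothetical stand-in for a YOLOv5 inference call, with a simulated confidence surface that peaks at pose 0.0.

```python
def detect(pose):
    # Hypothetical stand-in for a YOLOv5 inference call: returns a
    # detection confidence in [0, 1]; here confidence peaks at pose 0.0.
    return max(0.0, 1.0 - abs(pose) / 2.0)

def adjust_pose(pose, step=0.1, max_iters=20, target=0.9):
    # Greedy hill-climb on detection confidence: probe a small move in
    # each direction and keep whichever raises confidence, stopping at
    # the target confidence or when neither move helps.
    conf = detect(pose)
    for _ in range(max_iters):
        if conf >= target:
            break
        best_pose, best_conf = pose, conf
        for candidate in (pose + step, pose - step):
            c = detect(candidate)
            if c > best_conf:
                best_pose, best_conf = candidate, c
        if best_conf <= conf:  # no improvement in either direction
            break
        pose, conf = best_pose, best_conf
    return pose, conf

pose, conf = adjust_pose(1.0)  # starts at a poor vantage point
```

A real system would replace `detect` with model inference on the live RGB stream and move the robot in 3-D rather than along one axis.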
The chapter begins with a discussion of standard mechanisms for training spiking neural networks: (a) unsupervised spike-timing-dependent plasticity (STDP), (b) backpropagation through time (BPTT) using surrogate-gradient techniques, and (c) conversion from conventional analog non-spiking networks. Subsequently, various local learning algorithms with different degrees of locality are discussed that have the potential to replace computationally expensive global learning algorithms such as BPTT. The chapter concludes with pointers to several emerging research directions in the neuromorphic algorithms domain, including stochastic computing, lifelong learning, and dynamical-systems-based approaches. Finally, we underscore the need for hybrid neuromorphic algorithm design that combines principles of conventional deep learning while forging stronger connections with computational neuroscience.
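To make the STDP mechanism concrete, here is a minimal pair-based weight-update sketch; the learning rates and time constant are illustrative choices, not values from the chapter.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    # Pair-based STDP (illustrative constants): potentiate when the
    # presynaptic spike precedes the postsynaptic one, depress otherwise,
    # with exponentially decaying magnitude in the spike-time difference.
    dt = t_post - t_pre
    if dt > 0:                      # pre before post -> potentiation
        dw = a_plus * math.exp(-dt / tau)
    else:                           # post before (or with) pre -> depression
        dw = -a_minus * math.exp(dt / tau)
    return min(w_max, max(w_min, w + dw))  # clip to the allowed range

w_pot = stdp_update(0.5, t_pre=10.0, t_post=15.0)  # causal pair
w_dep = stdp_update(0.5, t_pre=15.0, t_post=10.0)  # anti-causal pair
```

Because the rule depends only on local spike times and the local weight, it needs none of the global error transport that BPTT requires.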
The chapter introduces fundamental principles of deep learning. We discuss supervised learning of feedforward neural networks by considering a binary classification problem. Gradient descent techniques and backpropagation learning algorithms are introduced as means of training neural networks. The impact of neuron activations and of convolutional and residual network architectures on learning performance is discussed. Finally, regularization techniques such as batch normalization and dropout are introduced for improving the accuracy of trained models. The chapter is essential to connect advances in conventional deep learning algorithms to neuromorphic concepts.
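As a sketch of gradient-descent training on a binary classification problem (a plain logistic-regression toy, not the chapter’s network code), with all data and hyperparameters invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=500):
    # Batch gradient descent on the cross-entropy loss for a 1-D binary
    # classifier y ≈ sigmoid(w*x + b); (sigmoid(z) - y) is dLoss/dz.
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy separable data: class 0 below zero, class 1 above zero
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

Backpropagation generalizes the `err * x` step here to many layers by applying the chain rule layer by layer.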
This chapter explores the fundamentals of data in data science, covering data types (structured vs. unstructured), collection sources (open data, social media APIs, multimodal data, synthetic data), and storage formats (CSV, TSV, XML, RSS, JSON). It emphasizes the critical importance of data pre-processing, including data cleaning (handling missing values, smoothing noisy data, data munging), integration, transformation, reduction, and discretization. Through hands-on examples, the chapter demonstrates how to systematically prepare "dirty" real-world data for analysis by addressing inconsistencies, outliers, and missing information. The chapter highlights that data preparation is often half the battle in data science, requiring both technical skills and careful attention to data quality and bias.
This chapter introduces machine learning as a subset of artificial intelligence that enables computers to learn from data without explicit programming. It defines machine learning using Tom Mitchell’s formal framework and explores practical applications like self-driving cars, optical character recognition, and recommendation systems. The chapter focuses on regression as a fundamental machine learning technique, explaining linear regression for modeling relationships between variables. A key section covers gradient descent, an optimization algorithm that iteratively finds the best model parameters by minimizing error functions. Through hands-on Python examples, students learn to implement both linear regression and gradient descent algorithms, visualizing how models improve over iterations. The chapter emphasizes practical considerations for choosing appropriate algorithms, including accuracy, training time, linearity assumptions, and the number of parameters, preparing students for more advanced supervised and unsupervised learning techniques.
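The linear-regression-by-gradient-descent idea can be sketched in a few lines of Python. This is an illustrative toy with made-up data, not the chapter’s exact code: it fits y = m·x + c by descending the mean-squared-error surface.

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    # Gradient descent on mean squared error for y = m*x + c:
    # each epoch takes one step against the averaged gradient.
    m = c = 0.0
    n = len(xs)
    for _ in range(epochs):
        gm = gc = 0.0
        for x, y in zip(xs, ys):
            err = (m * x + c) - y     # residual for this point
            gm += 2 * err * x / n     # d(MSE)/dm contribution
            gc += 2 * err / n         # d(MSE)/dc contribution
        m -= lr * gm
        c -= lr * gc
    return m, c

# Noise-free points on the line y = 3x + 1, so the fit should recover it
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
m, c = fit_line(xs, ys)
```

Plotting the loss after each epoch would reproduce the chapter’s visualization of the model improving over iterations.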
This chapter focuses on data collection methods, analysis approaches, and evaluation techniques in data science. It covers various data collection methods including surveys (with different question types like multiple-choice, Likert scales, and open-ended questions), interviews, focus groups, diary studies, and user studies in lab and field settings.
The chapter distinguishes between quantitative methods (using numerical measurements and statistical analysis) and qualitative methods (observing behaviors, attitudes, and opinions through techniques like grounded theory and constant comparison). It also discusses mixed-method approaches that combine both methodologies.
For evaluation, the chapter explains model comparison metrics including precision, recall, F-measure, ROC curves, AIC, and BIC. It covers validation techniques like training-testing splits, A/B testing, and cross-validation methods. The chapter emphasizes that data science involves pre-data collection planning and post-analysis evaluation, not just data processing.
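The comparison metrics reduce to simple counts over a confusion matrix. A minimal Python sketch (the label vectors are made-up examples):

```python
def prf(y_true, y_pred):
    # Precision, recall, and F-measure for binary labels (1 = positive).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)   # harmonic mean of P and R
    return precision, recall, f1

p, r, f = prf([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

In a cross-validation setting these numbers would be averaged over folds rather than computed on a single split.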
This chapter introduces Python as a powerful yet beginner-friendly programming language essential for data science. It covers getting access to Python through direct installation or integrated development environments like Anaconda and Spyder. The chapter teaches fundamental programming concepts including basic operations, data types, and key data structures (lists, tuples, dictionaries, sets, and DataFrames). Students learn to write control structures using if-else statements and while/for loops, create reusable functions, and make programs interactive through user input. The chapter also explains how to install and use Python packages, which extend the language’s capabilities for specialized tasks. Throughout, practical examples demonstrate concepts like leap year calculations, temperature categorization, and sales data analysis. The chapter emphasizes Python’s accessibility, extensive package ecosystem, and suitability for data science applications, positioning it as an ideal tool for solving computational and data analysis problems.
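The leap-year calculation mentioned above can be written as a small function combining the control structures the chapter teaches (a plausible rendition, not necessarily the chapter’s exact code):

```python
def is_leap_year(year):
    # Gregorian rule: divisible by 4, except century years,
    # unless the year is also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

results = {y: is_leap_year(y) for y in (1900, 2000, 2023, 2024)}
```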
This chapter covers unsupervised learning, where algorithms analyze data without known true labels or outcomes. Unlike supervised learning, the goal is to discover hidden patterns and structures in data.
The chapter explores three main techniques. Agglomerative clustering works bottom-up, starting with individual data points and merging similar ones into larger clusters. Divisive clustering (including k-means) takes a top-down approach, splitting data into smaller groups. Both methods use distance matrices and dendrograms to visualize cluster relationships.
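A minimal k-means sketch for 1-D data may help fix the idea; the chapter’s hands-on examples use R, so this Python version is purely illustrative, with invented data forming two obvious groups.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Lloyd's algorithm: assign each point to its nearest centroid,
    # then move each centroid to the mean of its assigned points.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)   # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]   # two well-separated groups
centers = kmeans(data, k=2)
```

The choice of k is exactly the kind of design decision the chapter flags as part of the craft of unsupervised learning.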
Expectation Maximization (EM) handles incomplete data by iteratively estimating missing parameters using maximum likelihood estimation. Model quality is assessed using AIC and BIC criteria.
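Both criteria are one-line formulas; a small sketch with made-up log-likelihoods shows how the parameter penalties trade off against fit:

```python
import math

def aic(log_likelihood, k):
    # Akaike information criterion: 2k - 2 ln L (lower is better).
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian information criterion: k ln n - 2 ln L (lower is better).
    return k * math.log(n) - 2 * log_likelihood

# Two hypothetical fits of the same n = 100 observations: model A has
# 3 parameters and log-likelihood -120; model B has 10 parameters and -118.
aic_a, aic_b = aic(-120, 3), aic(-118, 10)
bic_a, bic_b = bic(-120, 3, 100), bic(-118, 10, 100)
```

Here the simpler model A wins under both criteria despite its slightly lower likelihood, and BIC’s sample-size-dependent penalty widens the gap.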
The chapter also introduces reinforcement learning, where agents learn optimal actions through trial-and-error interactions with environments, receiving rewards or penalties. Applications include robotics, gaming, and autonomous systems. Throughout, the chapter emphasizes the creative, interpretive nature of unsupervised learning compared to more structured supervised approaches.
The chapter focuses on the network and architecture layers of the design stack, building up from the device and circuit concepts introduced in Chapters 3 and 4. Architectural advantages of neuromorphic models, such as address-event representation, which leverages spiking sparsity, are discussed. Near-memory and in-memory architectures using CMOS implementations are discussed first, followed by several emerging technologies, namely correlated-electron semiconductor-based devices, filamentary devices, organic devices, spintronic devices, and photonic neural networks.