Knowledge-infused learning directly confronts the opacity of current 'black-box' AI models by combining data-driven machine learning techniques with the structured insights of symbolic AI. This guidebook introduces the pioneering techniques of neurosymbolic AI, which blends statistical models with symbolic knowledge to make AI safer and explainable to users. This is critical for high-stakes AI applications in healthcare, law, finance, and crisis management. The book brings readers up to speed on advancements in statistical AI, including transformer models such as BERT and GPT, and provides a comprehensive overview of weakly supervised, distantly supervised, and unsupervised learning methods alongside their knowledge-enhanced variants. Other topics include active learning, zero-shot learning, and model fusion. Beyond theory, the book presents practical considerations and applications of neurosymbolic AI in conversational systems, mental health, crisis management systems, and the social and behavioral sciences, making it a pragmatic reference for AI system designers in academia and industry.
In recent years, speech recognition devices have become central to our everyday lives. Systems such as Siri, Alexa, speech-to-text, and automated telephone services are built by people applying expertise in sound structure and natural language processing to create computer programmes that can recognise and understand speech. This exciting advancement has led to rapid growth in speech technology courses within linguistics programmes; however, there has so far been a lack of material serving the needs of students with limited or no background in computer science or mathematics. This textbook addresses that need by providing an accessible introduction to the fundamentals of computer speech synthesis and automatic speech recognition technology, covering both neural and non-neural approaches. It explains the basic concepts in non-technical language and provides step-by-step explanations of each formula, practical activities, and ready-made code for students to use, which is also available on an accompanying website.
The chapter begins with a discussion of intelligence in simple unicellular organisms, followed by that of animals with complex nervous systems. Surprisingly, even organisms without a central brain can navigate their complex environments, forage, and learn. In organisms with a central nervous system, neurons and synapses in the brain provide the elementary basis of intelligence and memory. Neurons generate action potentials that represent information. Synapses hold memory and control signal transmission between neurons. A key feature of biological neural circuits is plasticity, that is, their ability to modify circuit properties based on both stimuli and the time intervals between them. This represents one form of learning. The biological brain is not static but continuously evolves based on experience. The field of AI seeks to learn from biological neural circuitry, emulate aspects of intelligence and learning, and build physical devices and algorithms that demonstrate features of animal intelligence. Neuromorphic computing therefore requires a paradigm shift in semiconductor design as well as algorithmic foundations that are built not for perfection but for learning.
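The timing-dependent plasticity described above is commonly illustrated with the pair-based spike-timing-dependent plasticity (STDP) rule, in which the sign and magnitude of a weight change depend on the interval between pre- and postsynaptic spikes. The sketch below is a minimal illustrative model, not code from the chapter; the constants A_PLUS, A_MINUS, and TAU are assumed values.

```python
import math

# Pair-based STDP: the sign of the weight change depends on the
# relative timing of pre- and postsynaptic spikes (dt = t_post - t_pre).
A_PLUS, A_MINUS = 0.01, 0.012   # illustrative learning rates (assumed)
TAU = 20.0                      # plasticity time constant in ms (assumed)

def stdp_update(w, t_pre, t_post, w_max=1.0):
    """Return the synaptic weight after one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post: potentiation
        w += A_PLUS * math.exp(-dt / TAU)
    else:        # post before pre: depression
        w -= A_MINUS * math.exp(dt / TAU)
    return min(max(w, 0.0), w_max)  # clip to the allowed weight range

# Example: a pre-spike at 10 ms followed by a post-spike at 15 ms
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # slightly potentiated
```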
This chapter offers an in-depth discussion of various nanoelectronic and nanoionic synapses, along with their operational mechanisms, capabilities and limitations, and directions for further advancement of the field. We begin with overarching mechanisms for designing artificial synapses and learning characteristics for neuromorphic computing. Silicon-based synapses using digital CMOS platforms are described, followed by emerging device technologies. Filamentary synapses, which utilize nanoscale conducting pathways to form and break current-shunting routes within two-terminal devices, are then discussed. This is followed by ferroelectric devices, wherein the polarization states of a switchable ferroelectric layer are responsible for synaptic plasticity and memory. Insulator–metal transition-based synapses are described, wherein a sharp change in the conductance of a layer under external stimulus offers a route to compact synapse design. Organic materials, 2D van der Waals materials, and layered semiconductors are discussed. Ionic liquids and solid gate dielectrics for multistate memory and learning are presented. Photonic and spintronic synapses are then discussed in detail.
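Across these device families, an analog synapse can often be summarized by a simple behavioral abstraction: programming pulses move a conductance between two bounds, with updates that shrink as the device saturates. The sketch below is an illustrative abstraction of that common behavior, not a model of any particular device in the chapter; G_MIN, G_MAX, and ALPHA are assumed parameters.

```python
# Minimal behavioral model of a two-terminal analog synapse:
# programming pulses nudge the conductance G between G_MIN and G_MAX,
# with updates that shrink near the bounds (a common nonlinearity).
G_MIN, G_MAX = 1e-6, 1e-4   # conductance bounds in siemens (assumed)
ALPHA = 0.05                # fraction of remaining range moved per pulse

def apply_pulse(G, potentiate=True):
    """Apply one SET (potentiating) or RESET (depressing) pulse."""
    if potentiate:
        return G + ALPHA * (G_MAX - G)   # update shrinks near G_MAX
    return G - ALPHA * (G - G_MIN)       # update shrinks near G_MIN

G = G_MIN
for _ in range(20):          # a train of SET pulses
    G = apply_pulse(G, potentiate=True)
print(f"Conductance after 20 SET pulses: {G:.3e} S")
```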
The chapter introduces key codesign principles across multiple layers of the design stack, highlighting the need for cross-layer optimizations. The mitigation of various non-idealities stemming from emerging devices, such as device-to-device variations, cycle-to-cycle variations, conductance drift, and stuck-at faults, through algorithm–hardware codesign is discussed. Further, inspiration from the brain's self-repair mechanisms is used to design neuromorphic systems capable of autonomous self-repair. Finally, an end-to-end codesign approach is outlined by exploring synergies of event-driven hardware and algorithms with event-driven sensors, thereby reaping the maximal benefits of brain-inspired computing.
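One widely used codesign idea is to expose a network to emulated device non-idealities during training so that the learned weights tolerate them at deployment. The sketch below perturbs a weight matrix the way an analog array might: multiplicative device-to-device variation plus stuck-at faults. It is a schematic illustration, not the chapter's specific method; sigma and p_stuck are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_weights(W, sigma=0.1, p_stuck=0.01):
    """Return a noisy copy of W emulating programming variation and faults."""
    W_noisy = W * rng.normal(1.0, sigma, size=W.shape)  # device-to-device spread
    stuck = rng.random(W.shape) < p_stuck               # random fault locations
    W_noisy[stuck] = 0.0                                # stuck-at-zero cells
    return W_noisy

# Noise-aware training would call perturb_weights() inside each forward pass,
# so gradients are computed against the non-ideal, rather than ideal, weights.
W = rng.normal(0.0, 0.5, size=(4, 4))
print(perturb_weights(W))
```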
This chapter provides a selection of problems relevant to the field of neuromorphic computing, which intersects materials science, electrical engineering, computer science, neural networks, and device design for realizing AI in hardware and algorithms. The interdisciplinary nature of neuromorphic computing is apparent throughout.
The chapter discusses concepts in plasticity that go beyond memory. Several examples are presented, starting with the complexity of dendritic structure in biological neurons, the nonlinear summation of synaptic signals by neurons, and the vast range of plasticity that has been discovered in biological brain circuits. Learning and memory are commonly assigned to synapses; however, non-synaptic changes are important to consider for neuromorphic hardware and algorithms. The distinction between bioinspired and bio-realistic designs of hardware for AI is discussed. While synaptic connections can undergo both functional and structural plasticity, emulating such concepts in neuromorphic computing will require adaptive algorithms and semiconductors that can be dynamically reprogrammed. The necessity for close collaboration between the neuroscience and neuromorphic engineering communities is highlighted. Methods to implement lifelong learning in algorithms and hardware are described, gaps in the field and directions for future research and development are identified, and the prospects for energy-efficient neuromorphic computing with disruptive brain-inspired algorithms and emerging semiconductors are assessed.
The chapter begins with the physics and mathematical description of the nonlinear dynamics seen in biological neurons and their adaptation into neuromorphic hardware. Various abstractions of the Hodgkin–Huxley model of the squid neuron have been studied in neuromorphic computing. Filamentary threshold switches that can act as neurons are discussed. The combination of ionic and electronic relaxation pathways offers unique abilities to design low-power artificial neurons. Ferroelectric, insulator–metal transition, 2D material, and organic semiconductor-based neurons are discussed, wherein the modulation of long-range transport and/or bound-charge displacement is utilized for neuron function. Besides electron transport, light and spin states can also be effectively utilized to create photonic and spintronic neurons, respectively. The chapter should provide the reader with comprehensive insight into the design of artificial neurons that can generate action potentials, spanning various classes of inorganic and organic semiconductors and different stimuli for the input and readout of signals, such as voltage, light, spin current, and ionic current.
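The most common abstraction of the Hodgkin–Huxley dynamics in neuromorphic systems is the leaky integrate-and-fire (LIF) neuron, which reduces the full conductance-based model to a single membrane equation plus a threshold. The sketch below is a minimal illustration under assumed, normalized parameters (DT, TAU_M, and the threshold values), not a model taken from the chapter.

```python
# Leaky integrate-and-fire (LIF) neuron: the membrane potential integrates
# input current, leaks toward rest, and spikes on crossing a threshold.
DT, TAU_M = 0.1, 10.0                    # time step and membrane time constant (ms)
V_REST, V_TH, V_RESET = 0.0, 1.0, 0.0    # normalized potentials (assumed)

def simulate_lif(I, n_steps=1000):
    """Simulate a LIF neuron driven by a constant input current I."""
    v, spikes = V_REST, []
    for step in range(n_steps):
        v += DT / TAU_M * (-(v - V_REST) + I)   # leaky integration
        if v >= V_TH:                           # threshold crossing
            spikes.append(step * DT)            # record spike time in ms
            v = V_RESET                         # reset after the spike
    return spikes

print(f"{len(simulate_lif(I=1.5))} spikes in 100 ms at I = 1.5")
```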
The chapter begins with a discussion of standard mechanisms for training spiking neural networks, ranging from (a) unsupervised spike-timing-dependent plasticity to (b) backpropagation through time (BPTT) using surrogate gradient techniques and (c) conversion from conventional analog non-spiking networks. Subsequently, various local learning algorithms with different degrees of locality are discussed that have the potential to replace computationally expensive global learning algorithms such as BPTT. The chapter concludes with pointers to several emerging research directions in the neuromorphic algorithms domain, ranging from stochastic computing and lifelong learning to dynamical systems-based approaches, among others. Finally, we underscore the need for hybrid neuromorphic algorithm design that combines principles of conventional deep learning while forging stronger connections with computational neuroscience.
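The surrogate gradient technique in (b) can be illustrated concisely: the forward pass keeps the non-differentiable hard threshold, while the backward pass substitutes a smooth approximate derivative so BPTT can propagate gradients through spikes. Below is a minimal PyTorch sketch using a fast-sigmoid surrogate; the slope parameter of 10 is an assumed value.

```python
import torch

# Forward: hard threshold (non-differentiable). Backward: smooth surrogate
# derivative, so gradients can flow through the spiking nonlinearity.
class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()        # spike if membrane potential >= threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative (slope 10 is an assumption)
        surrogate = 1.0 / (10.0 * v.abs() + 1.0) ** 2
        return grad_output * surrogate

v = torch.tensor([-0.2, 0.1, 0.5], requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(spikes, v.grad)   # binary spikes forward, smooth gradients backward
```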
The chapter introduces fundamental principles of deep learning. We discuss supervised learning of feedforward neural networks by considering a binary classification problem. Gradient descent techniques and the backpropagation learning algorithm are introduced as means of training neural networks. The impact of neuron activations and of convolutional and residual network architectures on learning performance is discussed. Finally, regularization techniques such as batch normalization and dropout are introduced for improving the accuracy of trained models. The chapter is essential for connecting advances in conventional deep learning algorithms to neuromorphic concepts.
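To make the training loop concrete, the sketch below trains a small two-layer network on a toy binary classification task using full-batch gradient descent, with the backpropagation updates derived by hand. It is a minimal illustrative example, not code from the chapter; the architecture, learning rate, and toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification: label is the sign of the product of coordinates.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.5

for epoch in range(2000):
    # Forward pass: tanh hidden layer, sigmoid output probability
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass (binary cross-entropy loss), via the chain rule
    d_out = (p - y) / len(X)                 # gradient at the output logit
    d_h = (d_out @ W2.T) * (1.0 - h**2)      # backprop through tanh
    # Gradient-descent parameter updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
print("training accuracy:", ((p > 0.5) == y).mean())
```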
The chapter focuses on the network and architecture layers of the design stack, building up from the device and circuit concepts introduced in Chapters 3 and 4. Architectural advantages stemming from neuromorphic models, such as address-event representation, which leverages spiking sparsity, are discussed. Near-memory and in-memory architectures using CMOS implementations are discussed first, followed by several emerging technologies, namely correlated-electron semiconductor devices, filamentary devices, organic devices, spintronic devices, and photonic neural networks.
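Address-event representation can be sketched in a few lines: instead of sampling every neuron at every time step, only the addresses of neurons that actually spike are transmitted, so communication cost scales with spike activity rather than network size. The example below is illustrative; the raster dimensions, spike probability, and bit widths are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense spike raster: 1000 time steps x 256 neurons, ~2% spike probability
raster = rng.random((1000, 256)) < 0.02

# AER encoding: keep only (timestamp, neuron address) pairs for spikes
t_idx, n_idx = np.nonzero(raster)
events = list(zip(t_idx.tolist(), n_idx.tolist()))

dense_bits = raster.size                # one bit per neuron per time step
aer_bits = len(events) * (10 + 8)       # 10-bit timestamp + 8-bit address
print(f"{len(events)} events; dense {dense_bits} bits vs AER {aer_bits} bits")
```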
Artificial intelligence is transforming industries and society, but its high energy demands challenge global sustainability goals. Biological intelligence, in contrast, offers both strong performance and exceptional energy efficiency. Neuromorphic computing, a growing field inspired by the structure and function of the brain, aims to create energy-efficient algorithms and hardware by integrating insights from biology, physics, computer science, and electrical engineering. This concise and accessible book delves into the principles, mechanisms, and properties of neuromorphic systems. It opens with a primer on biological intelligence, describing learning mechanisms in both simple and complex organisms, then turns to the application of these principles and mechanisms in the development of artificial synapses and neurons, circuits, and architectures. The text also explores neuromorphic algorithm design and the unique challenges faced by algorithmic researchers working in this area. The book concludes with a selection of practice problems, with solutions available to instructors online.
Designed for educators, researchers, and policymakers, this insightful book equips readers with practical strategies, critical perspectives, and ethical insights into integrating AI in education. First published in Swedish in 2023, and here translated, updated, and adapted for an English-speaking international audience, it provides a user-friendly guide to the digital and AI-related challenges and opportunities in today's education systems. Drawing upon cutting-edge research, Thomas Nygren outlines how technology can be usefully integrated into education, not as a replacement for humans but as a tool that supports and reinforces students' learning. Written in accessible language, the book covers topics including AI literacy, source awareness, and subject-specific opportunities. The central role of the teacher is emphasized throughout, as is the importance of thoughtful engagement with technology. By guiding the reader through the fast-evolving digital transformation in education globally, it ultimately helps students become informed participants in the digital world.