The chapter introduces key codesign principles across multiple layers of the design stack, highlighting the need for cross-layer optimizations. Mitigation of various non-idealities stemming from emerging devices, such as device-to-device variations, cycle-to-cycle variations, conductance drift, and stuck-at faults, through algorithm–hardware codesign is discussed. Further, inspiration from the brain’s self-repair mechanism is utilized to design neuromorphic systems capable of autonomous self-repair. Finally, an end-to-end codesign approach is outlined by exploring synergies of event-driven hardware and algorithms with event-driven sensors, thereby leveraging maximal benefits of brain-inspired computing.
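As a concrete illustration of one such codesign strategy (a minimal sketch, not drawn from the chapter itself), the snippet below injects assumed device non-idealities, namely device-to-device spread, conductance drift, and stuck-at faults, into the weights seen during training so that the learned solution tolerates them. The helper name perturb_weights, the toy linear model, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative variation-aware training: perturb the weights seen in the
# forward pass so the solution is robust to analog device non-idealities.
rng = np.random.default_rng(0)

def perturb_weights(w, d2d_sigma=0.05, drift=0.02, stuck_fraction=0.01):
    """Apply illustrative device non-idealities to a weight matrix."""
    w_dev = w * (1.0 + d2d_sigma * rng.standard_normal(w.shape))  # device-to-device spread
    w_dev = w_dev * (1.0 - drift)                                  # uniform conductance drift
    stuck = rng.random(w.shape) < stuck_fraction                   # stuck-at-zero faults
    w_dev[stuck] = 0.0
    return w_dev

# Toy linear regression trained with noise injection (hypothetical data shapes).
x = rng.standard_normal((256, 8))
true_w = rng.standard_normal((8, 1))
y = x @ true_w

w, lr = np.zeros((8, 1)), 0.05
for _ in range(200):
    w_noisy = perturb_weights(w)              # forward pass sees the non-ideal weights
    grad = x.T @ (x @ w_noisy - y) / len(x)   # gradient evaluated at the noisy weights
    w -= lr * grad                            # update applied to the nominal weights
```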
This chapter provides a selection of problems relevant to the field of neuromorphic computing, which intersects materials science, electrical engineering, computer science, neural networks, and device design for realizing AI in hardware and algorithms. The problems emphasize the interdisciplinary nature of the field.
The chapter discusses concepts in plasticity that go beyond memory. Several examples are presented, starting with the complexity of dendritic structure in biological neurons, the nonlinear summation of synaptic signals by neurons, and the vast range of plasticity discovered in biological brain circuits. Learning and memory are commonly attributed to synapses; however, non-synaptic changes are important to consider for neuromorphic hardware and algorithms. The distinction between bioinspired and bio-realistic designs of hardware for AI is discussed. While synaptic connections can undergo both functional and structural plasticity, emulating such concepts in neuromorphic computing will require adaptive algorithms and semiconductors that can be dynamically reprogrammed. The need for close collaboration between the neuroscience and neuromorphic engineering communities is highlighted, along with methods to implement lifelong learning in algorithms and hardware. The chapter concludes with gaps in the field, directions for future research and development, and the prospects for energy-efficient neuromorphic computing enabled by disruptive brain-inspired algorithms and emerging semiconductors.
The chapter begins with a physical and mathematical description of the nonlinear dynamics seen in biological neurons and their adaptation into neuromorphic hardware. Various abstractions of the Hodgkin–Huxley model of the squid neuron have been studied in neuromorphic computing. Filamentary threshold switches that can act as neurons are discussed. The combination of ionic and electronic relaxation pathways offers unique abilities to design low-power artificial neurons. Ferroelectric, insulator–metal transition, 2D material, and organic semiconductor-based neurons are discussed, wherein modulation of long-range transport and/or bound-charge displacement is utilized for neuron function. Besides electron transport, light and spin state can also be effectively utilized to create photonic and spintronic neurons, respectively. The chapter provides the reader with comprehensive insight into the design of artificial neurons that can generate action potentials, spanning various classes of inorganic and organic semiconductors and different stimuli for input and readout of signals, such as voltage, light, spin current, and ionic currents.
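For readers who want a concrete starting point, the following minimal sketch implements a leaky integrate-and-fire neuron, one of the simplest abstractions of Hodgkin–Huxley-type dynamics. The time constant, threshold, and drive current are illustrative values, not taken from the chapter.

```python
import numpy as np

def lif_spike_train(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Integrate an input current trace and return the voltage trace and spike times."""
    v = v_reset
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)     # leaky integration of the input
        if v >= v_th:                   # threshold crossing -> action potential
            spikes.append(t * dt)
            v = v_reset                 # reset after the spike
        voltages.append(v)
    return np.array(voltages), spikes

current = np.full(500, 1.5)             # constant suprathreshold drive (illustrative)
v_trace, spike_times = lif_spike_train(current)
print(f"{len(spike_times)} spikes in {len(current) * 1e-3:.1f} s")
```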
The chapter begins with a discussion of standard mechanisms for training spiking neural networks, namely (a) unsupervised spike-timing-dependent plasticity, (b) backpropagation through time (BPTT) using surrogate gradient techniques, and (c) conversion from conventional analog non-spiking networks. Subsequently, various local learning algorithms with different degrees of locality are discussed that have the potential to replace computationally expensive global learning algorithms such as BPTT. The chapter concludes with pointers to several emerging research directions in the neuromorphic algorithms domain, including stochastic computing, lifelong learning, and dynamical system-based approaches, among others. Finally, we also underscore the need for hybrid neuromorphic algorithm design that combines principles of conventional deep learning while forging stronger connections with computational neuroscience.
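A minimal sketch of the surrogate-gradient idea behind option (b) is given below: the forward pass uses a hard threshold, while the backward pass substitutes a smooth surrogate derivative. The fast-sigmoid form and its slope are illustrative choices, not the chapter's specific prescription.

```python
import numpy as np

def spike_forward(v_minus_th):
    """Hard threshold: emit a spike when the membrane potential exceeds threshold."""
    return (v_minus_th > 0.0).astype(float)

def spike_surrogate_grad(v_minus_th, slope=10.0):
    """Smooth surrogate used in place of the ill-defined derivative of the threshold."""
    return 1.0 / (1.0 + slope * np.abs(v_minus_th)) ** 2

v = np.linspace(-1.0, 1.0, 5)
print(spike_forward(v))          # [0. 0. 0. 1. 1.]
print(spike_surrogate_grad(v))   # largest near the threshold, small far from it
```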
The chapter introduces fundamental principles of deep learning. We discuss supervised learning of feedforward neural networks by considering a binary classification problem. Gradient descent techniques and the backpropagation learning algorithm are introduced as means of training neural networks. The impact of neuron activations and of convolutional and residual network architectures on learning performance is discussed. Finally, regularization techniques such as batch normalization and dropout are introduced for improving the accuracy of trained models. The chapter is essential for connecting advances in conventional deep learning algorithms to neuromorphic concepts.
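The short example below illustrates the setting described above: gradient descent on a binary classification problem using logistic regression with the cross-entropy loss. The data, learning rate, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float)      # linearly separable toy labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))     # sigmoid activation
    grad_w = x.T @ (p - y) / len(x)            # gradient of cross-entropy w.r.t. w
    grad_b = np.mean(p - y)                    # gradient w.r.t. the bias
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```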
The chapter focuses on the network and architecture layers of the design stack, building on the device and circuit concepts introduced in Chapters 3 and 4. Architectural advantages stemming from neuromorphic models, such as address-event representation that leverages spiking sparsity, are discussed. Near-memory and in-memory architectures using CMOS implementations are discussed first, followed by several emerging technologies, namely correlated electron semiconductor-based devices, filamentary devices, organic devices, spintronic devices, and photonic neural networks.
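A toy sketch of address-event representation is shown below: a dense spike raster is reduced to a list of (time, neuron address) events, which is where the savings from spiking sparsity come from. The raster shape and firing rate are illustrative.

```python
import numpy as np

def to_aer(spike_raster):
    """Convert a (time, neuron) binary raster into a list of (time, address) events."""
    times, addresses = np.nonzero(spike_raster)
    return list(zip(times.tolist(), addresses.tolist()))

rng = np.random.default_rng(2)
raster = (rng.random((100, 64)) < 0.02).astype(int)   # ~2% of neurons active per step
events = to_aer(raster)
print(f"dense entries: {raster.size}, AER events: {len(events)}")
```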
Artificial intelligence is transforming industries and society, but its high energy demands challenge global sustainability goals. Biological intelligence, in contrast, offers both good performance and exceptional energy efficiency. Neuromorphic computing, a growing field inspired by the structure and function of the brain, aims to create energy-efficient algorithms and hardware by integrating insights from biology, physics, computer science, and electrical engineering. This concise and accessible book delves into the principles, mechanisms, and properties of neuromorphic systems. It opens with a primer on biological intelligence, describing learning mechanisms in both simple and complex organisms, then turns to the application of these principles and mechanisms in the development of artificial synapses and neurons, circuits, and architectures. The text also examines neuromorphic algorithm design and the unique challenges faced by algorithmic researchers working in this area. The book concludes with a selection of practice problems, with solutions available to instructors online.
Extreme fluctuations in oil prices (such as the dramatic fall from mid-2014 into 2015) raise important strategic questions for both importers and exporters. In this volume, specialists from the US, the Middle East, Europe and Asia examine the rapidly evolving dynamic in the energy landscape, including renewable and nuclear power, challenges to producers including the shale revolution, and legal issues. Each chapter provides in-depth analysis and clear policy recommendations.
Nonequilibrium steady states arise if a system is driven in a time-independent way. This can be realized through contact with particle reservoirs at different (electro)chemical potentials, as in enzymatic reactions and in transport through quantum dot structures. For molecular motors, an applied external force contributes to such external driving. Formally, such systems are described by a master equation with time-independent transition rates that are constrained by the local detailed balance relation. Characteristic of such systems are persistent probability currents. The stationary state is unique and can be obtained either through a graph-theoretic method or as an eigenvector of the generator. These systems have a constant rate of entropy production, and this entropy production fulfills a detailed fluctuation theorem. The thermodynamic uncertainty relation provides a lower bound on entropy production in terms of the mean and dispersion of any current in the system. An important classification distinguishes unicyclic from multicyclic systems. In particular for the latter, the concepts of cycles and their affinities are introduced and related to the macroscopic or physical affinities driving an engine. In the linear response regime, the Onsager coefficients are shown to obey a symmetry relation.
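The sketch below illustrates the eigenvector route for a hypothetical three-state jump process: the stationary distribution is obtained as the null eigenvector of the generator, and the steady-state entropy production rate follows from the resulting probability currents. The rate matrix is an arbitrary choice that violates detailed balance; it is not taken from the chapter.

```python
import numpy as np

k = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])          # k[i, j]: rate for the jump i -> j (illustrative)

L = k.copy()
np.fill_diagonal(L, -k.sum(axis=1))      # generator: rows sum to zero

evals, evecs = np.linalg.eig(L.T)
p = np.real(evecs[:, np.argmin(np.abs(evals))])
p /= p.sum()                             # stationary distribution (null eigenvector)

sigma = 0.0                              # entropy production rate in units of k_B
for i in range(3):
    for j in range(3):
        if i != j:
            sigma += p[i] * k[i, j] * np.log(p[i] * k[i, j] / (p[j] * k[j, i]))
print(f"stationary p = {p}, entropy production rate = {sigma:.3f}")
```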
This chapter starts with a discussion of simple univariate chemical reaction networks, emphasizing the need to impose thermodynamically consistent reaction rates. For a linear reaction scheme, the stationary distribution is given analytically by a Poisson distribution. Nonlinear schemes can lead to bistability. For large systems, the stationary solution can be expressed through an effective potential. Two types of Fokker–Planck descriptions are shown to fail in certain regimes. In the thermodynamic limit, the dynamics can be described by a simple rate equation. Entropy production is discussed at the various levels of description. A simple two-dimensional scheme, the Brusselator, can lead to persistent oscillations. Heat and entropy production are identified for an individual reaction event of a general multivariate reaction scheme.
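As a quick illustration of the rate-equation level of description, the sketch below integrates the deterministic Brusselator equations with a simple Euler scheme for parameters (chosen here, not in the chapter) that place the system in the oscillatory regime.

```python
# Deterministic Brusselator: dx/dt = a - (b+1)x + x^2 y, dy/dt = b x - x^2 y.
# For b > 1 + a^2 the fixed point is unstable and the dynamics oscillates.
a, b = 1.0, 3.0                          # illustrative parameters, b > 1 + a^2
x, y = 1.0, 1.0
dt, steps = 1e-3, 50_000
trajectory = []
for _ in range(steps):
    dx = a - (b + 1.0) * x + x * x * y
    dy = b * x - x * x * y
    x += dt * dx
    y += dt * dy
    trajectory.append((x, y))

print("final (x, y):", trajectory[-1])
```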
Rare or extreme fluctuations beyond the Gaussian regime are treated through large deviation theory for the nonequilibrium steady state of discrete systems and of systems with Langevin dynamics. For both classes, we first develop the spectral approach that yields the scaled cumulant-generating function for state observables and currents in terms of the largest eigenvalue of the tilted generator. Second, we introduce the rate function at level 2.5, which can be determined exactly. Contractions then lead to bounds on the rate function for state observables or currents. Specialized to equilibrium, explicit results are obtained. As a general result, the rate function for any current is shown to be bounded by a quadratic function, which implies the thermodynamic uncertainty relation.
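The spectral approach can be illustrated on a hypothetical driven ring: the generator is tilted by a counting field for the net current, and the scaled cumulant-generating function is read off as the largest eigenvalue of the tilted matrix. The rates below are arbitrary, and the mean current is recovered from the derivative of the SCGF at zero tilt.

```python
import numpy as np

k_plus, k_minus = 2.0, 0.5               # clockwise and counter-clockwise rates (illustrative)

def scgf(s, n=3):
    """Largest eigenvalue of the tilted generator, counting the net clockwise current."""
    L = np.zeros((n, n))
    for i in range(n):
        L[i, (i + 1) % n] = k_plus * np.exp(s)    # forward jump, weighted by e^s
        L[i, (i - 1) % n] = k_minus * np.exp(-s)  # backward jump, weighted by e^-s
        L[i, i] = -(k_plus + k_minus)             # escape rate on the diagonal
    return np.max(np.real(np.linalg.eigvals(L)))

# Mean current from the first derivative of the SCGF at s = 0 (finite difference).
h = 1e-5
mean_current = (scgf(h) - scgf(-h)) / (2 * h)
print(f"mean net current per unit time: {mean_current:.3f}")   # ~ k_plus - k_minus
```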
The efficiency of classical heat engines is bounded by the Carnot efficiency, which is reached only in the quasistatic limit of vanishing power. Efficiency at maximum power is often related to the Curzon–Ahlborn efficiency. As a paradigm for a periodic stochastic heat engine, a Brownian particle in a harmonic potential is sequentially coupled to two heat baths. For a simple steady-state heat engine, a two-state model coupled permanently to two heat baths leads to transport against an external force or against an imposed electrochemical potential. Affinities and Onsager coefficients in the linear response regime are determined. The identification of exchanged heat in the presence of particle transport is shown to be somewhat ambiguous.
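For orientation, the snippet below evaluates the two benchmarks named above, the Carnot bound and the Curzon–Ahlborn efficiency at maximum power, for an arbitrary illustrative pair of bath temperatures.

```python
import numpy as np

t_cold, t_hot = 300.0, 600.0                     # illustrative bath temperatures (K)
eta_carnot = 1.0 - t_cold / t_hot                # Carnot bound
eta_curzon_ahlborn = 1.0 - np.sqrt(t_cold / t_hot)  # efficiency at maximum power
print(f"Carnot: {eta_carnot:.3f}, Curzon-Ahlborn: {eta_curzon_ahlborn:.3f}")
# Carnot: 0.500, Curzon-Ahlborn: 0.293
```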