
7 - Neuromorphic Algorithms

Published online by Cambridge University Press

Shriram Ramanathan
Affiliation: Rutgers University, New Jersey

Abhronil Sengupta
Affiliation: Pennsylvania State University

Summary

The chapter begins with a discussion of standard mechanisms for training spiking neural networks: (a) unsupervised spike-timing-dependent plasticity (STDP), (b) backpropagation through time (BPTT) using surrogate gradient techniques, and (c) conversion from conventional analog, non-spiking networks. Subsequently, local learning algorithms with varying degrees of locality are discussed as potential replacements for computationally expensive global learning algorithms such as BPTT. The chapter concludes with pointers to several emerging research directions in the neuromorphic algorithms domain, including stochastic computing, lifelong learning, and dynamical-systems-based approaches. Finally, we underscore the need for hybrid neuromorphic algorithm design that combines principles of conventional deep learning with stronger connections to computational neuroscience.
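
To make mechanism (a) concrete, the sketch below implements the classic pair-based exponential STDP window: a presynaptic spike that shortly precedes a postsynaptic spike potentiates the synapse, while the reverse ordering depresses it. This is a minimal illustration, not code from the chapter; the function name and constants (amplitudes, time constants) are illustrative choices.

```python
import numpy as np

# Pair-based exponential STDP window -- a minimal sketch; the constants
# below are illustrative, not values from the chapter.
A_PLUS, A_MINUS = 0.010, 0.012    # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # plasticity time constants (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair at times t_pre, t_post (ms)."""
    dt = t_post - t_pre
    if dt > 0:  # pre fires before post: potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:       # post fires before (or with) pre: depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

print(stdp_dw(10.0, 15.0))  # pre -> post, positive update (potentiation)
print(stdp_dw(15.0, 10.0))  # post -> pre, negative update (depression)
```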
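Mechanism (b) must cope with the non-differentiable spiking threshold: surrogate gradient methods keep the hard threshold in the forward pass but substitute a smooth pseudo-derivative in the backward pass so that BPTT can proceed. A minimal PyTorch sketch follows, assuming a fast-sigmoid surrogate with an illustrative slope; the class name and constants are ours.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; smooth fast-sigmoid derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # hard threshold: spike iff v crosses zero

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        slope = 10.0  # illustrative surrogate sharpness
        return grad_out / (1.0 + slope * v.abs()) ** 2

spike = SurrogateSpike.apply

# One integrate-and-fire step; gradients flow through the surrogate.
w = torch.randn(4, requires_grad=True)
x = torch.rand(4)
v = w * x            # membrane potential after one integration step
s = spike(v - 1.0)   # fire where the potential exceeds the threshold 1.0
s.sum().backward()   # gradient reaches w despite the hard threshold
print(w.grad)
```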
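Mechanism (c) typically trains a conventional ReLU network and then maps its activations to firing rates, calibrating each spiking layer's threshold (or, equivalently, its weight scale) so that rates remain in range. The sketch below shows one common calibration step, threshold balancing, where each layer's threshold is set to the maximum activation observed on calibration data; the function and variable names are hypothetical.

```python
import torch

@torch.no_grad()
def balance_thresholds(layers, calib_batches):
    """Set each layer's spiking threshold to the maximum ReLU activation
    observed on calibration data (a common ANN-to-SNN conversion recipe)."""
    thresholds = []
    batches = list(calib_batches)
    for layer in layers:               # e.g., a stack of Linear/Conv + ReLU
        outputs = [torch.relu(layer(x)) for x in batches]
        thresholds.append(max(a.max().item() for a in outputs))
        batches = outputs              # propagate activations to the next layer
    return thresholds

# Usage sketch: thresholds for a 2-layer MLP on random calibration data
layers = [torch.nn.Linear(8, 16), torch.nn.Linear(16, 4)]
print(balance_thresholds(layers, [torch.rand(32, 8) for _ in range(3)]))
```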

Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2026

