Section 5.1
• How do artificial neural network models complement the various neuroscientific techniques for studying the brain?
• Are artificial neural networks a good enough approximation of real neural networks to be useful to cognitive scientists?
Section 5.2
• Why is training such an integral part of neural network modeling?
• Is it significant that the perceptron convergence rule differs from the Hebbian learning rule?
Section 5.3
• How biologically plausible are artificial neural networks?
• How does backpropagation differ from the learning algorithms discussed in section 5.2?
Section 5.4
• What are the principal features of artificial neural networks? How do they differ from the physical symbol system model of information processing?
• Do you think that artificial neural networks represent a new way of thinking about information processing?
5.1 Neurally Inspired Models of Information Processing
Connectionism (entry from the Internet Encyclopedia of Philosophy)
Connectionism (entry from the Stanford Encyclopedia of Philosophy)
Connectionism: An introduction (entry from The Mind Project)
Artificial neural networks: A tutorial (paper by Jain and Mao, 1996, in Computer)
What are artificial neural networks? (paper by Krogh, 2008, in Nature Biotechnology)
Letting structure emerge: Connectionist and dynamical systems approaches to cognition (paper by McClelland et al., 2010, in Trends in Cognitive Sciences)
Neuroscience-Inspired Artificial Intelligence (review paper by Hassabis et al., 2017, in Neuron)
Cognitive computational neuroscience (paper by Kriegeskorte and Douglas, 2018, in Nature Neuroscience)
For more links see the online resources for section 3.3.
5.2 Single-Layer Networks and Boolean Functions
The Mathematics of Boolean Algebra (entry from the Stanford Encyclopedia of Philosophy)
Perceptrons (chapter by Du and Swamy, 2019, in Neural Networks and Statistical Learning)
McCulloch-Pitts Neurons (entry from The Mind Project)
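To make the perceptron convergence rule concrete, here is a minimal sketch (my own illustration, not drawn from any of the resources above) of a single-layer unit learning the Boolean AND function. The weights are nudged in proportion to the error on each pattern, and because AND is linearly separable the rule converges to a correct setting; the same procedure fails on XOR, which motivates the multilayer networks of section 5.3.

```python
# Hypothetical sketch: the perceptron convergence rule on Boolean AND.
def step(x):
    # threshold activation: fire (1) if the weighted sum exceeds 0
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights for the two inputs
    b = 0.0          # bias (plays the role of the threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # convergence rule: change each weight by error x input
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Running the same loop on the XOR patterns never settles on correct weights, since no single line separates the two output classes.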
5.3 Training Multilayer Networks
Learn to design your own neural networks (from AISpace)
Neural Networks: A Systematic Introduction (book by Raul Rojas, 1996)
How the backpropagation algorithm works (chapter 2 of the online book “Neural Networks and Deep Learning” by Michael Nielsen)
Backpropagation and the brain (paper by Lillicrap et al., 2020, in Nature Reviews Neuroscience)
On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience (paper by Bowers, 2009, in Psychological Review)
Locating object knowledge in the brain: Comment on Bowers’s (2009) attempt to revive the grandmother cell hypothesis (paper by Plaut and McClelland, 2010, in Psychological Review)
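As a companion to the backpropagation readings above, the following sketch (my own illustration under stated assumptions, not code from Nielsen or Rojas) trains a tiny 2-2-1 sigmoid network on the XOR patterns. The key idea is that the error signal at the output is propagated backward through the weights to assign "blame" to the hidden units, which is exactly what a single-layer rule cannot do.

```python
# Hypothetical sketch: backpropagation in a 2-2-1 sigmoid network on XOR.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    # forward pass: input -> two hidden units -> one output unit
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def backprop_pass(data, W1, b1, W2, b2, lr=0.1):
    # one pass of per-pattern gradient descent on the squared error
    for x, t in data:
        h, y = forward(x, W1, b1, W2, b2)
        # output delta: derivative of (y - t)^2 / 2 through the sigmoid
        d_out = (y - t) * y * (1 - y)
        # hidden deltas: the output error propagated back through W2
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            W2[j] -= lr * d_out * h[j]
            W1[j][0] -= lr * d_hid[j] * x[0]
            W1[j][1] -= lr * d_hid[j] * x[1]
            b1[j] -= lr * d_hid[j]
        b2 -= lr * d_out
    return W1, b1, W2, b2

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
# small asymmetric starting weights, chosen arbitrarily for this sketch
W1 = [[0.8, 0.2], [0.3, 0.7]]
b1 = [-0.1, 0.1]
W2 = [0.6, -0.4]
b2 = 0.05

def total_error(W1, b1, W2, b2):
    return sum((forward(x, W1, b1, W2, b2)[1] - t) ** 2 for x, t in XOR)

before = total_error(W1, b1, W2, b2)
for _ in range(500):
    W1, b1, W2, b2 = backprop_pass(XOR, W1, b1, W2, b2)
after = total_error(W1, b1, W2, b2)
# the squared error over the four XOR patterns shrinks as training proceeds
```

Note that this is the textbook-style rule with a squared-error cost; as the Lillicrap et al. paper above discusses, whether the brain could implement anything like this backward error transport is an open question.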
5.4 Information Processing in Neural Networks: Key Features
Distributed representations (book chapter from “Parallel Distributed Processing (Vol. 1)” by Rumelhart, McClelland, & the PDP Research Group, 1986)
Connectionist modelling in psychology: A localist manifesto (paper by Page, 2000, in Behavioral and Brain Sciences)