
Explaining Neural Transitions through Resource Constraints

Published online by Cambridge University Press:  24 May 2022

Colin Klein
School of Philosophy, The Australian National University, Canberra, Australia

Abstract

One challenge in explaining neural evolution is the formal equivalence of different computational architectures. If a simple architecture suffices, why should more complex neural architectures evolve? The answer must involve the intense competition for resources under which brains operate. I show how recurrent neural networks can be favored when increased complexity allows for more efficient use of existing resources. Although resource constraints alone can drive a change, recurrence shifts the landscape of what is later evolvable. Hence organisms on either side of a transition boundary may have similar cognitive capacities but very different potential for evolving new capacities.
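The efficiency point can be made concrete with a minimal sketch (not from the paper; the names and numbers here are illustrative assumptions). A recurrent unit reuses one fixed set of state-update machinery at every time step, so it can process arbitrarily long inputs with constant resources; a feedforward realization of the same input–output function must unroll that machinery once per step, so its resource cost grows with input length:

```python
def recurrent_parity(bits):
    """Compute the parity of a bit sequence with one recurrent state bit.

    The same update rule is reused at every step, so resource use is
    constant no matter how long the input stream is.
    """
    state = 0
    for b in bits:
        state ^= b  # single state-update rule, applied repeatedly
    return state

def rnn_params(n_in, n_hidden):
    # a recurrent net keeps one weight set: input->hidden, hidden->hidden, bias
    return n_in * n_hidden + n_hidden * n_hidden + n_hidden

def unrolled_params(n_in, n_hidden, t_steps):
    # a feedforward unrolling duplicates that weight set once per time step
    return t_steps * rnn_params(n_in, n_hidden)

print(recurrent_parity([1, 0, 1, 1]))   # parity of the stream
print(rnn_params(4, 8))                 # constant cost for the recurrent net
print(unrolled_params(4, 8, 10))        # cost grows linearly with sequence length
```

The two architectures are formally equivalent on any fixed-length task, which is the puzzle the abstract raises; the parameter counts show where they diverge once resources are scarce.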

Type: Symposia Paper

Copyright: © The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association

