Overview
The last chapter explored the theory behind the neural networks approach to information processing. We saw how information processing works in single-unit networks and then looked at how the power of neural networks increases when hidden units are added. At the end of the chapter we considered some of the fundamental differences between artificial neural networks and the sort of computational systems to which the physical symbol system hypothesis applies. In particular, we highlighted the following three differences.
Representation in neural networks is distributed across the units and weights, whereas representations in physical symbol systems are encoded in discrete symbol structures.
There are no clear distinctions in neural networks either between information storage and information processing or between rules and representations.
Neural networks are capable of sophisticated forms of learning. This makes them very suitable for modeling how cognitive abilities are acquired and how they evolve.
In this chapter we will explore how these differences in information processing give us some very different ways of thinking about certain very basic and important cognitive abilities. We will focus in particular on language learning and object perception. These areas have received considerable attention from neural network modelers and have yielded some of the most impressive results.