Our brains consist of billions of neurons linked in densely interconnected networks, and these networks control every aspect of our behavior: our movements, thoughts, and feelings. To explore how they function, theorists have proposed neural network models in which every unit is connected to every other unit; activity in one unit spreads to the others, depending on the strength of their connections. A key assumption is that when units are active simultaneously, the connection between them is strengthened; one formula used to calculate such changes is the delta rule, which is almost identical to the formula of the Rescorla-Wagner model. These simple networks prove to be surprisingly powerful: they can account for many features of conditioning, concept learning, and memory. One recent development has been deep learning models that incorporate hidden units between the input and output units. This seemingly small innovation has dramatically increased the ability of these models to carry out sophisticated tasks, such as beating world champions at chess and diagnosing skin cancer. One problem is that learning is slow, and new learning can result in the loss of older information (catastrophic interference). Whatever their ultimate fate, these models have demonstrated the power of even simple networks to perform tasks of astonishing sophistication.
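To make the parallel with the Rescorla-Wagner model concrete, the following minimal Python sketch shows one way a delta rule update could be written for a single output unit. The names (w, x, target, lr) and the parameter values are illustrative assumptions, not taken from the chapter: the point is only that the weight change is proportional to the prediction error, the difference between the obtained and the expected outcome.

```python
def delta_rule_update(w, x, target, lr=0.1):
    """Return updated connection weights after one learning trial (delta rule sketch)."""
    prediction = sum(wi * xi for wi, xi in zip(w, x))   # summed input reaching the output unit
    error = target - prediction                          # prediction error (lambda minus summed V, in Rescorla-Wagner terms)
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

# Illustrative run: two input units ("cues") are both active on trials where the
# outcome occurs (target = 1.0). Over repeated trials the weights grow until
# their combined prediction approaches the target, as in conditioning curves.
weights = [0.0, 0.0]
for _ in range(20):
    weights = delta_rule_update(weights, x=[1, 1], target=1.0)
print(weights)
```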