Abstract
Aphasia (an acquired disorder of language) has a special place not only in the history of the language sciences but also in the neurosciences, as it has often been used as a vehicle for testing theoretical approaches to understanding the nature and mechanisms underpinning human higher cognitive functions and their neural bases. Aphasia offers a fascinating set of challenges to be solved, including the contrastive yet graded variations amongst patients, the patterns of partial recovery, and how this recovery is underpinned by changes in the damaged brain. The history of aphasia, going all the way back to its neurological pioneers, Wernicke, Meynert and Broca, is full of various notions and ideas about the nature of aphasia and the mechanisms of recovery (and thus, in turn, approaches to speech and language therapy). However, many of these notions are actually contradictory and puzzling; indeed, their mechanistic-computational bases have not been elucidated, implemented or tested. The rise of neural networks and AI modelling offers a new approach in which the mechanisms of language can be explicitly implemented and tested, thus providing a form of “open science” for theories. By damaging such models, it is possible to assess them as formal accounts of different kinds of aphasia, and long-term recovery can be viewed as a gradual re-optimisation of the remaining systems. Moreover, by constraining the models’ architecture with macro-level neuroanatomy (i.e., key brain regions and their interconnections), it is possible to bridge formally between brain and mind, in both their intact and damaged forms.


