Introduction
In the evolving discourse on artificial intelligence (AI), the quest for strong AI (Searle 1980) remains paramount. This chapter seeks to elucidate the foundational assumptions that underpin philosophers’ and computer scientists’ assertions about the prerequisites for achieving strong AI. The current trajectory in AI, notably within computer science, is heavily shaped by Google Brain’s paper introducing the Transformer architecture (Vaswani et al. 2017), a groundbreaking deep-learning model that reshaped the scientific landscape of Natural Language Processing (NLP) and Machine Learning (ML). The Transformer forms the backbone of Large Language Models (LLMs) such as OpenAI’s ChatGPT and Google’s BERT. It handles sequential data through self-attention mechanisms, which mirror the human ability to attend to certain textual elements over others depending on context. In this chapter, I refer to these LLMs as ‘Statistical AI’, since the Transformer architecture relies heavily on statistical methods in modelling these self-attention mechanisms. LLMs have not only achieved immediate commercial success and broad cultural impact for the tech industry; leading computer scientists such as Ilya Sutskever have even suggested that they may be ‘slightly conscious’ (Sutskever 2022).
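To make the notion of self-attention concrete, the following is a minimal sketch, in NumPy, of the scaled dot-product attention at the core of the Transformer (Vaswani et al. 2017). The dimensions and randomly initialised weight matrices are illustrative stand-ins for parameters that a trained model would learn from data; a real Transformer also adds multiple attention heads, positional encodings, and residual connections.

```python
# A minimal sketch of scaled dot-product self-attention (Vaswani et al. 2017).
# The weights below are random stand-ins for learned parameters.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: turns attention scores into probabilities.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model) token embeddings for one sequence."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v   # queries, keys, values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # pairwise relevance of tokens
    weights = softmax(scores, axis=-1)    # each row is a probability
                                          # distribution over the input tokens
    return weights @ V                    # context-weighted mixture of values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8          # hypothetical toy sizes
X = rng.normal(size=(seq_len, d_model))   # stand-in token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)                          # (5, 8)
```

Note that each row of the attention weights is a probability distribution over the tokens of the input; this is the statistical sense in which such models ‘emphasize’ some textual elements over others, and it is part of what motivates the label ‘Statistical AI’ used in this chapter.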
Contrasting this prevalent view, the chapter turns to the critiques offered by the philosophers Noam Chomsky and Robert Brandom. Chomsky’s internalist critique underscores the limitations of purely statistical models in capturing authentic human linguistic practices. On this view, statistical models of language have considerable practical utility but little scientific relevance: they are not proper models of reasoning, for human linguistic practice does not require agents to consult probability distributions over which word to use next in an utterance (Chomsky et al. 2023). Conversely, Brandom’s externalist perspective emphasizes the need for AI to engage in autonomous discursive practices (ADPs). ADPs are practices regulated by norms implicit in the practice itself, rather than by external factors such as, in the AI context, training data. ADPs involve making inferences in the course of communication or joint action with other agents, while also allowing practitioners to correct themselves against the norms on which the practice is grounded, and even to revise those norms themselves (Brandom 2006).