Abstract
The starting point of the project, ‘Exploring novel figurative language to conceptualise Large Language Models’, funded by the Cambridge Language Sciences Incubator Fund, was the inadequacy of descriptions of systems based on LLMs, such as ChatGPT. While some commentators spoke of ‘reasoning’, ‘intelligence’, and even ‘AGI’, others used ‘autocomplete’ and ‘pattern-matching’, implying very limited capabilities. We argued that understanding the technology and its implications, at both expert and non-expert levels, required better conceptualizations, reflected in better terminology. Despite the vast and rapidly developing literature on LLMs, the situation has not improved. The conceptualization issues are multi-faceted, and our highly interdisciplinary exploration has reflected this. In this poster, we discuss alternative framings of LLM technology and argue that the historical context of LLM systems, deriving from Natural Language Processing (NLP), may provide a better conceptualization than the language associated with AI. We also outline some of the themes we have addressed and the ways in which the project has explored how to explain the technology.
