The term ‘Artificial Intelligence’ (AI) was introduced in 1956 by the American mathematician and computer scientist John McCarthy, who used it in a funding proposal for a summer research project at Dartmouth College (McCarthy et al., 1955). AI comprises a myriad of technologies that mutate and evolve, yet both the academic field devoted to its study and the industry that aims to realize it in its multiplicity are often said to share the nebulous aim, already intimated in McCarthy’s proposal, of making machines capable of displaying human-level intelligent activity in all or most domains.1 The history of AI is narrated as a sequence of alternating ‘springtimes’ and ‘winters’ in which progress is supposedly made toward this vague goal, then falters (e.g., Bostrom, 2014, pp. 6–8; Kurzweil, 2005, pp. 263–266). In 2022, the year in which chatbots based on large language models (LLMs) seized the world’s attention and OpenAI’s ChatGPT became, for many, synonymous with AI as such, a massive springtime broke out: a small cohort of companies secured unprecedented funding while their leaderships told stories about how general machine intelligence at the human level, or, to use the current jargon, artificial general intelligence (AGI), was just around the corner.