
Prophetic memory: AI intermediaries and the end of the world

Published online by Cambridge University Press:  26 February 2026

Amanda Lagerkvist*
Affiliation:
The Centre for Multidisciplinary Research on Religion and Society, CRS, Uppsala University , Uppsala, Sweden
Blaženka Scheuer
Affiliation:
The Centre for Theology and Religious Studies, Lund University , Lund, Sweden
Mark Coeckelbergh
Affiliation:
Philosophy, University of Vienna , Vienna, Austria
*
Corresponding author: Amanda Lagerkvist; Email: amanda.lagerkvist@crs.uu.se

Abstract

A new breed of prophets – intermediaries and pastoral bros of an AI industry with metaphysical aspirations – has surfaced on the global stage during troubled times. They make great promises, offer predictions and warnings, and stake out directions for humanity. This article argues that they do so by invoking the implicit collective memory of the apocalyptic imaginary known from ancient Jewish apocalyptic writings and, more specifically, by reenacting what we call prophetic memory. Through close readings in the tradition of biblical exegesis coupled with philosophical and critical hermeneutics, we trace strong AI narratives of doom and salvation to a range of media forms such as Twitter/X postings, books, interviews, journalistic feature articles, and reporting. Through these media, AI prophets speak of the end times while simultaneously offering a new beginning for humankind, not unlike the ancient prophets of the Hebrew Bible. Prophetic memory, we submit, is furthermore a mode of ‘collective future thought’ and an instantiation of the ‘remembering-imagining-system’. While its purpose is to create stability for a particular vision for the future, there is also a productive ambivalence of order and disorder at work within the apocalyptic AI imaginary. To question this ambiguous yet extremely powerful fixture on the human horizon, there is a need, we argue, for bothering the political-religious dimensions of the hegemonic AI imaginary and for scrutinizing how the AI industry founds its power base on the clout of prophetic memory – in a time of crisis in which many look for guidance and direction.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

If we mishandle it, AI might extinguish not only the human domination on earth but the light of consciousness itself, turning the universe into a realm of utter darkness.

Yuval Noah Harari, Nexus: A Brief History of Information Networks from the Stone Age to AI (Reference Harari2024, 403)

Introduction

Prophets typically appear in times of crisis. When great empires are trembling, they offer warnings, predictions about fundamental change, and moral instructions for survival and hope. Today a new breed of prophets has surfaced on the global stage during troubled times. These ‘AI intermediaries’Footnote 1 are big tech owners, CEOs, scientists, computer engineers, entrepreneurs, scholars, journalists, and experts on AI and new technology.Footnote 2 As we will shed light on in this article, these ‘AI prophets’ make great promises, offer predictions and warnings, and stake out techno-existential and moral directions for humanity. Speaking words of doom and words of salvation just as the ancient prophets of the Hebrew Bible did, they are placing themselves, consciously or not, in the prophetic traditions of old. They do so by invoking the apocalyptic imaginary known from ancient Jewish apocalyptic writings (Collins, Reference Collins1979, Reference Collins1989) and by re-enacting what we here call prophetic memory.

We understand prophetic memory in terms of shared templates stemming from the biblical traditions, with the help of which these ‘pastoral bros’ utilize and perform familiar patterns of prophetic habits and rhetorical strategies.Footnote 3 Prophetic memory is therefore here understood in alignment with Astrid Erll’s discussion of ‘implicit collective memory’ (Reference Erll2022), which she defines as ‘the recurrent use of mostly schematized memorata, which remains – for a majority of the group or society – not intentional, non-conscious, and not visible’ (ibid., 5). Importantly, this invisibility is for Erll generative and powerful: ‘while remaining unexposed, it is likely to produce more of the same in the future’ (ibid.). By this token, as we intend to show in this article, the studied apocalyptic performances are undergirded by implicit prophetic memory.Footnote 4 This means that while none of the subjects explicitly tie themselves to Jewish and Christian prophetic traditions, the force or power of their statements is derived from this shared template. Prophetic memory thus also plays into the fact that culture, as studies have shown, shapes people’s expectations of and responses to AI and its implementation in society (Cave and Dihal, Reference Cave and Dihal2023; Xiao et al., Reference Xiao, Chunchen, Daigo, Hazel and Jeanne2024). These pundits of the Valley share the same cultural background and, nota bene, the same gender, which affects how they speak and think about AI.

It has often been noted in critical AI studies that AI discourse today displays a curious hybrid of utopian and dystopian flares that serves to inspire new investments, calling for critical analysis (cf. Cave and Dihal, Reference Cave and Dihal2023; Lindgren, Reference Lindgren2023; Suchman, Reference Suchman2023; Crary, Reference Crary2023). This article qualifies this observation by attending to how – in the course of mobilizing, exploiting, and performing the affective mnemonic registers of prophetic memory – the AI prophets recall apocalypticism as a cultural resource in times of crisis. As the AI prophets communicate their apocalyptic visions, predictions, and doomsday narratives about AI to clarify, warn, and infuse hope, they serve as messengers of an industry that has metaphysical aspirations (cf. Singler, Reference Singler2017, Reference Singler2024);Footnote 5 an industry that speaks of the end times while at the same time offering a new beginning of sorts for humankind (what is often forecasted as ‘the new AI era’) (Lagerkvist, Reference Lagerkvist2020).Footnote 6 Our aim is to bring out that the apocalypse as an imaginative practice (Hurley, Reference Hurley2020) is a powerful mediated mnemonic and affective resource for industrial agents of the AI venture.Footnote 7 We thus argue that leading figures in the AI industry reproduce and reconfigure prophetic and apocalyptic imaginaries grounded in biblical traditions.

As we will seek to demonstrate, prophetic memory is an embedded force of the AI-driven future: it involves using the past implicitly to mobilize particular fears or hopes about potential tomorrows, by recalling – inadvertently – the voices, rhetoric and behavioral gestures of the ancient prophets. To examine these phenomena, we approach the question of what role the memory of an archaic and mythological past plays in forging a future within this ‘affective economy’ (Ahmed, Reference Ahmed2004). This underscores what scholars in psychology have demonstrated, namely that both memory and the future are brought into being through the imagination, within what has been termed the ‘remembering-imagining-system’ (Conway et al., Reference Conway, Loveday and Cole2016). This tangled temporality, we propose, is echoed in how prophetic memory can also serve a range of ideological purposes today.

For Erll there is a hidden world and hidden power of implicit memory that accentuates ‘not only the invisibility, but also the forward pushing, future-making capacities of implicit collective memory’ (Reference Erll2022, 3). By this token, our definition keeps prophetic memory ‘in the dark’, as it were, by stressing its invisibility and simultaneous authority, but also how it constitutes a powerful instantiation, within the AI industry and its narratives, of the psychologically identified remembering-imagining system. Here we suggest, more specifically, that AI prophets are involved in a memory practice that Szpunar and Szpunar term ‘collective future thought’, which involves ‘the act of imagining an event that has yet to transpire on behalf of, or by, a group’ (2016, 378). This ‘event’ – here, more specifically, the advent of AI on the human horizon – is thus an emergent feature of the collective future in deep dependency on the past. This amounts to a form of ‘prospective collective memory’ (Tenenboim-Weinblatt, Reference Tenenboim-Weinblatt2013), which promises to deliver on a remembered promise.

Through close readings in the tradition of biblical exegesis coupled with philosophical and critical hermeneutics, we study performances of prophetic memory as constituents of the apocalyptic AI imaginary – and as modes of collective future thought – by focusing on different utterances that foretell the future in a two-step manner. First, we analyse the tropes and themes that emerge, noting that AI prophets speak about what is imminent in terms of ‘seeing’, which echoes the prophetic traditions of the Hebrew Bible. Second, we critically attend to how this meaning production – which aspires to bring about ‘the future’ – is performed for purposes, aims and gains in our day and age.

Our objective here is ultimately to create awareness among AI intermediaries about the link between prophetic-apocalyptic thinking and thinking about AI, and to engage them in a deeper and more critical understanding of their own practices. We take our cue from Alice Crary, who seeks to offer a revelation to those who build the systems. Her goal is to make them reflect on the dangerous world view embraced by the Valley. She states: ‘A critique of longtermism that enabled its adherents to see it in [a] harshly revealing light would be a welcome step towards envisioning and enacting a just and liveable future’ (2024, 55).

Before we delve into our materials, consisting of Twitter/X postings, books, interviews, journalistic feature articles in the international press and reporting in other news outlets (for example CNN, USA Today, The New York Times, The Guardian, Time Magazine),Footnote 8 through which these seers of the AI apocalypse ventilate their visions, we will situate our investigation in relation to a theoretical framework encompassing imaginaries, technologies and relations between memory and the future. But first, how do we define ‘AI’?

Strong narratives, vague technologies, mnemonic futures

AI is a many-headed hydra. It is a phenomenon that can be analysed on several levels (Coeckelbergh, Reference Coeckelbergh2022). First, it is a fast-developing technology. Second, it is a force of political economy and platform capitalism. Third, it is connected to an ambivalent imaginary or narrative, encompassing both dystopian and utopian scenarios. With the concept of prophetic memory, we wish to show that this imaginary has an unrecognized foothold in the deep past, by means of which it both sees and steers into the future.

For Bory et al. (Reference Bory, Natale and Katzenbach2025), ‘the AI imaginary’ contains both strong and weak AI narratives. The strong ones are about the existential risk of extinction, while the weak ones concern policy initiatives towards trustworthy AI implementation, as well as more mundane negotiations with these technological or algorithmic imaginaries on the micro levels of everyday life. Bory et al. argue that there has been an overemphasis on strong AI narratives and on the future in both academic and journalistic debate. Yet they concede that ‘centering the analysis on strong AI narratives can nurture critical approaches that have started questioning the ideological underpinning of strong or general AI’ (ibid., 2114). Weak narratives, we propose, are for a sunny day. Strong narratives, such as the ones that depict an AI apocalypse, are literally disruptive and mobilized to eclipse every other alternative when the world is already on edge, while remaining ambivalent about the upshot.

In critical AI studies, as mentioned above, several scholars have pointed to the dubiousness and equivocation of AI as what may account for its hegemonic power. For Lucy Suchman AI is both an uncontroversial stable thing and a floating signifier: ‘“AI” is a term that suggests a specific referent but works to escape definition in order to maximize its suggestive power’, which enables additional investments in it (Reference Suchman2023, 3). She furthermore emphasizes that ‘[w]hile interpretive flexibility is a feature of any technology, the thingness of AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is’ (ibid.). Hence, she concludes that ‘the thingness of AI, its status as a stable and agential entity, needs to be made controversial’ by subjecting it to demystification and tracing its ‘sources of power’ (ibid.).

Taking up this torch, we will in the following add new dimensions to the critique and offer a fresh perspective on the ambivalent AI imaginary. It is worth noting, from a memory studies perspective, that the evanescent phenomenon of AI also transcends, uproots, and undercuts the notion of ‘prosthetic memory’ warranted by popular culture – that enabler of deep cross-generational empathy and progressive politics posited by Alison Landsberg (Reference Landsberg2004). This is because AI-driven memory, as Andrew Hoskins (Reference Hoskins2024) underlines in his seminal article on AI and memory, allows us to empathize with what has never been. He shows how AI affects our very understanding of the past, centering on how its predictive and probabilistic operations are always indebted to it, while fictionalizing the past beyond the referential, offering more than simulations through a hyper-realistic bygone flair. This brings to mind the withered simulacra of modern media cultures, envisioned by Jean Baudrillard: copies without originals. And in this desolate tract, we will add, resuscitating the prophetic mythos of the Jewish and Christian traditions seems especially expedient. In this vexed context, we attend to the excessively strong utopian and dystopian narratives about the future and end of the world within the AI imaginary, examining how a mythological past – that is, prophetic memory – is curiously harnessed for forging a future of artificial intelligence. Hence, prophetic memory, as we will show, also implicates the future imagination. One foundational aspect of prophetic memory is thus the sense in which the AI intermediaries are creating metanarratives of the (near) future: a sense of standing on the brink before a grand transformation of the world, imitating – consciously or not – prophetic performances from the biblical traditions perceived as a collective entity.
The AI intermediaries invoke prophetic memory to present themselves as the prophets whom people should listen to, thereby shaping their future. This brings the relationships between remembering the past and imagining the future to the fore. As noted above, psychological research by Conway et al. (Reference Conway, Loveday and Cole2016) has concluded that individuals imagining plausible and implausible futures are indebted to remembering the past. This occurs within what has been termed the ‘remembering-imagining system’, which sees episodic future thinking as linked to a goal system that maintains ‘access to memories of recent events that themselves support plausible imaginings of near-future events’ (ibid., 260). Furthermore, both on the collective and individual level, memory practices ‘inhibit and contain prospective futures’ (ibid., 257). Our point of departure here is thus that individual psychology and cultural imaginaries operate in a similar fashion, in which both the past and the future belong to an extended form of consciousness in close affinity with the imagination. In other words, across both realms we may speak of ‘mnemonic futures’.

AI prophets thus offer up a particular version of the ‘remembering-imagining-system’. In essence, as already mentioned above, prophetic memory within the apocalyptic AI imaginary can be described as a mode of ‘collective future thought’ (Szpunar and Szpunar, Reference Szpunar and Szpunar2015). In addition to recognizing, following Conway et al. (Reference Conway, Loveday and Cole2016), that memory may set boundaries on possible futures, collective future thought also implies, conversely, that futures come to bear on the present and past. Summing up the approach, Szpunar and Szpunar argue that

…the imagining of a future event that has bearing on a group is a communicative, social, interpretive, political, and contested process that involves a variety of groupings, actions, and objects. Collective future thought is a concept that includes psychological notions of imagining as well as those beyond the individual, those that are fundamentally collective and “exist” within group processes. … Collective future thought is a communal act. Regardless of whether performed by an individual or by the collective, it is a phenomenon always for the group. (ibid., 381)

This also includes what Tenenboim-Weinblatt has termed collective prospective memory, which describes a ‘collective remembrance of what still needs to be done, based on past commitments and promises’ (Reference Tenenboim-Weinblatt2013, 4). Within this line of reasoning, socio-technical imaginaries about the future (Jasanoff and Kim, Reference Jasanoff and Kim2015) are both indebted to and offer to deliver on cultural promises of the past. Furthermore, the concept of premediation, coined by Richard Grusin (Reference Grusin2004), describes a form of future making involving a shift in the role of the media from reporting on what has happened in the recent past, and thus writing history, in favor of ‘a prophetic or predictive role of reporting on what might happen’ (Grusin, Reference Grusin2004, 23). In the current context of extinction, media forms have been involved in premediation which, he argues, ‘serves simultaneously to generate anxiety about future catastrophic events and to provide reassurance that they have already been anticipated, remediated, and survived’ (Reference Grusin and Grusin2018, x). Are AI prophets also involved in fulfilling a promise stemming from the past? And could the premediations of AI doom simultaneously work as an instigator and as a prevention scenario? To explore these and other instantiations of the remembering-imagining-system as mediated modes of collective future thought in the context in question, we now look closely at some of those who have stepped forward as AI prophets, and to whom humans can turn for clarity, instruction, reassurance, and hope – and sometimes for dire warnings. This brings us to the phenomenon of AI prophecy and its relation to the apocalyptic AI imaginary.

Pastoral bros and apocalyptic AI prophecy

Prophecy is a historical phenomenon well attested in ancient Eastern Mediterranean textual sources such as the prophetic traditions of the Hebrew Bible (Nissinen, Reference Nissinen2017).Footnote 9 The prophets of the Hebrew Bible, such as Ezekiel, Jeremiah, Hulda, and Isaiah, spoke to a community affected by global changes. Faced with suffering and socio-cultural disruptions caused by wars and forced exile under the rule of great empires, the prophets sought to communicate the divine word of warning, clarity, and hope to their communities. In this, they not only offered a sobering analysis of the current situation but also pointed out the direction for future action and, thereby, survival. Throughout history, prophetic activity continued to be sought after and intensified in times of crisis.

Today, AI prophets are also called upon to offer a vision for humanity in an age of political and ecological upheaval. Sam Altman, CEO of OpenAI, addressed both the AI community and the world at large when he stated on Twitter in 2017: ‘We have lost our collective sense of optimism about the future. I think all of us should act as our duty to bring that back. The only way that I know to return to that sense of optimism is to use technology to create abundance’ (2017). Just as the ancient prophets did, the AI prophets both speak about the end times and about redemption, placing themselves, consciously or not, in the prophetic traditions of old. Pastoral bros do so by invoking an apocalyptic imaginary rooted in ancient Jewish writings. What, then, characterizes these texts and traditions? In Semeia 14 and in later works, John Collins (Reference Collins1979) understands apocalypse:

as a type of literature in which a divine or transcendent revelation is given to a human recipient, presented in a narrative framework, and often cast in symbolic or allegorical language. This kind of writing can look to eschatological salvation or transformation of the world, including the assistance of a messianic or supernatural figure, and judgment on or the destruction of the wicked, as well as cosmological revelations. Such themes with differing emphases can be seen in parts of other biblical texts such as the Old Testament/Hebrew Bible book of Daniel and other early Jewish texts which are not found in some biblical canons such as 1 Enoch and 4 Ezra.

Key here is the notion of a figure who is characterized as a mouthpiece for the divine and as a person infused with divine knowledge about an imminent transformation of the world – features that are, as we will see, also present within the apocalyptic AI imaginary to some degree. Importantly, apocalypse does not only describe a literary genre, but also a worldview, which Collins labels apocalypticism, defining it as ‘the ideology of a movement that shares the conceptual structure of the apocalypses’ (Collins, Reference Collins2016, 11–13). One description of such a worldview, or imaginary, is offered by Lorenzo DiTommaso:

apocalypticism is a distinctive combination of axioms or propositions about space, time, and human existence. It presumes the existence of a transcendent reality, which defines the cosmos and everything in it, but remains almost entirely concealed from observation and beyond the grasp of human intellection. It contends that the present reality is constitutionally structured by two antagonistic and irreducible forces, which are typically identified with good and evil. (Reference DiTommaso and Collins2014, 474, italics in original)

DiTommaso underlines that in late modern media societies, the apocalyptic imaginary is all-pervasive in popular culture.Footnote 10 This worldview has distinct features within the AI imaginary today. Within it, prophetic memory speaks to a dark and troubled future as well as to bleak present-day scenarios. Notably, the sense of two camps or opposing forces is also present. But it must be noted that in the recent past AI prophets actually brought good news.Footnote 11 In 2005, the inventor Ray Kurzweil predicted that ‘the singularity is near’, as the title of his influential book put it – a largely transhumanist prophecy, brimming with optimism about an end time and moment (the Singularity) when humans will merge with computer software. His new book, The Singularity is Nearer (Reference Kurzweil2024), continues his optimistic predictions. Hence, seeking to fulfill a ‘collective prospective memory’, these AI prophets were not simply offering systems long envisioned or desired; they sought to harness the entire dream of Modernity – of technological progress and salvation – promising its fulfilment through the technology. Another example in this vein of a wholesale positive approach is the American entrepreneur Marc Andreessen, who argued recently that AI will save the world, and that we have a moral obligation ‘to ourselves, to our children and to the future’ to build it (Andreessen, 2023, cited in Harari, Reference Harari2024, xx).

These optimistic voices are, however, aberrations today (cf. McKie, Reference McKie2023). Standing on the brink in a world of polycrisis, the most recent apocalyptic associations within this field are much more pessimistic. Sometimes pervaded by a combination of hopeful prospects and doomsday thinking, they equally often express outright gloom in the face of end-time scenarios. This gravitation towards gloomier prospects can be exemplified by critical scholarship too. Jenny Stümer, a scholar in apocalyptic studies, observes that ‘apocalypses may be invoked prospectively exposing the technocratic authoritarianism of contemporary macho-billionaires’ (Reference Stümer, Stümer and Dunn2024, 3, italics added). Indeed, these macho figures often behave in the world today in ways that bring dystopian future scenarios to mind. As we intimated from the outset, both prophecy and tech are here an exclusively masculine affair. To determine whether this is related to the lack of female voices among the AI prophets, or to a deeply masculinist (white, able-bodied and Western) bias and control of the AI story, is beyond the scope of our investigation.Footnote 12 One thing is, however, certain: this fact points to a complicity of the AI industry in re-generating power asymmetries. There is therefore, as we see it, good reason to infer that the gender of the AI intermediaries is an asset when they are recalling the apocalyptic imaginary, and by extension performing prophetic memory in order to set the future in motion in a particular direction. Our argument is that to do so, these AI prophets – as heralds of a global community characterized by the onset of the technology – tend to act and speak in a way that evokes implicit cultural memories of a commonly understood paradigm of prophetic conduct and speech found in the traditions of the Jewish and Christian Bibles: hence, prophetic memory.

Anointed prophets: seers, doomers, saviors

Apart from speaking in times of crisis and speaking both words of doom and of salvation, the prophets of the Hebrew Bible were seen as individuals of authority bestowed upon them by the deity for whom they spoke. Infused with divine knowledge, the prophets spoke as persons who understood the present and could instruct the community about the right course of action, clarifying that the future of the community was directly related to their obedience to the deity’s instruction. The ‘prophetization’ of AI intermediaries has a corresponding effect: it evokes memories of such characters – figures who understand AI, who can mediate between AI and humanity, and who can (fore)see the future and explain the potential and risks of AI that will define our future. Such use of prophetic memory is referential and taken as a point of departure; the prophets of old are invoked linguistically in expressions such as ‘we will see’ and ‘it is going to be/happen’. These are thus rhetorical strategies through which a speaker addresses present concerns and future uncertainties – or uses visions of the future to affect the proceedings of the present. CEOs and other AI luminaries capitalize on what prophetic memory does: it leads to a change in those addressed, challenging or transforming their perceptions, beliefs, and behaviors.

Prophets who predict an AI apocalypse can be found in science and in the tech sector as well as in philosophy and the humanities. Nick Bostrom (Reference Bostrom2014), an Oxford philosopher, thinks in line with those who predict a new era of superintelligence marked by the event of the Singularity, while entertaining more worry than Ray Kurzweil. Another key example is Eliezer Yudkowsky, a founding figure in AI development who, in an interview with Tom Lamont in The Guardian, predicts a timeline for the AI apocalypse:

‘If you put me to a wall,’ he continues, ‘and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.’ By ‘remaining timeline’, Yudkowsky means: until we face the machine-wrought end of all things. Think Terminator-like apocalypse. Think Matrix hellscape. (Lamont, Reference Lamont2024)

According to Lamont, Yudkowsky has also ‘come to believe that these same AIs will soon evolve from their current state of “Ooh, look at that!” smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail’ (ibid.). Together with the journalist, an apocalyptic doomsday narrative is here created in which the current development of AI is seen as part of a future that leads to the end of all things and the beginning of a new era of superintelligence, in which AI is more like an alien than a tool that we use. In Time Magazine in 2023, Yudkowsky also stated that ‘We are not ready…We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong’. Geoffrey Hinton, ‘Godfather of AI’ and Nobel Laureate in Physics in 2024, has likewise expressed a great turn-around, struck by a flash of insight, as he worries about being outperformed by machine intelligence in the near future: ‘I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now, and they will be much more intelligent than us in the future … How do we survive that?’ (2023)Footnote 13.

Today, the historian Yuval Noah Harari is perhaps the most clear-cut AI-doomer of all. Writing histories of tomorrow – to use a term from his book Homo Deus (2024) – he is engaging in what Steven Poole (Reference Poole2024) has called ‘apocalyptic pontificating about AI’. He believes in the coming of ‘AI overlords’ and has warned that AI could ‘wipe us out’ (Taylor, Reference Taylor2023). With his ‘monastic aura’ (Immerwahr, Reference Immerwahr2024), he influences tech CEOs such as Bill Gates, Mark Zuckerberg, and Jeff Bezos. In his latest book, Nexus, he warns that AI might extinguish not only democracy but also human dominion on Earth and even consciousness itself. The problem for Harari is that we have ‘summoned an alien inorganic intelligence that could escape our control and put in danger not just our own species but countless other lifeforms’ (Reference Harari2024, 404). This is an intelligence that we cannot understand or control: ‘People in all countries and walks of life – including even dictators – might find themselves subservient to an alien intelligence that can monitor everything we do while we have little idea what it is doing’ (ibid., 190). In fact, as he suggests: ‘humanity is closer than ever to annihilating itself’ (ibid., xx) and ‘if we mishandle it, AI might extinguish not only the human domination on earth but the light of consciousness itself, turning the universe into a realm of utter darkness’ (ibid., 403).

In Harari’s strong AI narrative, there is a clear prophetic sense to his warnings about the utter darkness and extinction of life that AI has the potential to bring about. When the prophets of the biblical traditions spoke about the catastrophes that threatened their communities, they spoke in similar terms of cosmic effects. Perhaps the most illustrative example is from the anguished poems of the prophet Jeremiah, who writes (Jer 4:23–26):

I looked on the earth, and it was complete chaos,
and to the heavens, and they had no light.
I looked on the mountains, and they were quaking,
and all the hills moved to and fro.
I looked, and there was no one at all,
and all the birds of the air had fled.
I looked, and the fruitful land was a desert,
and all its cities were laid in ruins…

When speaking of the darkness of the heavens, Jeremiah was recalling the story of creation in Genesis 1, which describes the primordial chaos of darkness in the midst of which God created the cosmos of light (Gen 1:1–3): ‘When God began to create the heavens and the earth, the earth was complete chaos, and darkness covered the face of the deep, while a wind from God swept over the face of the waters. Then God said, “Let there be light,” and there was light’. The sheer reference to the heavens without light, in Jeremiah as well as in Harari, stirs the chilling sense of a world in which no life could exist.

And yet, the presence of a prophet in the biblical traditions was a reason for hope, even when the prophets spoke of doom and destruction. As long as the prophets were active, it meant that the deity was concerned for the people, that there was a chance to avert the catastrophe, or at least a possibility that some would survive the approaching disaster. Perhaps this is how Harari is to be understood as well. Between the lines that paint daunting scenarios for humanity, he signals hope – not in a deity but in humans, who can still retain control over AI and be careful not to lose their dominion over technology. In this respect too, Harari’s words echo the understanding of the human role in the world as one of having ‘dominion’, as presented in the creation story of Genesis 1:28.

Sometimes, AI intermediaries project themselves as prophets who have arrived to salvage us – the flock – from doom. But it must be noted that the savior can also be the figure of the technology of sentient and powerful AI itself. Or, as already pointed out, the redeemer comes in the guise of its engineers, who are the only ones who can almost read the sacred book and convey its secrets. The coveted savior figure, reborn in the figure of the CEO or the AI pundit, thus exercises pastoral power through the effects of words (Lagerkvist & Scheuer Reference Lagerkvist, Scheuer, Svenungsson, Goldman, Dunér and Stripple2026). These words are performative and thus make a difference in the world. For example, the sentiments of Sam Altman in 2016, in relation to what is or is not unique about being human, are today widely echoed among AI engineers and researchers. Here he offers a clear prophecy:

There’s absolutely no reason to believe that in about thirteen years we won’t have hardware capable of replicating my brain. Yes, certain things still feel particularly human—creativity, flashes of inspiration from nowhere, the ability to feel happy and sad at the same time—but computers will have their own desires and goal systems. When I realized that intelligence can be simulated, I let the idea of our uniqueness go, and it wasn’t as traumatic as I thought. https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny, italics added.

As mentioned already, the AI prophets further speak about the current situation but also about what is imminent, in terms of ‘seeing’. This is visible as Sam Altman is interviewed by Reid Hoffman (Reference Hoffman2022). Hoffman picks Altman’s brain:

Reid Hoffman: So all right, let’s start a little bit more pragmatic, but then we’ll branch out. So one of the things I think a lot of folks here are interested in is, based on the APIs, that very large models will create, what are the real business opportunities? What are the ways to look forward? And then given the APIs will be available to multiple players, how do you create distinctive businesses on them?

Sam Altman: Yeah. So I think so far, we’ve been in the realm where you can do an incredible copywriting business or you can do an education service or whatever. But I don’t think we’ve yet seen the people go after the trillion dollar take on Google. And I think that’s about to happen. Maybe it’ll be successful. Maybe Google will do it themselves. But I would guess that with the quality of language models we’ll see in the coming years, there will be a serious challenge to Google for the first time for a search product. And I think people are really starting to think about “How did the fundamental things change?” And that’s going to be really powerful. Footnote 14

As this illustrates, Altman typically states ‘I think…we’ll see in the coming years, there will be…’ and then talks about what is to come. In this sense, as an AI prophet he recalls classical biblical prophecy, whose task was precisely that: to analyze the current situation and provide reasons for hope and directions for action. This may sound like a rather generic and commonplace thing to say, but what makes all the difference is who is saying it. The intermediaries are assigned the role of seers and prophets in ways that ordinary citizens are not. Among the titles for prophets in the Hebrew Bible we find rō’eh and ḥōzeh, synonyms usually translated as ‘a seer’ or ‘a visionary’, which derive from the Hebrew words for ‘to see’/‘to see in a vision’/‘to behold’. In seeing into the future, their revelations foretell a bliss or a disaster to come. While there is some equivocation (and some of the AI prophets explicitly state that they are not prophets, as we discuss further below), it is notable that many around them expect prophecies from them. Put differently, within dominant technocratic discourses, AI industry actors are perceived as authoritative voices on the domains of their professed competence.

As this also illustrates, the seers are asked to report on what they see. Often, as in Altman’s talking points, the vision entails a dramatic, profound, and far-reaching transformation of the world that transcends what people often think they can expect from the models:

I think the biggest systemic mistake in thinking people are making right now is they’re like, “All right, maybe I was skeptical, but this language model thing is really going to work and, sure, images, video too. But it’s not going to be generating net new knowledge for humanity. It’s just going to do what other people have done. And that’s still great. That still brings the marginal cost of intelligence very low. It’s not going to cure cancer. It’s not going to add to the sum total of human scientific knowledge.” And that is what I think will turn out to be wrong that most surprises the current experts in the field. (ibid.)

The AI intermediaries, as discussed above, are thus either self-proclaimed visionaries or prophets anointed by others – media agents, corporate interests, and politicians – and thereby designated as the ones who can understand and see into the future. But some AI intermediaries, like Yuval Harari, explicitly deny that they are prophets. Yet this reluctance to act as a prophet, or the denial of being one, also echoes biblical prophetic traditions. One of the most famous cases is Amos, a prophet sent to speak words of doom to the northern kingdom of Israel in the years before the destruction of its capital, Samaria, by the forces of the Assyrian Empire. Amos’ words were perceived as unbearable and subversive, so the kingdom’s leaders commanded Amos to go home and prophesy in his own country. To this, Amos replied: ‘I am no prophet nor a prophet’s son … but the Lord said to me, “Go, prophesy to my people Israel”’. In other words, one does not prophesy because one is a prophet; one becomes a prophet when called to prophesy. Whether one augurs a future based on claims of having seen it in divine visions or on claims of understanding signs in the present, one is perceived as a prophet. In this respect, AI intermediaries embody and fulfill the attributes associated with prophecy.

One notable difference between the AI prophets and the prophets of the Hebrew Bible is who they speak for. The prophets of the Hebrew Bible and other ancient sources are generally seen as spokespersons for the deities. They are intermediaries who operate in a world where deities were thought to seek contact with humans. The most common Hebrew term for ‘prophet’ in the Hebrew Bible is nābî’ (fem. nĕbî’â), which can be interpreted in the sense of ‘one being called (by a divine agent)’, thus stressing the importance of the prophetic calling (Blenkinsopp, Reference Blenkinsopp1996, 28). The Greek word prophētēs (fem. prophētis) has the more general meaning of one who ‘proclaims a message on behalf of another, generally a deity’ (Blenkinsopp, Reference Blenkinsopp1996, 27). ‘Words spoken by the prophets are never ultimately their own’ (Lundbom, Reference Lundbom2016, x) but words of the power on whose behalf they speak:

In the broader context, prophets are men and women who reveal the divine will to humans, both individuals and communities, and are thus practitioners of a particular form of divination. Unlike those making observations in the heavens and on earth, and then interpreting them to predict future events or future courses of action, prophets are “mouthpieces” of the gods, communicating divine messages to human audiences. (ibid.)

While the AI prophets do not refer to any external divine source for their spirited talk, it can be argued that they speak from within a culture of the ‘technological sublime’, as described by David Nye (Reference Nye1994), but also out of the accumulated knowledge about technology that they have gained through extensive study and practice. This knowledge is, in a manner of speaking, an external source. It bleeds into how AI itself often becomes a source of endless power within the imaginary.

Productive ambivalence: technologies of dis/order

As discussed already, a curious mix of utopian and dystopian – techno-messianic and techno-catastrophic – visions for a future with AI pervades the contemporary moment. A corresponding ambivalence and ambiguity lie at the heart of the prophetic texts of the Hebrew Bible as well, where a prophecy can carry good tidings and deliver bad news at the same time (Scheuer, Reference Scheuer, Eidevall and Scheuer2011). Echoing this ambiguity, the apocalyptic AI imaginary invokes an ordered world – a bliss to come – but with a threat of chaos, disorder, and disaster hovering over it.

Debates on AI are pervaded by a polemic between two camps: the AI accelerationists and the AI doomers – that is, those who wish to speed up the race versus those who are concerned with the long-term survival of humanity and who worry about unfettered AGI (artificial general intelligence). The worldview shared by many in AI circles – effective altruism and longtermism, with roots in utilitarianism – forecasts an abstract future in which humans can survive and thrive with machines (cf. Crary, Reference Crary2023). Its proponents argue that to get there we need to pull the brakes. Individual actors often belong to both camps. Altman, for example, stated on Twitter in February of 2022:

The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel […] it sounds utopian […] We can build AGI [artificial general intelligence]. We can colonize space. We can get [nuclear] fusion to work and solar to mass scale. We can cure all human diseases. We can build new realities.

Yet only a year later, in May of 2023, he signed a one-sentence letterFootnote 15 together with dozens of AI researchers and business leaders, warning about the threat of an AI apocalypse: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’. In an analysis for CNN, Catherine Thorbecke commented on this ambiguity:

The stark warning was widely covered in the press, with some suggesting (https://www.cnn.com/2023/05/30/media/artificial-intelligence-warning-reliable-sources/index.html) it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.Footnote 16

One key example is Elon Musk himself, who argued, when interviewed by Tucker Carlson in April 2023, that there is a non-trivial probability that AI will lead to ‘civilization destruction’. Paradoxically, these apocalyptic narratives have not prevented Big Tech from continuing to work on AI. On the contrary, the stories seem to attract more rather than less investment in AI. Thorbecke concludes in a similar vein: ‘But he [Musk] still remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings by Microsoft and Google’. In the manner of creative-destructive capitalism writ large, it seems there is also money to be made from the apocalypse and its prevention or terror management, through other types of innovations:

Elon Musk’s Tesla kicked off deliveries for its long-awaited Cybertruck (https://www.usatoday.com/story/money/cars/2023/11/02/tesla-cybertruck-production-elon-musk-admits-challenges/71294041007) on Thursday.

First unveiled in 2019, the model is Tesla’s first truck. The vehicle looks like something straight from a post-apocalyptic movie, with what Musk described as a “bullet tough” stainless-steel frame and “basically rock-proof” armored glass.

“The apocalypse could come along at any moment, and here at Tesla we have the finest in Apocalypse technology,” CEO Musk joked on stage at the event held in Austin, Texas (https://www.statesman.com/story/business/technology/2023/11/30/elon-musk-tesla-cybertruck-delivery-event-specs-interior-cost-price-austin-giga-texas/71718179007/).Footnote 17

Indeed, as Elon Musk suggests in this report from USA Today from November 2023 (Schulz, Reference Schulz2023), these are ‘apocalyptic technologies’, built for the (post-)apocalypse and from the apocalyptic premediation of fear. Apocalyptic AI thus has other technological offshoots. The Cybertruck is a reassurance that we will not have been caught unprepared when the world ends due to the AI that Musk and his peers were compelled to build. This testifies to the ambivalent relationship between catastrophe and progress; in effect, the apocalypse can be seen as the ‘dark sibling of progress’ (Cassegård and Thörn, Reference Cassegård and Thörn2022, 53). As Alice Crary points out, the ideologies and secular religions of Silicon Valley have been

singularly successful at attracting rich backers to its project. In treating the economic arena to which these individuals owe their wealth as critically off limits, it positions them to look upon themselves, not as complicit in the arena’s injustices, but as singled out by their success in it to be world saviours. A deceitful narrative of selfless heroes riding to humanity’s rescue has proven ideologically effective… (2024, 54–55)

Heroic AI prophets also offer a diagnosis and a solution – a deterrent against future threats – by forming an emotive bond with those who are skeptical and afraid. The good and sound AI prophet sees, thinks, feels, shares, speaks, and shows the way for humanity. A vivid example of this is the words uttered by Sam Altman in Davos in 2024:

I have a lot of empathy for the general nervousness and discomfort of the world towards companies like us… We have our own nervousness, but we believe that we can manage through it and the only way to do that is to put the technology in the hands of people. […]

Let society and the technology co-evolve and sort of step by step with a very tight feedback loop and course correction, build these systems that deliver tremendous value while meeting safety requirements.

Altman appears here as a demotic ethicist, a caregiver who uses pastoral power and who is bent on protecting the populace by building safe AI. He is a prophet who wills good things.

In sum, in the quotes from the AI prophets we find a selective use of prophetic elements, chosen to meet the rhetorical needs of the AI intermediaries in a given situation. When indirectly invoking the memory of the biblical prophets, they stabilize the sense of disruption of order caused by the new technology, rendering the AI transformation of the world comprehensible and thereby less frightening. When speaking and acting as prophets, they convey reassurance, because if AI has prophets, it means that AI wants to communicate. And if AI communicates, there is hope for survival and prosperity. However small or elusive, this hope is enough to render even the most evident risks less threatening. Put differently, when invoking prophetic memory, AI intermediaries stabilize the experience of AI in human history. And yet this is only half of the story. We have also shown how these performances of memory thrive on a deep and in fact productive ambivalence of the AI object as at once fixed and inexorable and a disruptor, balancing themes of doom and salvation. As both order and disorder are embedded in the AI imaginary, the technology is rendered extremely powerful.

Conclusion

In this article, we have shown that prophetic memory constitutes an important building block of the powerful sociotechnical imaginaries and AI myths we live by today (see Cave, Dillon and Dihal (Reference Cave, Dillon and Dihal2020); Coeckelbergh et al., Reference Coeckelbergh, Tudor, Lagerkvist and Lindstamforthcoming). Through their seemingly omnipotent and world-transforming actions, as well as through their ‘seeing’ and forecasting ambitions, AI intermediaries exercise a practice of memory with roots in the prophetic tradition. Pastoral bros are today – despite their explicitly secular world view – producing an ambivalent AI imaginary replete with religious and apocalyptic language. This works performatively; the seeing and saying set something in motion. When AI prophets mediate AI to humans, their goal is to explain AI’s beneficial potential and inspire trust in AI. At the same time, they warn humans of the risks involved, reminding them that they are the lords of life on earth and should not cede that power to anyone else. By also forging identification between the makers of AI and ‘us’, in both fear and anticipation, AI seems today to be turning into a total medium – a deity of sorts – a creator, destroyer, and savior of worlds (Lagerkvist, Reference Lagerkvist2024).

By re-enacting the prophetic traditions, AI intermediaries evoke an implicit collective memory (as per Erll) of inherited history and shared moral and social values, in order to reinforce their credibility and claims. We have suggested that the purpose of these practices of prophetic memory (of both the past and today) is to create stability and predictability for a particular vision for the future. These memory practices are modes of collective future thought which work to create a sense of unity (we are all in this together regardless of religion, ethnicity, gender, and race), meaning (explaining what is going to happen and why), and identity (offering a perspective on what all this makes us into and how it defines us as humans) (Edelman and Ben Zvi, Reference Edelman and Ben Zvi2013). But we have also shown that there is a very important productive ambivalence at work in the paradoxical in/stability and dis/order within the AI imaginary. Prophetic memory may here both usher in disruption (a complete overthrowing of what went before) and offer a warning against it. Hence, apocalyptic premediation generates low- or even high-level anxieties while at the same time offering reassurances for troubled times.

In closing, we return to Lucy Suchman, who argues that ‘interventions in the field of AI controversies that fail to question and destabilize the figure of AI risk enabling its uncontroversial reproduction’ (Reference Suchman2023, 4). We hold that to question and destabilize this ambiguous fixture on the human horizon, we will be further served by bothering the political-religious dimensions of the hegemonic and ambivalent AI imaginary. As we hope to have demonstrated, there is a continued need for profound analyses of how the AI industry founds its power base (and gargantuan revenue) on the clout of prophetic memory, at a point in time of unprecedented techno-existential crisis in which many look for guidance and direction.

A final note on false and true prophecy. Because the prophets spoke of both doom and salvation, and because there were opposing views among the prophets on how the present was to be interpreted, there was a need to specify criteria for distinguishing between true and false prophets. At times, the possibility of knowing who was right and who was wrong about future events could be a matter of life and death. In the Hebrew Bible, there are therefore three main criteria for identifying a false prophet: a prophet who speaks on behalf of a ‘wrong’ deity, a prophet who speaks for the right deity but without being commissioned by that deity, and a prophet whose predictions prove to be false (Blenkinsopp, Reference Blenkinsopp1996, 136–137). Put differently, a false prophet is an individual without the authority to speak on behalf of the approved source of insight and knowledge, and an individual whose predictions do not come true. The AI intermediaries referred to in this article have authority based on the commonly approved source of insight and knowledge called ‘AI’. Hence, true AI prophets will be revealed only once their predictions come true. Or not.

Acknowledgements

This research is the outcome of synergies across three projects: the first is a study on apocalyptic AI imaginaries conducted by Amanda Lagerkvist within the research programme ‘At the End of the World: A Transdisciplinary Approach to the Apocalyptic Imaginary in the Past and the Present,’ funded by Riksbankens Jubileumsfond and headed by Professor Jayne Svenungsson at Lund University. The second is the WASP-HS funded project ‘AI Design Futures’ at Uppsala University. The work has further been made possible by a Core Fellowship at HCAS, The University of Helsinki, awarded to Professor Lagerkvist for 2025/26. The authors wish to thank the anonymous reviewers, Andrew Hoskins, Magnus Strand and Matilda Tudor for important comments on previous versions of this article.

Funding statement

This research has been made possible by funding from Riksbankens Jubileumsfond (RJ) (The Bank of Sweden) and it has also been partially supported by the Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.

Competing interests

The authors declare none.

Amanda Lagerkvist is Professor of Media and Communication Studies at Uppsala University and Core Fellow at HCAS, The University of Helsinki. A media philosopher and founder of existential media studies, her work spans digital memories, death online, intersections of biometrics, disability and selfhood, and ambivalent AI imaginaries of both futures and endings.

Blaženka Scheuer is an Associate Professor of Old Testament Exegesis at the Centre for Theology and Religious Studies, Lund University. Her research interests include the apocalyptic texts of the Hebrew Bible, as well as the relationship between religion and AI focusing on concepts of imago dei/hominis and personhood and the originary myths of Genesis.

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and WASP-HS Guest Professor at Uppsala University. His research interests include a relational approach to ethics and robotics; a processual philosophy of digital technology and time; and work on AI, political philosophy, and democracy.

Footnotes

1 Just like the ‘cultural intermediaries’ (described by Pierre Bourdieu, Reference Bourdieu1984) such as cultural workers and journalists involved in symbolic meaning production, goods and services in late modern consumer cultures, the ‘AI intermediaries’ are invested in both production of AI imaginaries as well as in the materialities of engineering.

2 Spanning different professions, these intermediaries today include, for example, the historian Yuval Noah Harari, Google engineer Ray Kurzweil, CEO of OpenAI Sam Altman, industry magnate Elon Musk, Nobel Prize laureate Geoffrey Hinton, billionaire and AI investor Peter Thiel, philosopher Nick Bostrom, physicist Max Tegmark, and AI engineer Eliezer Yudkowsky.

3 While the prophets of the Hebrew Bible had a particular impact on shaping the cultural memory of the societies in which they have been copied, interpreted, appropriated, and transmitted throughout centuries (Edelman and Ben Zvi, Reference Edelman and Ben Zvi2013) – and while the effects of these textual traditions have been to offer a sense of a shared past, collective experience, common identity, and communal hopes for the future – their status in the profoundly secular Silicon Valley culture has however, as we argue, been more implicit.

4 “Prophetic memory” is not an acknowledged concept in memory studies but has been used previously in a rather loose sense, for example by Ben Zvi, who employs it in ways that differ from ours. In the context of Hebrew Bible studies, Ben Zvi used it to define the way ancient prophetic figures were remembered in the writings of the learned Judahite men during the Persian period (539–333 bce). Ben Zvi argues that one can retrieve “the basic, general conceptual prototypes of prophecy and prophets that these literati shared among themselves, and about their social mindscape, which was underlying, generating, and reflecting itself in these memories” (Ben Zvi, Reference Ben Zvi2019, 1). Prophetic memory refers in these contexts to a range of features that these learned men thought defined prophets as a group. Prophetic memory is, as memory in general, always intertwined with its societal context and involves communication and social interaction. See also Krautwald, who uses the term to describe the practices of recall and revelation in South African commemoration of colonial genocide. Krautwald defines prophetic memory using a quote from Jennifer Wenzel as “remembrance of a moment that, viewed retrospectively, offers a glimpse of a future that bears upon the present” (Wenzel, Reference Wenzel2009, 125, cited in Krautwald, Reference Krautwald2023, 388). Wenzel’s notion, however, while not requiring an explicit (episodic) memory, involves a recognition of the past as coming to bear on the future from the vantage point of the present. Our definition emphasizes a pastness which is only implicitly recognized, while still having a bearing on the imaginary.

5 Interestingly, despite the explicit secular world view of many of the AI intermediaries, as Beth Singler has noted, “religious language runs wild in its ranks, and in how the media reports on it. There are AI ‘oracles’ and technology ‘evangelists’ of a future that’s yet to come, plus plenty of loose talk about angels, gods and the apocalypse” (Singler, Reference Singler2017, n.p.).

6 The concept of “apocalyptic AI” was coined by Robert M. Geraci (Reference Geraci2010), who discussed the “apocalyptic theology” present in the genre of popular science books authored by, for example, Hans Moravec and Kevin Warwick on robotics and Marvin Minsky, Ray Kurzweil and Hugo de Garis on AI. Positing AI as a “technology of enchantment,” he defined apocalyptic AI as a movement within this genre “that integrates the religious categories of Jewish and Christian apocalyptic traditions with scientific predictions based upon current technological developments” (ibid., 9). As opposed to the more recent sentiments of doom and disaster within the AI imaginary, which conjure imminent calamitous events and extinction, his definition of apocalypticism is focused on a sense of affective estrangement, followed by rebirth through magical transformation. Apocalypticism refers for Geraci to “1) a dualistic view of the world, which is 2) aggravated by a sense of alienation that can be resolved only through 3) the establishment of a radically transcendent new world that abolishes the dualism and requires 4) radically purified bodies for its inhabitants” (ibid.).

7 As we will briefly touch upon in this article, the role of political economy is key. AI prophecies are communicated through both legacy media and social media. Reliant on their political-economic structures which facilitate the prophetic mode, we suggest these are furthermore intersecting with and inciting new investments in AI.

8 Inspired by Robert M. Geraci’s (Reference Geraci2010) definition of “apocalyptic AI” as belonging to a particular genre of popular science books, we focused on popular and widely disseminated stories by the key agents in the field across different media forms. We chose to focus on those prophets that were widely present in the international press flow in 2023, and through snowballing we then ended up with a broader set of materials also reaching back into the past decade – including tweets, or statements communicated on social media or in the press by the AI prophets themselves in recent years – but also stretching into 2024. We gathered our materials through a combination of purposeful theoretical sampling (searching for articles and materials that combined AI+apocalypse, doomsday scenarios, end times) and intuitive critical-hermeneutic attention on texts that were expressive of the rhetorical strategies we chose to examine. The ambition is therefore not to offer a comprehensive analysis of all apocalyptic utterances that are current in the materials we have worked with, but to enable interpretative in-depth philosophical explorations of the phenomenon that can be a theme for further reflection.

9 Media, such as texts, have always played a role in prophecy. The modalities of mediated communication today are intensively tied to the prophetic mode. The AI prophets, just as the prophets of the past, seek to communicate their messages through the dominating and most attention-grabbing of mediated forms of communication.

10 It must be noted that this apocalypticism has however not always been explicitly anchored in the biblical tradition, which has instead operated by allusion, invocation or formula across a range of cultural and mediated forms (see also Walliss and Aston, Reference Walliss and Aston2011). Reenacting and reinventing prophetic memory is, as we show elsewhere, however also part of the “resacralization” of the apocalyptic imaginary today, implying both direct and indirect scriptural reference (Lagerkvist and Scheuer, Reference Lagerkvist, Scheuer, Svenungsson, Goldman, Dunér and Stripple2026).

11 The tendency to be projecting prophetic or evangelist qualities onto the agents of tech industries, has been addressed in previous scholarship (cf. Willim, Reference Willim2002; Valaskivi, Reference Valaskivi, Nynäs, Lassander and Utriainen2012, Reference Valaskivi, Bell, Gog, Simionca and Taylor2020; Singler, Reference Singler2024). Importantly, the pastoral in itself has an extended meaning in the execution of modern state power, which according to Foucault has theological grounds derived from a particular Christian image of power and governance. He saw pastoral power as a “fundamentally beneficiary power, as the duty of the pastor is to save his flock from harm” (Golder, Reference Golder2007, n.p.). The pastoral is today also a force in the domains of commerce and innovation, as Finnish media scholar, Katja Valaskivi (Reference Valaskivi, Nynäs, Lassander and Utriainen2012, Reference Valaskivi, Bell, Gog, Simionca and Taylor2020) has suggested. She traces this to a new religion she terms innovationism.

12 It is, however, beyond the scope of this article to properly investigate whether the gender bias of the Hebrew Bible prophets, and of prophetic memory itself, is reflected in the AI oracles of today.

References

Ahmed, S (2004) The Cultural Politics of Emotion. Edinburgh: Edinburgh University Press.
Ben Zvi, E (2019) Social Memory among the Literati of Yehud. Berlin/Boston: De Gruyter, pp. 109–133. https://doi.org/10.1515/9783110547146-005.
Blenkinsopp, J (1996) A History of Prophecy in Israel: Revised and Enlarged. Louisville: Westminster John Knox Press.
Bory, P, Natale, S and Katzenbach, C (2025) Strong and weak AI narratives: An analytical framework. AI & Society 40, 2107–2117. https://doi.org/10.1007/s00146-024-02087-8.
Bostrom, N (2014) Superintelligence. Oxford: Oxford University Press.
Bourdieu, P (1984) Distinction: A Social Critique of the Judgement of Taste. Cambridge, MA: Harvard University Press.
Cassegård, C and Thörn, H (2022) Post-Apocalyptic Environmentalism: The Green Movement in Times of Catastrophe. Berlin: Springer. https://doi.org/10.1007/978-3-031-13203-2.
Cave, S and Dihal, K (2023) Imagining AI: How the World Sees Intelligent Machines. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780192865366.001.0001.
Cave, S, Dillon, S and Dihal, K (2020) AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780198846666.001.0001.
Coeckelbergh, M (2022) The Political Philosophy of AI. Cambridge: Polity.
Coeckelbergh, M, Tudor, M, Lagerkvist, A and Lindstam, M (forthcoming) AI, existential anxiety, and responsibility: New myths for a liveable future.
Collins, J (1979) Semeia 14: Apocalypse: The Morphology of a Genre. Atlanta: The Society of Biblical Literature.
Collins, J (1989) The Apocalyptic Imagination: An Introduction to the Jewish Matrix of Christianity. New York: Crossroad.
Collins, J (2016) The Apocalyptic Imagination: An Introduction to Jewish Apocalyptic Literature. Grand Rapids: Eerdmans.
Conway, MA, Loveday, C and Cole, SN (2016) The remembering–imagining system. Memory Studies 9(3), 256–265. https://doi.org/10.1177/1750698016645231.
Crary, A (2023) The toxic ideology of longtermism. Radical Philosophy 2(14), 49–57.
DiTommaso, L (2014) Apocalypticism and popular culture. In Collins, JJ (ed), The Oxford Handbook of Apocalyptic Literature. Oxford: Oxford Academic, pp. 473–510. https://doi.org/10.1093/oxfordhb/9780199856497.013.028 (accessed 1 November 2024).
Edelman, DV and Ben Zvi, E (2013) Remembering Biblical Figures in the Late Persian and Early Hellenistic Periods: Social Memory and Imagination. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199664160.001.0001.
Erll, A (2022) The hidden power of implicit collective memory. Memory, Mind and Media 1, e14. https://doi.org/10.1017/mem.2022.7.
Geraci, RM (2010) Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. Oxford: Oxford University Press.
Golder, B (2007) Foucault and the genealogy of pastoral power. Radical Philosophy Review 10(2), 157–176. https://doi.org/10.5840/radphilrev20071026.
Grusin, RA (2004) Premediation. Criticism 46(1), 17–39. https://doi.org/10.1353/crt.2004.0030.
Grusin, RA (2018) Introduction. In Grusin, R (ed), After Extinction. Minneapolis: University of Minnesota Press, pp. vii–xix. https://doi.org/10.5749/j.ctt22nmbq0.
Harari, YN (2024) Nexus: A Brief History of Information Networks from the Stone Age to AI. London: Random House.
Hoffman, R (2022) AI for the next era: OpenAI’s Sam Altman on the new frontiers of AI. Greymatter, 13 September 2022. https://greylock.com/greymatter/sam-altman-ai-for-the-next-era/.
Hoskins, A (2024) AI and memory. Memory, Mind and Media 3, 1–21. https://doi.org/10.1017/mem.2024.16.
Hurley, J (2020) Infrastructures of Apocalypse: American Literature and the Nuclear Complex. Minneapolis: University of Minnesota Press. https://doi.org/10.5749/j.ctv16t6mx4.
Immerwahr, D (2024) Yuval Noah Harari’s apocalyptic vision. The Atlantic, October 2024. https://www.theatlantic.com/magazine/archive/2024/10/yuval-noah-harari-nexus-book/679572/.
Jasanoff, S and Kim, S-H (2015) Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226276663.001.0001.
Krautwald, F (2023) The past will set you free: Prophetic memory in twentieth-century Herero religious thought. The International Journal of African Historical Studies 56(3).
Kurzweil, R (2024) The Singularity Is Nearer. New York: Penguin Books.
Lagerkvist, A (2020) Digital limit situations: Anticipatory media beyond ‘the new AI era’. Journal of Digital Social Research 2(3), 16–41. https://doi.org/10.33621/jdsr.v2i3.55.
Lagerkvist, A (2024) Yearning for a you: Faith, doubt and relational expectancy in existential communication with chatbots in a world on edge. MedieKultur: Journal of Media and Communication Research 40(76), 10–30. https://doi.org/10.7146/mk.v40i76.141132.
Lagerkvist, A and Scheuer, B (2026) Worldmaking words: AI prophets and the resacralization of the apocalyptic imaginary. In Svenungsson, J, Goldman, AJ, Dunér, D and Stripple, J (eds), At the End of the World: A Transdisciplinary Approach to the Apocalyptic Imaginary in the Past and Present. Berlin: De Gruyter.
Lamont, T (2024) ‘Humanity’s remaining timeline? It looks more like five years than 50’: Meet the neo-Luddites warning of an AI apocalypse. The Guardian. Available at https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse (accessed 17 February 2024).
Landsberg, A (2004) Prosthetic Memory: The Transformation of American Remembrance in the Age of Mass Culture. New York: Columbia University Press.
Lindgren, S (2023) Handbook of Critical Studies of Artificial Intelligence. London: Edward Elgar. https://doi.org/10.4337/9781803928562.
Lundbom, J (2016) Prophets in the Hebrew Bible. Oxford Research Encyclopedia of Religion. https://oxfordre.com/religion/view/10.1093/acrefore/9780199340378.001.0001/acrefore-9780199340378-e-109.
McKie, R (2023) Apocalypse not now? AI’s benefits may yet outweigh its very real dangers. The Guardian. https://www.theguardian.com/technology/2023/jul/09/ai-artificial-intelligence-dangers-benefits-cambridge-university (accessed 9 July 2023).
Nissinen, M (2017) Constructing prophetic divination. In Ancient Prophecy: Near Eastern, Biblical, and Greek Perspectives. Oxford: Oxford Academic. https://doi.org/10.1093/oso/9780198808558.003.0001 (accessed 14 October 2024).
Nye, D (1994) American Technological Sublime. Cambridge, MA: MIT Press.
Poole, S (2024) Nexus by Yuval Noah Harari review – end of days? The Guardian. https://www.theguardian.com/books/article/2024/sep/11/nexus-by-yuval-noah-harari-review-the-ai-apocalypse (accessed 11 September 2024).
Scheuer, B (2011) The days of Immanuel: Good tidings or bad news? In Eidevall, G and Scheuer, B (eds), Enigmas and Images: Studies in Honor of Tryggve N.D. Mettinger. Winona Lake: Eisenbrauns, pp. 130–146.
Schulz, B (2023) What we learned from the Tesla Cybertruck delivery event about price, range and more. USA TODAY. https://eu.usatoday.com/story/money/cars/2023/11/30/tesla-cybertruck-event-livestream-how-to-watch/71754394007/.
Singler, B (2017) fAIth: The most avid believers in artificial intelligence are aggressively secular – yet their language is eerily religious. Why? Aeon. Available at https://aeon.co/essays/why-is-the-language-of-transhumanists-and-religion-so-similar.
Singler, B (2024) Religion and Artificial Intelligence: An Introduction. London: Routledge. https://doi.org/10.4324/9781003256113.
Stümer, J (2024) Introduction: Understanding apocalyptic transformation. In Stümer, J and Dunn, M (eds), Worlds Ending. Ending Worlds: Understanding Apocalyptic Transformation. Berlin: De Gruyter, pp. 1–18. https://doi.org/10.1515/9783110787009-002.
Suchman, L (2023) The uncontroversial ‘thingness’ of AI. Big Data & Society 10(2). https://doi.org/10.1177/20539517231206794.
Szpunar, PM and Szpunar, KK (2015) Collective future thought: Concept, function, and implications for collective memory studies. Memory Studies 9(4), 376–389. https://doi.org/10.1177/1750698015615660.
Taylor, C (2023) ‘Sapiens’ author says AI is an alien threat that could wipe us out: ‘Instead of coming from outer space, it’s coming from California’. Fortune.com. Available at https://fortune.com/2023/09/12/sapiens-author-yuval-noah-harari-ai-alien-threat-wipe-out-humanity-elon-musk-steve-wozniak-risk-cogx-festival/ (accessed 12 September 2023).
Tenenboim-Weinblatt, K (2013) Bridging collective memories and public agendas: Toward a theory of mediated prospective memory. Communication Theory 23(2), 91–111. https://doi.org/10.1111/comt.12006.
Valaskivi, K (2012) Dimensions of innovationism. In Nynäs, P, Lassander, M and Utriainen, T (eds), Post-Secular Society. Piscataway: Transaction, pp. 129–156.
Valaskivi, K (2020) The contemporary faith of innovationism. In Bell, E, Gog, S, Simionca, A and Taylor, S (eds), Spirituality, Organisation and Neoliberalism: Understanding Lived Experiences. Cheltenham: Edward Elgar, pp. 171–193.
Walliss, J and Aston, J (2011) Doomsday America: The pessimistic turn of post-9/11 apocalyptic cinema. Journal of Religion and Popular Culture 23(1), 53–64. https://doi.org/10.3138/jrpc.23.1.53.
Wenzel, J (2009) Bulletproof: Afterlives of Anticolonial Prophecy in South Africa and Beyond. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226893495.001.0001.
Willim, R (2002) Framtid.Nu: Flyt och friktion i ett snabbt företag. Diss. Lund: Lund University.
Ge, X, Xu, C, Misaki, D, Markus, HR and Tsai, JL (2024) How culture shapes what people want from AI. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. Honolulu: Association for Computing Machinery, pp. 1–15.