1 Introduction: Patterns behind Market Prices, an Old Obsession
John Maynard Keynes, the renowned economist, was also an outstanding mathematician – long before mathematical training became the norm in economics. Between the 1920s and 1930s, he amassed a mountain of data on securities trading in capital markets and set himself the task of identifying patterns among them. He believed there was a hidden law governing the movement of stock prices, a sentiment shared by many economists grounded in the classical tradition. However, to his disappointment, he found that the movements were almost entirely random. The behaviour of markets could not be predicted precisely. This apparently prosaic discovery led to one of the greatest revolutions in twentieth-century economics; non-probabilistic uncertainty was elevated to the centre of economic analysis (Skidelsky, 2005). The simple yet powerful idea is that, since the future cannot be anticipated, uncertainty conditions economic agents to hold money instead of making spending decisions. Uncertainty becomes the primary reason for fluctuations in investment and liquidity preference, in a game driven by agents’ expectations.1
Nearly 100 years later, the penetration of artificial intelligence (AI) into financial markets continues to echo this same issue. Despite abundant critiques from non-conventional economics and the social sciences, the hidden patterns of markets and their mathematical regularities continue to be obsessively sought, often without success. Increasingly sophisticated pricing and forecasting models have been developed only to prove limited and fail again, despite their advances, as the tragic cases of Long-Term Capital Management (LTCM) in 1998 (Quiggin, 2010, pp. 56–58) and the great financial crisis of 2008 (Tooze, 2019) didactically demonstrate.2
But what explains this relentless fight against contingency, this inglorious quest for control, at any cost, over unpredictability in the markets? Why was the recognition of non-probabilistic uncertainty not enough to appease those who continue this search? And why was the systemic coordination of markets with the aid of the state, something that achieved relative success during some decades of the twentieth century, also insufficient? This chapter contextualizes these questions within our digital age, exploring AI as a financial infrastructure. The objective is to underscore the persistent connections it has with a volatile, crisis-prone system, challenging the commonly espoused notion of AI as a stabilizing force in finance.
Since the 2000s, the data-centric digital circuits in global finance have increasingly steered financial markets towards a comprehensive algorithmic mode of management (Campbell-Verduyn, Goguen, and Porter, 2017). Enthusiastically, AI is said to be the new frontier of this process. Following the meso-level ‘infrastructural gaze’ proposed by Westermeier, Campbell-Verduyn, and Brandl (this volume),3 this chapter interrogates the penetration of AI in capital markets as a combination of both continuity and change in finance. In doing so, it sheds light on the complexity arising from the interconnections of its micro and macro dimensions. Building on the Science and Technology Studies (STS) definition of infrastructures as evolving socio-technical systems that combine human and non-human elements (Bernards and Campbell-Verduyn, 2019), and on the political economy of digitalized finance (Paraná, 2019), I reflect on AI as an ‘emergent’ financial infrastructure.
The increasingly infrastructural dimension of AI in capital markets derives, first, from an evolution out of algorithmic trading and algorithmic financial governance and, second, from the rise of AI as a ‘general-purpose technology’ within the financial domain. Following this rationale, I draw on insights from Danielsson, Macrae, and Uthemann (2022) and Borch (2022) regarding AI and systemic risks, hypothesizing about the consequences of the ‘infrastructuralization’ of AI in capital markets, considering the micro–macro tension typical of capital accumulation, inequality, and crisis dynamics.
By ‘infrastructuralization’, that is, ‘becoming infrastructural’, I understand the remixing or re-establishing of socio-technical relations that position a combination of processes and artefacts as crucial for economic reproduction. Importantly, infrastructures constantly (re-)emerge, particularly during stress events (Scholz, Schauer, and Latzenhofer, 2022). In depicting financial AI as ‘emergent’, I want to highlight that we should not perceive it as entirely ‘new’. Instead, it is linked to a longer history of developments in big data, algorithms, and even earlier forms of digitalization, all indicating this continuous ‘re-emergence’. By focusing on the sedimentation of somewhat but not entirely different socio-technical relations, an infrastructure perspective enables us to comprehend AI within a broader and more nuanced historical context.
Pursuing this argument, the chapter first delves into an overview of AI’s current applications in financial markets. At this point, it problematizes AI as a possible emergent financial infrastructure. It then provides historical background on the dynamics and underlying principles of digitalized finance (Paraná, 2019), elaborating on the relations of finance and technology more broadly, and on the uptake of models, computation, and algorithms in the financial domain in particular. The chapter concludes by drawing out the consequences (unpredictability, operational inefficiency, complexity, further concentration) and the (systemic) risks arising from such an emergence of AI as a ‘new’ financial infrastructure, particularly those related to biases in data and data commodification, the lack of explainability of underlying models, algorithmic collusion, and network effects. It does so by exploring the micro–macro tension, and the fallacy of composition, in financial AI.
2 AI as a Financial Infrastructure?
In addressing the infrastructural aspect of financial AI, it is necessary to understand that the effects it produces – be they good or bad – do not occur solely through the development and dissemination of certain technical artefacts. Here, we can start by unpacking the concept of financial infrastructure, brought forth by the Social Studies of Finance (SSF) literature in dialogue with that of STS and extensively developed throughout this volume.
Infrastructure can be a slippery and somewhat controversial concept (Silvast and Virtanen, in press). For the most part, infrastructures are viewed as socio-technical relations and networks that enable or disable functions/actions (such as banking, payments, insurance, and trading), or, as one definition puts it, ‘systems through which basic but crucial enabling functions are carried out, but that tend to be taken for granted and assumed’ (Bernards and Campbell-Verduyn, 2019, p. 776). This means that infrastructures include not only physical technologies but also protocols, standards, and ideas (Edwards, 2003), encompassing both ‘hardware’ and ‘software’. Infrastructures bridge different scales of action, from micro to macro. This relational understanding of infrastructure focuses attention on multiple forms of agency in key choices and possibilities in both stable and unstable times. Building from Edwards (2003), Star (1999), and Hanseth and Monteiro (1998), as compiled by Bernards and Campbell-Verduyn (2019) and Silvast and Virtanen (in press), STS-inspired definitions of infrastructure point to the following features: facilitation (infrastructures have a supporting function), openness (they are shared by large communities and tend to be open to new entrants), durability (they tend to persist, evolving and extending, over time), centrality (they establish how core functions are deployed), and some level of obscurity (as they are constituted of ecologies of networks operating in the background, they can seem like black boxes to users).
AI surely exhibits some of these characteristics when it comes to financial markets – such as facilitation, openness, and obscurity – while it remains to be seen whether it will fully meet other strategic requirements, such as durability and centrality. However, if we take AI (mostly machine learning (ML)) as a second-generation development of automated trading, as argued by Borch (2021),4 it is on the verge of reaching such a point of no return, with quasi-coercive adoption by various finance actors. Be this the case or not, the intellectual exercise I propose here, following the insights of Danielsson, Macrae, and Uthemann (2022), suggests that, if and once AI becomes pervasive enough to constitute such an infrastructure for capital markets, as current indications imply, the outcomes should be the ones I delineate below. For that, my infrastructural gaze, while maintaining an eye on the details underpinning macro-level issues, focuses on emergent systemic concerns.
AI engines have been increasingly implemented in markets in recent years. This implementation takes place through different techniques and in different application sectors. The objective, in all cases, is to respond to the financial market factors popularized through the acronym VUCA – volatility, uncertainty, complexity, and ambiguity. The expansion of production of and access to financial data of all kinds, coming from different sources and locations, the increase in computational-processing capacity, the expansion of the reach and speed of connectivity, the substantial profits of funds and financial companies based on AI, and intense competition in this information-sensitive sector have led AI to appear as a general-purpose pervasive technology in the functioning of markets,5 with different levels and forms of application. In Srnicek’s (2019) terms, ‘AI can become a utility, like electricity, like gas.’ AI, in sum, is being pitched as a potential ‘general condition of production’ (Dyer-Witheford, Kjosen, and Steinhoff, 2019, p. 31) for financial activities; in other words, a financial infrastructure.
Among the most used AI techniques are expert systems, genetic algorithms, fuzzy logic, neural networks, and ML (Milana and Ashta, 2021). These techniques can be used in combination depending on the application and its objectives. Among these, the most used is ML; and while deep learning has garnered considerable attention in recent years, the predominant AI technology within this context remains supervised learning (Li et al., 2023). Financial forecasting directed at financial trading is the main subfield of AI used in finance. Although there is a growing body of literature on the promises of generative AI in finance (Ali and Aysan, 2023; Cao and Zhai, 2023; Dowling and Lucey, 2023; Lopez-Lira and Tang, 2023), its impact is still hard to discern and difficult to evaluate at this point.6 In any case, financial AI mostly consists of ML techniques applied in different domains.
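To make this concrete, the sketch below illustrates the kind of supervised-learning set-up described above: a classifier trained to predict the direction of the next price move from recent returns. It is a minimal, hypothetical example of my own, using synthetic data rather than any real market feed or proprietary model; on pure noise, out-of-sample accuracy hovers around chance, which is precisely the pattern-hunting problem discussed in Section 1.

```python
# Minimal illustrative sketch (not from the chapter): supervised learning applied
# to financial forecasting, here on synthetic returns so the example is self-contained.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=2_000)   # hypothetical daily returns (pure noise)

# Features: the five previous returns; label: is the next return positive?
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("out-of-sample accuracy:", model.score(X_test, y_test))   # close to 0.5 on noise
```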
Among the most relevant practical applications are credit scoring and rating, risk management, forecasting, fraud detection, accounting, financial advisory services, and financial trading (including news trading). These applications span different agents, such as banks, financial institutions, fintechs, funds, and traders dealing in various financial instruments.
Importantly, from an infrastructural perspective, the so-called intelligent layer of AI is only its surface layer (Pasquinelli and Joler, 2021). Such a layer is, nonetheless, of enormous importance. Proprietary models are kept under lock and key by financial market agents. However, these sophisticated models could do little or nothing without global high-speed connectivity, high computational-processing capacity, and an immeasurable amount of data produced, received, and catalogued from all parts. This is what ML, the still-dominant model for applying AI in financial markets, fundamentally requires: data/information, models/software, and fast, hardware-intensive processing. Once strategically combined, these elements are the true game-changers for the dominant AI application technique in the markets.
In this regard, it is necessary to highlight that such applications are overdetermined by their materialities (Crawford, 2021; Steinhoff, 2021). They mobilize a large number of natural resources, human labour, and a whole complex of technical, institutional, political, and geopolitical factors that, as conditions of possibility, tend to go unnoticed (Dyer-Witheford, Kjosen, and Steinhoff, 2019; Pasquale, 2020). Thus, in a sort of fractal relation, thinking about the infrastructures behind AI is critical to thinking about AI as infrastructure. This involves understanding why certain forms of automation were developed to the detriment of others and what social mobilization underpins such huge efforts in financial markets. However, this is only possible if we manage to bring to the analysis the different spatialities and temporalities expressed in the varied scales within which these processes are constructed (Corpataux and Crevoisier, 2016). Scalability – both geographical scales and scales of data and information processes within it – is fundamental to understanding not only what an AI system is, how it is produced, and how it works, but also the potential aggregate effects of its widespread application in a given sector. Scale is about power (Crawford, 2021): it is a product of power relations, and it entails and frames power dynamics.
That considered, stating that AI is being established as a financial infrastructure leads to new regulatory, institutional, and political disputes, along with emergent problems, tensions, and risks (Borch, 2022). Examining it from a meso-level infrastructural perspective enables us to focus on the emergent scale involving interactions and material aspects connecting individual agents to the broader financial dynamics and their collective relational outcomes. Importantly, this lens pushes us to see ‘emergence’ over a far longer period – way before generative AI and ChatGPT, for example – as I discuss in Section 3.
3 The Computerization of Markets and Its Consequences
Although the widespread adoption of AI in financial markets may seem to indicate a new financial infrastructure combining big data, deep computation, and ML models, these ‘new’ infrastructures are built upon and broadly reproduce ‘older’ ones – models, computation, and automation in general (Muniesa, 2003, 2007; Pardo-Guerra, 2010; MacKenzie, 2015). More than supporting the STS accounts of infrastructures as never ‘settled’ and always emergent (Star, 1999; Edwards et al., 2009), this complex combination of continuity and change merits an investigation into the underlying logic this dynamic entails and, particularly, the potential risks arising from its growing complexity. This section contemplates the deeper, longer links of financial AI with the ‘installed bases’ of what existed previously.
As a departure point, it is worth noting that from the 1960s onwards, with the continuous advancement of computation, the search for anticipation, control, and parameterization within financial markets that was mentioned in Section 1 has become increasingly reliant on technological development – as also occurred in other dimensions of social life.
Under capitalism, technological development must respond to competition in the market. This is especially valid for the financial industry, defined by the fluidity of its main ‘commodity’ – capital itself in its financial and monetary form and, more particularly, ‘fictitious capital’ (Mollo, Dourado, and Paraná, 2022). Such competition is not only intense but, above all, fast and, more than that, ultra-sensitive to information. That is why financial markets, in many cases, not only anticipate but prefigure the development and implementation of a special type of technology: information and communication technologies (ICTs). As finance attempts to anticipate futures, it lays the ground for presents. Therefore, anticipating other fields of application by a few decades, global-reaching ICTs spread and quickly imposed themselves as the basic infrastructure for the functioning of financial markets from the 1970s onwards. And with that, a world of new–old promises opened: communication, machines, the rapid exchange of information, and their automatic processing could finally help us find, through the discovery of hidden patterns in price fluctuations, not only the path to the most profitable investment strategies but, in general, greater control, transparency, predictability, rationality, and efficiency in the markets. It is as if Keynes did not solve the problem of non-probabilistic uncertainty in markets simply because he had no access to computers, big data, and advances in modern mathematics. For some, it would only be a matter of time until these became available. The old Western techno-fetishism was updated, now under the banner of high-tech finance.
Along these lines, the prevailing perspective among major market players is a techno-solutionist approach to adopting these technologies (Campbell-Verduyn and Lenglet, 2023). This vision regards technology as neutral and external, almost magical, without acknowledging its co-construction within various social dimensions. This is a stance challenged by extensive scholarship within STS and SSF (see Pinzur, this volume). Thus, confirming the thesis of co-evolution and co-determination between finance, technology, and society, an evolved, ‘updated’ global financial market emerged, accelerating from the 1980s onwards. This would be an electronic market, globally interconnected, trading in real time and increasingly mediated algorithmically. In particular, algorithmic trading and high-frequency trading (MacKenzie et al., 2012; MacKenzie, 2015) emerge within what I have defined as ‘digitalized finance’, the new globally integrated socio-technical management system for the valorization of financial capital (Paraná, 2019). This occurs through the deployment of cutting-edge automated technologies, which accelerate the compression of space-time flows to obtain short-term liquid financial gains on a global scale.
Finance and technology have long been interconnected (Pérez, 2002; de Goede, 2005; Ajji, 2020). Technology is not merely an external infrastructure but a fundamental component that plays a crucial role in market dynamics (Knorr Cetina and Bruegger, 2002a, 2002b; Pardo-Guerra, 2013). In this regard, a relevant fact that has largely gone unnoticed is that the implementation of mathematical models and trading algorithms in financial markets anticipates by a few decades their adoption in other fields of application that have become more popular, such as search engines and social media. It is possible to say that algorithmic governance (Zetzsche et al., 2018) goes from Wall Street to Silicon Valley, and not the other way around, as is commonly thought. More particularly, as Joque (2022) indicated, the Bayesian and inductivist logic of neural networks makes them particularly suited to the drives of financial speculation. This anticipation demonstrates another important aspect: if finance operates in an increasingly technologized way, contemporary technological development is carried out more and more by mimicking the modus operandi of finance itself and its form of governance – leveraged and speculative, short-term, fast, individualizing, deregulating (Srnicek, 2016). Therefore, it is not by chance that financial companies increasingly resemble technology companies, and technology companies increasingly resemble financial institutions. The rise in prominence of the financial dimension of economies and the extensive digitalization of social life are mutually attractive processes in contemporary capitalism.
At this point, particularly from the mid-2000s, we can already observe the consolidation of some developments in these dynamics, which have been unfolding for several decades. Alongside the expansion of economic concentration at different levels and the additional difficulties in regulating the markets that these processes entail, the operational complexity of the markets is increasing. In my previous work (Paraná, 2019), based on a comparative analysis of American and Brazilian financial markets, I identified what I have come to define as the ‘spiral of complexity of digitalized finance’. This spiral (Figure 31.1) is a feedback process that fuels the growing complexity of financial markets, to the point that even many financial operators – let alone the regulators – do not exactly know what is happening in the markets. ‘Flash crashes’ are a good example of how hard it can be to explain these phenomena. In my understanding, AI amplifies this process. The spiral illustrates how completely rational individual decisions can produce ‘irrational’ social outcomes. There are three basic steps to it.
First, in highly digitalized finance, the search for unexploited financial gains incentivizes leading players to invent, deploy, and refine the means needed to overcome the technological or regulatory obstacles that stand in the way of speculation. Secondly, the wide adoption of these new technologies leads to the emergence of new institutional configurations, new modes of action, and new operating dynamics; as a result, markets are reconfigured, partly in response to the political and social conflicts that erupt. Finally, the ensuing emergence of a new institutional and technological environment encourages the development and implementation of even newer technical solutions. This cycle of growing complexity comes to entangle investors, regulators, tech companies, and other (often unwilling) participants. New systemic risks and instabilities proliferate as a result. This chapter explores it in the case of financial AI. Although the risks arising from pervasive AI adoption are new, I argue that the infrastructuralization of AI in finance, as defined earlier in this chapter, occurs along the same lines. This process underpins the spreading of AI through financial markets.
One of the main impacts that digital technologies (AI included) have on the market is to shorten space-time flows, that is, they make things faster and nearer.7 This increases the number and volume of trade operations and transactions. The digital systems behind those trades and transactions are enormously complex; they operate at immense speed and have tremendous learning and adaptive abilities. They make the job of regulating markets far more challenging (Pardo-Guerra, 2012; Arnoldi, 2016); there is too much systemic opacity and uncertainty involved. As regulators struggle to keep up, further concentration and centralization of capital within and between markets tend to occur. Those with more advanced technologies tend to enjoy higher competitive advantages and higher profits, leading therefore to centralization.
And here emerges another tension. As the global financial system became, in a sense, more unstable, opaque, and unpredictable – not least because of high-frequency trading and other innovations – it somehow needed to ground its operations in something that promised certainty, transparency, and predictability. This is where we find the new frontiers of this process: blockchain and, above all, AI. However, the same actors who seek stability and transparency through AI are the ones generating instability and opaqueness.
In some sense, AI’s promises represent the actualization of an old paradox. While financiers need to amass information to tame uncertainty, they themselves want to live in a low-information environment, avoiding the scrutiny of consumers, citizens, governments, and the media, at least as far as their own profit-making strategies are concerned. So, businesses always want almost full transparency from everyone but themselves. Their competitors – as well as governments and technocrats – have similar goals, producing, as a combined result, uncertain, risky, and rather opaque scenarios.
The struggle between these two tendencies – their particular combination in different contexts – is the key to understanding information governance in financial markets. It reflects the fundamental capitalist contradiction, that is, that production is a collective social process which is nevertheless privately conducted and controlled (Marx, 1992 [1867]). Since the underlying contradiction can never be properly resolved, all that these disputes can do is somehow precariously accommodate these self-cancelling tendencies.
Therefore, such a ‘spiral of complexity’ entails not only a rather uncontrollable dynamic but also a highly confusing one: few experts and operators can navigate it well (Lange, 2016; Lange, Lenglet, and Seyfert, 2019). On the other hand, the growing opacity of the markets becomes an element of control, leaving the management of strategic information in the hands of a closed elite of financial actors. Governments and regulators, but also small investors and those acting on the edges, become increasingly dependent on large funds, brokers, and financial institutions to manage their investments. This is usually done with very little transparency. And all of this, again, occurs in the name of efficiency, stability, and predictability. Opacity is the ‘other’ – the truth – of the quest for financial transparency. It is in this scenario that AI makes its way as an emergent financial infrastructure.
4 The Composition Fallacy in Financial AI
After this necessary historical contextualization, we can now return to the question, presented in Section 1, of why the obsession with controlling contingency and unpredictability, the search for patterns, and the pursuit of access to as much data and information as possible have not only persisted unabated but also intensified. The answer lies in the individualized, albeit generalized, struggle not only for profit but for maximum profit as the foundation of survival against the competitive dynamics in financial markets. This finding may seem rather prosaic, but it helps to reveal another central problem for a reflection on the advance of AI in the markets: the problem of the ‘fallacy of composition’, particularly visible from a meso-level infrastructural gaze. The fallacy of composition indicates that the search for control to obtain growing profit generates, along with more profit, a greater lack of control. The insertion of AI in this process both reflects and amplifies it.
The principle of the fallacy of composition, extensively explored in economics,8 deals with the non-neutrality of aggregation and aggregates. In simple terms, the principle states that what is valid for the parts may not necessarily be valid for the whole. In short, the whole tends to be not only quantitatively but also qualitatively distinct from the parts. It is a logical dilemma related to the classic epistemological problem of inductivism versus deductivism, between empiricism and rationalism, in the philosophy of knowledge. In any case, the principle of the fallacy of composition helps us understand the critical phenomena of the digital economy. It explains, in part, the occurrence of the so-called network and platform effects, the sector’s tendency towards the ‘winner takes all’ model, and even the logic of ‘too big to fail’, which tends to encompass some of the main players in the market.
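As a purely illustrative sketch of the ‘winner takes all’ dynamic just mentioned, the toy simulation below (my own construction, not drawn from the cited literature) lets a small, random initial advantage in ‘capability’ compound through increasing returns to scale; the parameters and the squared-weight rule are assumptions chosen only to make the feedback visible.

```python
# Toy positive-feedback model of 'winner takes all' (illustrative assumptions only):
# each round's gain goes to one firm, chosen with probability rising superlinearly
# in its accumulated capability, so early luck compounds into dominance.
import numpy as np

rng = np.random.default_rng(3)
capability = 1.0 + rng.normal(0, 0.01, 20)   # 20 firms, nearly identical at the start

for _ in range(2_000):
    weights = capability ** 2                          # increasing returns to scale
    winner = rng.choice(20, p=weights / weights.sum())
    capability[winner] += 1.0                          # the round's gain is reinvested

shares = capability / capability.sum()
print("largest firm's share:", round(float(shares.max()), 2))
print("top three firms' share:", round(float(np.sort(shares)[-3:].sum()), 2))
```

The point is not the specific numbers but the structure: identical, individually sensible reinvestment rules aggregate into a concentrated market, which is precisely what the fallacy of composition warns about.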
This leads to my core argument that the penetration of AI in capital markets has markedly different effects in the micro- and macroeconomic, or micro- and macro-social, dimensions, which the meso focus of infrastructures helps us to see better. In the micro dimension, it is possible to concede that the advancement of AI actually delivers on some of its promises for some agents, particularly the largest and best-positioned ones: greater profit, transparency, predictability, and efficiency. It is also possible to concede that, at the same level, AI contributes to accounting, supervision, and risk management in particular contexts. It is as if some agents – the most notorious examples being the great hedge fund Renaissance Technologies of the legendary James Simons and Robert Mercer and, more recently, BlackRock – had, with the help of powerful computers, big data, and AI, finally managed to solve the problem of numerical signals hidden in the markets to guide the best investment strategies.
However, this result is not verified in the topology of the aggregates – the level that regulation and systemic risk assessment typically focus on. Again, scale matters. At the macro level, the sign seems to reverse, and we observe an increase in risk, unpredictability, and perhaps operational inefficiency in the markets – an effect that, as mentioned before, is accompanied by increasing concentration and inequality.9 In Section 5 I unpack this argument, discussing some of the risks and problems of implementing AI in financial markets. This argument, it is worth noting, can only be built from an infrastructural perspective, in the expanded and systemic sense of the term – for both financial markets and AI.
5 Risks, Limits, and Challenges to Financial AI
These problems stem from structural limitations of AI that are already widely known: the problem of biases in the data (Crawford, 2017; Eubanks, 2018), sometimes amplified by these systems, and the lack of explainability of their underlying models. To these, we can add the excessive trust placed in artefacts and technical systems as exogenous and neutral, supposedly free from human errors and emotions – free from the so-called human factor. These problems are interrelated in a complex way in the markets; that is, ‘certain systemic conditions in markets can allow individual firms’ high-reliability practices to exacerbate market instability, rather than reduce it’ (Min and Borch, 2022, p. 277).
Conditioning these limitations, we find the so-called problem of causality (Pearl and Mackenzie, 2018) – the fact that these systems ‘learn’ and operate on the basis of data patterns and correlations, not causality. These are systems that cannot give causal explanations for their actions, despite finding strong correlations and high accuracy rates. This relates to explainability (Bracke et al., 2019), but goes beyond it. The absence of causal articulation removes some limitations, but, above all, it removes some of the advantages of human thought grounded in human language. Unlike machines, humans reason by combining emotions, rules, moral precepts, philosophical assumptions, flexible adaptive objectives, and a good dose of error to elaborate theories and causal explanations for their actions. This gives us not only the most probable explanations obtained from statistical correlations but also explanations that may be highly improbable and still correct, such as the theory of gravitation or the theory of relativity (Chomsky, Roberts, and Watumull, 2023). Given these limits, the interactional logic of markets (MacKenzie, 2019) poses a great challenge to financial AI. As Min and Borch (2022, p. 277) demonstrate, ‘automated markets are characterized by tight coupling and complex interactions, which render them prone to large-scale technological accidents’.
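The correlation-versus-causation point can be made concrete with a small, hypothetical sketch (mine, not from the cited authors): a hidden factor drives both a ‘signal’ and the return, so a model trained on observational data predicts well, yet the fit collapses as soon as the signal is set independently of the hidden driver, which is what an intervention, or a structural change in the market, amounts to.

```python
# Toy illustration of correlation without causation: a hidden common driver makes
# the 'signal' look predictive until the spurious link is broken by intervention.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
hidden = rng.normal(size=5_000)                     # unobserved common driver
signal = hidden + rng.normal(scale=0.1, size=5_000)
ret = hidden + rng.normal(scale=0.1, size=5_000)    # return caused by 'hidden', not by 'signal'

model = LinearRegression().fit(signal.reshape(-1, 1), ret)
print("R^2 under observation :", model.score(signal.reshape(-1, 1), ret))        # high

# Setting the signal independently of 'hidden' destroys the fit.
forced_signal = rng.normal(size=5_000)
print("R^2 under intervention:", model.score(forced_signal.reshape(-1, 1), ret))  # near zero or negative
```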
Hypothesizing the role of AI as a central banker, Danielsson and colleagues (2022) present a compelling argument for why AI can potentially undermine financial stability by ‘creating new tail risks and amplifying existing ones due to procyclicality, unknowable unknowns, the need for trust, and optimization against the system’ (p. 1), hence ‘increasing systemic risk’.10 While they focus on financial supervision (an imagined ‘Bank of England Bot’), their rationale can be extended to the wider penetration of AI in other financial domains, in our terms, its broader dissemination as an emergent financial infrastructure. Their sharp focus on hidden logic, complex dynamics, and aggregation problems demarcates a productive contrast to the sometimes circular and repetitive micro perspectives dominant in technical literature.
Expanding upon their insights, the first challenge confronting financial AI emanates from the reciprocal influence wherein economic agents responding to AI concurrently impact the system. The utility of AI hinges on its structural alignment with the task at hand: it is best applied to problems involving a single agent with fixed objectives and rules within a relatively predetermined space of action. Deviating from this idealized model introduces heightened complexities. Economic agents’ decisions are substantially contingent upon the environmental and infrastructural framework within which they operate. Consequently, alterations in this milieu, instigated by the actions of AI, prompt agents to adjust the parameters governing their decisions. The behavioural responses inferred by the AI engine from historical data are contextual and may falter if the engine attempts to exploit them for control purposes. This nuanced interplay extends its ramifications to elements concerning the interrelationship between market agents and political systems. In essence, deploying a machine with predetermined objectives in a highly intricate environment yields unexpected behaviours.11 As noted by Ashta and Herrmann (2021), there exists a ‘technology-oriented’ risk associated with ‘overfitting patterns’, potentially leading to the misinterpretation of data by algorithms within the dynamic and ever-evolving ecosystem of financial markets. In alignment with this perspective, Borch (2022, p. 9) contends that
ML-based trading systems may face severe risks when confronted with rapidly changing market settings that differ from those reflected in the training data. Most importantly, however, complex ML-based automated trading systems built on deep neural network architectures, are characterized by opacity: it is, as of yet, exceedingly difficult to understand how they arrive at their predictions and trading policies.
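A minimal, assumption-laden sketch can illustrate the training-data problem Borch describes: the synthetic ‘regimes’ below, the three-lag feature set, and the logistic model are all my own illustrative choices, not anything from the cited work. A classifier fitted in a trending regime scores well on data like its training set and then degrades sharply once the market turns mean-reverting.

```python
# Illustrative only: a model fitted in one market regime degrades when the regime shifts,
# echoing the 'overfitting patterns' risk discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_regime(n, persistence):
    """Synthetic returns whose sign persists (trends) or flips (mean-reverts)."""
    r = [0.01]
    for _ in range(n - 1):
        r.append(r[-1] if rng.random() < persistence else -r[-1])
    return np.array(r) + rng.normal(0, 0.002, n)

def features_labels(r, window=3):
    X = np.array([np.sign(r[i:i + window]) for i in range(len(r) - window)])
    y = (r[window:] > 0).astype(int)
    return X, y

train = make_regime(2_000, persistence=0.8)   # trending regime in the training data
shift = make_regime(2_000, persistence=0.2)   # mean-reverting regime after the shift

model = LogisticRegression().fit(*features_labels(train))
print("same-regime accuracy :", model.score(*features_labels(train)))  # high
print("post-shift accuracy  :", model.score(*features_labels(shift)))  # collapses well below chance
```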
The second challenge of AI as financial infrastructure pertains to data access for market regulation and supervision. In digital businesses, although a significant part of data circulation is relatively open, facilitating various activities, data is increasingly treated as a commodity and private property (Rotta and Paraná, 2022; d’Alva and Paraná, 2024). Notably, in the digital economy, while ML and AI algorithms are mostly free and open source, the financial data used for their training are typically closed and proprietary (Bholat and Susskind, 2021). Although the financial system may appear to be the ideal use case for AI given its generation of seemingly infinite amounts of data, challenges such as measurement problems, data silos, and hidden interconnections limit the information that can be gleaned. Furthermore, agents might intentionally obscure the explainability of proprietary models to maintain a competitive edge. It is anticipated that many disputes between regulators and market agents will arise from these challenges.12
Another significant risk associated with AI penetration in finance concerns the dynamic of trust established in expert systems, whose functioning is not likely to be questioned until a serious failure occurs. When we observe AI performing well in low-level functions, it gives the green light to at least experimental adoption in higher-level functions. Cost savings on expensive human domain knowledge will provide additional incentives to adopt AI for trading and financial decision-making. While the issue of trust is present in the current set-up, crucial differences between human decision-makers and AI make the problem particularly pernicious. Determining how an AI reasons is more challenging than assessing a human decision-maker, and holding AI accountable presents additional complexities. Additionally, because we do not know how AI would react to the unknown-unknowns of economic dynamics – statistically irrelevant yet important causes of instability – the question of trust becomes increasingly pertinent as AI encroaches on meso-/macro-like problems. Major stress events often arise from interconnections between seemingly disparate parts of the system, fuelled by political linkages that only manifest themselves once the stress is underway. Crises, by definition, are unexpected and non-statistically relevant events – black swans – that generally occur outside the universe of consideration of models looking for patterns in data.
Additionally, the widespread adoption of AI and the use of standardized models by numerous agents may lead to strategy convergence. It can unexpectedly heighten the interconnectedness of financial markets, increasing correlations among previously unrelated variables. These correlations may strengthen network effects, causing unforeseen shifts in market dynamics. As asserted by Danielsson, Macrae, and Uthemann (2022), AI is poised to exacerbate the inherent pro-cyclicality within the financial system. Essentially, AI is anticipated to magnify vicious economic cycles, where micro-level rationality leads to macro-level irrationality. AI’s capacity to identify and adhere to superior risk management processes may result in a homogeneous set of techniques vulnerable to the same unknown-unknowns. Convergence in risk perception and management strategies across various AI applications, both in public and private sectors, could diminish the diversity that typically cushions stress events. The homogenization of risk assessment arising from self-reinforcing intelligent machine strategies may compromise the system’s resilience. Moreover, the heightened performance of AI during periods of stability may foster increased trust, potentially encouraging additional risk-taking behaviour. In this manner, AI manifests as pro-cyclical, echoing Minsky’s (2008, p. xii) proposition that ‘stability is destabilizing’.
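The aggregate effect of such homogenization can be sketched with a deliberately stylized simulation (my own illustration; the shock size, price-impact coefficient, and thresholds are arbitrary assumptions): when every fund uses the same loss trigger, one shock forces everyone to sell at once and the price impact compounds, whereas heterogeneous triggers absorb the same shock.

```python
# Stylized sketch: identical risk triggers turn a modest shock into a cascade,
# while diverse triggers leave it contained. Parameters are illustrative only.
import numpy as np

def simulate(thresholds, shock=-0.05, impact=0.02, steps=20):
    """Each fund sells once its cumulative loss breaches its threshold;
    every forced sale pushes the price down further (price impact)."""
    price = 1.0 + shock
    sold = np.zeros(len(thresholds), dtype=bool)
    for _ in range(steps):
        breaching = (~sold) & (price - 1.0 < thresholds)
        if not breaching.any():
            break
        sold |= breaching
        price -= impact * breaching.sum()   # aggregate price impact of forced sales
    return price

n_funds = 10
homogeneous = np.full(n_funds, -0.04)                  # everyone uses the same loss limit
heterogeneous = np.linspace(-0.04, -0.40, n_funds)     # diverse loss limits

print("final price, homogeneous risk models :", simulate(homogeneous))
print("final price, heterogeneous risk models:", simulate(heterogeneous))
```

Nothing here depends on the particular numbers; the contrast between the two runs is the point: the same micro-level rule that protects each fund individually amplifies the aggregate drawdown once it becomes an infrastructure-wide standard.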
A distinct limitation of financial AI lies in its role in risk management, as extensively discussed by Danielsson, Macrae, and Uthemann (2022) and Danielsson and Uthemann (2023). While the theoretical understanding of fragility, fire sales, runs, and negative feedback loops is well established, their specific manifestations depend on contextual factors, such as the current financial market structure and political environment. Despite training on comprehensive datasets that include detailed observations of past crises, AI may not identify all vulnerabilities due to its inability to reason causally. Unlike humans, AI relies on correlation rather than causation. Human regulators possess the capability to experimentally investigate and determine causal relationships, allowing them to respond proactively or reactively. While human regulators cannot predict unknown-unknowns, they possess historical, contextual, and institutional knowledge, enabling them to respond effectively. This is a notable contrast to AI, which lacks this capacity.
A final challenge involves the potential for increased coordination among economic agents in a system marked by widespread AI use, particularly when algorithms learn to cooperate. Research by Calvano and colleagues (2020) indicates that independent reinforcement-learning algorithms can sustain collusive equilibria in pricing games, maintaining prices above competitive levels. Unlike human actors, AI can facilitate tacit collusion without explicit instruction, potentially exacerbating market manipulation and raising legal and practical concerns for regulators. Paradoxically, transparency, often considered a positive attribute, may inadvertently foster coordination, potentially undermining financial stability, especially in scenarios like bank runs. These situations highlight instances where AI-induced behavioural adjustments can result in detrimental feedback loops.
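For readers unfamiliar with the set-up studied by Calvano and colleagues, the sketch below gives a stripped-down version of a repeated pricing game with two independent Q-learning agents. The price grid, demand function, and learning hyperparameters are my own illustrative assumptions, and whether the agents end up sustaining supra-competitive prices depends on those choices and on training length; the cited research reports that, with suitable settings, they can.

```python
# Stripped-down repeated pricing game with two independent Q-learners, in the spirit
# of the algorithmic-collusion literature. All parameters are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(1.0, 2.0, 5)    # on this grid, 1.0 is roughly the one-shot competitive price
states = list(itertools.product(range(5), range(5)))   # state = both firms' previous prices

def profit(i, j):
    """Toy symmetric demand: a lower own price wins demand; both earn more when both price high."""
    p_i, p_j = prices[i], prices[j]
    return p_i * max(0.0, 1.0 - p_i + 0.7 * p_j)

Q = [np.zeros((len(states), len(prices))) for _ in range(2)]
state, alpha, gamma = 0, 0.1, 0.9

for t in range(200_000):
    eps = max(0.02, 1.0 - t / 100_000)                 # decaying exploration
    acts = [int(rng.integers(len(prices))) if rng.random() < eps
            else int(Q[a][state].argmax()) for a in range(2)]
    rewards = [profit(acts[0], acts[1]), profit(acts[1], acts[0])]
    next_state = states.index((acts[0], acts[1]))
    for a in range(2):
        td = rewards[a] + gamma * Q[a][next_state].max() - Q[a][state][acts[a]]
        Q[a][state][acts[a]] += alpha * td
    state = next_state

print("greedy prices after training:", [float(prices[int(Q[a][state].argmax())]) for a in range(2)])
```

The regulatory concern described above lies precisely in the fact that no explicit agreement is coded anywhere in such a set-up: any coordination that emerges is learned from the interaction itself.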
6 Final Remarks
Trust in human decision-making comes from a shared understanding of values and the environment. AI’s combination of built-in values, programmed objectives, and its understanding of the environment may not be intelligible to humans. While hypothetical scenarios can be executed through an AI engine to observe its decisions, soliciting a clear explanation remains challenging.
In summary, as the meso-level infrastructural gaze highlights, a tension exists between macro- and micro-financial problems, significantly influencing the utility and implementation of financial AI. Despite its fast penetration, effective operation in this environment necessitates capabilities beyond the current scope of AI, including an understanding of causality, global rather than local reasoning, and the identification of potential threats before they result in adverse outcomes (Borch, 2022; Danielsson, Macrae, and Uthemann, 2022).
AI, emerging as a financial infrastructure, introduces specific threats to the integrity of the entire financial system. Challenges stemming from the infrastructuralization of AI must be tackled by financial actors and regulators. A thorough understanding of these threats can be attained by adopting a perspective that considers the macro/meso/micro connections inherent in infrastructures. Addressing these concerns as systemic issues requires an integrated approach that encompasses not only the technical aspects but also the interrelated social, political, and economic dimensions.