
The end of theory? AI and ignorance in financial markets

Published online by Cambridge University Press:  18 February 2026

Ekaterina Svetlova*
Affiliation:
University of Twente, Netherlands
Jakob Arnoldi
Affiliation:
Department of Management, Aarhus University, Denmark
Corresponding author: Ekaterina Svetlova; Email: e.svetlova@utwente.nl

Abstract

AI’s growing role in finance challenges traditional expectations of transparency and theoretical understanding. While machine learning (ML) models enhance financial decision-making, they remain largely agnostic to established financial theories, producing knowledge and ignorance in ways that differ from traditional models like VaR, DCF, and Black-Scholes. This essay explores the decoupling of AI models from theoretical financial knowledge and the resulting forms of ignorance. Using 22 semi-structured interviews, we investigate how ML models generate epistemic uncertainties. We focus on causal ignorance: AI systems, including those supported by XAI, fail to provide genuine causal explanations. Because understanding causation is inherently theoretical, AI-driven finance remains theory-agnostic and marked by theoretical ignorance. We explore how this ignorance differs from that of traditional models and what it implies for the role of theory in finance. Finally, we present three possible scenarios for the future of theory in finance and outline directions for further research.

Information

Type
Essay
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of the Finance and Society Network

Ignorance and AI-related finance

Recently, the Bank of England (BoE) has raised concerns about the rapid expansion of AI in the financial sector. A comprehensive Bank of England (2024) survey revealed that approximately 75% of financial companies are using AI technology. Sarah Breeden, the Bank’s deputy governor, warned that the widespread adoption of AI, including generative AI, introduces new and unpredictable risks to the financial system. The complexity and lack of transparency in AI decision-making make it difficult for financial professionals to fully understand and manage these risks. The survey found that only 34% of firms have a ‘complete understanding’ of the AI they use, while 46% have only a ‘partial understanding’, leading to significant epistemic uncertainty. In response, Breeden emphasized that managers of financial firms must ‘understand and manage what their AI models are doing’ (Arnold, Reference Arnold2024).

These statements point to a central dilemma surrounding AI models in finance: the tension between the inherent and unavoidable ignorance in AI-driven finance – stemming from both human and algorithmic limitations – and the necessity to fill these epistemic gaps.

Against this background, it would be useful to integrate ignorance studies (Smithson, Reference Smithson1989; Proctor and Schiebinger, Reference Proctor and Schiebinger2008; DeNicola, Reference DeNicola2017; McGoey, Reference McGoey2019; Gross and McGoey, Reference Gross and McGoey2023) with social studies of the adoption of AI, particularly machine learning (ML), in financial markets. The literature on automated finance has already emphasized the importance of ignorance in financial modeling (Svetlova, Reference Svetlova2018), for algorithm designers and traders (Lange, Reference Lange2016; Souleles, Reference Souleles2019; Borch, Reference Borch2022), as well as in financial regulation (Coombs, Reference Coombs2016) and in markets more generally (Nik-Khah and Mirowski, Reference Nik-Khah, Mirowski, Beverungen, Mirowski and Nik-Khah2019). At the same time, recent research on AI has highlighted the significance of ignorance as a fundamental concept in AI-related contexts (Grill, Reference Grill2022; Wehrli et al., Reference Wehrli, Hertweck, Amirian, Glüge and Stadelmann2022; White and Lidskog, Reference White and Lidskog2022; Kirkegaard, Kristensen, and Lauridsen, Reference Kirkegaard, Kristensen and Lauridsen2023). However, the next logical step – integrating ignorance studies with the sociology of finance, particularly regarding the rise of AI and ML – has yet to be undertaken.

In ignorance studies, non-knowledge is considered as a social fact that can be analyzed through the lens of who does not know what, how they do not know it, and with what effect. These studies examine the absence or limitations of knowledge, as well as the deliberate construction of non-knowledge across various domains. Given that all participants in financial markets possess a certain degree of ignorance regarding AI technology – and that the technology itself is prone to ignorance – it is striking that AI-supported niches of finance have so far been overlooked as a relevant domain for ignorance studies. A more systematic research effort in this field is strongly needed.

This essay explores tentative directions for such research. We draw on recent studies in the sociology of finance that have emphasized the need to distinguish ML models from more ‘traditional’ theory-driven financial models, such as DCF, VaR, Black-Scholes, and CAPM, with respect to how they produce knowledge and ignorance (Spears and Hansen, Reference Spears, Hansen, Borch and Pardo-Guerra2025). In this essay, we investigate how the role and nature of theory and knowledge change with AI-supported finance. More specifically, we ask how the roles of theory and causation have evolved in the forms of knowledge and ignorance now shaping financial practice, focusing on causal ignorance and the theory-agnostic nature of AI.

In addition to literature on the topic, the essay draws on 22 semi-structured interviews and informal conversations with finance industry professionals during participant observations at various industry conferences and exhibitions, totaling 75 hours. We interviewed finance professionals in various sectors (such as investments, commercial banking, credit services, central banking, and financial analysis) as we are interested in common patterns across sectors and aim to make a general point about ignorance in the financial sector. Depending on participants’ preferences, some interviews were audio-recorded and transcribed, while others were documented through detailed field notes, as certain participants declined recording. Both transcripts and notes were systematically coded to ensure consistency and analytical rigor.

We begin by exploring the various forms that ignorance can take in AI-driven finance, supporting our argument for the systematic sociological analysis of AI-related finance through the lens of ignorance studies. We then focus on a particular type of ignorance: causal ignorance. Here we argue that AI algorithms, including those accompanied by XAI methods, do not provide genuine causal explanations. Understanding causation is inherently theoretical, and for this reason, AI-driven finance remains marked by theoretical ignorance and can be seen as fundamentally theory-agnostic. Focusing on the relationship between theory and causation, we deliberate on how the ignorance generated by ML models differs from that associated with traditional models, and, more generally, how the role and nature of theoretical knowledge and ignorance in finance change. In particular, we examine empirical manifestations of theoretical ignorance by presenting three scenarios of how AI-driven finance engages with or distances itself from theory, evaluate these scenarios, and conclude with a discussion of findings and outlook for future research.

Pertinence and relevance of AI-related ignorance in financial markets

The integration of AI into finance introduces various forms of ignorance. In what follows, we provide a brief overview of some key forms and sources of ignorance in AI-driven finance. While this list is not exhaustive, we believe it offers a meaningful perspective on the fundamental epistemic challenges.

First, in today’s AI-based finance, as in many other contemporary knowledge-intensive professional fields (Steinberg, Reference Steinberg2024; Schmidt, Putora, and Fijten, Reference Schmidt, Putora and Fijten2025), ‘knowledge is a function of market data’ (Borch, Reference Borch2022: 7). As a result, conventional knowledge risks associated with data (such as data quality in terms of relevance and accuracy) are heightened by the challenge that these risks may not be promptly addressed by humans. Given the speed, complexity, and continuous learning capabilities of machines, human agents have limited capacity to identify flaws and faulty inferences related to data, despite the implementation of extensive data hygiene measures designed to prevent such issues. In particular, data scarcity is an important epistemic issue. Although this problem is not new (Israel et al., Reference Israel, Kelly and Moskowitz2020), one of our interviewees (Interview 15) emphasized that finance in fact often relies on small datasets, typically comprising only a few hundred relevant observations, which can lead to misestimating the statistical power of AI models. He claimed that the shift from linear models like Value at Risk (VaR) to nonlinear machine-learning models exacerbates information uncertainty, making meaningful statistical validation increasingly difficult.
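The interviewee’s concern can be illustrated with a minimal, entirely hypothetical simulation: on a ‘few hundred’ noisy observations, a flexible model (here, a high-degree polynomial standing in for a nonlinear ML model) fits the training sample at least as well as a linear one while generalizing worse. The data-generating process, sample sizes, and polynomial degree are our illustrative assumptions, not drawn from the interviews.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200                                    # a 'few hundred' observations
x = rng.uniform(-1, 1, n)
y = 0.5 * x + rng.normal(0, 1.0, n)        # weak signal buried in noise
x_tr, y_tr = x[:150], y[:150]              # training sample
x_te, y_te = x[150:], y[150:]              # held-out sample

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

results = {}
for deg in (1, 12):                        # linear vs flexible polynomial fit
    coef = np.polyfit(x_tr, y_tr, deg)
    results[deg] = (r2(y_tr, np.polyval(coef, x_tr)),   # in-sample fit
                    r2(y_te, np.polyval(coef, x_te)))   # out-of-sample fit
```

The flexible model’s in-sample advantage is largely noise-fitting; with only a few hundred observations, out-of-sample validation of this kind is one of the few guards against the misestimated statistical power the interviewee described.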

Second, there are many unknowns about how an AI algorithm will behave in situations for which it was not specifically trained. There is a concern that machine-generated knowledge may not adapt quickly enough in response to unexpected and severe contextual changes, such as central bank policy shifts or political turmoil. This is the problem of overfitting and model drift, where models’ behavior in untrained scenarios remains unknown. As a result, ‘algorithms – especially models derived directly from data via machine learning – […] are often so complex and opaque that even their designers cannot anticipate how they will behave in many situations’ (Kearns and Roth, Reference Kearns and Roth2020). Our interviewees specifically highlighted that AI systems’ knowledge is not contextual enough and exhibits a severe gap from reality: AI models primarily learn from past data and cannot anticipate qualitative factors influencing a company’s performance, such as the imminent departure of a key executive who has been a major contributor to the company’s success. This is a crucial epistemic limitation in valuation tasks, for example. A financial consultant (Interview 13) stressed that many relevant factors missed by AI can be ‘seen’ – and thus help close the theory-practice gap – only in conversation with a client: to assess the company’s financial position for the next year, you have to discuss each balance sheet and income statement item with the client. Thus, while using AI, finance professionals are left in doubt as to whether the model sees the whole picture and which aspects and changes of the world it is ignorant of.

The model drift just mentioned can also occur in normal operational mode, that is, in routine situations without significant external events. Even in such circumstances there may be small biases in the original training data that are amplified over time as the models are exposed to new data. For that reason, standard AI governance guidelines stipulate continuous surveillance of AI models – something that was also emphasized by our respondents. This includes the use of various formal tests (KS scores, PSI, etc.) and continuous assessments of the performance of AI output against given benchmarks. An interviewee mentioned that, in some cases, it is very clear when models are drifting but also emphasized that such drifts can happen very swiftly because AI models are ‘great accelerators’ (Interview 5). The same interviewee also stated that, in many instances, it is difficult to step in early enough to catch and prevent models from drifting, so that assessments often have to be based on a combination of historical data and ‘experience’, or ‘domain knowledge’ (Interview 17; see also Hansen and Souleles, Reference Hansen and Souleles2023), thus emphasizing a subjective element in such assessments. Thus, AI designers and users often struggle to fully understand the dynamics that can lead to model drift or may not always recognize when drift occurs.
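To make one of the formal tests mentioned above concrete: the Population Stability Index (PSI) compares the distribution of a model input (or score) in live data against the training-time data; values above roughly 0.1–0.25 are conventionally read as warning signs of drift. The sketch below, in plain NumPy, uses decile bins derived from the training sample; the binning scheme, sample sizes, and thresholds are illustrative conventions, not a prescription from our interviewees.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live sample."""
    # bin edges: quantiles of the reference (training-time) distribution
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    p = np.histogram(expected, bins=edges)[0] / len(expected)
    # clip live values into the reference range so nothing falls off the edges
    q = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)   # avoid log(0)
    return float(np.sum((q - p) * np.log(q / p)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)      # feature distribution at training time
stable = rng.normal(0.0, 1.0, 5000)     # live data, no drift
shifted = rng.normal(0.5, 1.0, 5000)    # live data after a regime change
```

A monitoring job would recompute `psi(train, live)` on a schedule and alert when the index crosses the chosen threshold; in practice, such numeric checks complement rather than replace the ‘experience’ and ‘domain knowledge’ our interviewees described.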

Third, there is a risk of ‘not knowing what actually guides trading decisions’ (Borch, Reference Borch2022: 8). Some algorithmic investment decisions cannot be exhaustively described by codes or programs (e.g., deep neural networks); rather, there are hidden, complex intermediate layers of statistically trained algorithmic elements that are difficult to analyze due to their complexity. This problem is, of course, not restricted to AI in finance but is a general problem with AI. For this reason, there is a vast literature on explainability in AI (including in finance), and indeed also a literature on ignorance and AI, most of which highlights the opacity of AI systems (Emmert-Streib, Yli-Harja, and Dehmer, Reference Emmert-Streib, Yli-Harja and Dehmer2020; Boge, Reference Boge2022; Borch and Min, Reference Borch and Min2022; Khan and Vice, Reference Khan and Vice2022; Duede, Reference Duede2023; Carloni, Berti, and Colantonio, Reference Carloni, Berti and Colantonio2025; Schmidt et al., Reference Schmidt, Putora and Fijten2025).

Finally, there are unknown systemic and structural effects of AI use, also in finance. Severe, unpredictable market swings might occur, for example, if AI systems (autonomously) collude or herd. We received a foretaste of such developments during the ‘quant meltdown’ in August 2007, the flash crash on May 6, 2010, and the mini crashes in 2014 and 2020. Such fluctuations can jeopardize markets in new, unknown ways. AI applications might destabilize the financial system and make it more prone to crises in fundamentally unpredictable manners (Gensler and Bailey, Reference Gensler and Bailey2020; Svetlova, Reference Svetlova2022; Leitner et al., Reference Leitner, Singh, van der Kraaij and Zsámboki2024).

The examples provided illustrate that artificial intelligence in finance functions within various layers of ignorance, including data limitations, model opacity, and unknown systemic and structural effects. Financial markets are characterized by ‘symmetrical ignorance’ (Caves, Reference Caves2003: 75; Skidelsky, Reference Skidelsky2009: 45), meaning that all involved parties (including AI algorithms) are ‘unknowers’ (McGoey, Reference McGoey2019). While the knowledge gaps discussed here have previously been addressed in various AI-related literatures, the implications for social theorists have not been sufficiently or radically thought through. We will make suggestions in this direction by first discussing a particular type of ignorance in AI-driven finance, namely the lack of causal explanation, after which we will link this to the lack of theoretical knowledge.

Ignorance and causation

As already mentioned, the opacity of AI models is one of the main reasons for the ignorance surrounding the use of, and beliefs in, AI-based knowledge. It is, for the reasons mentioned above, hard for humans to assess the validity of AI output, a problem which has sparked significant interest in explainable AI (Coeckelbergh, Reference Coeckelbergh2020; Emmert-Streib et al., Reference Emmert-Streib, Yli-Harja and Dehmer2020; Zodi, Reference Zodi2022). A significant part – although far from all – of that literature on explainability centers on the role of causation (Carloni et al., Reference Carloni, Berti and Colantonio2025). Most AI models are difficult to understand not only with regard to how they function generally; their specific output – which weights are attributed to different variables, how these variables may interact to cause specific output, and so on – is also unclear. Therefore, it is argued that understanding the causal relationships between (data) input and model output will vastly improve the explainability of AI systems (Pearl, Reference Pearl2018b), leading scholars to coin the term causability (Holzinger et al., Reference Holzinger, Langs, Denk, Zatloukal and Müller2019; Chou et al., Reference Chou, Moreira, Bruza, Ouyang and Jorge2022; Carloni et al., Reference Carloni, Berti and Colantonio2025). Unlike standard explainability, which only lists the factors behind a prediction, causability clarifies the causal relationships between those factors and the outcome, making the explanation meaningful and actionable for human decision-making.

At this point, there are no convincing signs that causability has been achieved in AI-supported finance; instead, we observe only partial steps in that direction. Some firms employ explainability tools such as SHAP, LIME, or feature importance to highlight which factors drive predictions (e.g., volatility, trading volume, sentiment). Interestingly, several of our interviewees claimed that they had managed to effectively address the issue of AI explainability (e.g., Interviews 1, 10, 14, and 15) using various eXplainable AI (XAI) methodologies. Yet these efforts remain correlation-based and fall short of uncovering causal mechanisms; they do not lift ignorance of the type just mentioned.

XAI methods often make it possible to identify which factors led, say, to holding a particular security position in a portfolio (e.g., the price-to-earnings ratio of a target company) or to rejecting a loan application (e.g., the income level of an applicant). Our interviewees highlighted that their AI systems might uncover new factors that seem to be relevant for the final decision but for an unknown reason. These factors might have ‘a completely non-intuitive relationship to the outcome’ (Interview 16): ‘for example, like, how long you’ve been on the job is a factor that these [credit default] models consider. But it’s not clear, like, whether you’ve been on the job too long or too short a time. Like, they just say the explanation would be, like, length of employment. And so, it’s very unclear, even from that statement, like, well, have I been here too long? Do they think that, like, you know, my prospects for, like, salary increase are, like, minimal because I’ve been in this position too long? Or have I been in too short a period of time? And so, they worry that I’m going to get fired. It’s, like, people [clients who get a rejection] have to, like, fill in all the details by imagining what the relationship could be.’ In other words, while XAI might disclose a factor that contributes to an outcome, the exact relationship between that factor and the outcome (negative or positive effect, linear or nonlinear, etc.) may still be unclear.
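The interviewee’s example can be reproduced in a stylized, entirely hypothetical form: a factor such as ‘length of employment’ can carry substantial predictive information while a simple linear summary (and, by extension, a bare importance score) says nothing about the direction of its effect. The U-shaped relation and all numbers below are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical standardized factor, e.g. 'length of employment'
x = rng.uniform(-1, 1, 2000)
# assumed U-shaped effect: very short AND very long tenure raise the outcome
y = x ** 2 + rng.normal(0, 0.05, 2000)

# a linear summary suggests the factor is irrelevant (no sign, no direction)...
corr = float(np.corrcoef(x, y)[0, 1])

# ...yet conditioning on the factor (10 equal-width bins) explains
# most of the outcome's variance
bins = np.digitize(x, np.linspace(-1, 1, 11))
within = sum(((y[bins == b] - y[bins == b].mean()) ** 2).sum()
             for b in np.unique(bins)) / len(y)
explained = 1.0 - within / y.var()
```

An importance score would flag this factor as highly relevant; but only the shape of the relationship, which the score does not report, could tell a rejected applicant whether their tenure was ‘too long’ or ‘too short’.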

Meanwhile, academic research on causal ML – drawing on Judea Pearl’s (Reference Pearl2018a; Reference Pearl2018b) framework, causal inference, and counterfactual methods – is making progress, but its uptake in financial institutions is still minimal, where predictive rather than explanatory uses of AI dominate. Thus, the recent efforts to incorporate XAI into financial decision-making cannot alleviate causal ignorance.

From causation to theory

Understanding the causal relationship which AI models estimate between data input and model output is, we suggest, intricately related to theoretical understanding. Theory (broadly defined) is about general and generalizable causal laws based on which predictions can be made (Felin and Holweg, Reference Felin and Holweg2024). We write ‘broadly defined’ because definitions of theory can be much narrower than that, ultimately including only formal mathematics-based theories such as the law of gravity (Emmert-Streib et al., Reference Emmert-Streib, Yli-Harja and Dehmer2020).

Theories understood broadly as scientifically based sets of propositions about causal relationships are often the basis upon which humans can understand and assess AI model output. For example, if and when the causal relationship an AI model estimates can be understood by humans, they will still need to assess the validity of that estimation. If an AI model links factor A with cause X, humans will still need to assess whether that estimated link is valid or whether it is caused by, say, biases in the model’s training data. Any such assessment of validity also raises questions of generalizability and boundary conditions – does the relationship only exist in specific contexts? That assessment will be based on existing empirical knowledge, perhaps even general experience, but it will also be based on theoretical generalizations. Conversely, AI-based output should be relatable to theory, for example by enabling the testing of theories (Desjardins-Proulx, Poisot, and Gravel, Reference Desjardins-Proulx, Poisot and Gravel2019; Mökander and Schroeder, Reference Mökander and Schroeder2022).

Importantly, such theory-based understanding of causal processes is not limited to scientists but extends to professionals more generally. Kurt Lewin’s famous quip about theories being practical of course also alludes to this. Theory is a key part of the heuristics and intuitions of experts (Gobet and Chassy, Reference Gobet and Chassy2009), upon which their interpretations, understandings, and decisions are based. Theory is a basis for understanding why and how factor A relates to outcome X (Pearl, Reference Pearl2018a).

With the growing adoption of AI – and ML in particular – a new modeling culture is emerging in finance, one that is marked by causal, and thus theory-related, ignorance (Hansen, Reference Hansen2020; Borch and Min, Reference Borch and Min2022; Spears & Hansen, Reference Spears, Hansen, Borch and Pardo-Guerra2025; Millo, Spence and Xu, Reference Millo, Spence and Xu2024). Previously, sociology of finance examined in detail the intensive theoretical developments that accompanied the rise of key models such as MPT, CAPM, EMH, APT, VaR and BSM, which were implemented in practice through automated option trading, passive investing, portable alpha, factor investing, and other applications. During that period, finance became explicitly theory-driven (see Bernstein, Reference Bernstein1992, Reference Bernstein2007).

Traditional finance models like Markowitz’s portfolio theory or the CAPM were recognized to be highly idealized, abstract, and unrealistic (e.g., assuming rational agents, frictionless markets, normally distributed returns). Yet philosophers and sociologists argued that such models still generate knowledge: not by being literally true, but by highlighting causal mechanisms and offering heuristics for reasoning (Cartwright, Reference Cartwright1989; Mäki, Reference Mäki and Dilworth1992, Reference Mäki2009). For example, although Markowitz’s mean-variance portfolio model does not describe how investors actually behave (they might not compute covariances), it provides a possible causal mechanism: diversification can reduce risk, and efficient portfolios balance risk and return. Moreover, sociologists of finance have examined how different unrealistic theory-based models have been used to shape, or even perform, financial reality, as these models are connected to the complex dynamics of economic life through domain knowledge and human judgment (MacKenzie and Millo, Reference MacKenzie and Millo2003; Svetlova, Reference Svetlova2018).
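The diversification mechanism just mentioned can be made explicit in a few lines of linear algebra: the volatility of an equal-weight portfolio of imperfectly correlated assets is strictly below the volatility of any single asset, with a floor of √ρ times that volatility. The parameter values below (20% volatility, pairwise correlation 0.3, ten assets) are arbitrary illustrations, not empirical estimates.

```python
import numpy as np

vol = 0.20                     # each asset's volatility (illustrative)
rho = 0.30                     # pairwise correlation (illustrative)
n = 10                         # number of assets

# covariance matrix: vol^2 on the diagonal, rho * vol^2 off the diagonal
cov = np.full((n, n), rho * vol ** 2)
np.fill_diagonal(cov, vol ** 2)

w = np.full(n, 1.0 / n)        # equal-weight portfolio
port_vol = float(np.sqrt(w @ cov @ w))
# analytically: vol * sqrt(rho + (1 - rho) / n), here about 0.122 < 0.20
```

The mechanism, not the numbers, is the point: pooling imperfectly correlated assets mechanically lowers portfolio risk, which is exactly the kind of causal story the idealized model supplies even though no investor literally computes the covariance matrix.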

Today, however, with the rise of AI-based modeling, we observe that the search for sound decision-making in finance is becoming less guided by theory. Instead, it is increasingly theory-agnostic, in the sense that decisions are now made on the basis of empirical correlations, predictive accuracy, and patterns detected in large datasets, rather than on causal explanations or theoretical models of how markets function.

Predictive ML algorithms are fundamentally ignorant of the world around them – they identify associations between variables, prioritizing predictive accuracy over understanding. This reflects a more general development in scientific knowledge production. In his well-known and highly controversial article, The End of Theory, Chris Anderson (Reference Anderson2008) argued that in an era dominated by algorithms, science no longer requires theory, explanations, or causality – correlations alone are sufficient. He quotes Peter Norvig, Google’s research director: ‘All models are wrong, and increasingly you can succeed without them’, clearly referring to theoretical models here. The focus shifts from understanding mechanisms and creating theories to simply acting on observed correlations. Thus, the rise of algorithmic methods challenges the roles of theory, causation, and in turn explanation.

As a result, theoretical ignorance seems indeed to be a critical factor in AI-driven segments of financial markets. But if financial markets become increasingly detached from theory due to the rise of AI, what implications will this have for finance as an academic discipline and for the behavior and decision-making of market participants? Asking that question simultaneously means asking whether the lack of theoretical knowledge challenges traditional scientific norms of explanation, causality, and generalizability, and whether the theoretical understanding of ‘why’ (Pearl, Reference Pearl2018a) is becoming progressively irrelevant or will be supplanted by alternative frameworks.

Emerging patterns in the relationship between AI models and financial theory

Given the exploratory nature of our study, we cannot fully answer the above questions regarding finance and theory, but we can provide a tentative outline. We identify three emerging patterns in finance: first, practices where AI models and theory are disconnected; second, cases where AI models are overridden by human domain knowledge; and third, instances where AI models are integrated with traditional financial theories.

Scenario 1: Theory-agnostic cultures

Our interviews made it clear that some finance professionals indeed perceive their AI tools as theory-free (agnostic). Even after direct probing questions about financial theories incorporated in algorithms, they seemed puzzled by our intent, suggesting that they rarely (or never) consider whether theoretical financial knowledge is integrated into their algorithms. One interviewee (Interview 16) replied that, in order to answer our question, he had to try ‘to recall some of the stuff [he] read when [he] was a graduate student’, admitting that ‘certainly there was a period where these kind of more theory-driven, pretty parsimonious models were like the foundation for a lot of the financial markets … but I get the impression that, you know, maybe there is this kind of transition happening where like people are perhaps less wedded to these theoretical models and instead are kind of happy to do purely inductive modeling.’ Another AI developer working in a large investment bank, when asked if he relied on traditional financial models, replied that he was not aware of any role such theories might play. He further stated that he did not know any such models, as he was a computer scientist and had never been trained in finance or economics (Interview 10).

The disconnect between AI models and theory seems to be amplified by both professional and organizational divides. For example, the AI developers in one investment bank we interviewed (e.g. Interview 3) were in a separate organizational unit from the traditional quants, which limited direct knowledge exchange between the two groups. AI designers primarily focused on optimizing algorithms without grounding them in established financial theories, while quants, trained in financial modeling, had little insight into the inner workings of ML systems (also Interview 21). This separation reinforced a divide in which AI models were developed with minimal theoretical input, further distancing them from the traditional frameworks of financial knowledge. However, this scenario – though central to our argument – is not the only possible one.

Scenario 2: Bridging an AI-practice gap

ML algorithms tend to suffer from overfitting and model drift; that is, they incorporate not only market effects but also random noise. Finance professionals may realize this and, based on that understanding, routinely make decisions that override the AI-generated output. Therefore, the first alternative scenario is the emergence of an ‘ML-practice gap’ – a disconnect between the output of ML models and the practical, real-world decisions made by human traders or investors, for example. Equipped with their experience and understanding of the market, humans often feel the need – or are required by regulators – to step in and make corrections when the models’ outputs are deemed unreliable. Thus, we might observe patterns similar to those described in relation to the practical use of traditional financial models, such as qualitative overlay and plausibility checks (Svetlova, Reference Svetlova2018; Hansen, Reference Hansen2021).

Our interviews also show examples of this happening in AI-related segments of finance: human market participants bridge the theoretical ignorance gap regarding ‘why’ with judgments and ex-post constructed narratives. In other words, agnostic models provide the freedom for decision-makers to incorporate their own experience, intuition, and contextual knowledge. ‘You can have a data-driven approach to scenario analysis, and then people can take whatever economic intuition story they want from the results’ (Interview 15). Similarly, another interviewee (Interview 16) observed that financial professionals confronted with inexplicable correlation-based findings construct post-hoc explanations to make them appear more intuitive and to bridge the non-knowledge at their core.

Scenario 3: Combining AI and theory

This scenario suggests that AI will enable novel combinations of algorithms and financial theory, in both theoretical development and practical application.

In theory building, ML (and other AI) output can be compared with or informed by the assumptions and predictions of traditional financial theoretical models, after which ML output can be used as a basis for new parametric modeling (de Prado, Reference de Prado2020). An example of such theorizing is provided by Cao et al. (Reference Cao, Yang, Li, Stanimirović and Katsikis2025), who use the now-classic Arbitrage Pricing Theory (APT) to develop a dynamic neural network for real-time portfolio optimization. Another example of AI integration with traditional financial modeling is discussed by Hayes (Reference Hayes2021), who demonstrates how Modern Portfolio Theory (MPT) is used to operationalize robo-advising algorithms. As de Prado (Reference de Prado2020) suggests, this integrative process enables an iterative relationship between ML outputs and financial theory. Theories can be adjusted based on the findings from ML models, and those adjusted theories can, in turn, inform future ML model development, leading to a continuous cycle of refinement and new insights. Such mutual enrichment is in fact needed, de Prado argues, precisely because AI-generated knowledge cannot stand on its own without the support of theory. This is for two reasons: theory-less models cannot deal with black swan events, and they cannot explain, that is, they cannot answer why questions (de Prado, Reference de Prado2020: 9).

At the same time, we can identify AI-supported financial practices in which established financial theories continue to play a central role. Practitioners in these contexts often emphasize that they do not ‘deviate from theories in any way’ (Interview 19). For example, practitioners may begin by identifying a set of factors believed to drive stock performance, drawing on Fama-French theory and findings reported in academic journals. They then use AI to cluster these factors based on their similarities, while carefully maintaining theoretical interpretability. For instance, momentum as a factor can be justified through the behavioral finance theory of investor inertia.

Another interviewee (Interview 22) reported that the design of their AI algorithm was inspired by theoretical financial models – such as the Black-Litterman asset allocation model and Simon’s concept of bounded rationality – without the algorithm being given any concrete theoretical rules. The theories, in other words, served only to guide the overarching model design.

Evaluating competing scenarios for the future of financial theory

Ultimately, our findings support prior research indicating that financial modeling cultures are undergoing a fundamental transformation. From an ignorance studies perspective, this shift reflects a new form of epistemic structuring in which the role of various types of ignorance in AI-driven finance is also changing. Knowledge in ML-related financial modeling is increasingly pragmatic, centered on data and algorithmic performance indicators such as accuracy rather than on causal explanations rooted in economic theory. The key challenge moving forward is to determine whether AI in finance will remain a theory-agnostic tool or whether financial professionals and researchers will find ways to integrate AI-driven insights into a broader theoretical framework.

We believe that theory-agnostic practices (Scenario 1) will prevail. While the decoupling of AI-based knowledge from theory is not inevitable (we have explored alternative scenarios), AI is, in our view, likely to increase theoretical ignorance.

Scenarios that bridge AI and financial practice (Scenario 2) or combine AI and theory (Scenario 3) are unlikely to crystallize outside smaller, specific domains. Both the literature and our interviews suggest that the gap between ML and practice may be too large to bridge effectively, creating challenges in aligning algorithmic decision-making with domain-specific expertise. Something similar applies to theoretical integration, where the span between AI-generated output and standard theoretical predictions may likewise be too large to bridge.

In light of these challenges, the role of human expertise in ML-driven finance will likely evolve from interpreting model results through domain knowledge (which includes theoretical aspects) to organizing and facilitating the proper functioning of algorithms. Expertise will increasingly center on self-referential knowledge – knowledge structured around algorithm-related quality signals such as accuracy metrics (Millo et al., 2024). In this process, one form of ML-related ignorance – data-related ignorance – receives attention (as data are cleaned, improved, and adjusted), while theoretical knowledge is increasingly neglected.

Note also that Scenarios 2 and 3 require a certain commitment. Human overlay demands an active stance of questioning the AI-based output, while theoretical integration is time-consuming and costly. Relying on AI output, in contrast, is both fast and easy. A developer of an AI-based trading solution (Interview 18) observed, somewhat sarcastically, that if AI products were regulation-compliant, this would in many cases be all the excuse traders needed to rely heavily on the AI output: compliance meant it was OK.

Finally, it should be mentioned that another tendency could be a shift toward simplicity. This can manifest itself in efforts to make ML models more intuitive and easier to understand (Hansen, 2020), or in finance professionals opting not to use ML algorithms for non-standardized, complex decisions (Interviews 11, 12, 13). For example, a bank employee responsible for technology integration to improve customer experiences across all available communication channels reported that his division – after initial attempts – abandoned the development of AI-based credit decision-making models and ‘ended up with having […] rule-based systems of models’ so that ‘AI is not actually making the decisions … We are aiming for AI to work on the simplest of cases so that the person can work on the more complicated cases where somebody needs to think and reason more’ (Interview 11). In general, our interviews suggest that finance professionals often prefer to use ML for simple, standardized tasks – such as compliance checks, virtual assistance, industry comparisons, and automated sustainability reporting – in order to minimize potential ignorance-related issues.
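The division of labour this interviewee describes – rules decide the simplest cases, humans the complicated ones – can be sketched minimally as follows. All fields, thresholds, and decision labels are hypothetical and are not taken from the bank in question.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Application:
    income: float          # annual income (hypothetical field)
    requested: float       # requested credit amount
    missed_payments: int   # payments missed in the last year

def decide(app: Application) -> Optional[str]:
    """Rule-based triage: decide only the clearest cases automatically,
    return None to escalate everything else to a human officer.
    Thresholds are purely illustrative."""
    if app.missed_payments == 0 and app.requested <= 0.2 * app.income:
        return "approve"                       # trivially safe case
    if app.missed_payments >= 6:
        return "reject"                        # trivially unsafe case
    return None  # ambiguous case: a person must think and reason

queue = [
    Application(income=60_000, requested=5_000, missed_payments=0),
    Application(income=40_000, requested=30_000, missed_payments=2),
    Application(income=35_000, requested=10_000, missed_payments=8),
]
for app in queue:
    print(decide(app) or "escalate to human")
```

The design choice is exactly the one the interviewee articulates: the automated layer is deliberately kept simple and transparent, and anything that does not fit its rules is routed to human judgment rather than to an opaque model.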

Conclusion

The implications of the rise of theoretical ignorance should be further explored. We propose continuing to investigate these profound shifts in calculative (modeling) cultures through the lens of ignorance studies. The role of human overlay and expertise should be further clarified, ideally through in-depth ethnographic studies of specific areas of finance.

Additionally, the consequences of this shift for the concept of performativity warrant further inquiry. MacKenzie and Millo (2003) famously argued that theory-driven models such as Black-Scholes actively shaped the option markets, and subsequent studies in the social studies of finance provided further examples of how theories performed markets. In this sense, theories and theory-based models were central to the very notion of performativity in finance. But if theories lose their significance as algorithms remain theory-agnostic, should we also speak of an ‘end of performativity’? Recent contributions (Glaser, Pollock, and D’Adderio, 2021; Borch, 2022) have already called for rethinking performativity in light of the rise of AI. We take this call seriously and see it as an important future task for sociologists of finance.

Finally, our focus here has been on the use of knowledge in practice. However, much of the future development will depend on the creation of new knowledge in academia and in the research departments of financial institutions. The question, which we do not attempt to answer here, is whether such new knowledge will primarily be a function of data, as described earlier, or whether an entirely new type of theory will emerge. Even so, such new forms of theory may also introduce new forms of ignorance.

References

Anderson, C. (2008) The end of theory: The data deluge makes the scientific method obsolete. Wired. https://www.wired.com/2008/06/pb-theory/. Accessed 10 December 2025.
Arnold, M. (2024) Use of AI could be included in stress tests, says BoE deputy. Financial Times, 1 November. https://www.ft.com/content/d4d212a8-c63a-4b00-9f4c-e06ed59f9279. Accessed 28 March 2025.
Bank of England (2024) Artificial intelligence in UK financial services. https://www.bankofengland.co.uk/report/2024/artificial-intelligence-in-uk-financial-services-2024. Accessed 10 December 2025.
Bernstein, P.L. (1992) Capital Ideas: The Improbable Origins of Modern Wall Street. Hoboken, NJ: John Wiley & Sons.
Bernstein, P.L. (2007) Capital Ideas Evolving. Hoboken, NJ: John Wiley & Sons.
Boge, F.J. (2022) Two dimensions of opacity and the deep learning predicament. Minds and Machines, 32(1): 43–75.
Borch, C. (2022) Machine learning, knowledge risk, and principal-agent problems in automated trading. Technology in Society, 68: 101852.
Borch, C. and Min, B.H. (2022) Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading. Big Data & Society, 9(2): 1–13.
Cao, X., Yang, Y., Li, S., Stanimirović, P.S. and Katsikis, V.N. (2025) Artificial neural dynamics for portfolio allocation: An optimization perspective. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 55(3): 1960–71.
Carloni, G., Berti, A. and Colantonio, S. (2025) The role of causality in explainable artificial intelligence. WIREs Data Mining and Knowledge Discovery, 15(2): e70015.
Cartwright, N. (1989) Nature’s Capacities and their Measurement. Oxford: Oxford University Press.
Caves, R.E. (2003) Contracts between art and commerce. Journal of Economic Perspectives, 17(2): 73–83.
Chou, Y.-L., Moreira, C., Bruza, P., Ouyang, C. and Jorge, J. (2022) Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications. Information Fusion, 81: 59–83.
Coeckelbergh, M. (2020) Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4): 2051–68.
Coombs, N. (2016) What is an algorithm? Financial regulation in the era of high-frequency trading. Economy and Society, 45(2): 278–302.
DeNicola, D.R. (2017) Understanding Ignorance: The Surprising Impact of What We Don’t Know. Cambridge, MA: MIT Press.
de Prado, M.M.L. (2020) Machine Learning for Asset Managers. Cambridge: Cambridge University Press.
Desjardins-Proulx, P., Poisot, T. and Gravel, D. (2019) Artificial intelligence for ecological and evolutionary synthesis. Frontiers in Ecology and Evolution, 7: 402.
Duede, E. (2023) Deep learning opacity in scientific discovery. Philosophy of Science, 90(5): 1089–99.
Emmert-Streib, F., Yli-Harja, O. and Dehmer, M. (2020) Explainable artificial intelligence and machine learning: A reality rooted perspective. WIREs Data Mining and Knowledge Discovery, 10(6): e1368.
Felin, T. and Holweg, M. (2024) Theory is all you need: AI, human cognition, and causal reasoning. Strategy Science, 9(4): 346–71.
Gensler, G. and Bailey, L. (2020) Deep learning and financial stability. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3723132. Accessed 28 March 2025.
Glaser, V.L., Pollock, N. and D’Adderio, L. (2021) The biography of an algorithm: Performing algorithmic technologies in organizations. Organization Theory, 2(2): 1–27.
Gobet, F. and Chassy, P. (2009) Expertise and intuition: A tale of three theories. Minds and Machines, 19(2): 151–80.
Grill, G. (2022) Constructing certainty in machine learning: On the performativity of testing and its hold on the future. OSF Preprints. https://osf.io/preprints/osf/zekqv_v1. Accessed 10 December 2025.
Gross, M. and McGoey, L. (eds.) (2023) Routledge International Handbook of Ignorance Studies. 2nd edition. London: Routledge.
Hansen, K.B. (2020) The virtue of simplicity: On machine learning models in algorithmic trading. Big Data & Society, 7(1): 1–14.
Hansen, K.B. (2021) Model talk: Calculative cultures in quantitative finance. Science, Technology & Human Values, 46: 600–27.
Hansen, K.B. and Souleles, D. (2023) Expectations, competencies and domain knowledge in data- and machine-driven finance. Economy and Society, 52(3): 421–48.
Hayes, A. (2021) The active construction of passive investors: Robo-advisors and algorithmic ‘low-finance’. Socio-Economic Review, 19(1): 83–110.
Holzinger, A., Langs, G., Denk, H., Zatloukal, K. and Müller, H. (2019) Causability and explainability of artificial intelligence in medicine. WIREs Data Mining and Knowledge Discovery, 9(4): e1312.
Israel, R., Kelly, B. and Moskowitz, T. (2020) Can machines ‘learn’ finance? Journal of Investment Management, 18(2): 23–36.
Kearns, M. and Roth, A. (2020) The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford: Oxford University Press.
Khan, M.M. and Vice, J. (2022) Toward accountable and explainable artificial intelligence part one: Theory and examples. IEEE Access, 10: 99686–701.
Kirkegaard, L., Kristensen, A.R. and Lauridsen, T.S. (2023) The organization of ignorance: An ethnographic study of the production of subjects and objects in an artificial intelligence project. Ephemera: Theory & Politics in Organization, 23(1): 161–87.
Lange, A.-C. (2016) Organizational ignorance: An ethnographic study of high-frequency trading. Economy and Society, 45(2): 230–50.
Leitner, G., Singh, J., van der Kraaij, A. and Zsámboki, B. (2024) The rise of artificial intelligence: Benefits and risks for financial stability. Financial Stability Review, European Central Bank, May 2024. https://www.ecb.europa.eu/press/financial-stability-publications/fsr/special/html/ecb.fsrart202405_02~58c3ce5246.en.html. Accessed 10 December 2025.
MacKenzie, D. and Millo, Y. (2003) Constructing a market, performing theory: The historical sociology of a financial derivatives exchange. American Journal of Sociology, 109(1): 107–45.
Mäki, U. (1992) On the method of isolation in economics. In: Dilworth, C. (ed.) Idealization IV: Intelligibility in Science. Poznan Studies in the Philosophy of the Sciences and the Humanities, Volume 26. New York: Rodopi, 319–54.
Mäki, U. (2009) MISSing the world: Models as isolations and credible surrogate systems. Erkenntnis, 70(1): 29–43.
McGoey, L. (2019) The Unknowers: How Strategic Ignorance Rules the World. London: Zed Books.
Millo, Y., Spence, C. and Xu, R. (2024) Algorithmic self-referentiality: How machine learning pushes calculative practices to assess themselves. Accounting, Organizations and Society, 113: 101567.
Mökander, J. and Schroeder, R. (2022) AI and social theory. AI & Society, 37(4): 1337–51.
Nik-Khah, E. and Mirowski, P. (2019) The ghosts of Hayek in orthodox microeconomics: Markets as information processors. In: Beverungen, A., Mirowski, P. and Nik-Khah, E. (eds.) Markets. Lüneburg: Meson Press, 31–70.
Pearl, J. (2018a) The Book of Why: The New Science of Cause and Effect. London: Penguin Random House.
Pearl, J. (2018b) Theoretical impediments to machine learning with seven sparks from the causal revolution. ArXiv preprint. https://arxiv.org/pdf/1801.04016. Accessed 10 December 2025.
Proctor, R. and Schiebinger, L. (eds.) (2008) Agnotology: The Making and Unmaking of Ignorance. Stanford, CA: Stanford University Press.
Schmidt, E., Putora, P.M. and Fijten, R. (2025) The epistemic cost of opacity: How the use of artificial intelligence undermines the knowledge of medical doctors in high-stakes contexts. Philosophy & Technology, 38(1): 5.
Skidelsky, R. (2009) Keynes: The Return of the Master. London: Penguin.
Smithson, M. (1989) Ignorance and Uncertainty: Emerging Paradigms. New York and London: Springer.
Souleles, D. (2019) The distribution of ignorance on financial markets. Economy and Society, 48(4): 510–31.
Spears, T. and Hansen, K.B. (2025) The use and promises of machine learning in financial markets: From mundane practices to complex automated systems. In: Borch, C. and Pardo-Guerra, J.P. (eds.) The Oxford Handbook of the Sociology of Machine Learning. Oxford: Oxford University Press.
Steinberg, E. (2024) AI, radical ignorance, and the institutional approach to consent. Philosophy & Technology, 37(3): 101.
Svetlova, E. (2018) Financial Models and Society: Villains or Scapegoats? Cheltenham: Edward Elgar Publishing.
Svetlova, E. (2022) AI ethics and systemic risks in finance. AI and Ethics, 2: 713–25.
Wehrli, S., Hertweck, C., Amirian, M., Glüge, S. and Stadelmann, T. (2022) Bias, awareness, and ignorance in deep-learning-based face recognition. AI and Ethics, 2: 509–22.
White, J.M. and Lidskog, R. (2022) Ignorance and the regulation of artificial intelligence. Journal of Risk Research, 25(4): 488–500.
Zodi, Z. (2022) Algorithmic explainability and legal reasoning. Theory and Practice of Legislation, 10(1): 67–92.