
A question of alignment – AI, GenAI and applied linguistics

Published online by Cambridge University Press: 24 July 2025

Niall Curry
Affiliation:
Department of Languages, Information and Communications, Manchester Metropolitan University, Manchester, UK
Tony McEnery*
Affiliation:
Department of Linguistics and English Language, Lancaster University, Lancaster, England, UK; School of Foreign Studies, Xi'an Jiaotong University, Xi'an, Shaanxi, China
Gavin Brookes
Affiliation:
Department of Linguistics and English Language, Lancaster University, Lancaster, England, UK
Corresponding author: Tony McEnery; Email: a.mcenery@lancaster.ac.uk

Abstract

Recent developments in artificial intelligence (AI) in general, and Generative AI (GenAI) in particular, have brought about changes across the academy. In applied linguistics, a growing body of work is emerging dedicated to testing and evaluating the use of AI in a range of subfields, spanning language education, sociolinguistics, translation studies, corpus linguistics, and discourse studies, inter alia. This paper explores the impact of AI on applied linguistics, reflecting on the alignment of contemporary AI research with the epistemological, ontological, and ethical traditions of applied linguistics. Through this critical appraisal, we identify areas of misalignment regarding perspectives on knowing, being, and evaluating research practices. The question of alignment guides our discussion as we address the potential affordances of AI and GenAI for applied linguistics as well as some of the challenges that we face when employing AI and GenAI as part of applied linguistics research processes. The goal of this paper is to attempt to align perspectives in these disparate fields and forge a fruitful way ahead for further critical interrogation and integration of AI and GenAI into applied linguistics.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

Introduction

In recent years, AI – broadly conceived – has become a primary concern within academia. With GenAI able to produce coherent text in response to prompts and, importantly, to facilitate user analyses through the large language models (LLMs) that power it, access to AI has never been more widespread, nor so readily repurposed. Amid this surge of work taking place across the academy, we might be forgiven for thinking that AI is a recent and entirely novel innovation. However, it is not.

While the technology may have crossed an inflection point at which its usability and utility have improved markedly, GenAI and AI have a much longer history, emerging in earnest in the 1940s, with the term artificial intelligence being coined in 1955 (McCarthy et al., 1955), at a time when computational power was far more limited than it is today. Turing (1948) began the development of AI with his "intelligent machinery," designed to use logic and reasoning to solve problems and make decisions. This work inspired the groundbreaking Turing Test (Turing, 1950), a test designed to assess machine intelligence through the evaluation of human–machine interaction – if a machine's conversational contributions are indistinguishable from those of humans, then that could be viewed as evidence of intelligence. This test of intelligence is a long-standing and hotly debated issue in research on AI (e.g., Curry et al., 2024; Pitrat, 1995; Schank, 1980; Wang, 2007). Nevertheless, this initial work triggered a growth of interest in the notion of intelligent machines, including early comparisons between computers and the brain (von Neumann, 1958).

Attempts to produce GenAI chatbots are also rooted in this past. Work in AI in general, and in the development of dialogue systems in particular, was, until the 21st century, dominated by so-called symbolic approaches – attempts to craft sets of rules that embodied human knowledge in some formalism that could be used to produce human-like performance. Early chatbots such as ELIZA (Weizenbaum, 1976) were developed in this tradition; ELIZA offered textual responses to its human interactants while performing the role of a psychotherapist, albeit with mixed success (Goodwin & Hein, 1982). The task, psychotherapy, was chosen because the nature of dialogue in psychotherapy matched the affordances of the technology – largely reflecting transformed versions of inputs back to the user rather than being creative per se. Weizenbaum had never truly intended the chatbot to be a success, as he sought to use ELIZA as a means to demonstrate the over-simplistic nature of talk therapy. Yet users appeared to develop relationships with the chatbot nonetheless. This theme – a technology's ability distorting the range of its use – is one to which we return with reference to modern AI later in this paper.

The progress of AI to the present has not been without interruption – indeed, so-called "AI winters" have blighted the field in the past, largely caused by expectation exceeding delivery in such a way that funding was withdrawn from the field. The first AI winter, claimed to have lasted from 1974 to 1980 (Crevier, 1993, pp. 163–196), was initiated by a highly critical review of AI research that led to the shift of research funding away from it on both sides of the Atlantic (Pierce et al., 1966). Yet AI research persisted in this lean time, and during it, important developments in statistical AI emerged, inspired by one earlier development from the 1940s that is often overlooked: the work of Claude Shannon (Shannon & Weaver, 1949). More than any of the work mentioned, it is probably Shannon's work that is most directly connected to the technology that underpins AI today. Shannon developed a mathematical theory of communication – information theory – and, with Warren Weaver, explored its potential for the general modeling of communication (Shannon & Weaver, 1949). The most immediate application of the work was in telecommunications, and it was not until researchers started to apply it to speech processing (Jelinek, 1976) and morphosyntactic analysis (Garside et al., 1987) during the aforementioned AI winter that the potential of an expressly mathematical approach to modeling intelligent behavior, based on what we might see as corpora, was realized. This brought about a slow turn from symbolic to statistical AI. The impact of that work is already evident in applied linguistics, as it was entwined with the development of corpus linguistics (see McEnery et al., 2019). In this paper, we focus on the impact that such developments have had on research in applied linguistics, questioning not whether tools such as GenAI can work but asking, rather, how they should be used to support applied linguistics research.
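To make this statistical turn concrete, the short sketch below – a purely illustrative example of ours, not drawn from any of the works cited – estimates bigram probabilities from a tiny invented corpus and computes the Shannon information content ("surprisal") of a word in context, the elementary move that connects information theory to corpus-based modeling of language.

import math
from collections import Counter

corpus = "the cat sat on the mat the cat saw the dog".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, word):
    # Maximum-likelihood estimate of P(word | prev) from the toy corpus.
    return bigram_counts[(prev, word)] / unigram_counts[prev]

def surprisal(prev, word):
    # Shannon information content, in bits: -log2 P(word | prev).
    return -math.log2(bigram_prob(prev, word))

print(bigram_prob("the", "cat"))  # 0.5: "the" is followed by "cat" in 2 of its 4 uses
print(surprisal("the", "cat"))    # 1.0 bit of information

Trivial though it is, the sketch shows how "knowledge" of language in this tradition amounts to probability estimates over attested sequences, a point to which we return below.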

This question is timely. AI has become increasingly mainstreamed, most noticeably through the launch of OpenAI's ChatGPT in autumn 2022, which brought AI into the lives of users across the world. Since then, in academic contexts, researchers in applied linguistics have approached GenAI with a mix of excitement and trepidation. For some, the advent of accessible AI meant potential advances in scholarship and analytical practices (Crosthwaite & Baisa, 2023), enhanced efficiency and research assistance (Pack & Maloney, 2023), and the possibility of developing more democratic approaches to education (Cohen et al., 2024). For others, AI produced fears of obsolescence (Li et al., 2024), falling quality in research (Curry et al., 2024), loss of knowledge and ways of thinking (Kuteeva & Andersson, 2024), and rising inequality and forms of technological divide (Sahebi & Formosa, 2024). Individually, these disparate responses could be understood as reactionary, constructing a form of "science-as-process" (Curry & Pérez-Paredes, 2021, p. 492). Yet, collectively, this work can help us to see, with some critical distance, how our understanding of AI has evolved in recent years.

Reflecting on this body of work, this paper evaluates research on AI in applied linguistics and offers a critical perspective on the affordances of AI for our wider field. In the "AI in applied linguistics: The alignment problem" section, we critically review relevant literature, focusing on the so-called "alignment problem" – that is, the potential (mis)alignment of AI with human values and intelligence (Christian, 2021, among others). With reference to alignment, we explore three foundational concepts that govern the actions and decision-making practices of the human researcher: epistemology, ontology, and ethics. Each of these guiding factors calls on researchers to consider how they view and understand knowledge, reality, and morality. The "AI in applied linguistics: A focus on language education" section presents a focused discussion of the application of AI in an area of applied linguistics in which AI's impact is being felt most strongly – language teaching and learning. Finally, the "Closing remarks" section reflects on the contributions of this paper and points to future pathways for developments in the field. Throughout, our focus is principally upon GenAI but, where appropriate, we illustrate our argument with reference to other AI tools of relevance to applied linguistics.

AI in applied linguistics: The alignment problem

While the recent growth of research on AI is marked, a number of earlier studies foreshadowed the contemporary relevance of AI to linguists. For example, in the symbolic AI tradition, Goldstein and Papert (1977) argued that by representing language as a structured system governed by universal rules, AI-based linguistic models supported empirical, hypothesis-driven language research. Goodwin and Hein (1982) attempted to bridge research in linguistics, applied linguistics, and AI by arguing for the value of AI for language analysis and highlighting the early barriers impeding cross-pollination of these fields. For example, they argued that AI researchers do not engage sufficiently with linguistics research, and that some linguists have a propensity to cling to theories that are demonstrably deficient. While these initial advances are commendable for their vision and sustained relevance, it is not just approaches to AI that have changed since then. Contemporary research on AI in applied linguistics takes a very different shape as, nowadays, applied linguistics is epitomized by its shift toward multidisciplinarity; descriptive, socialized, and contextualized views of language; and diverse understandings of what it means for linguistics to be applied. To better understand the degree to which the values of AI and GenAI can be aligned with those of contemporary applied linguistics, the following three sections discuss research on (Gen)AI and epistemology, ontology, and ethics in applied linguistics.

AI and epistemology in applied linguistics

Epistemology is an inherently fuzzy concept. Broadly conceived, epistemology theorizes how we construct and make sense of our knowledge. Epistemology thus governs the values by which we validate and justify knowledge and the conventions we follow and reconstruct within our wider knowledge-making community and communities of practice. Owing to its diverse subfields and varied theoretical frameworks and methodological approaches (Kuteeva & Andersson, 2024), applied linguistics is arguably not governed by any single epistemology. Rather, it is host to a plurality of epistemologies (Dewaele, 2019; Pennycook, 2018) that shape how we come to know things in our field. For example, research in applied linguistics often relies on local, context-dependent knowledge (Kuteeva & Andersson, 2024). Such knowledge is mediated by language, with resultant knowledge cultures varying across languages, modalities, cultures, time, and contexts of production (Curry, 2024; Pérez-Paredes & Curry, 2024; Rymes et al., 2024). This breadth renders applied linguistics a heavily reflexive field that sees strength in reflection, positionality, and criticality (Consoli & Ganassin, 2023), and one that moves iteratively between epistemological perspectives.

In research on GenAI, in particular, there is potential for epistemological blurring between AI and applied linguistics. For example, GenAI, with its ability to process vast volumes of data and produce outputs in various modalities, can be seen as a tool for knowledge generation (Creely, 2024); it should be noted, however, that this term does not correspond to "knowledge generation" as used by the likes of Wei et al. (2024), where it applies specifically to the validity of LLMs. The nature of such knowledge is a key concern, as what it means to know something for AI versus humans has long been a matter of discussion (Pitrat, 1995; Schank, 1980; Wang, 2007). For GenAI, knowing is a product of processes built on probabilities and pattern recognition, derived from training data (Cope & Kalantzis, 2024). Such a conceptualization of knowing may have little to do with the knowledge-making processes attributed to the human researcher in applied linguistics. Human capacity for creativity, the role of human accountability in research, and knowledge-making as a reflexive, flexible, and non-linear process are among the ways in which such differences are realized. As such, we may wonder whether a machine can indeed know anything at all. In our view, we can understand AI approach(es) to epistemology or knowing – broadly conceived – as its way of creating knowledge, though we may wonder whether this knowledge belongs to the AI or to the sources on which its LLM is trained. Ultimately, we do not see AI-driven processes as the same as those involved in human knowledge-making for applied linguistics – at least for now.

For AI, these processes of knowledge-making can be difficult and, at times, impossible to access. Consequently, GenAI tools and their LLMs have been found to lack transparency and have been likened to a conceptual black box (Curry et al., 2024; Wiener, 1961). It is interesting to note that Wiener's attribution of a black box quality to machines pertained mainly to the unknowns within machine systems and their processes that needed to be understood or clarified through system identification. In the context of AI, similar notions of the unknowns within machines remain relevant as, for some, the metaphorical treatment of GenAI tools as representing a black box is their means of critiquing the acceptability of AI in academic work. This critique of AI's unknowns has led to a focus on so-called explainable AI, which centers on delivering greater transparency in all facets of AI development (Xu et al., 2019). Yet, despite such advances, issues remain, as the data on which AI is trained are inevitably limited – there may not be enough data available to perfect the models, should perfection be possible. This problem is exacerbated by AI-generated text starting to populate the web pages from which data for training models are drawn, meaning that GenAI may poison the source of data it draws from in such a way that it reinforces its own limitations and biases.

GenAI models thus present a significant epistemological challenge as they reflect biases that can homogenize language, propagate limited cultural narratives (Choi, 2022; Putland et al., 2023), and limit our capacity to move beyond normative worldviews represented in the training data. As such, the epistemology of AI, if such a thing exists, lacks a conceptual understanding of the world or human-like sentience (Rodríguez, 2023). Thus, AI's approach to knowing contrasts sharply with human knowledge in applied linguistics, which is rooted in criticality, lived experience, cultural context, and interaction with the physical world (Creely, 2024; Tang & Cooper, 2024). Recognition of such differences has begun to have a ripple effect in AI research, with some shifting from language models toward world models (e.g., Xiang et al., 2023) that are designed to reflect the complex models of reality on which humans draw. These models do not see language as divisible from context and offer wider world contextualization of data, as opposed to solely a linguistic contextualization of data.

As applied linguistics embodies a plurality of epistemologies, we should query, as Kuteeva and Andersson (2024) do, how AI-assisted tools can respond to the differing epistemological stances employed within and across applied linguistics studies. Likewise, in recognizing the importance of localized and situated epistemologies in studies across cultures and languages, we might consider what the limitations of AI's approach to knowledge-building mean for applied linguistics. For example, GenAI may not capture the nuances of human language (Sardinha, 2024) and has been found to fabricate data, a process AI researchers call hallucination (e.g., Baker & Kanade, 2000; Curry et al., 2024). This was initially seen as an asset, as AI could hallucinate to enhance the quality of images and supplement missing detail in them, for example. Yet, as AI has gained more prominence in fields such as applied linguistics, hallucinations have come to be seen more negatively as they have the potential to create content that reproduces biases and eliminates diverse cultural and linguistic contexts (Brandt & Hazel, 2024; Choi, 2022; Putland et al., 2023). Likewise, fears of epistemicide – the destruction of knowledge systems that do not align with the dominant paradigm – emerge owing to GenAI's inclination to standardize and simplify knowledge (Pragya, 2024). Many of these issues with GenAI are antithetical to wider social justice movements that shape contemporary applied linguistics (e.g., Badwan, 2021). Moreover, given that language is seen in applied linguistics as a means of mediating the creation of new knowledge, a question arises regarding the extent to which GenAI can support this creative practice if its operationalization requires the reproduction of existing, learned patterns (Kuteeva & Andersson, 2024). Thus, with reference to epistemologies, we can begin to see the alignment problem (i.e., the potential (mis)alignment of research and practice in AI and applied linguistics) as an active issue.

To begin to address the problem, there is a need for a critical understanding of GenAI that involves evaluating its capabilities and limitations and safeguarding the integrity of culturally rich applied linguistics research (Creely, 2024). To support this effort, we must seek means to ensure that any knowledge GenAI appears to produce can be interrogated within the heterogeneous epistemological paradigm of applied linguistics (Pennycook, 2018). This should include a focus on preserving researchers' abilities to critically select, analyze, and interpret information as well as their ability to challenge normative views constructed through discourse. As researchers, we implicitly accept the accountability that comes with conducting research and the importance of sustained, critical, and reflective practices that underpin the research process and the development of a discipline as well as a wider community of practice. This accountability will always remain with the researcher, and any use of AI tools in research will need to be considered in line with the potential impact of AI tools both on knowledge as a product and on knowledge-making as a process.

To address the alignment problem, we must also balance GenAI's capabilities with human agency and creativity. We must augment rather than replace human skills (e.g., using GenAI for routine and automatable tasks) and develop criteria for evaluating the quality of knowledge produced by GenAI. This will be a challenge, given the growing body of work revealing that AI-conducted research does not necessarily outperform research conducted by humans (e.g., Vaccaro et al., 2024). This challenge is echoed by the evident epistemological divide between applied linguistics and GenAI. To date, few studies at this interface directly engage with notions of epistemology. Those that do have consistently problematized the capacity of GenAI to work within the epistemologies that govern applied linguistics. Research in applied linguistics is built on the central and indispensable role of human culture in knowledge construction (Tang & Cooper, 2024). This is why the alignment problem is so important to applied linguistics. As we move forward, it is imperative that our approach to knowledge construction in applied linguistics remains inseparable from our interaction with material objects in a world increasingly influenced by AI. AI as a tool must be put in service to epistemologies in applied linguistics – the epistemologies must not change simply to fit the affordances of AI.

AI and ontology in applied linguistics

The alignment problem extends from epistemology to ontology. Ontology is concerned with the nature of being and existence (Hall & Wicaksono, 2020). Engaging in ontological work as part of the applied linguistics research process essentially involves asking what kinds of things exist, how they relate to each other, and whether these things exist independently of human minds and language (Hall & Wicaksono, 2020). Ontology is foundational to much of what applied linguists study, as language is typically seen as the primary means through which immaterial entities and social institutions are socially constructed. In applied linguistics, ontologies are not fixed but are actively negotiated and performed through social practice (Demuro & Gurney, 2021; Dewaele, 2019). The plurality of ontologies in applied linguistics mirrors that of epistemologies – an unsurprising linkage, given that ontologies can be shaped by epistemologies and vice versa (Demuro & Gurney, 2021; Hall & Wicaksono, 2020). Consequently, in applied linguistics, language can be seen as an object of study (e.g., Curry & Pérez-Paredes, 2021) or a practice (e.g., León et al., 2024), inter alia, depending on the focus of any given analysis. From a social justice perspective, ontologies are shaped by researchers, language users, and disciplinary practices (Schalley, 2019). In this way, ontologies can be used to challenge monolithic and dominant views of language and promote awareness of the diverse ways in which language is understood and used (Hall & Wicaksono, 2020).

In AI research, whether it be symbolic or statistical, ontology takes a very different form, as there is a strong tendency to select and fix ontologies. Researchers seek to control ontologies as a way of simplifying the problems facing AI system developers. As such, AI ontologies are often designed for specific domains or purposes and aim to represent reality in a formal, structured, machine-readable format (Machado et al., 2020). A good example of this was the EAGLES initiative, which sought to standardize part-of-speech tag sets across languages (Leech & Wilson, 1994). This practice of setting ontologies also underlies large-scale AI resources such as WordNet (Miller et al., 1990). Yet fixing ontologies is largely a convenience – making one set of decisions to pare down the complexity of the legitimate choices that, in fact, exist (see McEnery & Brezina, 2022). Thus, although AI draws on the same broad philosophical tradition and notion of being as applied linguistics, ontology, when localized in AI research, looks very different. A key goal of ontologies in AI is to achieve semantic interoperability, thus allowing different systems to exchange and understand data while providing a common vocabulary and rules for structuring data (Machado et al., 2020; Spyns & De Bo, 2004). These ontologies are typically organized, representing knowledge through hierarchical structures or taxonomies (Iliadis, 2019; Schalley, 2019) designed to support practical tasks, like reasoning, inference, and knowledge retrieval (Machado et al., 2020; Spyns & De Bo, 2004). They are also data-driven, that is, developed to organize and label data (Iliadis, 2019). Reflecting upon both this and the previous paragraph, it becomes clear that ontology and the notion of being in applied linguistics and AI represent a further dimension of the alignment problem.
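The contrast can be illustrated with a deliberately simple sketch. The toy taxonomy below is our own invention, loosely in the spirit of WordNet-style hypernym hierarchies rather than taken from any cited resource; it shows how an AI ontology fixes knowledge in a machine-readable hierarchy that supports inference, and, in doing so, forces every term into exactly one place in the structure.

# A toy, hand-fixed ontology: each concept is assigned exactly one hypernym
# (parent category). Real resources such as WordNet are far larger, but the
# underlying move - fixing a machine-readable hierarchy - is the same.
HYPERNYMS = {
    "dog": "canine",
    "canine": "mammal",
    "cat": "feline",
    "feline": "mammal",
    "mammal": "animal",
    "animal": "entity",
}

def ancestors(concept):
    # Walk upward through the fixed hierarchy, collecting all hypernyms.
    chain = []
    while concept in HYPERNYMS:
        concept = HYPERNYMS[concept]
        chain.append(concept)
    return chain

def is_a(concept, category):
    # Simple taxonomic inference: is `concept` a kind of `category`?
    return category in ancestors(concept)

print(ancestors("dog"))       # ['canine', 'mammal', 'animal', 'entity']
print(is_a("dog", "animal"))  # True
print(is_a("dog", "feline"))  # False

Note what the structure cannot express: contested, contextual, or multiple category memberships. "Dog" is whatever the ontology's builders decided it is – precisely the simplification discussed above.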

Applied linguists use ontology as a critical lens to examine how language and its use are shaped by social, cultural, and political forces. AI researchers use ontologies to create formal representations of knowledge for machine processing. Thus, while applied linguistics prioritizes philosophical understandings of ontology, emphasizing the social and cultural dimensions of language, AI adopts a practical, engineering approach, using ontologies as tools to model and manage information for specific applications in a way that typically, and explicitly, simplifies the complexity of the system described (Hall & Wicaksono, 2020) and promotes interoperability (Iliadis, 2019). Unlike applied linguistics research, wherein the complexity of ontologies is not only accepted but seen as a core strength, in AI research, simplification in search of standardization is a key research practice. Part of the simplification is usually normative – the data on which the systems are developed and the categories chosen are oriented toward the typical.

As with epistemology, applied linguists need to be aware of misalignment between the ontological goals of applied linguistics and AI. While the appeal of being able to apply AI ontologies to a research problem is clearly great, this advantage should not be gained at the expense of distorting our view of the object of study. For example, AI language models can struggle with the nuances of human language, including idioms, cultural meanings, and expressions deeply embedded in cultural and historical contexts (Creely, 2024). A good example is McEnery and Baker's (2017) attempt to use a semantic tagger, based on a semantic ontology, to explore the meaning of words involved in the representation of prostitutes. The ontology was developed for work on present-day English, yet the researchers were applying it to early-modern English. The attempt was deemed a failure on ontological grounds. The diminished role of religion in the modern world meant that religion had been relegated to a single semantic field in the system used. However, in the early-modern period, religion was a frequent and complex feature of public discourse – around half the texts in the billion-word corpus used in the study concerned religion (McEnery & Baker, 2017, p. 163, see note 10). The descriptive poverty of the ontology rendered it unusable – it eliminated rather than illuminated nuance. Where this happens, researchers should abandon the tool, not the object of study.

Compared to the issue of epistemology, to date an even narrower selection of studies in applied linguistics on the topic of AI or GenAI engages in depth with the notion of ontology. An initial means of advancing thinking in this area would thus be the inclusion of reflections on ontologies in research using GenAI. Those studies that do exist indicate that the worldview of applied linguistics and the limited linguistic models of GenAI may be incommensurable. While blending human input with machine learning in so-called "human in the loop" approaches may seem to address this issue, in practice such approaches have proved difficult to implement (Mosqueira-Rey et al., 2023). Based on this, we argue that AI cannot replace the human analyst in the research process, especially if we continue to care about issues such as equality and fairness in socially situated research. Challenges such as the well-documented tendency of AI algorithms to perpetuate negative racial stereotypes (see Baker & Potts, 2013) should cause any applied linguistics researcher to pause before using AI uncritically. It is essential, in determining the potential use of GenAI, that applied linguists take critical approaches to technological determinism and solutionism (McKnight & Shipp, 2024), the so-called ELIZA effect, whereby humans place implicit trust in machines (McKnight & Shipp, 2024), and the wider digital divide that governs access to GenAI tools (Li, 2023). With these divergent perspectives on the role of ontology, we once again find ourselves searching for a means through which applied linguistics and AI can align.

As a potential move toward epistemological and ontological alignment, users may now, for example, try generating training data in which a linguistic analysis is encoded in annotation, to get a GenAI tool to learn how to undertake that analysis and then apply it to new data. The tool learns and uses the ontology provided by the user. While this may on occasion prove helpful, it is littered with problems as a research process. Firstly, users may not be able to generate sufficient training data to produce an analysis that is accurate enough to be useful. Secondly, in trying to do so, they may distort their research goals, investing a great deal of time on a Grail quest for an automated analysis rather than actually undertaking the research they set out to complete. This in part may be behind findings emerging in industry that AI assistants, for example, cost rather than save time (Monahan & Burlacu, 2024). Thirdly, the results will almost certainly exhibit errors – users must be aware of this and invest time in both quantifying and understanding those errors in an attempt to appreciate the distortions that those errors may produce. It is unlikely that the errors will be smoothly distributed; it is much more likely that they will be focused on the non-normative. Fourthly, the approach clearly makes daunting demands of the researcher in terms of permitting others to repeat their study – while the researcher may record the prompts and the version of the GenAI tool used and, in some circumstances, the seed (a random number) used to initiate the generation process (a minimal record of this kind is sketched below), it remains to be seen whether, in the long term, this will be sufficient to permit researchers to critically evaluate the findings of papers based on such research practices. Finally, it may be that the automated approach fails – in such a case, the danger of point two above is real, as time is poured into trying to get an LLM to undertake an annotation that it is simply not capable of doing effectively. This final point is most likely to apply at the more subjective end of linguistic analyses or in areas in which an understanding of real-world social context is key, for example, in pragmatics. As a rough rule of thumb, in our experience, where a research question relates very closely to lexis, the likelihood of an LLM being trained to perform an analysis is good (e.g., Curry et al., 2024; Yu et al., 2024). However, as we push through into a reliance on the non-lexical and non-linguistic, the likelihood of an LLM being of use reduces, and the greater the danger of point two above becomes. So, while LLMs may, in principle, assist with the ontological alignment problem, they are not a catch-all solution to it.
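The sketch below illustrates the record-keeping raised under the fourth point. It is a hedged example of ours, not a recommendation from any cited study: it assumes the OpenAI Python SDK (version 1 or later), the model name, prompt, and seed are placeholder values, other providers' interfaces will differ, and, as noted, even a record like this may not guarantee replicability.

# A minimal sketch of record-keeping for LLM-assisted annotation: log the
# prompt, model version, seed, and settings alongside every output so that
# others can at least attempt to re-run the analysis.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def annotate(text, prompt, model="gpt-4o-mini", seed=12345):
    response = client.chat.completions.create(
        model=model,
        seed=seed,        # best-effort reproducibility only; not guaranteed
        temperature=0,    # reduces, but does not eliminate, output variation
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": text},
        ],
    )
    label = response.choices[0].message.content
    # The audit record a replication attempt would need.
    return label, {
        "text": text,
        "prompt": prompt,
        "model": model,
        "seed": seed,
        "system_fingerprint": response.system_fingerprint,
        "label": label,
    }

label, record = annotate(
    "We might could do that.",  # a hypothetical example sentence
    "Label the sentence STANDARD or NON-STANDARD. Reply with one word.",
)
print(json.dumps(record, indent=2))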

A further way of approaching the question of alignment could be through the issue of representation. Both applied linguistics and AI are interested in questions of representation, albeit from different perspectives. Applied linguistics is interested in how language represents concepts and how this representation varies across social space (e.g., speakers, cultures) and time. AI is interested in how knowledge can be formally represented for computational purposes. Yet, there is potential for dialogue here. As Goodwin and Hein (1982) noted, applied linguistics could bring a critical perspective to AI research by challenging the implicit ontological assumptions embedded in AI systems. It can be difficult to bridge disciplinary ontologies; however, this is arguably the case for any interdisciplinary endeavor and, optimistically, some advances have already been made in this area (e.g., in the development of semantic web technologies; Machado et al., 2020; Schalley, 2019; Spyns & De Bo, 2004). Looking forward, it is imperative that we seek to further share our foundational views of language across disciplinary boundaries to ensure that GenAI development and deployment are sensitive to the diverse ways in which language and knowledge are understood and situated within our lived realities.

AI and ethics in applied linguistics

Applied linguistics research is shaped by macroethics, concerned with institutional forms of ethics, and microethics, concerned with situated ethical challenges within and across the research process (De Costa et al., 2021; Yaw et al., 2023). Both facets of ethics are generally guided by respect for people and the goal of yielding optimal benefits while minimizing harm and responding to issues of (social) justice (De Costa, 2015). Applied linguistics has shifted over time toward a context-dependent ethics and the view that ethical considerations are not static but are negotiated within specific rhetorical situations (Vetter et al., 2024). Ethics has moved beyond a focus on individuals toward a socially constructed view of evaluating ethical behavior.

Macro- and microethics work to protect research participants and ensure the responsible application of linguistic knowledge to social challenges. In this way, ethics in applied linguistics shapes and is shaped by its varying epistemologies and ontologies, linking ethics firmly to the discussion of alignment in the previous sections. The macro- and microethical approach of applied linguistics typically focuses on research practices, addressing issues such as informed consent, data privacy, and the potential for harm to participants (De Costa et al., 2021). In each case, ethics is seen as being highly dependent on the specific research context, researchers' positionality, and participants' cultural values. Though learned societies can and do proffer ethical guidelines (e.g., BAAL, 2021), they often do so with reflexivity, acknowledging that there is no one universally accepted approach to conducting ethical research in applied linguistics. That is why issues of ethics are often explored within specific subfields of applied linguistics, for example, language teaching (Anderson, 2017) and corpus linguistics (Brookes & McEnery, 2024). This compartmentalization of ethics provides space to localize ethical concerns amid our varied practices. Within and across applied linguistics, researchers must then identify ethical concerns and translate idealized discussions of ethics into practice. This transformation can prove challenging when working in complex contexts, with industry partners, and with or on marginalized groups.

While our exploration of epistemologies and ontologies in AI and applied linguistics offered insight into areas of misalignment, ethics in AI research is arguably more easily aligned with ethics in applied linguistics. Ethics is a central facet of responsible AI research, with researchers sharing a focus on evaluating issues such as algorithmic bias, the impact of AI on society, and the complex relationship between AI and the climate crisis (Jabotinsky & Sarel, 2024; Kirova et al., 2023). However, as in the case of applied linguistics, ethics in AI research represents an ideal that is not always realized in practice, particularly in the AI industry, where ethical concerns regarding the alignment of GenAI with applied linguistics arise.

Ethics in AI research is largely concerned with the moral principles that govern the development and practical implementation of AI systems, particularly with regard to a lack of transparency and fairness, and algorithmic bias (Kirova et al., 2023). However, there have been instances in which AI companies have been accused of training their models on copyrighted materials without permission (e.g., Milmo, 2024), and this has raised questions surrounding the legality of certain AI models (Gromova et al., 2023). Further critiques of training data contend that models used in popular GenAI tools may not offer a nuanced picture of reality (Creely, 2024; Farrelly & Baker, 2023) – at least from an applied linguistics perspective. What AI can produce is, as noted, confined to its ontology, that is, to what the AI has been trained on. Typically, AI models are trained on texts from the web and therefore reflect the style and tone attributed to journalistic language (Nesi, 2024). As mentioned previously, the increasing prominence of AI-generated texts on the web (Europol, 2022) means that new AI models will inevitably be trained on AI-produced texts, which risks the exponential proliferation of biases, prejudices, and dominant perspectives through GenAI tools. Indeed, Thompson et al. (2024) argue that 57% of online data is already the result of AI generation or machine translation.

Thus, the question of bias in AI-produced texts may not be an issue of algorithmic bias per se. It may be that AI and GenAI tools are effectively reconstructing reality based on norms and tendencies in the data as it exists. When evaluated from an algorithmic perspective, we will then likely conclude that such tools are effective at doing their job and at ethically reflecting such social tendencies, even if the kinds of biases and stereotypes that are reproduced are views that many of us in (critical) applied linguistics would challenge. As discussed earlier, these problems with training data can result in GenAI tools that reproduce knowledge from uncredited sources (Creely, 2024; Stahl & Eke, 2024) and AI models that perpetuate existing societal prejudices and biases related to gender, race, and culture (Choi, 2022; Putland et al., 2023). Such issues in AI development are antithetical to values in applied linguistics and, from an ethical perspective, can undermine advances in social justice initiatives, researcher autonomy, and representations of identities, inter alia.

While black box research (Casal & Kessler, 2023; Curry et al., 2024) and reasoning processes (Lodge et al., 2023) in modern AI tools present epistemological and ontological challenges, they also create a particular ethical challenge for researchers in terms of the explanatory power of results derived from GenAI (for more on this, see Egbert et al., 2020; McEnery & Brezina, 2022). An analogy would be seeing the answer to a mathematical puzzle without being able to see how the answer was derived. Being unable to explain the answer is, in some ways, as problematic as a wrong answer, as it is the process of discovery, as well as the discovery itself, that forms the contribution to knowledge. In particular, understanding errors is rendered difficult-to-impossible in a context in which explanatory adequacy is low. As discussed in the "AI and epistemology in applied linguistics" section, research on explainable AI is now coming to the fore (see Angelov et al., 2021, for an overview) and offering some solutions to ethical concerns in the research process. However, some of the techniques used in modern AI and other areas, in particular opaque models such as random forests and deep neural networks (Rudin, 2019), provide a substantial, possibly insurmountable, barrier to explanation. More recent developments, such as ChatGPT o1, have endeavored to shed more light on these reasoning processes. However, the reliability, consistency, and depth of such reasoning still need to be tested. In this context, it is hardly surprising that questions of over-reliance on GenAI in the research process (Creely, 2024; Lodge et al., 2023), the impact of GenAI data processing and hallucinations on data integrity and security in applied linguistic research (e.g., Curry et al., 2024; Muñoz-Basols et al., 2023), and the role of GenAI in producing and reporting research findings (Casal & Kessler, 2023) abound in the literature. These concerns represent a notable misalignment between GenAI tools and the ethical values on which, in principle, research in applied linguistics rests.

With this overarching review in mind, it becomes clear that, despite academic research in AI demonstrating shared ethical values with applied linguistics research, in practice commercial AI has not always enacted these values, and some of the black box algorithms used in both industrial and academic research are poorly aligned with epistemological and ontological practices in applied linguistics. From an applied linguistics perspective, it is reasonable to assert that the use of GenAI tools that reinforce bias and negatively impact society may undermine fairness and equity in language education (Choi, 2022), perpetuate normative or ethnocentric views (Spennemann, 2024), and mis- and underrepresent society's most vulnerable (Nguyen et al., 2024; Putland et al., 2023). Likewise, algorithms that obscure, simplify, or hide the processes by which analyses are undertaken also align poorly with macroethical research practices in applied linguistics. To develop a means for applied linguists to engage with AI ethically, such concerns must be addressed and attenuated.

To respond to this misalignment, we can turn to a growing body of work dedicated to enhancing GenAI users' critical AI literacy. This research seeks to make GenAI users aware of how AI works, its limitations, and the ethical implications of its use (e.g., Casal-Otero et al., 2023; Strauß, 2021; Walter, 2024). Yet, while research designed to develop guidelines for the ethical use of AI is already available, there remains a need to develop disciplinarily situated guidelines that localize ethical AI use or non-use within and across applied linguistics. We must tease apart and juxtapose contextually situated ethics in applied linguistics and AI research and determine their compatibility. We must adopt a critical use of AI and question the environmental and social impacts of AI, particularly in the global south, where limited resources, such as water and energy, are used to power and cool large server farms (Bashir et al., 2024). There are movements toward sustainable AI (see van Wynsberghe, 2021), which is promising, and any user of GenAI should give considerable thought to the models on which they draw. Supporting sustainable AI when we use AI may be the best means of combatting such unethical practices. More importantly, however, we must decide whether or not the expediency that AI promises is worth any such costs or negative impacts. Grieve et al. (2025) note that language models could be improved through engagement with sociolinguistics research. Arguably, drawing on such social perspectives on language when developing AI could help to lead us towards critical and responsible use of AI in applied linguistics – a use that prioritizes the human values, transparency, and accountability that sit at the heart of applied linguistics.

AI in applied linguistics: A focus on language education

When taken together, epistemology, ontology, and ethics offer a critical framework through which we can evaluate the challenges and opportunities inherent in the integration of AI and GenAI with applied linguistics. While there are shared perspectives across the fields of AI and applied linguistics, particularly in the context of ethics, potential misalignments need to be addressed before AI and GenAI can be fully integrated into applied linguistics research. This must be done in a way that neither diminishes nor degrades the essential nature of applied linguistics, as discussed. To exemplify these tensions within a subfield of applied linguistics, this section discusses recent literature on AI in language education, an area in which AI is being adopted rapidly.

AI already appears to be having a significant impact on language education and, though much of the optimism surrounding AI is tempered with caution, many educators and learners appear to be embracing AI for its perceived ability to personalize learning, provide feedback, and create engaging content for language learners (AbuSahyon et al., 2023; Betal, 2023). In the context of personalized learning, research in language education has investigated the affordances of AI for creating customized learning experiences tailored to individual student needs (Konyrova, 2024) and immersive learning environments (Betal, 2023; Negrila, 2023). Li and Wang (2024) claim that AI can help learners identify and select resources to support their learning, which, they argue, renders GenAI tools effective and engaging language learning aids. In materials and assessment development, AI has been used to develop interactive learning materials, including games, videos, and quizzes (Amonova et al., 2023), support teachers in their lesson planning (Kostka & Toncelli, 2023), create dialogic chatbots designed to provide opportunities for conversational practice and feedback (Katsarou et al., 2023; Vajjala, 2024), and facilitate automated marking and assessment (Negrila, 2023). In many cases, the results appear promising.

Some studies claim that AI can improve language education by facilitating the learning of vocabulary, grammar, and pronunciation, and the development of written language skills (Cohen et al., 2024; Konyrova, 2024). Elsewhere, it has been argued that AI can increase learner engagement, motivation, and autonomy (AbuSahyon et al., 2023; Betal, 2023; Konyrova, 2024), support teachers in a range of pedagogical and administrative tasks (Amonova et al., 2023; Vajjala, 2024), and facilitate the development of critical thinking skills (Kostka & Toncelli, 2023). Yet, at the same time, a debate has also emerged in which alignment is a key issue. For example, Creely (2024) has argued for a critical approach to the use of AI, arguing that how learners are trained to engage with AI-produced texts plays a key role in governing their capacity to develop critical thinking skills, a goal that should not be abandoned.

The potential of misalignment to perpetuate bias has also emerged as an issue in the literature. For example, there are concerns about the accuracy and authenticity of texts produced by AI, including the potential for cultural and linguistic nuances to be absent or distorted (Amonova et al., 2023; Choi, 2022; Creely, 2024). With language education increasingly moving towards international views of language and language varieties, this propensity for GenAI technologies to reinforce standard varieties of language represents a potential challenge, ideologically, for applied linguistics. Studies in this area critique the data on which AI models are trained and question their capacity to meet the needs of culturally diverse classrooms. Questions of ownership of GenAI's intellectual output (Betal, 2023; Creely, 2024), data privacy and security when using AI and GenAI in language learning (Betal, 2023; Creely, 2024), and issues of access to technology and the digital divide (Konyrova, 2024; Li, 2023) give rise to further concerns surrounding the use of AI and GenAI in language education. Song and Song (2023) note that the product-oriented nature of GenAI risks a lack of engagement with the processes of learning, which may lead to over-reliance on technology – a practice that may diminish the development of critical thinking skills and creativity (Amonova et al., 2023; Creely, 2024). In the context of data-driven learning, for example, the process of searching for, reading, and analyzing language extracts independently and autonomously is the true learning experience (Flowerdew, 2015). What a learner finds or discovers at the end of this process is arguably of secondary importance. The expediency and limited criticality of GenAI would likely flip this perspective, as GenAI tools focus on giving answers and responses to learners. In such cases, we may wonder what role the action of engaging with GenAI tools can play in the learning process (e.g., Tolstykh & Oshchepkova, 2024).

Many of the challenges researchers identify when bringing AI into language education reflect the wider concerns of misalignment between AI and applied linguistics that we have discussed previously. First, from an epistemological perspective, Creely (2024) raises questions of how knowledge is acquired and validated when using AI, signaling epistemological tensions when using AI in language learning. Drawing on Borgmann (1984, 2006), Creely reflects on the quality of information produced by AI and whether AI-generated content can lead to genuine learning, as opposed to superficial learning, owing to the limited situatedness of GenAI tools. Chen (2023) notes that AI may force epistemological and ontological shifts in education through the introduction of new or varied methods of knowledge acquisition, comprehension, existence, and action within technology-driven educational settings. Likewise, Joseph (2023) queries whether AI can truly reason like humans or whether, when reasoning, it is simply generating reasoning-like responses. If it is the latter, then we may be witnessing the ELIZA effect, which risks creating implicit and unsubstantiated trust between learners and GenAI tools.

Second, ontological issues also arise. Alharbi (2023), for example, discusses how AI creates challenges for authorship and creativity, and Creely (2024) notes that, as AI becomes more proficient in creating literary and visual content, the distinction between human and AI creation becomes increasingly unclear. This reflection prompts a re-examination of traditional concepts of creativity and ownership. For student writing and assessment, this raises questions about to whom the work belongs and how it should be evaluated as part of the learning process. Creely (2024) also raises concerns about the potential for technology, including AI, to disengage humans from direct and meaningful relations with the world, potentially negatively impacting culture and language. Such a disengagement could create an ontological dissonance for learners.

Finally, from an ethical point of view, Baskara (2023) notes that AI is often trained on large datasets that may include personal data, raising concerns about consent and the fair use of personal data in language education contexts. Further work discusses the impact of AI on academic integrity (e.g., Al-Obaydi et al., 2023; Chen, 2023). On this topic, Kostka and Toncelli (2023) argue that there may be a need to redefine assessments to encourage the creative application of concepts and reflection on the learning process, owing to the ease with which GenAI tools can generate unique texts that learners may use as a substitute for their own writing. The scale of data needed by LLMs and other statistical AI techniques also points to a further ethical concern surrounding digital poverty. English has the most data available, hence English language models can be trained and refined in ways that are currently not possible for other languages. Should the benefits of AI for language learning prove to be real, then for languages experiencing digital poverty, for whatever reason, that poverty will translate into a disadvantage in terms of AI-based language learning opportunities and resources.

Recognizing the affordances and challenges that researchers identify surrounding the use of AI and GenAI in language education, we believe that this area can only develop through reorienting AI to better align its goals with those of applied linguistics. Developing research on AI literacies (e.g., Cohen et al., 2024), dedicated teacher training on AI (e.g., Belda-Medina & Calvo-Ferrer, 2022), the critical implementation of AI (e.g., Negrila, 2023), and collaborative approaches to developing educational resources with AI (Ji et al., 2023) will be necessary to begin this reorientation. Ultimately, while AI and GenAI have the potential to revolutionize language education, educators, researchers, and policymakers need to address the challenges brought about by a misalignment that generates epistemological, ontological, and ethical concerns. Arguably, what we see in language learning is mirrored across applied linguistics, and the solution proposed here is, we believe, applicable across the whole of the field. Only by addressing misalignment, in its varying forms, can we ensure a balanced and equitable approach to language learning and applied linguistics research in this rapidly evolving landscape.

Closing remarks

While AI tools may be able to process large amounts of data and produce written outputs much faster than any human researcher, the creativity, reflexivity, and criticality of a human researcher – central facets of the research process – remain unparalleled, as does the role of the human expert, who can identify, but would not make, the coarse errors that AI systems can produce. For applied linguistics, we argue that there is a need to sustain the primacy of human knowledge – knowledge that is nuanced, reflexive, and contextualized.

We propose, therefore, that applied linguistics knowledge, as mediated through language, and the research practices enacted by researchers and technology be seen as part of a socially constructed system that cannot be viewed as separate from the social realities in which it is created. In seeing our knowledge and research in this way, we maintain that a socially situated and reflexive ethical approach is necessary to critically evaluate language, texts, and their social implications. In bringing AI into this complex research process, we propose the need for critical AI literacies in applied linguistics that help us to interrogate the use of AI epistemologically, ontologically, and ethically. In applied linguistics, the growing body of work on AI and GenAI represents an exciting new research agenda for many. While ethics is receiving much welcome attention, few studies in applied linguistics have engaged with notions of epistemology and ontology when grappling with the role and remit of contemporary AI in the field. We encourage our fellow researchers to reflect on these notions as part of the research process, especially when giving AI and GenAI the agency to undertake an activity traditionally attributed to researchers, for example, conducting an analysis or interpreting data. Ultimately, we must hold such tools to the same standards to which we would hold ourselves.

This paper has focused on possible problems of alignment – yet is there evidence that this is a live issue, or might it be that, quietly, researchers in applied linguistics are aware of the issues we raise and are already enacting the solutions proposed here? We believe that emerging evidence suggests the issues we have discussed are very much relevant. For example, Udaya and Reddy (2024) produced an edited collection, published by Springer, that addresses vocabulary studies, corpus linguistics, and language pedagogy. The work was machine-generated, though the editors claim to have edited that output prior to publication. In Szudarski’s (2025) review of this volume, the alignment issue is apparent. Szudarski details the processes involved in producing the book and notes how the editors sought to create a resource to support researchers in accessing applied linguistics knowledge, using an LLM trained on Springer publications to generate the texts. In his review, he documents myriad issues, including problems surrounding the quality of the knowledge produced, consistency in the reporting of research, the scope and representation of the field, and wider, overarching ethical concerns surrounding knowledge (re)production, inter alia. While, as noted, the editors claimed to have had a hand in the production of the work, Szudarski attributes many of the problems he identifies to a lack of “human curation or critical appraisal during the process of writing” (p. 3). Like Szudarski, we wonder what value such publications bring to our field. In applied linguistics, we have tried and tested practices, developed over many years, that shape our approaches to research. Our work can take time – time to carefully collect data, time to rigorously conduct research, and time to thoughtfully write up and share research and analyses. While AI can be a fast alternative, its output may not necessarily solve our problems or answer our questions to the extent, or with the quality, that we require. As new generations of linguists develop, tempering and assessing our use of AI will be important for guiding the future of the field.

To support such a practice, applied linguists may wish to return to ideas from the period in which this paper’s account began – Ashby (1956) developed the concept of amplifying intelligence, claiming that “intellectual power, like physical power, can be amplified” (p. 272). This idea, now commonly called augmented intelligence or cognitive augmentation, relates to the use of technology to extend the powers of human cognition, not to replace them. Our proposal is that experts remain in control and act as final arbiters, using technology to gain improvements rather than as a tool to replace the human analyst in a process – what Fulbright and Walters (2020) would classify as a level 2 cognitive enhancement. Undertaken with critical reflection, this approach to using AI could ensure that alignment is maximized, as the applied linguist, not the tool, would ultimately determine alignment with human values in any given study or piece of research. This is certainly a better goal than a quest for automation and expertise replacement through AI. By recalling the values that underpin applied linguistics research and developing interdisciplinary pathways for bridging AI and applied linguistics, while constraining the technology through the goal of augmented intelligence, we can advance our field and maintain its foundations.

Acknowledgments

The authors would like to thank the editors, the two anonymous reviewers, Prof. Alex Fang of City University, Hong Kong, and Sam Hollands of Sheffield University, UK, for helpful comments on earlier drafts of this paper.

Footnotes

1. In this paper, we place areas such as machine learning, deep learning, and natural language processing beneath the umbrella term AI.

2. The authors contrast abstract linguistic approaches with those focused on “used languages,” which are of use in the “explanation of psychological or sociological facts” (Goodwin & Hein, 1982, p. 265). We take this non-abstract approach to linguistics to align with applied linguistics.

References

AbuSahyon, A. S. A. E., Alzyoud, A., Alshorman, O., & Al-Absi, B. (2023). AI-driven technology and chatbots as tools for enhancing English language learning in the context of second language acquisition: A review study. International Journal of Membrane Science and Technology, 10(1), 1209–1223. https://doi.org/10.15379/ijmst.v10i1.2829
Alharbi, W. (2023). AI in the foreign language classroom: A pedagogical overview of automated writing assistance tools. Education Research International, 2023(1), 4253331. https://doi.org/10.1155/2023/4253331
Al-Obaydi, L. H., Pikhart, M., & Klimova, B. (2023). ChatGPT and the general concepts of education: Can artificial intelligence-driven chatbots support the process of language learning? International Journal of Emerging Technologies in Learning (IJET), 18(21), 39–50. https://doi.org/10.3991/ijet.v18i21.42593
Amonova, S., Juraeva, G., & Khidoyatov, M. (2023). Harnessing the potential of artificial intelligence in language learning: Is AI threat or opportunity? In Proceedings of the 7th International Conference on Future Networks and Distributed Systems (pp. 292–297). https://doi.org/10.1145/3644713.3644751
Anderson, C. (2017). Ethics in qualitative language education research. In Mirhosseini, S. A. (Ed.), Reflections on qualitative research in language and literacy education (pp. 59–73). Springer. https://doi.org/10.1007/978-3-319-49140-0_5
Angelov, P., Soares, E., Jiang, R., Arnold, N., & Atkinson, P. (2021). Explainable artificial intelligence: An analytical review. WIREs: Data Mining and Knowledge Discovery, 11(5), e1424. https://doi.org/10.1002/widm.1424
Ashby, W. R. (1956). An introduction to cybernetics. John Wiley & Sons.
BAAL. (2021). Recommendations on good practice in applied linguistics (4th ed.). The British Association of Applied Linguistics. Retrieved April 4, 2025, from https://www.baal.org.uk
Badwan, K. (2021). Unmooring language for social justice: Young people talking about language in place in Manchester, UK. Critical Inquiry in Language Studies, 18(2), 153–173. https://doi.org/10.1080/15427587.2020.1796485
Baker, P., & Potts, A. (2013). Why do white people have thin lips? Google and the perpetuation of stereotypes via auto-complete search forms. Critical Discourse Studies, 10(2), 187–204. https://doi.org/10.1080/17405904.2012.744320
Baker, S., & Kanade, T. (2000). Hallucinating faces. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (pp. 83–88). IEEE. https://doi.org/10.1109/AFGR.2000.840616
Bashir, N., Donti, P., Cuff, J., Sroka, S. I., Sze, M., Delimitrou, V., & Olivetti, E. (2024). The climate and sustainability implications of generative AI. An MIT Exploration of Generative AI. https://doi.org/10.21428/e4baedd9.9070dfe7
Baskara, F. R. (2023). Integrating ChatGPT into EFL writing instruction: Benefits and challenges. International Journal of Education and Learning, 5(1), 44–55. https://doi.org/10.31763/ijele.v5i1.858
Belda-Medina, J., & Calvo-Ferrer, J. R. (2022). Using chatbots as AI conversational partners in language learning. Applied Sciences, 12(17), 8427. https://doi.org/10.3390/app12178427
Betal, A. (2023). Enhancing second language acquisition through artificial intelligence (AI): Current insights and future directions. Journal for Research Scholars and Professionals of English Language Teaching, 7, 39. https://doi.org/10.54850/jrspelt.7.39.003
Borgmann, A. (1984). Technology and the character of contemporary life: A philosophical inquiry. University of Chicago Press.
Borgmann, A. (2006). Real American ethics: Taking responsibility for our country. University of Chicago Press.
Brandt, A., & Hazel, S. (2024). Towards interculturally adaptive conversational AI. Applied Linguistics Review, 16(2). https://doi.org/10.1515/applirev-2024-0187
Brookes, G., & McEnery, T. (2024). Corpus linguistics and ethics. In De Costa, P. I., Rabie-Ahmed, A., & Cinaglia, C. (Eds.), Ethical issues in applied linguistics scholarship (pp. 28–44). John Benjamins Publishing Company. https://doi.org/10.1075/rmal.7.02bro
Casal, J. E., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing? A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3), 100068. https://doi.org/10.1016/j.rmal.2023.100068
Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro, S. (2023). AI literacy in K-12: A systematic literature review. International Journal of STEM Education, 10(1), 29. https://doi.org/10.1186/s40594-023-00418-7
Chen, S. Y. (2023). Generative AI, learning and new literacies. Journal of Educational Technology Development & Exchange (JETDE), 16(2), 1–19. https://doi.org/10.18785/jetde.1602.01
Choi, L. J. (2022). Interrogating structural bias in language technology: Focusing on the case of voice chatbots in South Korea. Sustainability, 14(20), 13117. https://doi.org/10.3390/su142013177
Christian, B. (2021). The alignment problem: How can artificial intelligence learn human values? Atlantic Books.
Cohen, S., Mompelat, L., Mann, A., & Connors, L. (2024). The linguistic leap: Understanding, evaluating, and integrating AI in language education. Journal of Language Teaching, 4(2), 23–31. https://doi.org/10.54475/jlt.2024.012
Consoli, S., & Ganassin, S. (2023). Navigating the waters of reflexivity in applied linguistics. In Consoli, S. & Ganassin, S. (Eds.), Reflexivity in applied linguistics (pp. 1–16). Routledge. https://doi.org/10.4324/9781003149408
Cope, B., & Kalantzis, M. (2024). A multimodal grammar of artificial intelligence: Measuring the gains and losses in generative AI. Multimodality & Society, 4(2), 123–152. https://doi.org/10.1177/26349795231221699
Creely, E. (2024). Exploring the role of generative AI in enhancing language learning: Opportunities and challenges. International Journal of Changes in Education, 1(3), 158–167. https://doi.org/10.47852/bonviewIJCE42022495
Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. Basic Books.
Crosthwaite, P., & Baisa, V. (2023). Generative AI and the end of corpus-assisted data-driven learning? Not so fast! Applied Corpus Linguistics, 3(3), 100066. https://doi.org/10.1016/j.acorp.2023.100066
Curry, N. (2024). Questioning the climate crisis: A contrastive analysis of parascientific discourses. Nordic Journal of English Studies, 23(2), 235–267. https://doi.org/10.35360/njes.v23i2.39190
Curry, N., Baker, P., & Brookes, G. (2024). Generative AI for corpus approaches to discourse studies: A critical evaluation of ChatGPT. Applied Corpus Linguistics, 4(1), 1–9. https://doi.org/10.1016/j.acorp.2023.100082
Curry, N., & Pérez-Paredes, P. (2021). Stance nouns in COVID-19 related blog posts: A contrastive analysis of blog posts published in The Conversation in Spain and the UK. International Journal of Corpus Linguistics, 26(4), 469–497. https://doi.org/10.1075/ijcl.21080.cur
De Costa, P. I. (2015). Ethics in applied linguistics research: An introduction. In De Costa, P. I. (Ed.), Ethics in applied linguistics research: Language researcher narratives (pp. 1–12). Routledge. https://doi.org/10.4324/9781315816937
De Costa, P. I., Sterling, S., Lee, J., Li, W., & Rawal, H. (2021). Research tasks on ethics in applied linguistics. Language Teaching, 54(1), 58–70. https://doi.org/10.1017/S0261444820000257
Demuro, E., & Gurney, L. (2021). Languages/languaging as world-making: The ontological bases of language. Language Sciences, 83, 1–13. https://doi.org/10.1016/j.langsci.2020.101307
Dewaele, J. M. (2019). The vital need for ontological, epistemological and methodological diversity in applied linguistics. In Wright, C., Harvey, L., & Simpson, J. (Eds.), Voices and practices in applied linguistics: Diversifying a discipline (pp. 71–88). White Rose University Press. https://doi.org/10.22599/BAAL1.e
Egbert, J., Larsson, T., & Biber, D. (2020). Doing linguistics with a corpus: Methodological considerations for the everyday user. Cambridge University Press. https://doi.org/10.1017/9781108888790
Europol. (2022). Facing reality? Law enforcement and the challenge of deepfakes, an observatory report from the Europol Innovation Lab. Luxembourg: Publications Office of the European Union. https://doi.org/10.2813/158794
Farrelly, T., & Baker, N. (2023). Generative artificial intelligence: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109. https://doi.org/10.3390/educsci13111109
Flowerdew, L. (2015). Data-driven learning and language learning theories: Whither the twain shall meet. In Leńko-Szymańska, A. & Boulton, A. (Eds.), Multiple affordances of language corpora for data-driven learning (pp. 15–36). John Benjamins. http://digital.casalini.it/9789027268716
Fulbright, R., & Walters, G. (2020). Synthetic expertise. In Schmorrow, D. & Fidopiastis, C. (Eds.), Augmented cognition. Human cognition and behavior. HCII 2020. Lecture Notes in Computer Science (pp. 27–48). Springer. https://doi.org/10.1007/978-3-030-50439-7_3
Garside, R., Leech, G., & Sampson, G. (Eds.) (1987). The computational analysis of English. Longman.
Goldstein, I., & Papert, S. (1977). Artificial intelligence, language, and the study of knowledge. Cognitive Science, 1(1), 84–123. https://doi.org/10.1016/S0364-0213(77)80006-2
Goodwin, J. W., & Hein, U. (1982). Artificial intelligence and the study of language. Journal of Pragmatics, 6(3–4), 241–280. https://doi.org/10.1016/0378-2166(82)90003-0
Grieve, J., Bartl, S., Fuoli, M., Grafmiller, J., Huang, W., Jawerbaum, A., … Winter, B. (2025). The sociolinguistic foundations of language modeling. Frontiers in Artificial Intelligence, 7, 1472411. https://doi.org/10.3389/frai.2024.1472411
Gromova, E. A., Ferreira, D. B., & Begishev, I. R. (2023). ChatGPT and other intelligent chatbots: Legal, ethical and dispute resolution concerns. Revista Brasileira de Alternative Dispute Resolution – Brazilian Journal of Alternative Dispute Resolution (RBADR), 5(10), 153–175. https://rbadr.emnuvens.com.br/rbadr/article/view/213/157
Hall, C. J., & Wicaksono, R. (2020). Approaching ontologies of English. In Hall, C. J. & Wicaksono, R. (Eds.), Ontologies of English: Conceptualising the language for learning, teaching, and assessment (pp. 3–12). Cambridge University Press. https://doi.org/10.1017/9781108685153.001
Iliadis, A. (2019). The Tower of Babel problem: Making data make sense with basic formal ontology. Online Information Review, 43(6), 1021–1045. https://doi.org/10.1108/OIR-07-2018-0210
Jabotinsky, H. Y., & Sarel, R. (2024). Co-authoring with an AI? Ethical dilemmas and artificial intelligence. Arizona State Law Journal, 56(3), 187–223. https://doi.org/10.2139/ssrn.4303959
Jelinek, F. (1976). Continuous speech recognition by statistical methods. Proceedings of the IEEE, 64(4), 532–556. https://doi.org/10.1109/PROC.1976.10159
Ji, H., Han, I., & Ko, Y. (2023). A systematic review of conversational AI in language education: Focusing on the collaboration with human teachers. Journal of Research on Technology in Education, 55(1), 48–63. https://doi.org/10.1080/15391523.2022.2142873
Joseph, S. (2023). Large language model-based tools in language teaching to develop critical thinking and sustainable cognitive structures. Rupkatha Journal on Interdisciplinary Studies in Humanities, 15(4), 1–23. https://doi.org/10.21659/rupkatha.v15n4.13
Katsarou, E., Wild, F., Sougari, A. M., & Chatzipanagiotou, P. (2023). A systematic review of voice-based intelligent virtual agents in EFL education. International Journal of Emerging Technologies in Learning (IJET), 18(10), 65–85. https://doi.org/10.3991/ijet.v18i10.37723
Kirova, V. D., Ku, C. S., Laracy, J. R., & Marlowe, T. J. (2023). The ethics of artificial intelligence in the era of generative AI. Journal of Systemics, Cybernetics and Informatics, 21(4), 42–50. https://doi.org/10.54808/JSCI.21.04.42
Konyrova, L. (2024). The evolution of language learning: Exploring AI’s impact on teaching English as a second language. Eurasian Science Review – An International Peer-reviewed Multidisciplinary Journal, 2(2), 133–138. https://doi.org/10.63034/esr-42
Kostka, I., & Toncelli, R. (2023). Exploring applications of ChatGPT to English language teaching: Opportunities, challenges, and recommendations. Teaching English as a Second or Foreign Language – TESL-EJ, 27(3), Article 3. https://doi.org/10.55593/ej.27107int
Kuteeva, M., & Andersson, M. (2024). Diversity and standards in writing for publication in the age of AI – Between a rock and a hard place. Applied Linguistics, 45(3), 561–567. https://doi.org/10.1093/applin/amae025
Leech, G., & Wilson, A. (1994). EAGLES morphosyntactic annotation. EAGLES Report EAG-CSG/IR-T3.1. Istituto di Linguistica Computazionale, Pisa.
León, M., Lemmi, C., Sedlacek, Q., Ortiz, N. A., & Feldman, K. (2024). Languaging-as-practice in science education: An alternative to metaphors of language-as-tool. Cultural Studies of Science Education, 19(4), 623–631. https://doi.org/10.1007/s11422-024-10228-0
Li, D., & Wang, H. (2024). Natural language processing in language learning: Personalized and adaptive English language teaching using artificial intelligence. Applied Mathematics and Nonlinear Sciences, 9(1), 1–14. https://doi.org/10.2478/amns-2024-3290
Li, H. (2023). AI in education: Bridging the divide or widening the gap? Exploring equity, opportunities, and challenges in the digital age. Advances in Education, Humanities and Social Science Research, 8(1), 355. https://doi.org/10.56028/aehssr.8.1.355.2023
Li, J., Cao, H., Lin, L., Hou, Y., Zhu, R., & El Ali, A. (2024). User experience design professionals’ perceptions of GenAI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1–18). https://doi.org/10.1145/3613904.3642114
Lodge, J. M., Thompson, K., & Corrin, L. (2023). Mapping out a research agenda for GenAI in tertiary education. Australasian Journal of Educational Technology, 39(1), 1–8. https://doi.org/10.14742/ajet.8695
Machado, L. M. O., Almeida, M. B., & Souza, R. R. (2020). What researchers are currently saying about ontologies: A review of recent Web of Science articles. Knowledge Organization, 47(3), 199–219. https://doi.org/10.5771/0943-7444-2020-3-199
McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
McEnery, A., & Baker, H. (2017). Life as a 17th-century prostitute. Bloomsbury Academic. https://doi.org/10.5040/9781474295062
McEnery, T., & Brezina, V. (2022). Fundamental principles of corpus linguistics. Cambridge University Press. https://doi.org/10.1017/9781107110625
McEnery, T., Brezina, V., Gablasova, D., & Banerjee, J. (2019). Corpus linguistics, learner corpora and SLA: Employing technology to analyse language use. Annual Review of Applied Linguistics, 39, 74–92. https://doi.org/10.1017/S0267190519000096
McKnight, L., & Shipp, C. (2024). “Just a tool”? Troubling language and power in generative AI writing. English Teaching: Practice & Critique, 23(1), 23–35. https://doi.org/10.1108/ETPC-08-2023-0092
Miller, G. A., Beckwith, R., Fellbaum, C., Gross, D., & Miller, K. J. (1990). Introduction to WordNet: An on-line lexical database. International Journal of Lexicography, 3(4), 235–244. https://doi.org/10.1093/ijl/3.4.235
Milmo, D. (2024, January 8). ‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says. The Guardian. Retrieved April 4, 2025, from https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai
Monahan, K., & Burlacu, G. (2024). From burnout to balance: AI-enhanced work models. Upwork Research Institute. https://www.upwork.com/research/ai-enhanced-work-models
Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, A. (2023). Human-in-the-loop machine learning: A state of the art. The Artificial Intelligence Review, 56(4), 3005–3054. https://doi.org/10.1007/s10462-022-10246-w
Muñoz-Basols, J., Neville, C., Lafford, B. A., & Godev, C. (2023). Potentialities of applied translation for language learning in the era of artificial intelligence. Hispania, 106(2), 171–194. https://doi.org/10.1353/hpn.2023.a899427
Negrila, A. M. C. (2023). The new revolution in language learning: The power of artificial intelligence and education 4.0. Bulletin of “Carol I” National Defence University, 12(2), 16–27. https://doi.org/10.53477/2284-9378-23-17
Nesi, H. (2024). Are we witnessing the death of dictionaries? Ibérica, 47(47), 7–14. https://doi.org/10.17398/2340-2784.47.7
Neumann, J. von. (1958). The computer and the brain. Yale University Press.
Nguyen, H., Nguyen, V., Ludovise, S., & Santagata, R. (2024). Misrepresentation or inclusion: Promises of generative artificial intelligence in climate change education. Learning, Media and Technology, 1–17. https://doi.org/10.1080/17439884.2024.2435834
Pack, A., & Maloney, J. (2023). Using generative artificial intelligence for language education research: Insights from using OpenAI’s ChatGPT. TESOL Quarterly, 57(4), 1571–1582. https://doi.org/10.1002/tesq.3253
Pennycook, A. (2018). Applied linguistics as epistemic assemblage. AILA Review, 31(1), 113–134. https://doi.org/10.1075/aila.00015.pen
Pérez-Paredes, P., & Curry, N. (2024). Epistemologies of corpus linguistics across disciplines. Research Methods in Applied Linguistics, 3(3), 1–11. https://doi.org/10.1016/j.rmal.2024.100141
Pierce, J., Carroll, J., Hamp, E., Hays, D., Hockett, C., Oettinger, A., & Perlis, A. (1966). Language and machines: Computers in translation and linguistics. ALPAC report, National Academy of Sciences, National Research Council.
Pitrat, J. (1995). AI systems are dumb because AI researchers are too clever. ACM Computing Surveys (CSUR), 27(3), 349–350. https://doi.org/10.1145/212094.212124
Pragya, A. (2024). Generative AI and epistemic diversity of its inputs and outputs: Call for further scrutiny. AI and Society, 1–2. https://doi.org/10.1007/s00146-024-02097-6
Putland, E., Chikodzore-Paterson, C., & Brookes, G. (2023). Artificial intelligence and visual discourse: A multimodal critical discourse analysis of AI-generated images of “Dementia.” Social Semiotics, 1–26. https://doi.org/10.1080/10350330.2023.2290555
Rodríguez, S. C. (2023). Epistemología y ontología en ciencia: El reto de la Inteligencia Artificial [Epistemology and ontology in science: The challenge of artificial intelligence]. Anales de la Real Academia Nacional de Farmacia, 89(3), 379–386.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
Rymes, B., Lee, E., & Negus, S. (2024). Language and education: Ideologies of correctness. Annual Review of Anthropology, 53(1), 187–197. https://doi.org/10.1146/annurev-anthro-041422-121726
Sahebi, S., & Formosa, P. (2024). Artificial intelligence (AI) and global justice. Minds and Machines, 35(4), 1–29. https://doi.org/10.1007/s11023-024-09708-7
Sardinha, T. B. (2024). AI-generated vs human-authored texts: A multidimensional comparison. Applied Corpus Linguistics, 4(1), 1–10. https://doi.org/10.1016/j.acorp.2023.100083
Schalley, A. C. (2019). Ontologies and ontological methods in linguistics. Language and Linguistics Compass, 13(11), 1–19. https://doi.org/10.1111/lnc3.12356
Schank, R. C. (1980). How much intelligence is there in artificial intelligence? Intelligence, 4(1), 1–14. https://doi.org/10.1016/0160-2896(80)90002-1
Shannon, C., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press.
Song, C., & Song, Y. (2023). Enhancing academic writing skills and motivation: Assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Frontiers in Psychology, 14, 1–14. https://doi.org/10.3389/fpsyg.2023.1260843
Spennemann, D. H. (2024). Will artificial intelligence affect how cultural heritage will be managed in the future? Responses generated by four GenAI models. Heritage, 7(3), 1453–1471. https://doi.org/10.3390/heritage7030070
Spyns, P., & De Bo, J. (2004). Ontologies: A revamped cross-disciplinary buzzword or a truly promising interdisciplinary research topic? Linguistica Antverpiensia, New Series – Themes in Translation Studies, 3, 279–292. https://doi.org/10.52034/lanstts.v3i.117
Stahl, B. C., & Eke, D. (2024). The ethics of ChatGPT – Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 1–14. https://doi.org/10.1016/j.ijinfomgt.2023.102700
Strauß, S. (2021). “Don’t let me be misunderstood”: Critical AI literacy for the constructive use of AI technology. Journal for Technology Assessment in Theory and Practice, 30(3), 44–49. https://doi.org/10.14512/tatup.30.3.44
Szudarski, P. (2025). Vocabulary, corpus and language teaching: A machine-generated literature overview. ELT Journal. https://doi.org/10.1093/elt/ccaf006
Tang, K. S., & Cooper, G. (2024). The role of materiality in an era of GenAI. Science and Education, 1–16. https://doi.org/10.1007/s11191-024-00508-0
Thompson, B., Dhaliwal, M., Frisch, P., Domhan, T., & Federico, M. (2024). A shocking amount of the web is machine translated: Insights from multi-way parallelism. In Ku, L.-W., Martins, A., & Srikumar, V. (Eds.), Findings of the Association for Computational Linguistics: ACL 2024 (pp. 1763–1775). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-acl.103
Tolstykh, O. M., & Oshchepkova, T. (2024). Beyond ChatGPT: Roles that artificial intelligence tools can play in an English language classroom. Discover Artificial Intelligence, 4(1), 60, 1–15. https://doi.org/10.1007/s44163-024-00158-9
Turing, A. (1948). Intelligent machinery. London: National Physical Laboratory. (Reprinted in Copeland, B. J. (Ed.), The essential Turing (pp. 395–432). Oxford University Press, 2004). https://doi.org/10.1093/oso/9780198250791.001.0001
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
Udaya, M., & Reddy, C. R. (Eds.). (2024). Vocabulary, corpus and language teaching: A machine-generated literature overview. Springer. https://doi.org/10.1007/978-3-031-45986-3
Vaccaro, M., Almaatouq, A., & Malone, T. (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 8(12), 2293–2303. https://doi.org/10.1038/s41562-024-02024-1
Vajjala, S. (2024). Generative artificial intelligence and applied linguistics. JALT Journal, 46(1), 55–76. https://doi.org/10.37546/JALTJJ46.1-3
van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1(3), 213–218. https://doi.org/10.1007/s43681-021-00043-6
Vetter, M. A., Lucia, B., Jiang, J., & Othman, M. (2024). Towards a framework for local interrogation of AI ethics: A case study on text generators, academic integrity, and composing with ChatGPT. Computers and Composition, 71, 1–12. https://doi.org/10.1016/j.compcom.2024.102831
Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(15), 1–29. https://doi.org/10.1186/s41239-024-00448-3
Wang, P. (2007). Three fundamental misconceptions of artificial intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 19(3), 249–268. https://doi.org/10.1080/09528130601143109
Wei, X., Chen, H., Yu, H., Fei, H., & Liu, Q. (2024). Guided knowledge generation with language models for commonsense reasoning. In Al-Onaizan, Y., Bansal, M., & Chen, Y. N. (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 1103–1136). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-emnlp.61
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman and Company.
Wiener, N. (1961). Cybernetics, or control and communication in the animal and the machine (2nd ed.). John Wiley & Sons. https://doi.org/10.1037/13140-000
Xiang, J., Tao, T., Gu, Y., Shu, T., Wang, Z., Yang, Z., & Hu, Z. (2023). Language models meet world models: Embodied experiences enhance language models. Advances in Neural Information Processing Systems, 36, 75392–75412. https://doi.org/10.48550/arXiv.2305.10626
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A brief survey on history, research areas, approaches and challenges. In Natural language processing and Chinese computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, proceedings, part II (Vol. 8, pp. 563–574). Springer International Publishing. https://doi.org/10.1007/978-3-030-32236-6_51
Yaw, K., Plonsky, L., Larsson, T., Sterling, S., & Kytö, M. (2023). Research ethics in applied linguistics. Language Teaching, 56(4), 478–494. https://doi.org/10.1017/S0261444823000010
Yu, D., Li, L., Su, H., & Fuoli, M. (2024). Assessing the potential of LLM-assisted annotation for corpus-based pragmatics and discourse analysis: The case of apology. International Journal of Corpus Linguistics, 29(4), 534–561. https://doi.org/10.1075/ijcl.23087.yu