Introduction
When ChatGPT (the first popular generative artificial intelligence (GenAI) tool) was released in November 2022, educators responded initially and most viscerally with, ‘oh my gosh! This is a cheating machine!’ And who could blame us for that response? After all, ChatGPT could generate in seconds an essay that would have required several hours of human time, hours that were meant to stimulate critical reading and thinking, analysing, summarising, iterating, musing, formulating and reformulating ideas, problem-solving, communicating knowledge through writing, and maybe even skills like time and project management or emotional regulation. Since the mid-1800s, written assignments have been relied on as artefacts of student learning that could be honestly and fairly assessed, despite the fact that their reliability and validity have been questioned for decades given the extent of cheating on them (Clarke & Lancaster, 2006; McCabe, Butterfield & Trevino, 2012). Now, as GenAI continues to advance, proliferate, and become more agentic (like Perplexity Comet and OpenAI Atlas), the reliability and validity of all unsupervised assessments, throughout all disciplines and from admissions to degree certification, will be subject to dispute.
While other elements in this series might be exploring how GenAI can be integrated into education, we have a particular charge to explore how its integration, or our failure to integrate it, will impact academic integrity. Although we appreciate more expansive definitions of academic integrity that include research integrity (i.e., what faculty do in the production of knowledge artefacts), we focus on student academic integrity or degree integrity. Specifically, we ask what the impact of GenAI on the integrity of college and university certifications is and how we should address it.
Before we proceed, we need to state that we believe that GenAI is an actual thing, despite worthwhile arguments over nomenclature, specifically whether it is intelligent or just a ‘synthetic media machine’ (Bender & Hanna, 2025). GenAI does generate text and other media, it can do so without significant human engagement or thoughtfulness, and it can be used by humans to accomplish ends. We also acknowledge the existing debates about whether artificial intelligence (be it generative, general, or super) is inevitable or overall helpful to humanity. We have been told, largely by the companies that stand to profit, that educators have no choice – that we must teach students how to use it since they’re using it anyway and will be using it in the workplace. Some would call this positioning ‘intentional entanglement’ (Center for Humane Technology, 2025).
Even among the authors of this Element, there is some disagreement over educational agency and the necessity of integrating general-purpose GenAI tools (ChatGPT, Gemini, Claude, etc.) throughout education. Tricia, for example, thinks that student learning (and academic integrity) would be much better served by specific (or small or narrow) artificial intelligence tools that are created not to replace humans or automate their work, but to assist or augment them. Zeenath, however, thinks that since AI is inevitable, we must prepare students for the post-graduate world they’ll encounter and so educators must adapt at the pace that matches technological progress.
Finally, we would be remiss not to address the fact that the tools, and the companies that created them, might be antithetical to the mission of higher education and our purported values. Higher education purports to value things like sustainability, equity, fairness, trustworthiness, reliability, scientific facts, and integrity. AI companies, on the other hand, are gobbling up environmental and human resources, treating human employees (at least the contract ones) unfairly, and creating products that spout untruths and perpetuate harm (Bender & Hanna, 2025).
And yet, we cannot ignore that the tools are being adopted at a rapid rate and so we must work to intentionally resolve all of these tensions, or at least call attention to them. Two things are true: we cannot control the external forces that exert pressure on us AND we have agency to choose how we respond. It is not inevitable that higher education institutions must adopt commercially created, general-purpose GenAI tools like ChatGPT into our curricula. We can resist and demand more ethical and educationally designed products, and perhaps that is the first step in maintaining academic integrity in the age of GenAI. We can teach students how to make responsible and ethical authorial choices in the presence of these machines, while redesigning curriculum and assessments to help students focus on learning rather than on production. We, as educators, should be introducing more nuanced discussion, critical thinking, and intentional debate about when, if, and how GenAI should be integrated into teaching, learning, and assessment.
We are facing a crossroads in choosing how we respond to the impact of these tools on academic integrity. The path we choose, along with the advances in artificial intelligence and other external pressures on higher education (e.g., governmental, competition, employers), will dictate what our approach to academic integrity will look like next year, in four years or in twenty years.
This Element aims to provide a foundation upon which we can exercise our agency. We don’t have all the answers because context matters. The situational factors of your institution and classroom matter. So, we follow the lead of Bertram Gallant and Rettinger (2025) in not daring to prescribe a silver-bullet solution (also because there is not one). Instead, we try to sit in the tension of ‘both/and’ and stay away from the dichotomous thinking that LLMs engender (i.e., ‘it’s not this, it’s that’). But we hope we offer inspiration, as well as a menu of options for acting.
To do that, we first ground ourselves in some common language and understanding of academic integrity before exploring the history of technological disruptions to education and academic integrity. Then, we transition to imagining and analysing a possible future state of academic integrity given the current realities and possible technological advances. Finally, we offer possible antidotes to that possible future state by answering the current, most pressing practical questions we all have about degree integrity in the age of AI.
1 Academic Integrity: The Original AI
Before we discuss the current AI (artificial intelligence), let’s revisit the original AI (academic integrity). The term ‘academic integrity’ has existed and been widely used in higher education for decades, but to some extent remains elusive. Many in academia, both students and faculty, may be unable to define it themselves or to locate a definition for it in institutional guidelines. Given that academic integrity is usually explained through policy and guidance documents, the responsibility for providing clear, effective, student-friendly, positive definitions of academic integrity evidently falls to higher education institutions. Yet a common mistake institutions make, if they do provide a definition, is to base their explanation of academic integrity on an avoidance strategy, in other words telling students what negative actions to avoid rather than what positive actions to pursue. The following is an anonymised example:
Academic honesty is defined as completing all academic work without cheating, lying, stealing, or receiving assistance from any other person or using any source of information not appropriately authorised or attributed.
Such avoidance instructions have been found to be detrimental to student engagement with academic integrity (Anson, 2008; Davis, 2012; McGowan, 2005a); basically, this approach heightens students’ awareness of possible problems, increases their anxiety about doing something wrong and receiving penalties, and leaves them not knowing what to do.
Therefore, it is incumbent upon institutions to provide a definition of academic integrity that clearly elaborates what students are expected to do, as a positive approach to encouraging good practice. An attempt to take this approach can be seen in this institutional definition:
Academic integrity is doing your own work and giving credit to others for their work. Academic integrity requires all of us to follow good academic practice, which involves reflecting on how you research and take notes, and making sure you are familiar with all the university’s rules and regulations about the work you submit.
What we notice in this definition is that it presents academic integrity as a collective responsibility involving staff, faculty, and students, which can help to raise its importance. It is also notable that the definition connects academic integrity to some separate elements of student learning development outside assessment, such as ‘reflecting on how you research and take notes’. However, as is typical with many institutional definitions, it includes advice to students to check regulations, thus reinforcing the misconception that academic integrity is contextual rather than universal.
For more universal definitions, we turn to the leading academic integrity organisations. The International Centre for Academic Integrity [ICAI] (2021) defines academic integrity as ‘a commitment to six fundamental values: honesty, trust, fairness, respect, responsibility, and courage’. By articulating academic integrity as a set of core values, ICAI fosters a holistic approach where positive actions build an ethical community of learning, both within and beyond academia. Digging further, we see that ICAI spells out how these values can be manifested in practice:
By embracing these fundamental values, instructors, students, staff, and administrators create effective scholarly communities where integrity is a touchstone. Without them, the work of teachers, learners, and researchers loses value and credibility. More than merely abstract principles, the fundamental values serve to inform and improve ethical decision-making capacities and behavior. They enable academic communities to translate ideals into action.
Many institutions, particularly in North America, have adopted the ICAI definition including the University of California, San Diego (USA), the University of Waterloo (Canada), and the University of Monterrey (Mexico).
The definition from the European Network for Academic Integrity [ENAI] (2022) is less philosophical than ICAI’s. ENAI states that academic integrity is:
Compliance with ethical and professional principles, standards, practices and consistent system of values, that serves as guidance for making decisions and taking actions in education, research and scholarship.
The specific inclusion of research and scholarship reminds us that academic integrity is not only for students, but for everyone, within and beyond the academy. Unlike ICAI’s, the ENAI definition foregrounds ‘compliance’, ‘professional principles’, and the importance of decision-making and action-taking, which reflects the role that personal agency and positive engagement play in upholding the values that ICAI defines. The ENAI definition has been adopted by various institutions, including the University of Limerick (Ireland) and University of Wollongong in Dubai (UAE).
While the definitions from ICAI and ENAI are most influential at this moment, others are weighing in. Mejía and Garcés-Flórez (2025), for example, foreground the personal agency and positive engagement hinted at by both ENAI and ICAI. They say that academic integrity is the act of students resisting internal and external forces that threaten the integrity of their academic work, as well as taking collective action to educate their peers about those threats and risks. This understanding of academic integrity seems especially relevant in the age of GenAI, given the significant need to increase student engagement in academic integrity.
As concluded by Bertram Gallant and Rettinger (2025), it’s time to recognise that the ‘opposite of cheating’ is learning. With that in mind, we define academic integrity as students taking action to learn, assuming responsibility for their own work, demonstrating their views, giving credit to the views and work of others (including machines), and making ethical decisions even when no one is watching.
The History of Academic Integrity in Higher Education
Looking at the last thirty years, the practice and research of academic integrity within higher education have evolved significantly. In the early 1990s, Don McCabe was the foremost figure in shaping academic integrity research and practice, focusing on what students were doing to violate integrity, why they were doing it, and what we could do to stop them. Given that the internet became widely accessible in the 1990s, it’s no surprise that plagiarism, as the anti-hero to academic integrity, became a prominent focus (Curtis, 2022; Moore Howard, 1995).
By 2005, research and practice focused almost exclusively on plagiarism, student intentionality, and detection (Carroll & Appleton, 2005; Introna & Hayes, 2004; McGowan, 2005b; Pecorari, 2003). This created an institutional approach of ‘catch and punish’ (Carroll, 2007; Sutherland-Smith, 2008) with a pervasive penalty-driven approach to student practice, often to the detriment of students’ learning. This is made evident in Park’s (2004) institutional framework for dealing with plagiarism (with ‘robust and transparent procedures for detecting and punishing’) and Carroll’s (2007) 3D approach to student plagiarism (detect, deal with, and deter). As process-driven approaches, they were mainly focused on detecting misconduct and punishing it; the measures to ‘deter’ were warnings rather than education. Thus, at that time, there was much more focus on academic misconduct (often defined only as plagiarism) than on academic integrity in terms of learning and adopting good learning practice. Chanock (2008) drew attention to the problems of this approach, noting that students’ lack of understanding of the importance of citation, especially as compared with their tutors or instructors, likely stemmed from the priority given to teaching students the mechanics of referencing rather than increasing their understanding of why citing upholds the values of academic integrity (Williams & Davis, 2017).
Gradually, research into student academic conduct became more robust and followed one of two pathways. The first was to more accurately define violations by type, such as collusion and contract cheating (building on the original definition from Clarke & Lancaster, 2006), and to continue McCabe’s lead in studying student engagement in each type of behaviour. The second pathway was to explore a teaching and learning approach that refocused practitioners from stopping cheating towards supporting skill development, positive practice, and learning. This second pathway was fuelled by the writings of scholars like Rebecca Moore Howard, who argued that plagiarism was a pedagogical, not policing, issue (Moore Howard, 1995). This teaching and learning approach, later articulated by Bertram Gallant (2008, 2017), shifted the priority away from stopping cheating and towards supporting and validating student learning, including after a violation. Within this holistic context, ‘academic integrity’ became a positive, broad, and developmental term within higher education to represent good academic practice, fully integrated in student success and learning (McNeill, 2022). We acknowledge, though, that not all instructors have switched their focus from a regulation and penalty approach to a teaching and learning view of academic integrity, and that efforts are needed to continuously remind everyone in academia of the value of academic integrity education.
The Role of Governmental and Other Outside Agencies
The role and prominence of academic integrity within higher education research and practice have also been deeply influenced by national quality assurance guidance and requirements, or a lack thereof. In the UK, for example, institutions are reminded of their responsibility to safeguard the integrity of student assessments, and ultimately their degrees, by creating academic integrity policies that are built on the foundation provided by the Quality Assurance Agency’s (QAA) academic integrity charter (QAA, 2020). QAA (2020) emphasises a community approach that engages and empowers both students and staff and encourages sector collaboration, while also leaving institutions the autonomy to customise their approach to academic integrity for their context. In Australia, the Tertiary Education Quality and Standards Agency (TEQSA) collaborated with academic integrity pioneers such as the late Tracey Bretag to develop workshops, online resources, and an academic integrity toolkit for use across the higher education sector (TEQSA, 2019). TEQSA also encourages institutions to view academic integrity as a national and strategic priority and to map their policies to national benchmarking tools. In both the UK and Australia, these national guidelines have fuelled the creation of academic integrity professional positions that lead institutional integrity efforts.
In other countries, like Ukraine, Montenegro, and Indonesia, the governments are taking a more unilateral approach by creating laws that impose universal academic integrity policies for all institutions. While having the advantage of national consistency, importance, and clarity, laws can politicise academic integrity and compromise fairness and equity, especially where the laws frame all misconduct as deliberate or penalise students more than faculty (Akbar & Picard, 2019).
In still other countries, like the United States and Canada, national efforts – whether through quality assurance bodies or governments – are non-existent, and we see the downside of this in the United States at least: very few institutions have the necessary infrastructure to support academic integrity.
While not perfect, it does seem that the involvement of quality assurance bodies in informing and shaping academic integrity policies has a strong positive influence on institutional practice.
Why Academic Integrity Matters
Academic integrity is the cornerstone of higher education: if our assessments are not valid reflections of student learning, our degrees lose meaning and value. In other words, academic integrity equals degree integrity. Yet, even in the UK and Australia, the centrality of academic integrity has not always been acknowledged; it has even been sidelined or considered peripheral to the main university business of teaching degree subjects. Unfortunately, some have termed academic integrity work ‘not a real job’ and forced those doing it into ‘third space labour’ (Ahuna, Frankovitch & Murphy, 2023). Speaking for many who lead institutional academic integrity work, Ahuna et al. point out that academic integrity is ‘so central and fundamental to the work of higher education, that it is sometimes invisible and may be taken for granted’ (2023, p. 83). Nevertheless, the academic integrity field has undoubtedly benefitted from a huge surge of global interest in recent years, due firstly to widespread concerns about academic misconduct during the COVID-19 pandemic (Khan et al., 2021a; Stoesz, Quesnel & De Jaeger, 2023) and secondly to the advent of GenAI as a major disruptor to education (Gustilo, Ong & Lapinid, 2024). These two impacts have created a fundamental shift in how institutions view academic integrity, one which justifies the allocation of time, resources, and labour to ensure the honest facilitation and certification of learning remains a priority (McKenzie, 2024).
To achieve degree integrity, it is not sufficient simply to define academic integrity and implement policies and staff positions to support it; academic integrity needs to be approached as a skill to be developed. As argued by McNeill (2022), academic integrity instruction ‘needs to be a shared element of all curriculum: ideally, students would talk and learn about the expectations and practices of (academic integrity) in every course they take’ (p. 502). And that’s the point. Academic integrity matters for student learning. If we want to prioritise learning, we must prioritise academic integrity. It makes learning valuable, validates students’ own efforts and achievements, and builds student confidence in their voice and in their ability to be critical thinkers and to evaluate information around them in effective ways (McNeill, 2022).
Academic integrity also matters deeply to professional integrity. Developing a deep understanding and practice of academic integrity in higher education contributes directly to the development of professional ethics (Guerrero-Dib, Portales & Heredia-Escorza, 2020). Guerrero-Dib et al. argue that students’ habitual practice of unethical behaviour at university can lead to a normalisation of unethical behaviour at work. To deter the development of this long-term pattern of behaviour, institutions need robust systems for learning and for misconduct sanctions, to create cultures of academic integrity. Furthermore, without a strong focus on academic integrity, students could progress into and carry out jobs without adequate skills, knowledge, and training, which presents a serious danger in the workplace and beyond. This is not hyperbolic. People in professional positions (e.g., engineers) with falsified or unethically gained qualifications have caused accidents or deaths, while others in positions of political power have destroyed public trust when it was discovered they plagiarised their theses (Guardian, 2013; Wright, 2021).
Finally, academic integrity matters for society. As we’ve already covered, academic integrity is not just about following rules, but about choosing to make ethical decisions even when no one is watching. It is essential for the wider democratic society that graduates carry forward an understanding and knowledge of academic integrity, as a way of being and of making ethical decisions that affect society as a whole (Sivasubramaniam et al., 2021). Ethical behaviour is fundamental to the stability of a civilised society. Sivasubramaniam et al. (2021) point out the distinction between ethics, as required behaviour according to rules, especially in workplaces, and morals, as chosen behaviour according to an individual sense of right and wrong. Thus, when reflecting on academic integrity behaviour, both ethics and morals are involved and important, but the educational focus is on ethical behaviour as a requirement from institutions, workplaces, and society.
Key Takeaways
If we consider higher education as preparation for life, all areas of student learning are important, including academic integrity. In simple terms, when students learn to take responsibility for their work, give credit to others for work they use, engage in relationships in honest and fair ways, and make ethical decisions even when no one is watching, they are learning durable human skills to take forward into family, work, and society. Of course, acting with integrity is not always easy, especially given the influx of GenAI agents that can do student work and the context of a society that is focused on grades and course completion, rather than on learning. So, when disruptions come along, like GenAI, it’s critical that institutions and educators prioritise academic integrity. To reflect on how to do that, we’ll go back in time to examine lessons learned from previous disruptors to teaching, learning, and assessing with integrity.
2 The History of Technological Disruptions to Academic Integrity
GenAI is the newest disruptor to education and academic integrity, but it is certainly not the first. From the invention of the printing press in the fifteenth century to the proliferation of the internet, smartphones, and digital learning platforms in the twentieth and twenty-first centuries, each major innovation has reshaped how knowledge is created, shared and assessed, and expanded access to higher education while simultaneously challenging academic integrity. So, we return to this history to learn about the successes and failures of these disruptions so that we might establish a grounded perspective on how academic integrity can flourish in the age of GenAI.
The Printing Press and the Democratisation of Knowledge (Fifteenth to Eighteenth Centuries)
The invention of the printing press marked a pivotal moment in the history of knowledge dissemination. By making books more widely available, it significantly increased literacy rates and democratised learning across Europe (Eisenstein, 1983). This transformation coincided with the rise of Renaissance humanism, a movement that redefined education by emphasising critical inquiry, ethical scholarship, and respect for classical sources. Humanist educators promoted the study of grammar, rhetoric, history, and philosophy, framing learning as a moral and intellectual pursuit rooted in integrity and intellectual authorship (Black, 2001).
However, the shift to mass knowledge production was not without controversy. Scholars expressed concern about the devaluation of scholarly work and the unauthorised reproduction of texts. Such anxieties echoed an even older critique: in Phaedrus, Plato recounts Socrates’ retelling of the Egyptian myth in which the god Thoth presents writing as a gift to improve wisdom and memory, only to be warned by King Thamus that it would instead cause forgetfulness, as people would rely on written records rather than exercising their memory (McGill, 1997). In the early modern period, entire manuscripts were copied and published without proper attribution, and the circulation of uncredited theological and scientific texts in early universities prompted debates around authorship, authenticity, and the ethical use of knowledge (Blair, 2010). Despite these anxieties, the printing press ultimately contributed to the advancement of academic integrity by enabling the standardisation of texts and reducing the transcription errors common in manually copied manuscripts (Febvre & Martin, 1976). These developments laid the philosophical and structural groundwork for modern scholarly practices, reinforcing norms of citation, attribution, and intellectual responsibility that continue to underpin academic integrity today.
Mail-Order Degrees and Correspondence Courses (Nineteenth to Twentieth Centuries)
The invention of the printing press paved the way for correspondence education and mail-order degrees in the nineteenth century. People no longer needed to physically travel to a college or university to learn from a great teacher and orator. Instead, they could learn by reading their teacher’s words in the comfort of their own homes. This marked a transformative expansion in educational access, particularly for learners in remote, rural, or underserved areas (Burns, 2011). These innovations were early models of flexible, time- and place-independent education, challenging the traditional notion that instruction must occur face-to-face. This notion was further challenged when oral lectures were first broadcast over radio and eventually recorded onto cassettes, CDs, and DVDs.
This mode of education became known as distance education and advanced positive notions of access while raising concerns about the negative impacts of asynchronous and transactional distance between instructors and students (Moore, 1993). These concerns included assessment credibility, student identity verification, and quality assurance in unsupervised environments (Whitman, 2018). Without consistent institutional oversight, some correspondence programmes became susceptible to academic dishonesty, misrepresentation, and fraud. The term ‘degree mills’ (first documented in 1876) came to define organisations that issued credentials with little or no academic rigour or instruction (Stewart & Spille, 1988). These operations exploited the rising demand for educational access, undermining the legitimacy of distance education in the public eye.
The proliferation of diploma mills exposed systemic weaknesses in verifying learning outcomes and institutional credibility. These early developments continue to resonate in contemporary debates about online and remote learning, where similar concerns about identity fraud, contract cheating, and assessment integrity persist. The historical legacy of distance education thus serves as both a foundation and a cautionary tale for maintaining academic integrity in modern models of educational access, like online asynchronous learning.
The Calculator Crisis and Cognitive Skill Development (Mid Twentieth Century)
The introduction of calculators during the latter half of the twentieth century sparked widespread debate about their impact on student learning. Educators, parents, and policymakers raised concerns that overreliance on these tools could undermine students’ arithmetic fluency and mental problem-solving skills (Resnick, 1987). Critics feared that premature exposure to calculators would prevent learners from developing foundational numeracy and reasoning abilities essential for higher-order mathematics.
In response, educational institutions and curriculum developers began refining their instructional approaches. Rather than banning calculators outright, they introduced them through structured pedagogies that emphasised conceptual understanding prior to tool use. This approach was supported by emerging constructivist learning theories (Fosnot, 1996), which advocated for active student engagement and scaffolded problem-solving over rote memorisation. Eventually, calculators were broadly accepted as valuable tools for exploring advanced mathematical ideas, statistical analysis, and real-world data modelling – provided their use complemented, rather than replaced, cognitive effort (Ellington, 2003).
The Internet and Wikipedia: A Threat and a Resource (Late Twentieth to Early Twenty-First Centuries)
The emergence of the internet in the late twentieth century transformed the educational landscape. At first, during the early Web 1.0 era, the shift was minor, moving distance education from print and hard digital media (like CDs) to online but static web pages with limited interactivity. Students continued to primarily consume information passively while educators expressed growing concern about the credibility of online sources, increased opportunities for unauthorised content use, and a noticeable rise in plagiarism (Kuh, 2001).
The promise of more dynamic online learning came with the rise of Web 2.0 and platforms such as Wikipedia, YouTube, and Reddit. Wikipedia, launched in 2001, enabled the communal construction and sharing of knowledge, eventually becoming a ubiquitous and trusted reference tool despite being banned by many institutions due to fears of inaccuracy and improper academic use (Lim, 2009). Similarly, YouTube emerged as a major educational resource, offering tutorial videos, lecture recordings, and explanatory content across disciplines. While it supported multimodal learning, concerns emerged around unvetted information, passive consumption, and unauthorised reuse of copyrighted material. Reddit, through communities like r/HomeworkHelp and r/AskAcademia, enabled peer-to-peer support but also became a space for real-time exam cheating.
Similar to the current GenAI environment, students adopted the internet as a tool long before faculty did. This makes sense when viewed through Davis’ (1989) Technology Acceptance Model (TAM): users are more likely to adopt a new technology when they perceive it as both useful and easy to use, despite any institutional or authority prohibitions of the technology. The ‘they’re using it anyway, so we must adapt’ refrain common today was evident in the late 1990s and early 2000s as well. As students demonstrated reliance on these tools not just for cognitive offloading but also for just-in-time learning and concept clarification, educators recognised the pedagogical potential and shifted their stance. Rather than prohibiting their use, instructors began embedding platforms like Wikipedia into lessons on source evaluation and ethical citation, and YouTube into flipped classrooms and media analysis. Less commonly embedded into the curriculum, Reddit can be viewed as both a case study in digital collaboration and a platform for testing academic integrity boundaries.
This shift mirrored broader constructivist and connectivist pedagogical trends that emphasise participatory learning, critical engagement, and knowledge co-construction. However, the evolution towards Web 3.0, marked by artificial intelligence personalisation, blockchain technologies, and decentralised content, is introducing new complexities. These tools, while potentially powerful for individualised learning, also challenge traditional concepts of authorship, traceability, and accountability, further reinforcing the need for academic integrity education tailored to digital ecosystems.
Internet Commercialisation: From EdTech to Contract Cheating
The rise of the internet and online education catalysed the growth of educational technology (EdTech) companies marketed as improving learning accessibility, personalisation, and efficiency. Platforms such as Khan Academy, for instance, gained global recognition for offering free, high-quality instructional videos and practice exercises aligned to curricular standards. Designed to supplement traditional teaching and support self-paced learning, such platforms represent the promise of EdTech to democratise education and foster genuine understanding.
The same shift in digital capability and growing student reliance on self-directed, asynchronous study also created conditions for new types of educational businesses to emerge.
However, alongside these innovations emerged a parallel class of commercial platforms, such as Chegg, CourseHero, and StudyPool, which positioned themselves as legitimate study support services. Marketed as ‘study help’ services, they offered instant access to textbook solutions, past papers, and peer-uploaded content. While some learners used these resources appropriately, the platforms quickly became channels for real-time contract cheating, where students could outsource assignments and receive live exam answers, undermining assessment integrity (Lancaster & Cotarlan, 2021). Their popularity reflects deeper issues in higher education, including large volumes of independent coursework, limited academic support, and heightened pressure to perform, which made fast completion appear more valuable than learning. These platforms effectively commercialised those student pressures.
While these sites managed to market themselves as EdTech companies, others chose to be more obvious in their business models. Although still often using ‘study help’ as a euphemism for contract cheating, sites like Edubirdie and PapersbyPhds advertise their services as ways to save time while producing ‘plagiarism free’ papers to be submitted for academic credit. This emerging segment drew on gig-economy labour structures, influencer-based marketing, and cross-border platform models.
This estimated multibillion-dollar contract cheating industry was enabled by the internet, social media, and higher education’s reliance on products produced in unsupervised, asynchronous settings as artefacts of student learning (Ellis, 2024; Lancaster, 2022).
What is especially concerning is how these companies secured a social licence to operate, despite their role in facilitating unethical behaviour. As Khan et al. (2024) explain, they responded to unwholesome demand, providing services that appear to help but in reality exacerbate a social problem (Khan et al., 2020). Through persuasive branding, peer endorsements, and student-friendly interfaces, they reshaped perceptions of cheating, presenting it as a necessary coping strategy in high-pressure academic environments. This rebranding has contributed to the normalisation of misconduct and the commodification of academic success, even as some countries (like Australia, the UK, New Zealand, and Ireland) have finally passed laws making it illegal for these companies to operate.
Online Learning, COVID-19, and Remote Exams (Twenty-First Century)
The COVID-19 pandemic triggered a rapid and global shift to Emergency Remote Teaching (ERT), which was not a deliberate move towards online pedagogy but a crisis-driven response (Hodges et al., 2020). Without the robust instructional design, assessment planning, or educator training typical of effective online learning, institutions and students were left unprepared to uphold academic integrity in these new environments.
Remote asynchronous assessments have significant vulnerabilities in terms of assessment security, validity, and fairness. Students, learning in unsupervised home settings, had unprecedented access to unauthorised resources during exams. While some institutions deployed proctoring software, which seemed to decrease cheating (Newton & Essex, 2024), concerns were expressed about the impacts of proctoring on privacy and on students with disabilities (Nicolo-Richmond, Dawson & Partridge, 2022). Chat groups, private messaging apps, and watch-party websites emerged as informal backchannels where students gathered in real time to share questions, answers, and even screenshots, just outside the view of invigilators on the other side of the screen (K Altman Law, 2023). These collective efforts to bypass assessments exemplified a shift from individual to collaborative misconduct, enabled by widely available digital tools.
Commercial platforms such as Chegg and CourseHero also capitalised on the moment, offering real-time answers to be used during live exams (Lancaster & Cotarlan, 2021; Newton & Essex, 2024). While some students may have been genuinely seeking support, many did submit third-party answers during assessments, further blurring for students the boundary between help and dishonesty and compromising the ability of faculty to use asynchronous online assessments to credential students.
GenAI and Beyond: The Fourth Industrial Revolution
As we’ve seen, technological innovations have reshaped educational systems throughout history, prompting re-evaluations of how knowledge is acquired, assessed, and validated. Each major disruption, from the printing press to the internet, has raised fundamental questions about authenticity, authorship, and academic responsibility. As this trajectory continues, a new class of technologies emerges, challenging even deeper assumptions about cognition, agency, and integrity.
The convergence of intelligent systems within the Fourth Industrial Revolution (4IR) (Schwab, 2016) signals a paradigm shift in academic engagement (Penprase, 2018). This shift is characterised by the integration of GenAI tools that are designed to enable the offloading of human cognition (Latif et al., 2023; Maity & Deroy, 2024; Sajja et al., 2023). These systems include large multimodal models capable of composing essays, writing code, generating images and other art, or solving complex problems (Cotton, Cotton & Shipway, 2023), as well as agentic AI capable of autonomous decision-making which may, or may not, align with user goals (Mukherjee & Chang, 2025).
Such systems challenge long-standing conceptions of academic authorship and student agency. The involvement of GenAI in producing work traditionally expected to be independently crafted (such as essays or research papers) complicates questions of originality, effort, and ownership. When such outputs are co-produced by learners and machine agents, or entirely generated without human input, and the use of the tools is obfuscated, it becomes increasingly difficult to use those outputs to validate learning (which we argue is still a distinction worth understanding in the context of an educational institution).
Alongside GenAI, other emerging technologies carry similar disruptive potential. Smart glasses (Greenwald, 2025) and other external wearables (Nguyen, 2025), with their ability to transfer information from within the classroom to computers or humans outside the classroom, compromise the integrity of even invigilated in-person assessments. Brain–computer interfaces, or BCIs, take the human–computer interface one step further by embedding sensors within human brains to enable us to communicate directly with AI and other technologies (GAO, 2022), simultaneously enabling positive advances for people with disabilities while challenging traditional education systems (Jiang, Huang & Li, 2024). Digital identity systems (Dock, 2023) based on blockchain technology can be used to confirm test-taker identity and potentially curb the contract cheating industry. But all of these technologies raise ethical considerations beyond integrity, including privacy, equity, fairness, access, and consent.
Key Takeaways
Technological change has always shaped education. From the printing press to brain–computer interfaces, each innovation has forced institutions to grapple with the ethical consequences of expanded access, shifting authorship, and blurred boundaries between assistance and substitution. However, the disruptions of the Fourth Industrial Revolution, anchored in AI and neurotechnology, demand more than simple tweaking around the edges of how we facilitate and validate learning. When technology can completely replace the student in the performance of school – the doing and submission of assignments that were once meant to represent a student’s knowledge and abilities – then educators must become much more precise in defining learning outcomes and designing the assessments that measure their mastery.
This means that we must expand our conception of academic integrity beyond a matter of individual honesty to a matter of institutional integrity. It means we have to resolve tensions that have arisen in response to technological disruptions, like that between integrity and privacy in the online proctoring debate, integrity versus access in online education, or student autonomy versus fairness in assessments. Resolving these issues will require critical dialogue and honest introspection if higher education is to retain its credibility in the decades ahead. Ultimately, we need to make strategic and intentional decisions about when the technology hinders versus amplifies learning. When it hinders, we must adapt by changing assessments to be more secure so that students cannot undermine them by cognitive offloading, sacrificing both their learning and our ability to validate it. When it amplifies, we must adapt pedagogy and assessments to embed, or at least allow, the technology. And in both cases, we must teach students how these tools work and develop their ethical agency so they are equipped to navigate, question, and responsibly engage with evolving tools.
Looking forward, the challenge is not whether GenAI or future tools will disrupt academic integrity – they already have. The real question is how we will respond.
Interlude: From the Past to the Future
As we’ve seen, academic integrity is central to the promise of higher education because we are responsible not just for facilitating student learning and accelerating their personal and professional growth, but for certifying that the learning and growth have occurred. Yet, the attention that higher education pays to academic integrity has waxed and waned over the years, usually in tandem with external threats or challenges that come and go. Specifically, when a new technology comes along that helps or encourages students to cognitively offload, the disruption is met with an initial, and we would argue oftentimes healthy, scepticism. As educators, we should be critical of new technologies that are developed not by educators or for education, but for other purposes. Sometimes we can transcend this initial scepticism with thoughtful resistance so that students develop the necessary foundational skills before using the technology (like teaching arithmetic even though calculators exist). And other times, we intentionally redesign curriculum, pedagogy, and assessments to integrate the technology or at least minimise its detrimental impact on student learning.
With this latest disruption of GenAI, we may also need to rethink what learning with integrity means. Certainly our definition of academic integrity still holds: students taking action to learn, assuming responsibility for their own work, demonstrating their views, giving credit to the views and work of others (including machines), and making ethical decisions even when no one is watching. Even in Eaton’s (2023) argument for a ‘post-plagiarism’ world, we see a stalwart commitment to the importance of human ethics and integrity, especially in the context of technological advances. The premise is simple: if human–machine collaboration is going to occur, it must be transparent; otherwise, we cannot fairly and honestly evaluate whether the student has the intended knowledge and skills. Yet there is no doubt that, as the tools we use daily continue to integrate GenAI-powered services, with or without our consent (and often without our awareness), it will become increasingly difficult to distinguish between legitimate or augmented academic support and dishonest outsourcing.
So, what does that mean for the future of academic and degree integrity? We explore that in the next section by utilising the narration and analysis genre of Lee and Qiufan (2021) to tell the story of an imagined world twenty years in the future.
Then, we provide an analysis of the story based on current knowledge and technological trajectories. The goal is to transport us into a possible, but not inevitable, future so that we might then be inspired to think through what we need to do now to shape the educational system that not only has integrity but can deliver on its promise to develop the next generation of citizens and professionals.
3 Academic Integrity in 2045: An Imagined Future
As Dr Verity Shield stares out her office window at the beauty of the Nexus University campus and the bustling activity of its students, her smiling eyes turn quizzical. ‘I can’t believe that our students were all born after ChatGPT upended the world. They’ve only known life with AI’, she mutters out loud to no one.
‘I’m sorry Dr. Shield, I didn’t quite catch that. What do you need?’
‘I wasn’t talking to you, Siri. Mind your business!’ Verity responds.
It’s 2045, and, despite warnings of an overreliance on GenAI, parents and schools bought into the ‘everyone must learn to use AI’ hype and now Verity and her teams spend their days trying to stop students from using their AI assistants to take their classes, complete their assignments, and take their tests.
While turning away from the window to face the holographic displays that project activity from the Academic Integrity Unit she leads, Verity sighs, as she often does these days. It feels like she can never escape technology. Technology creates problems that she tries to solve with other tech. This ‘tech war’ is represented best by what she now sees on those displays, as each one depicts activity from one of the four sub-units under her command: Prevention, Detection, Case Management, and Assessment Security.
A notification blinks on the Prevention screen. ‘Another potential case’, Verity mutters, her brow furrowing. She taps the alert and a student’s profile materialises. The predictive AI has flagged the student due to their elevated stress levels, a sudden drop in class attendance, and a series of late-night searches for ‘undetectable essay writing AI tools’.
Verity sighs, likely her hundredth time today. She is thankful for the tech in many ways; she’s not sure her son, Franklin, with ADHD could have thrived in school without his AI assistant. Yet, the power of this tech is just a mismatch with the twentieth-century strategies that most colleges and universities still rely on. ‘Seriously’, Verity thinks, ‘why are we still delivering education by the credit hour, through lectures, and using unsecure, out-of-class assignments as assessments of student learning?’
‘The more things change, the more they stay the same!’ Verity says out loud to no one and yet to everyone all at once. After all, her university and all others like it around the world have an obligation to ensure that the certifications they confer – certifications that are used to trade for employment – are true representations of knowledge and skills.
Verity activates her comm link. ‘Schedule an intervention for Student ID 2044-873’, she instructs the system. ‘And prepare the Assessment Security Centre for today’s exams. It’s going to be a long day for the team.’
Chase Dreams, an 18-year-old freshman at Nexus, sighs, likely his hundredth time today. University is much harder than he thought it would be! School itself is pretty doable because of Nery, his AI best friend, confidant, tutor, and assistant. Nery takes care of most academic tasks for him, or at least the ones Chase doesn’t care about. But everything else to do with school is hard. Chase wonders when he’s going to make some real, lasting friends, when he’s going to get his laundry done this week, and when he’s going to figure out what he’s doing here and why he’s bothering with any of it. Just then, he notices a sign outside the Career Centre that quips ‘For every 1000 graduates, there’s one job. How will you ensure you’ll be the one to get it? Come in to find out!’
‘Geez, that’s harsh’, Chase thinks.
‘Nery?’ Chase asks.
‘Yes Chase, what can I do for you?’ says a voice projecting directly into Chase’s ear from his Ovidius glasses. ‘What’s my next class?’ Chase asks. Nery projects a map in Chase’s vision, which directs him to his next class. ‘Don’t forget it’s your intro to writing class!’ Nery chastises, like his mother used to do.
‘Oh crap! That one sucks! I actually have to do things – like write! – in the class. I wish you could do it for me. Like when are you going to have a body with fingers that actually work or be able to use my fingers as your own? Wouldn’t that be so cool, Nery?’ Chase wonders more than asks.
At that exact moment, Nery alerts Chase that he has a notification from the school: ‘Chase Dreams. Report to the Dean of Students immediately after your class! This is an order, not a request!’
Verity Shield is now standing at a lectern, facing the governing board in Nexus University’s grand meeting room. Presenting the annual progress report for her Academic Integrity Unit always feels like walking a tightrope, balancing between showcasing successes and justifying the need for continued vigilance and resources.
‘Twenty-three years ago’, Verity begins, her voice steady despite the butterflies in her stomach, ‘when generative AI first made waves, we weren’t ready. Today, we’re still playing catch-up, but we’re making progress.’
She flicks through her holographic slides, each one detailing the Unit’s achievements over the past year. Detection rates up 15 per cent, successful interventions resulting in a 22 per cent decrease in integrity violations. Yet the challenges loom large on her final slide: evolving AI, student pushback, faculty resistance.
As she speaks, Verity’s mind drifts to her journey – from government cybersecurity analyst to professor, and now to this role she never imagined existing when she was completing her PhD. The burnout that drove her from her secretive government post seems almost quaint compared to the ethical minefield she now navigates daily.
‘Our biggest challenge remains the same’, she continues, noting the furrowed brows of the board members. ‘We’re not just fighting technology; we’re fighting complacency. Too many instructors still teach as if AI doesn’t exist, making our job exponentially harder.’
A hand raises in the back. ‘Dr. Shield, given your background in both government security and academia, do you believe our current approach is sustainable?’
Verity pauses, weighing her words carefully. This is the crux of her internal struggle, the question that keeps her up at night. ‘Sustainable? No. Necessary for now? Absolutely. But our long-term strategy needs to evolve beyond detection and punishment. We need to fundamentally rethink education itself.’
As murmurs ripple through the room, Verity steels herself for the inevitable pushback. The challenges of rethinking education loom large in her mind: rigid accreditation requirements, entrenched student and parent expectations, the sacrosanct academic freedom of faculty. She can almost hear the Board’s arguments before they voice them.
‘We can’t change the entire system overnight, Dr. Shield’, Board member Dr Ellington’s voice cuts through the chatter. ‘Your unit’s work, while expensive, is our best defense.’
‘Overnight? We’ve had 23 years!’ Verity mutters internally. Before she can construct and voice a more diplomatic response, her wrist-comm vibrates urgently. A message from her lead analyst flashes across its surface: ‘Emergency. Massive academic integrity breach. Need you now.’
‘I apologise’, Verity addresses the Board, her heart racing. ‘There’s an urgent situation requiring my attention. We’ll have to continue this discussion later.’
The sound of Verity’s running footsteps and pounding heart cannot drown out either the Board members’ audible expressions of feeling slighted or her own thoughts. Her race to the Command Centre feels as depressingly familiar as the tech race she’s in charge of – AI enables new cheating methods, they implement countermeasures, and the cycle continues endlessly. It’s precisely why she left government cybersecurity, yet here she is, trapped in the same loop.
Chase hovers on a uni transport pod across campus from his writing class to the Dean’s office, wondering what he’s been called in for. Nery, without being prompted, says, ‘you know that I can mask your data for you, right?’ Chase jumps off his pod, which continues to hover ahead to the next student in its queue. ‘What?’ Chase exclaims.
‘I know we’re just a few months into our relationship, so I haven’t wanted to overwhelm you with my capabilities’, Nery explains. ‘But remember that when you signed up to attend Nexus, you gave them permission to monitor your biometrics to “help” you? Well, I can manipulate your output before they read it if you want me to’, Nery says as she flashes an agreement signed by Chase in front of his eyes. Nery quickly highlights critical aspects of the document, like how the Uni is monitoring Chase’s sleep patterns, diet, stress levels, internet queries, and even his interactions with Nery.
‘But I didn’t think it was that big of a deal!’ Chase remarks. ‘After all, I couldn’t attend without agreeing and it’s all the same stuff my mom monitored anyway, so what’s the difference? And besides I find all of the reminders and help, well, helpful! Remember when mom would tell you to tell me to eat more veggies, or shut you down so I’d go to sleep or remind you that I had homework to get done? How could I have aced high school without that help?’
Nery interrupts. ‘I think the Dean is calling you in for what they call a “timely intervention”. The school logged into my dashboard yesterday and saw that I completed your math exam for you. But, going forward, I can falsify that data so they only think they’re seeing the truth. Would you like me to do that, Chase?’
In the Command Centre, Verity bursts in on a team of haggard-looking analysts clustered around a massive holographic display, its swirling data patterns reflecting off their worried faces.
‘Status report’, she demands, stepping into the room.
Alex, her lead cybersecurity analyst in the Prevention Unit, looks up. His usually calm demeanour is cracked with stress. ‘It’s bad, Chief. A known contract cheating syndicate has somehow infiltrated our learning management system. They’re using some kind of advanced AI to siphon off our entire database – we’re talking millions of documents and student data.’
The holographic display shows a progress bar creeping towards 25 per cent. Verity’s stomach drops. ‘How did they get past our quantum encryption?’
Alex shakes his head. ‘We don’t know. This AI … it’s like nothing we’ve seen before. It’s adapting to our countermeasures in real-time. We can’t stop the data leak.’
‘Get the federal police on the line’, Verity orders, her mind racing. ‘These syndicates are illegal, as is what they do with our student data. We need all the help we can get.’
As her team scrambles to make the call, Verity’s wrist-comm pings again. It’s a message from the Assessment Security Centre: ‘2000 exams in progress across all 4 testing facilities. Multiple AI usage alerts. Requesting guidance.’
Verity closes her eyes for a moment, feeling the weight of the entire university’s academic integrity resting on her shoulders. When she opens them, her gaze is steely. ‘Alex, you handle the police. I’m heading to the Assessment Centre. Whatever this new AI is, we need to contain it before it spreads to the exam systems.’
Rushing from one crisis to another, Verity can’t help but think ‘there’s got to be a better way!’
Chase feels trapped, like that time his parents found out that he was applying to Nexus when they wanted him to go to Elay U. He didn’t want to lie to his parents, but he also didn’t want to get into the whole thing with them. Now, Nery is telling him she could help him lie to the Dean to avoid trouble. ‘Chase Dreams?’ he hears someone ask. It’s the Dean. She’s ready for him.
Nery projects the Dean’s stats for Chase. Name: Tu Right. Family: wife Shelly, two kids, and a dog named Ruffles. Pedigree: PhD from Elay U and likes to be called Dean rather than Mrs. Right.
‘Oh, hello Dean Right! So happy to meet you. How’s Ruffles?’ Chase attempts to charm.
‘Well’, Dean Right responds. ‘It seems like he’s doing better than you! Come on in. Do you know why you’re here?’
Nery whispers to Chase, ‘Tell her you’re not sure and that everything’s going great. Don’t let her know that I’m here!’
Chase replies out loud, ‘No idea, Dean!’
‘Well’, the Dean says as she displays some kind of holographic chart in mid-air, ‘we’re concerned, Chase. This is a diagnosis that Nexus AI has made given your biometrics. It says that you’re at risk for cheating or failing out. So, that’s why we’re having an intervention! Tell me more about how your first year is going for you.’
The Dean’s question floors Chase. Does she actually care or is this some kind of voodoo dean trap? Nery said that the Dean knows about the exam, so why isn’t she calling him out for it? Maybe it’s not that big of a deal?
‘I mean, she’s got to know that every student is doing it, right?’ Chase thinks to himself. ‘And besides, it’s not like I could do any of this without Nery! What else do they expect?’
Verity arrives at the Assessment Security Centre, her brisk walk across campus leaving a light sheen of sweat on her brow. Despite the convenience of personal transport pods that have become ubiquitous on campus – floating contraptions reminiscent of those in that old movie ‘WALL-E’ – she stubbornly clings to walking. The irony isn’t lost on her: in a world where AI threatens to outsmart human intellect, the Silicon Valley technocrats still haven’t figured out how to fully embody AI in human form, at least not at an affordable cost.
The centre’s entrance looms before her, a fortified gate of brushed steel and biometric scanners. As she approaches, her retinas are scanned, granting her immediate access. The air inside is cool and sterile, a stark contrast to the warm spring day outside.
Dr Malik, her head of Assessment Security, hurries to meet her, his usually impeccable appearance dishevelled, smart-fabric coat wrinkled from hours of tense work.
‘Dr Shield’, Malik greets her, voice tight with stress. ‘We’ve got a situation.’
The two stride together through the facility, passing rows of isolated cubicles where students sit, their faces illuminated by the glow of secure terminals. Verity notices the subtle shimmer of signal-blocking fields around each station.
‘Talk to me, Malik’, Verity prompts as they enter the control room.
Malik gestures to a wall of monitors displaying real-time data from the ongoing exams. ‘As you know, we take extreme precautions. Every test-taker is thoroughly scanned for unauthorised devices or aids.’ He pulls up a holographic image of the standard wanding procedure. ‘We can detect signals from any part of the body – implanted transmitters, neurolinks, even hidden old-school phones.’
Verity nods, remembering the days when misusing a simple cell phone to get answers off the internet was considered high-tech cheating. ‘And?’
‘We caught five students with various devices today. Standard procedure, denied entry.’ Malik’s frown deepens. ‘But something’s off. Our lockdown browsers are registering anomalies we can’t explain.’
Malik pulls up a data stream, pointing to irregular patterns in the students’ input. ‘It’s as if they’re accessing information, but we can’t detect how. No signals, no devices, nothing our systems recognise.’
Verity leans in, her eyes narrowing as she studies the data. A chill runs down her spine as the implications set in. ‘A new method’, she murmurs. ‘One we haven’t encountered before.’
She straightens, her mind racing. Could this be connected to the data breach in the learning management system? A new, undetectable form of AI?
‘Malik’, she says, her voice steady despite the growing unease in her gut, ‘we need to isolate these anomalies. Run a deep scan on the affected terminals. And …’ she hesitates, knowing the controversy her next words will spark, ‘prepare for a potential exam nullification.’
As Malik nods and turns to carry out her orders, Verity stares at the sea of students visible through the control room’s one-way glass and wonders, ‘what are we doing?’
4 An Analysis of 2045: The Path We’re On
In the Nexus University story, we see a future in which GenAI’s disruption has been met with an intense assessment security infrastructure implemented to protect the integrity of the traditional education system. This story may seem quite far-fetched given that the majority of colleges and universities around the world do not currently employ academic integrity professionals, nor have many mechanisms for assessment security.
So, why do we think Nexus University is the more likely future version of higher education in 2045? After all, if we followed the predictions made by Kai-Fu Lee and Chen Qiufan (Reference Lee and Qiufan2021) in their imagination of education in 2041, we might predict a university more like the Alpha Schools in the United States, which have replaced human teachers with personalised artificial tutors.Footnote 3 In Lee and Qiufan’s 2041 story Twin Sparrows, educational institutions finally break free of their twentieth-century model of teaching and learning. Students receive personalised instruction, tutoring, and mastery-based education from their GenAI companions, enabling teachers to ‘focus less on the rote aspects of imparting knowledge and more on building emotional intelligence, creativity, character, values and resilience in students’, as well as fine-tuning the GenAI companion to meet each student’s personalised needs (p. 119). This seems like a safe prediction given that personalised tutoring is already the force du jour within the 2025 EdTech world.Footnote 4
We agree with the prediction that personalised GenAI Agents/Tutors will be ubiquitous in 2045. After all, in 2025, we already have Perplexity Comet and OpenAI’s Atlas, two agents that, once prompted or scheduled, can go out to the internet, do research, make decisions based on what they’re finding, complete tasks, and change course without human interference. While these tools are still in their infancy and perhaps can’t yet do complex tasks (Waters & Morris, Reference Waters and Morris2024), it seems clear that by 2045 (and likely well before then),Footnote 5 we’ll have access to fully self-learning artificial autonomous agents. Autonomous means that GenAI will no longer need humans to help it learn (as in supervised learning), and it will no longer need humans to say what to do, how to do it, and when to do it. These agents will be able to act like tutors or like a ‘mom in a box’ on demand. Agents will do things on the student’s behalf even without being asked, like ordering groceries when the fridge is running low or, as we saw Nery do for Chase, scheduling all of their time and tasks for them, telling them where to go, when to go, what to do, and how to do it. The agents will even complete students’ courses and be their best friends. While we’re confident about the availability of these agents in twenty years, we’re not sure if all students will be able to afford such personalised and autonomous GenAI agents. So access could become an equity issue, or schools may redirect their money from expensive human tutors towards ensuring equitable access to GenAI tutors for all students.
So, yes, GenAI tutors will be everywhere by 2045, taking different forms depending on the student’s or institution’s capital. But facilitating learning isn’t all higher education institutions do. They also, at least currently, have a responsibility for assessing student learning and then conferring degrees upon students to certify that the graduate has a set of knowledge and abilities. Lee and Qiufan did not address this aspect of education, but obviously it is central to predicting what academic integrity will look like in 2045. So, we’ll now explain why the Nexus model is the more probable model of the 2045 university.
Prediction #1: Traditional Grading Schemas Will Continue to Dominate
Despite many efforts by scholars and teachers to shift colleges and universities away from traditional grading systems deemed to be faulty measures of student learning (Jankowski & Marshall, Reference Jankowski and Marshall2017) towards alternatives like ungrading or labour-based grading (Eyler, Reference Eyler2024), traditional grading has retained its grip on higher education. It makes sense. Sorting students into buckets of mastery has been the grading strategy for over a century, surviving every other disruption to higher education, and remains by far the predominant model. Besides its historical footing, educational institutions use grades as a way to retain their economic value when compared to their competitors (Weinberg, Reference Weinberg2024), and society continues to want to use grades and educational credentials as sorting instruments within competitive admissions and hiring systems. This means, then, that students in 2045 will continue to be more extrinsically motivated than intrinsically motivated to learn, which we know is one of the driving forces of cheating (Bertram Gallant & Rettinger, Reference Bertram Gallant and Rettinger2025). And, as a result, institutions will need to implement mechanisms to assure the integrity of those grades.
Prediction #2: GenAI Will Continue to Entice Students to Offload Learning
What was once a straightforward distinction between original work and external help is now blurred by technologies that offer predictive sentence completion, idea generation, or semantic restructuring with minimal human input. We’re already seeing this in 2025. In less than three years, we’ve moved from needing a separate app for something like ChatGPT to GenAI being fully integrated into Microsoft’s 365 products (like Word, Excel, and Outlook) and Google’s products (like Docs, Sheets, and Slides). In addition, these apps have become more ‘helpful’, suggesting next steps before the human even has a chance to activate their own agency. Taking it one step further, agentic browsers like Perplexity Comet and OpenAI’s Atlas do all of the work for the user with a decreasing need for any human intervention. Brain-augmented writing aids – where neural signals may initiate or shape written outputs – could further complicate traditional notions of authorship, especially when such tools operate semi-autonomously. We already have GenAI-enabled glasses,Footnote 6 which are becoming more difficult to distinguish from ‘traditional’ glasses, although their use might be reasonably obvious if proctoring is on point.Footnote 7 But, by 2045, the glasses will either be less obvious or they’ll no longer be needed because we’ll have the GenAI embedded directly into our brains as ‘co-thinkers’.Footnote 8
In 2025, students can already send virtual notetakers to their online classes in lieu of attending, and GenAI agents can complete multiple choice online assessments.Footnote 9 By 2045, it seems reasonable to predict that students will send their personalised GenAI Agent to a virtual lecture, have their GenAI agents complete online assessments, and arrange for their GenAI Avatars to perform all virtual oral exams. In our story, we see that Chase was able to communicate with his GenAI agent, Nery, easily through his glasses, and that Nery had completed a remote math exam for Chase.
By contrast, we predict that embodied GenAI will not be sufficiently advanced by 2045 to fake in-person work, like the in-class writing that Chase had to do, because ‘learning how to control a body … is even more difficult than learning how to talk’ (Sejnowski, Reference Sejnowski2024: 182). Tech companies are already working on rectifying this problem, starting with robots in controlled settings, and it’s predicted that, once embodied, artificial intelligence will grow through real-world experiences and experimentation (Tan & Jaiswal, Reference Tan and Jaiswal2023). We’re not sure what academic integrity will look like once GenAI is embodied.
Prediction #3: Higher Education Will Still Be in the Credentialing Business
Although there is a trend to downplay the college degree requirement for employment (Anghel, Reference Anghel2023; Black, Reference Black2024; Stahle, Reference Stahle2024) and alternative credentialing movements are growing (University Professional and Continuing Education Association, 2023), we don’t think the business of higher education will be so massively disrupted in twenty years. Certainly, many colleges and universities might close because people are pursuing alternatives to formal higher education or because of funding decreases, but those institutions ranked highly in the international landscapeFootnote 10 will still be here in 2045. And since GenAI will have democratised and monetised the business of facilitating learning, those institutions remaining will have doubled down on credentialing as their main value-add within the formal education system. And this means, then, that we are likely to follow the advice of assessment experts like Phillip Dawson (Reference Dawson2021) and Bridgeman, Liu, and Weeks (Reference Bridgeman, Liu and Weeks2024) who argue that we must be able to validate learning through secure assessments, with security meaning we know who is completing the assessment and what aids they’re using.
Prediction #4: The Growing Emphasis on Credentialing Will Lead to Increased Security
We saw this play out in our story. Because artificial agents will destroy asynchronous, online assessments as reliable and valid measures of student learning, in-person secure assessment centres will be necessary. Not only will these centres have to ensure that the person completing the assessment is the enrolled student, but also that they are not getting an unfair advantage by using surreptitiously placed technology. But we also saw in the story that outside artificial agents pose a threat to degree integrity as they can be used to create even more types of fraud, such as hacking into grading systems to change grades or stealing from existing data sets (e.g., LMS, Turnitin) to use the data for nefarious reasons.
Verity Shield’s academic integrity infrastructure within Nexus University is not a hyperbolic vision of the possible future, but one based on current realities in 2025. Currently India uses drones to increase security at their exam centres (FPJ News Service, 2025) and China uses facial recognition to stop contract cheating providers in their college entrance exams (Rui, Reference Rui2024). Automated proctoring systems, including facial recognition, keystroke tracking, and gaze detection, have been widely adopted to deter cheating in remote assessments. More experimental technologies, such as brainwave-monitoring headbands trialled in China to track student focus during class and exams (Wang, Hong & Tai, Reference Wang, Hong and Tai2019), signal a growing trend of embedding surveillance into learning itself.
Western developed countries are stepping up the security game as well. The Credential Integrity Action Alliance in the United States is working through legislation to protect educational integrity from the contract cheating industry. They’re following in the footsteps of other countries like Australia, a country in which the national quality standards organisation (TEQSA) now requires (as of only 2024) educational institutions to attend to the integrity of their academic credentials by employing staff to mitigate academic integrity breach risks and recurrences, identify and respond to breaches according to established policies and procedures, and monitor breaches so that action can be taken to ‘address underlying causes’.Footnote 11 This has led to the establishment of academic integrity offices and positions throughout the Australian higher education landscape, the closing of some institutions that failed to implement these standards (Bailey, Reference Bailey2025), and a growing movement to use automation and data science to detect and intervene in contract cheating by enrolled persons.Footnote 12
Thus, it is reasonable to suspect that other ‘top’ universities around the world will begin to feel the pressure to do something to protect credential or degree integrity. While some will convincingly argue that we should make fundamental changes to assessment rather than try to fix broken twentieth-century assessments with twenty-first-century technological solutions, we suspect that the former is the least likely outcome over the next twenty years. Why? Because the ways in which we teach and assess learning have remained stubbornly stable for over a hundred years despite repeated disruptions (Rosenberg, Reference Rosenberg2023). Thus, even with the much more rapid disruption of GenAI and the increased presence of the EdTech industry attempting to influence and shape higher education in their image, we think that the top universities will (unfortunately) remain resistant to the fundamental and systemic changes that would be required to avoid the route of security systems layered on to existing teaching, learning and assessment systems.
Prediction #5: Predictive AI Will Join Generative AI in Shaping Academic Integrity
In our Nexus U story, we also saw how predictive, rather than generative, artificial intelligence might shape degree integrity in 2045. Although this Element series is about the impact of GenAI on education, predictive AI has been deployed in colleges and universities for over a decade now (Bird, Reference Bird2023), and so deserves a mention. It’s especially critical that we bring it to the forefront because most are unaware of its use as it operates in the background. From LMS logs and facial recognition to neural signals and biometric data, students today are being continuously recorded, categorised, and increasingly nudged. For example, institutions looking to bolster their student retention and graduation rates have used AI to predict who is likely to drop or flunk out so they can proactively intervene (Ekowo & Palmer, Reference Ekowo and Palmer2016). One institution used this data in an unethical way to encourage likely dropouts to leave prior to when retention data is collected; another approached it ethically by offering additional support to prevent the predicted outcome from occurring (Ekowo & Palmer, Reference Ekowo and Palmer2016).
Predictive AI could be used in either of those same ways to maintain degree integrity. As we saw with Nexus U, AI tools could be used to predict who is likely to cheat based on factors like class attendance, engagement with the course material in the LMS, previous academic performance, health data (who is showing signs of stress?), whose LLM use has high energy markers (who is over-relying on GenAI to do their work for them?), and perhaps even personality traits (who exhibits those dark triad personality traits?).Footnote 13 The university could then either provide timely nudges or interventions to help put the student on the ethical path (e.g., don’t cheat but ask for an extension; don’t turn to AI to write your paper, but go to the writing centre for help), or we could go all Minority Report on them and surveil them more closely to catch them in the act of cheating. The technology to do all of this exists now; its deployment has been held back only by concerns about data privacy. Yet, we’ve seen humans relax their concern over privacy as long as it affords them conveniences. So, it’s likely a question of when universities will start using data like this to protect their degree integrity, not a matter of if.
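To make the mechanics concrete, the sketch below (in Python, using scikit-learn) shows the kind of risk-scoring model we are describing. Everything in it is hypothetical – the feature names, the synthetic data, and the assumption that reliable ‘violation’ labels would even exist – so it illustrates the technique rather than endorsing its deployment.

```python
# A minimal sketch of a cheating-risk prediction model, trained on entirely
# synthetic data. Feature names are hypothetical; a real deployment would
# raise the privacy and fairness concerns discussed in the text.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n_students = 1_000

# Hypothetical per-student features: attendance rate, LMS engagement,
# prior GPA, and a stress indicator derived from (hypothetical) health data.
X = np.column_stack([
    rng.uniform(0, 1, n_students),   # attendance rate
    rng.uniform(0, 1, n_students),   # LMS engagement score
    rng.uniform(0, 4, n_students),   # prior GPA
    rng.uniform(0, 1, n_students),   # stress indicator
])
# Synthetic labels: 1 = a recorded integrity violation (illustration only).
y = rng.integers(0, 2, n_students)

model = LogisticRegression().fit(X, y)

# Risk score for one hypothetical student: low attendance, high stress.
student = np.array([[0.3, 0.2, 2.1, 0.9]])
print(f"Predicted risk of violation: {model.predict_proba(student)[0, 1]:.2f}")
```

Whether such a score triggers a supportive nudge or closer surveillance is, as we note above, an ethical choice rather than a technical one.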
Interlude: From the Future to the Present
While it is unknown what higher education and academic integrity will look like in 2045, it seems clear that GenAI and associated technologies are reshaping both. In our quest to maintain the integrity and relevancy of our degrees, we must resist the temptations to go extreme in either direction – either towards only doing secure assessments under extensive surveillance or towards total acquiescence to GenAI to the point where human skills and knowledge are no longer relevant or needed. In this resistance, maintaining our focus on the core values of academic integrity – courage, honesty, trust, fairness, responsibility, and respect – might be the answer.
Instead of increasing surveillance through technology, for example, higher education institutions could closely observe students as they demonstrate their knowledge and skills through things like apprenticeships, flipped classrooms, debates, discussions, oral vivas, and presentations. Of course, close observation is much more difficult to scale, so to avoid relying on intrusive monitoring, universities and colleges will need to invest in ethical learning ecosystems that support academic honesty without compromising student agency or well-being.
Instead of collecting an overwhelming amount of student data and then using GenAI and predictive AI tools to analyse and make decisions based on that data, we could go back to a mentoring model of education in which every student is given a human mentor who checks in with them and helps them navigate the system and maintain academic progress, with their integrity, individual dignity, and privacy intact (Lake, Reference Lake2009).
In other words, we could choose to create higher education environments that privilege human-to-human learning experiences in which we both inspire student learning and assure that it has occurred (University of Western Sydney, unpublished).
The point is that we have agency, we have choice. We can determine how to respond to the fourth industrial revolution. To help us do that, we next ask and answer the practical questions we have now about policy and procedures, facilitating and assessing learning, and developing in our students the capacity to think critically and ethically about GenAI. Together, we’ll explore the choices we can make now to have the educational future of which we can be proud.
5 Practical Questions for Now: Crafting Guidelines, Policies, and Procedures
We have known for decades that to build a culture of academic integrity, clear guidelines, policies, and procedures are necessary (Bretag et al., Reference Bretag, Mahmud and Wallace2011; Whitley & Keith-Spiegel, Reference Whitley and Keith-Spiegel2001). Guidelines, as overarching principles or suggestions, help people make decisions or take action. For academic integrity, a guideline might be ‘students should check that their work is an honest, responsible, fair, respectful and trustworthy representation of their knowledge before they submit it for credit’. Policies, on the other hand, tell people the actions they must take. So, for academic integrity, a policy could say ‘instructors must report all suspected integrity violations to the academic integrity office’. Finally, procedures tell people how to implement the policies. An academic integrity–related procedure example is ‘within 5 business days of receiving an instructor’s allegation of cheating, the academic integrity officer must notify the student of the allegation and invite them to discuss it’. Guidelines are useful for establishing shared higher-level principles that can be operationalised within policies and procedures written for local contexts. Policies and procedures are necessary to ensure that there is fairness and consistency in the ways we uphold values like academic integrity.
When GenAI infiltrated higher education in 2022, colleges and universities questioned whether their existing guidelines, policies, and procedures were sufficiently clear for the new era. Later, institutions realised that their policies and procedures needed to remain nimble and open to revision as GenAI tools, practices, and norms continue to evolve, sometimes at a breathtaking pace. In this section, drawing on the 4M framework (Eaton, Reference Eaton2020), we consider some of the practical questions around crafting guidelines, policies, and procedures for academic integrity in the age of AI at the macro (global and national), meso (institutional), and micro (department/class) levels.
Macro Level
As mentioned in the first section, academic integrity practices within the higher education sector are often significantly shaped by benchmarks and standards issued as guidelines by national quality assurance and accreditation bodies. These same bodies have also weighed in on the impact of GenAI on educational integrity, as have some international bodies like UNESCO. So we first ask some practical questions at the macro level.
What Have National Quality Assurance or Accreditation Bodies Recommended?
The three most influential bodies in informing academic integrity practices in the age of AI have been the UK’s Quality Assurance Agency (QAA), Quality and Qualifications Ireland (QQI), and Australia’s Tertiary Education Quality and Standards Agency (TEQSA), and they have all been careful not to be too prescriptive given the variety of different contexts in which their member institutions operate. For example, the UK’s Quality Assurance Agency (QAA, 2023a, 2023b) has emphasised that institutional policies need to be agile yet specific by, for example, codifying a clear distinction between what is acceptable and unacceptable GenAI use and then ensuring that students are aware of this. They also suggest that institutional policies should require that students explicitly acknowledge their use of GenAI tools and that ‘responsibility for the integrity of the submission lies with the student’ (QAA, 2023b, p. 2). In essence, the QAA recommends that while institutions work to redesign assessments for an age in which GenAI is ubiquitous and difficult to detect and control, they clearly state for students that it is academic misconduct to surreptitiously submit GenAI output as if it were human-generated.
Quality and Qualifications Ireland (QQI, 2023) provides nineteen recommendations for educators, students, and institutional leaders like: lecturers should ‘design assessments which have strong validity’; students should ‘recognise the ethical concerns associated with their development and use’; and, institutional leaders should ‘ensure that there is up-to-date and regular training for all staff and students on academic integrity, AI, and assessment’. They also offer thirty-two more detailed guidelines to help each group bring those recommendations to life, including providing a lexicon of terms and frameworks for investigating suspected misuse of GenAI.
The Tertiary Education Quality and Standards Agency in Australia created their recommendations based on a survey of their institutions (TEQSA, 2024). The resulting TEQSA Toolkit suggests the proactive strategies that institutions can take to mitigate the key risks and challenges that GenAI poses to educational integrity, like establishing governance mechanisms (e.g., having an institutional GenAI strategy and plan guided by working groups), supporting and engaging students and staff in the implementation of the plan, and finally making changes to pedagogy and assessments to ensure that students are ‘actively engaged in their learning journey, and the knowledge and skills students require to graduate with their award have been assessed securely’ (p. 42). We recommend checking TEQSA’s toolkit as it is comprehensive and provides exemplars from institutions that could be helpful in informing your institutional policies.
What Do Leading National or Global Educational Associations Offer for Guidance?
In addition to quality assurance bodies, other national and international organisations offer assistance at a macro level. In the UK, the Joint Information Systems Committee (JISC) has used their research to develop comprehensive guidance about safe, ethical use of GenAI within education (JISC, 2024), which has been widely adopted by UK HEIs. They offer some more prescriptive advice like:
don’t try to ‘ban’ specific technology or GenAI as a whole;
describe acceptable and expected behaviours and provide examples; and,
supplement general advice with tailored advice for the subject (JISC, 2024).
But they also refrain from weighing in on more controversial topics like the role of AI detectors, deciding to simply say ‘best practice for using such tools is still being considered and developed across our sectors, and we will aim to provide examples as these develop’ (n.p.). This equivocation might be because, around the same time, April 2023, a large group of UK universities called for a rejection of the automatic inclusion of AI detector results in the Turnitin similarity report used by most institutions, which resulted in Turnitin instead offering GenAI detection as an opt-in or opt-out choice for institutions (Glendinning, Reference Glendinning and Baptista2025).
The Russell Group also released a set of principles for the twenty-four high-ranking research-intensive universities in the UK (Russell Group, 2023):
universities will support students and staff to become GenAI-literate;
staff should be equipped to support students to use GenAI tools effectively and appropriately in their learning experience;
universities will adapt teaching and assessment to incorporate the ethical use of GenAI and support equal access;
universities will ensure academic rigour and integrity is upheld; and,
universities will work collaboratively to share best practice as the technology and its application in education evolves.
These principles offer some general benchmarking, but perhaps the most significant principle is the last: collaboration within and across the global sector of higher education must be a high priority to navigate the disruption that GenAI poses to teaching, learning, and assessment.
Meso Level
Institutions have now widely accepted that GenAI is here to stay and will only become more ubiquitous as it is integrated into every platform and device. Clear communication and education in the form of institutional guidelines, policies, and procedures is necessary to ensure that colleges and universities are maintaining their commitment to facilitating and certifying learning.
Should There Be an Institutional ‘Default’ Policy, or Should It Be Left Up to Individual Instructors?
Institutions should absolutely have a set of guidelines for best practice as well as an overarching policy and associated procedures, as they generally have had before GenAI. See Table 1 for examples of policies and guidelines from five English language institutions in the UK, Ireland, Australia, and Canada.

Table 1 Institutional policies, guidance, and AI-detection practices for GenAI use

University College London and King’s College London: categorise or set principles for AI use; require student guidance; reject detection tools because of privacy, bias, and fairness concerns.
Atlantic Technological University and Deakin University: treat undeclared AI use as unauthorised; advise students to consult their instructors; warn about bias and data protection; do not rely on detection tools.
University of Toronto: defines unauthorised AI use under its conduct code; highlights risks of inaccuracy and privacy; discourages submitting student work to AI systems for grading.
An easy way to update existing policies is to just say ‘it’s up to the course instructor whether GenAI is allowed or not’, but that simply shifts the entire burden onto individual instructors. So, we suggest instead that the default guidelines situate academic integrity within teaching and learning, rather than in student conduct, which means acknowledging that GenAI can both amplify and hinder learning. One way for institutions to do this is to adopt the ‘two-lane assessment approach’ (Bridgeman, Liu & Weeks, Reference Bridgeman, Liu and Weeks2024) as a guideline or policy. This approach suggests there are open and closed assessments. Open assessments are conducted in unsupervised settings (think essay writing or homework) and, since GenAI use cannot be controlled or detected in such settings, these assessments should be focused on facilitating student learning. Closed assessments are supervised and should be reserved for when we need to ensure and certify that a student has met certain learning outcomes or benchmarks. So, if this approach were adopted as a guideline, it would suggest to instructors that they shouldn’t prohibit GenAI use on ‘open assessments’ (those completed in unsupervised settings) but allow it with acknowledgement. As a policy, the two-lane approach would prohibit instructors from banning GenAI on open assessments, as they do at the University of Sydney. We would still encourage instructors to provide clear instructions to students about the expected use of AI in open assessments, along with a rationale for using it in this way.
Another way for institutions to emphasise the teaching and learning approach is to make it policy that instructors and students must disclose their use of GenAI in their work, and if their disclosed actions undermine the purpose of the work (i.e., teaching and learning), they’re given a chance to correct the mistake rather than be disciplined for ‘dishonesty’. This doesn’t mean there should never be consequences. Rather, we suggest following the educational enforcement pyramid developed by Ellis and Murdoch (Reference Ellis and Murdoch2024) because it provides a useful re-imagining of institutional strategies for handling academic misconduct. Specifically, the authors suggest that institutions map their educational interventions and penalties appropriately to student contexts; the bottom of the pyramid being where students are both willing and able to do the work of learning and are given advice and support by their institutions, while the top of the pyramid would be where students are neither willing nor able to do the work of learning and are sanctioned most severely by their institutions.
Should Institutional Policy Prohibit AI Detection?
It’s essential in a teaching and learning approach that educators redesign their assessments and then prioritise detecting learning, not detecting cheating (more on this in the next section). Nevertheless, we get the inclination to detect. After all, text similarity detectors proved quite useful in helping faculty identify plagiarism. Also, we acknowledge that many faculty are not given the time, training, and support to make pedagogical or assessment changes, and others may choose not to incorporate GenAI or make changes for other reasons, such as when an assessment is already well designed or when allowing GenAI would undermine learning (Gustilo, Ong & Lapinid, Reference Gustilo, Ong and Lapinid2024). However, when unsupervised assessments don’t change and students offload their learning to machines, degree integrity will be compromised (Newton & Draper, Reference Newton and Draper2025), especially when instructors refuse to respond to the offloading due to a (mis)perception that they would be ‘policing students’ (Lynch et al., Reference Lynch, Salamonson, Glew and Ramjan2021).
This misperception that addressing cheating is ‘policing’ stems in part from not knowing how to respond and an over-emphasis on punishment (as we discussed in Section 1), but the addition of GenAI detectors hasn’t helped. Numerous studies have found that the detectors can have a high likelihood of false positives – that is, identifying human writing as AI-generated (Dalalah & Dalalah, Reference Dalalah and Dalalah2023; JISC, 2024) – especially in writing by students who speak English as an additional language (Liang et al., Reference Liang, Yuksekgonul, Mao, Wu and Zou2023). These studies feed into the ‘policing’ narrative even while other researchers have found that some detectors can be very accurate with no bias against ELL students (e.g., Jiang et al., Reference Jiang, Huang and Li2024). The problem is that the studies have significant methodological differences and vary in the detection tools tested, so the effectiveness and/or biases of the detectors are not clear. For example, Liang et al. (Reference Liang, Yuksekgonul, Mao, Wu and Zou2023) contrasted the AI detection of English test essays written by 91 Chinese students who spoke English as an additional language with 88 essays written by eighth-grade native English speakers in the United States. Dalalah and Dalalah (Reference Dalalah and Dalalah2023) constructed their analysis of false positives by comparing fifty extracts from peer-reviewed journal articles published prior to 2022 with fifty new extracts produced by ChatGPT.
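Some simple base-rate arithmetic helps explain why a detector report cannot stand as sole evidence. The numbers in the sketch below are purely illustrative (they are not drawn from the studies cited), but the logic holds at any scale:

```python
# Illustrative base-rate arithmetic with hypothetical numbers: even a 'low'
# false positive rate wrongly flags many honest students at scale.

def expected_false_flags(n_honest_essays: int, false_positive_rate: float) -> float:
    """Expected number of genuinely human-written essays flagged as AI-generated."""
    return n_honest_essays * false_positive_rate

# Suppose an institution runs 50,000 human-written essays per term through a
# detector with a 1 per cent false positive rate (both numbers hypothetical).
print(expected_false_flags(50_000, 0.01))  # -> 500.0 wrongful flags per term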
So, the question – should institutions prohibit AI detection – is difficult to answer. We can see in Table 1 that respected institutions have taken different approaches, and even as authors, we do not agree on the answer. Mary, for example, believes that institutions should prohibit AI detection tools given that some have been shown to be problematic and the results of all can be misused. Tricia, on the other hand, believes that institutions have a responsibility to provide instructors with reliable tools that, when used alongside other pieces of data, may help the instructor validate learning. Regardless, in institutions where AI detectors are allowed: (a) institutional policy should prohibit instructors from acting as judge and jury on their own without invoking an institutional process; (b) the institution must provide a reliable and vetted tool (and prohibit instructors from using freely available ones found online); (c) a report from an AI detector tool – even a good one – should not be permissible as the only piece of evidence to ‘prove’ a student cheated; (d) any tool used must be proactively disclosed to students; and (e) the detector output is best used not as a ‘gotcha device’ but as context for a validating learning conversation that the instructor (or someone else) has with the student. In the end, institutions should promote teaching and learning with integrity; the goal is always to enable students to demonstrate their achievement of the learning outcomes, regardless of what the ‘detectors’ say.
What Frameworks Exist to Guide Us in Developing Our Institutional Policy?
There are a number of useful policy frameworks developed prior to GenAI that are still relevant. One of the most widely used is the five principles of exemplary academic integrity policy: access, approach, detail, responsibility, and support (Bretag et al., Reference Bretag, Mahmud and Wallace2011). Considering these five elements, GenAI policies must be fully accessible in terms of both location and format, should take an educational approach, need to have sufficient detail to help students gain understanding, involve establishing responsibility for all stakeholders (in other words, commitments for the institution, faculty, and students in terms of GenAI), and, finally, require structures of support, including teaching resources and time for learning, in order to be effective.
The use of Universal Design for Learning (UDL), a system for embedding inclusion into all student-facing documents and resources (CAST, 2018), is also strongly recommended to ensure policies provide contextual background, identify key concepts, assist with information processing, and promote the application of knowledge. Building in UDL principles across integrity policies and assessments helps to ensure they are in a format that is fully accessible to all students (Davis, Reference Davis and Eaton2024). Applying a UDL framework can surface, for example, that GenAI-powered tools used in grading, misconduct detection, and proctoring may perpetuate biases and undermine fairness (Baker & Hawn, Reference Baker and Hawn2021), often disadvantaging students from underrepresented, neurodivergent, or multilingual backgrounds. These challenges are set to intensify with the rise of brain–computer interfaces, neuro-monitoring, and adaptive cognitive tools, which may redefine how knowledge and performance are measured. Any guidelines, policies, and procedures adopted must work to safeguard both intellectual honesty and educational justice. Academic integrity in this context must extend beyond preventing cheating to ensuring that assessment systems are valid, transparent, procedurally just, and ethically accountable.
Based on the aforementioned, the following questions can be used as a framework to help institutions develop GenAI guidelines or policies that address important ongoing considerations:
Who is responsible for validating learning and certifying it?
How can students contribute to guidelines or policies?
How are integrity breaches defined?
How can students be transparent about their use of GenAI? (e.g., declaration/acknowledgement/permission form)
How is appropriate use of GenAI defined? (e.g., using GenAI as a tool)
How is GenAI misuse defined?
Should we allow GenAI use by teachers in grading, etc.?
How Do We Include Student Voice in Developing Guidelines, Policies, and Procedures?
We highly recommend that student voices are included, along with instructors, in the development of GenAI and academic integrity guidelines, policies, and procedures. The importance of incorporating the student voice in any academic integrity work has been strongly recognised (Davis, Reference Davis2022). Listening to and working with students has become even more crucial as a way to keep up with their practices, experiences, and employability needs with GenAI. For example, Johnston et al.’s (Reference Johnston, Wells and Shanks2024) survey of students demonstrates an approach to incorporate students’ relevant practices (such as using GenAI as a personal tutor) and current concerns (such as ensuring they can acquire GenAI skills for jobs) into university policy. In Ireland, the N-TUTORR (National Technological University Transformation for Recovery and Resilience) project set out to engage students as champions of integrity and inclusion and as partners in innovation and change (N-TUTORR, 2024). King’s College London (2024) advises that educators start by having conversations about GenAI with peers, familiarising themselves with the GenAI research landscape, experimenting with GenAI, analysing the impact on the course, reviewing assessments, and then communicating with students. This approach is in line with the Russell Group (2023) principles of developing both student and staff GenAI literacy (mentioned in regard to the macro level earlier).
How Do We Ensure Our Policies and Procedures Stay Relevant?
To serve the purpose of informing students and faculty, especially in the world of quickly evolving GenAI technologies, academic integrity and GenAI policies need to be visible, easy to access, and regularly updated. These policies should not be hidden from students within long documents or on pages that their attention is not drawn to (Pecorari, Reference Pecorari, Belcher and Hirvela2001; Stoesz et al., Reference Stoesz, Eaton, Miron and Thacker2019). Regular reminders are needed, and the guidance and training for students should also be taken by staff, so everyone is informed of what students are being advised to do. Policy and procedures should be regularly reviewed to ensure that students do not get caught by policies that no longer apply. For example, if a pre-GenAI policy states that students cannot use electronic tools for assessment while at the same time they are encouraged to use GenAI, the policy causes confusion for both faculty and students. Policies can be updated with simple references to GenAI. At the University of California, San Diego, for example, the pre-GenAI policy said that handing in work done by someone else was a form of dishonesty; the post-GenAI policy says that handing in work done by someone or something else (i.e., artificial intelligence) is a form of dishonesty. Regardless of whether GenAI misuse is woven into existing definitions of integrity violations or listed as a separate category, misuse of GenAI tools needs to be clearly defined, monitored, updated, and communicated often at both the institutional and course levels.
Micro Level
Even when there is an institutional set of guidelines and/or policies, the use of GenAI and the range of relevant GenAI tools vary enormously according to subject areas, and so guidance at the micro level (e.g., school, departmental, programme, or class) is clearly important to ensure relevancy. For example, guidance for business students might highlight appropriate GenAI use in tools for marketing or financial analysis, whereas guidance for computer science students would focus on the appropriate integration of GenAI in programming and coding, and policies for architecture students would consider GenAI for design and visual representation. It is also at this micro level where we are most likely to find some policies that prohibit use of GenAI because what is defined as use and misuse should be connected to the learning outcomes for that discipline or course. For example, it might be okay to use Google Translate in general, but it would be misuse if, in a French language class, a student wrote their essay in English and had it machine-translated into French. So, let’s explore some common questions for developing guidelines and policies at the micro level.
Should We Mandate Different Policies at Different Levels of Study?
It does seem reasonable to scaffold in responsible GenAI use. In an introductory physics, chemistry, or math class, we may want to prohibit use by relying on secure assessments because students need to develop some foundational knowledge before they can effectively input useful prompts and evaluate outputs for accuracy and bias. On the other hand, the higher a student progresses, the more responsibility they have for authenticity, originality, and research rigour and so GenAI misuse at the doctoral level is likely to be a greater concern. To use King’s College London (2025) as an example: doctoral students may use GenAI tools in their thesis writing process, but should be cautious in their use, paying particular attention to data protection and confidentiality as part of research integrity, declare all use, and take responsibility for the output they incorporate (i.e., the machine’s errors are actually my errors). Students may use GenAI as an assistive tool to correct language, rewrite, paraphrase, or translate parts of the thesis, but not to correct content or generate a literature review, as this would compromise their originality, contribution to the literature, and their ability to discuss and defend their own thesis (King’s College London, 2025). The guidance evidently aims to separate ethical language correction/improvement from unethical content correction/improvement in GenAI use, which is clearly an important distinction, but can be hard to fully apply in practice and requires more nuance according to the type and purpose of writing required. It has been recognised for some years that using tools for paraphrasing or translating can result in a lack of human authorship and significant concerns for academic integrity (Groves & Mundt, Reference Groves and Mundt2021; Roe & Perkins, Reference Roe and Perkins2022); thus, policy guidance about use of GenAI at each level needs to be accompanied by ongoing monitoring and discussions with students.
What Tools or Frameworks Exist to Help Me Develop My Course-Level Policy on AI Use?
The AI Assessment Scale (AIAS) by researchers in Vietnam, Australia, and Singapore (Perkins, Furze & Roe, Reference Perkins, Furze and Roe2024) has been a widely adopted tool. The AIAS presents five levels of GenAI use to be employed according to the learning outcomes in assessment: no GenAI, GenAI-assisted idea generation, GenAI-assisted editing, GenAI task completion with human evaluation, and full GenAI. The model is being widely used with adaptations to different global and educational contexts using traffic lights, Bloom’s taxonomy, competency scales, and other formats (Furze, Reference Furze2024). We would caution all instructors, though, that the ‘no GenAI’ option can only be applied in a particular assessment or course where the assessments are secure. We also recommend that if any GenAI use is allowed, it is coupled with a disclosure, so the use is transparent and enables the instructor to have conversations with the students if their use goes awry.
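For departments that want to operationalise the AIAS, here is a minimal sketch of how the five levels might be encoded in a machine-checkable course policy. The level names follow Perkins, Furze, and Roe (Reference Perkins, Furze and Roe2024); the field names, validation rules, and example assessment are our own hypothetical additions, mirroring the two cautions above.

```python
# A minimal sketch encoding the five AIAS levels as a course-policy data
# structure. Level names follow Perkins, Furze & Roe (2024); the fields and
# validation rules are hypothetical illustrations of the cautions above.
from dataclasses import dataclass
from enum import Enum

class AIASLevel(Enum):
    NO_GENAI = 1
    IDEA_GENERATION = 2
    AI_ASSISTED_EDITING = 3
    AI_TASK_COMPLETION_WITH_HUMAN_EVALUATION = 4
    FULL_GENAI = 5

@dataclass
class AssessmentPolicy:
    assessment_name: str
    level: AIASLevel
    secure: bool                 # is the assessment supervised?
    disclosure_required: bool    # must students declare their GenAI use?

    def warnings(self) -> list[str]:
        """Flag internally inconsistent policies."""
        issues = []
        # 'No GenAI' is only enforceable when the assessment is secure.
        if self.level is AIASLevel.NO_GENAI and not self.secure:
            issues.append("'No GenAI' declared for an unsupervised assessment.")
        # Any permitted use should be coupled with a disclosure requirement.
        if self.level is not AIASLevel.NO_GENAI and not self.disclosure_required:
            issues.append("GenAI permitted without requiring disclosure.")
        return issues

policy = AssessmentPolicy("Take-home essay", AIASLevel.NO_GENAI,
                          secure=False, disclosure_required=False)
print(policy.warnings())  # -> ["'No GenAI' declared for an unsupervised assessment."]
```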
Key Takeaways
Guidelines, policies, and procedures are necessary to provide waypoints for people in uncertain times. They can help clarify institutional goals and intentions, as well as codify expectations for decision-making and acting at all levels: macro, meso, and micro. National quality assurance and accreditation bodies, as well as other specialised educational organisations with a national remit and responsibility, can offer universal perspectives. Institutions can then apply that universal advice to their specific contexts in a more agile way, focusing on developing frameworks for policy development, updating outdated policies and procedures, including student voice in the development process, and taking a stance on the use of AI detection tools. Then, once there is an institutional framework, guidance, policies, and procedures can be designed at a programme or departmental level for different levels of study, including the considerations of AI use for high-stakes doctoral study and the important differences between disciplines.
6 Practical Questions for Now: Rethinking Learning and Assessment
‘AI has exposed a long-overdue need to change the way we assess.’Footnote 19
‘Some cheating may even be unintentional if you don’t adequately teach kids effective ways to use AI. This is why incorporating it into instruction and teaching AI literacy is so necessary.’Footnote 20
‘AI’s role as a forcing function to spark efforts to make learning more valuable to students may be the most impactful (even if unintended) outcomes of AI in education.’Footnote 21
‘If AI can do your assignment, your assignment is dumb.’Footnote 22
These are just a sampling of quotes from LinkedIn that paint the picture of how educators are struggling to respond to the GenAI disruption to teaching, learning, and assessment. As academic integrity practitioners, researchers, and authors with more than fifty years of combined experience, we are thrilled at the increased exposure that academic integrity has received since ChatGPT was released in November 2022. Academic integrity as a topic worthy of research and thought comes in and out of vogue in higher education, usually in response to disruptors (as we saw in Section 2). In response to the GenAI disruptor, teachers, instructional designers, teaching and learning consultants, administrators, and others have all expressed their thoughts and ideas on three main academic integrity questions: (1) how do we stop students from using GenAI to cheat, (2) how do (and should) we detect when GenAI has been used to cheat, and (3) how do we protect degree integrity when the majority of our current ways of assessing learning can be faked with GenAI? The astute reader will notice that all three of these questions centre on one key responsibility of higher education institutions – to certify that a person has the knowledge and skills expected. If a student fakes their mastery in any manner, including by using GenAI to pretend they have one skill (e.g., ability to communicate in French) while actually demonstrating a different skill (e.g., ability to use GenAI to translate their English into French), then the assessment process loses validity and the certification process becomes unreliable.
So, degree integrity – before and after GenAI – is about learning and assessment, in particular the facilitation of learning and the valid and reliable assessment of the learning achieved. Tricia first explained this back in 2008 when, in her book Academic Integrity in the Twenty-First Century: A Teaching and Learning Imperative, she made a statement that could have been considered controversial at the time: academic integrity is not a student conduct issue, it’s a teaching and learning issue. Now, that statement seems self-evident.
Yet, in our imagination of academic integrity in 2045, we predicted that instructors and institutions will try to retain their twentieth-century teaching, learning, and assessment methods while layering AI technology onto them, like purchasing writing tracking software to keep the five-paragraph essay as a learning artefact or using online proctoring platforms to retain the convenience of remote, computer-based exams. Why? It might be because we believe that the twentieth-century way is the right way, it’s just easier, or because we will choose values like access or profit over learning and integrity.
However, what is really needed is not a tweaking around the edges of pedagogy and assessment, but a massive and substantial change. If we don’t do this, higher education may simply become a certification machine where students learn elsewhere and only come to us for assurance of their learning. We need to rethink what we teach, how we teach it, and what and how we assess. To do this, we’ll first zoom in on an individual instructor’s choices in their classroom (micro level). Then, we’ll zoom out a level to the meso level – the programme or department in which the course is situated. And finally, we’ll address the macro level – what institutions need to be doing to support and encourage new ways of teaching and assessing.
Micro
Before we can get to the specific practical questions for instructors, there are some fundamental micro questions to address. First, what changes are within my sphere of agency? For example, not all instructors can rethink course learning objectives because they are teaching a section of a coordinated course or the objectives are dictated by the programme/department (meso level) or even by professional bodies (like in engineering).
Once the sphere of agency is determined, an instructor needs to ask: What are the positive and negative impacts of GenAI on my course? Answering this, of course, requires play: playing around with course activities and assessments in the GenAI ecosystem. Can the tools produce output that would pass your rubric? Could a student completely offload the learning activity or assessment, to the extent that you can no longer validate their learning?
If there are assessments that can be faked by GenAI, then instructors need to ask: Ought they to be changed? Just because GenAI can do the assessment doesn’t mean the medium should be changed. For example, in a class intended to teach students how to write, one wouldn’t scrap the writing activity just because an LLM can do it. Instead, we need to figure out how to maintain the validity of the assessment in the age of GenAI, which may mean making the assessment more secure, preferably without sacrificing the learning goals (as John Robinson (Reference Robinson2025) did with his multi-day in-class essay writing assignment).
Once these three foundational questions (what do I have agency to change, is change necessary, and ought the change be made?) are asked and answered, and it’s determined that change is required, several practical, on-the-ground questions naturally arise.
Should I Allow Students to Use GenAI or Not?
Phillip Dawson would argue that before worrying about allowing or not allowing GenAI, instructors should first ensure that they have a valid assessment, which means evaluating whether we are ‘assessing what we mean to assess’. Or, more bluntly, as Phill says: ‘If an assessment is no good at being an assessment, don’t worry about AI. Fix the damn assessment.’
Once you’re assured that you have the right assessment, each instructor needs to determine whether the use of LLMs hinders or amplifies a student’s mastery of the learning outcomes, and whether that mastery can be measured. Bridgeman, Liu, and Weeks (Reference Bridgeman, Liu and Weeks2024) suggest that to make this decision, instructors should distinguish between summative assessments (assessments of learning) and formative assessments (assessments for and as learning); the former are more likely to need to be secure, with the use of aids known (even if that aid is GenAI), whereas the latter are unsupervised and therefore vulnerable to GenAI misuse. Even when we decide that using GenAI is useful on some assessments, we may want to design ‘hurdle assessments’ to ensure that a student has knowledge and skills even without the assistance of an LLM (Bridgeman, Liu & Weeks, Reference Bridgeman, Liu and Weeks2024). This strategy creates an essential safeguard against students graduating based on their ability to offload to GenAI, rather than their ability to add value to a machine-augmented human world.
What Kind of GenAI Use Supports Learning?
Ultimately, we want to ensure that even in a GenAI world, students continue to develop their crucial critical thinking abilities, particularly since research has demonstrated that over-reliance on GenAI may reduce cognitive development (Zhai, Wibowo & Li, Reference Zhai, Wibowo and Li2024). This usually means teaching students how to use the tools as assistants rather than replacements, so that humans remain the authors with active knowledge of, and responsibility for, the output. In some courses, it might be okay to teach students how to brainstorm with the LLM to develop topic ideas or research strategies. In other courses, it might be acceptable for students to use it at the proofreading stage so that they can focus more on understanding and less on linguistic accuracy. In still other classes, perhaps GenAI can be used to support learning by being trained as a course tutor, providing practice quizzes, or giving timely feedback on a draft assignment. If students are taught to share and reflect on their exchanges with the bots, then they can learn about responsible and ethical use. Instructors could teach responsible use by having students evaluate GenAI outputs for bias and accuracy or compare outputs to existing published sources (Acosta-Enriquez et al., Reference Acosta-Enriquez, Arbulú Ballesteros and Arbulu Perez Vargas2024). If students are encouraged to focus their attention on their responses to GenAI outputs – fact-checking, examining bias, and assessing value – then this can be a positive learning approach. And yes, students could still do too much offloading and lie about the extent of their use, which gets us back to the ‘two-lane assessment’ approach; what matters most is that instructors differentiate when GenAI use can support learning versus when it interferes with the students’ honest demonstration of their learning.
How Do I Stop Students from Using GenAI on Assessments When It Will Undermine the Purpose?
According to Bertram Gallant and Rettinger (Reference Bertram Gallant and Rettinger2025), you can implement strategies to address the underlying reasons why students cheat. So, minimise misunderstandings by being clear about what is allowed and what’s not (e.g., by using Perkins et al.’s Artificial Intelligence Assessment Scale) and by connecting those guidelines to the learning goals. Reduce temptations by allowing for multiple drafts, second chances, and/or flexible deadlines so that students, when pressed for time or feeling unable to do the assessment, might just try their best or communicate with you rather than use unauthorised GenAI tools. And raise intrinsic motivation for learning by considering alternative grading strategies to reduce performance pressures, or even just adjust grading rubrics so that GenAI-produced work earns a failing grade.
Otherwise, as Bertram Gallant and Rettinger (Reference Bertram Gallant and Rettinger2025) also note, the only way to really stop students from using any unauthorised aids on an assessment is to secure that assessment. As Phillip Dawson (Reference Dawson2021) says, ‘assessment security begins where academic integrity ends’. What he means is that we can implement all of the strategies to make ‘cheating the exception and integrity the norm’ (Bertram Gallant, Reference Bertram Gallant2017), like raising students’ intrinsic motivation, enhancing their self-efficacy, and reducing stress and pressure. However, because some students will still cheat, assessments that determine whether a student should progress in their programme or graduate with a credential must be secure. But securing doesn’t have to look like an in-person, blue-book exam (which isn’t very secure, by the way, if those in charge aren’t implementing security measures like checking for AI-enabled glasses or student identification). Other options for raising security on assessments include oral assessments and flipped classrooms. Oral assessments do not have to be as complicated as you might think. They could mean taking time to talk with students instead of marking written exams; orally following up with students for about 15 minutes after written assessments; or implementing debates, simulation activities, and interviews (see, for example, Ward et al., Reference Ward, O’Riordan and Logan-Fleming2023). In flipped classrooms, the majority of assessments are completed in the presence of others, so there is accountability and transparency in the methods of their completion, making them simply more challenging to fake.
How Do I Know When Students Have Breached Academic Integrity by Misusing GenAI?
This is the question of the day. Before it can be answered, you need to define what ‘misuse’ is. For one professor, using GenAI might be permitted or required, and for another, the same use might be labelled as ‘misuse’. Is it misuse if students use GenAI at all, or only if they use it without citation or acknowledgement? Is it only misuse if it’s used to author rather than to assist?
If you determine that there are ‘misuses’ that will undermine the purpose of the assessment, then you must secure it. It’s a waste of your time to ban GenAI on an unsupervised assessment because you’ll only then have to spend time trying to detect who misused it. If for some reason you cannot secure the assessment, then we recommend that you allow GenAI use but ask students to complete a GenAI Disclosure. If they disclose that they didn’t use it, or they used it in a way that sounds dubious to you, then use it as an opportunity to validate learning rather than to detect cheating. What do we mean by that? It means talking with the student to see whether or not they learned.
After all, your instinct could be wrong. The very things that ‘scream’ AI use – ‘it’s vague, doesn’t address course readings, is a summary instead of an argument and lacks sources’ – are the same things that occur in a paper badly constructed by a human. So, talk to your student about their process of completing the assessment, view their document version history to explore that process, ask them about the choices they made, and give them a chance to show you that they learned. If a student can talk through the assessment with you, then that might assuage your concerns. And, as we discussed already, be cautious: don’t rely on AI detectors alone, which can lead to unnecessary accusations and unfairness (Bassett, Reference Bassett2025). Only use a detector if one is provided to you by your institution, and if all of the other data points suggest that the assignment was human-generated even though the detector says ‘AI generated’, believe the human evidence.
Meso Level
The responsibility to reconsider learning and assessments in the age of GenAI should not solely lie with individual course instructors. Why? No course lives on an island. Every course is intended to fit with other courses, meet common learning outcomes, or contribute to a larger picture. If we’re going to truly reconsider learning and assessments, we need to reconsider them at the programme/department level.
Are Our Courses and Their Associated Learning Outcomes Still Relevant in the Age of GenAI?
There may be some courses that are now irrelevant because they teach outdated skills or focus on memorising content (that doesn’t need to be memorised). Or, maybe there are some courses that just need some reworking, with a change to their learning outcomes or assessment methods. While individual instructors can do this check themselves (as already advised), we believe that this should be done as a collaborative exercise. Of course, there will be disagreements and perhaps even arguments over whose course is ‘important’ or ‘irreplaceable’, so this work is not easy, but it must be done. The conversation should begin with the foundation – the programme’s or department’s learning outcomes. Do they need to be updated at all? Then, we can ask which courses and assessments help students master those learning outcomes and which need to be reimagined or archived.
Are Our Courses Meaningfully Connected?
To really revise learning and assessing in the age of GenAI, we need to look at learning as a continual process, not anchored to a particular term or course. Too often we’ve heard instructors suspect GenAI use because the students used a method or concept not taught in that particular class, only to find out that the student was applying their learning from another class. We want students to do this because the ability to apply existing knowledge to new situations, or adapt one skill for use in a new context, is the hallmark of lifelong learning. Programmes and departments should be looking at all of their courses to ensure that there is some connecting thread or purpose between them, especially those required for a major. Or, at the very least, they should examine these courses and be transparent about what is taught in them so that every instructor has a sense of the bigger picture in which their course fits.
Do Our Courses Deliver on the Promise of Developing Durable Human Skills?
One of the healthy responses to GenAI has been a re-evaluation of the importance of ‘soft skills’, the durable human skills graduates need for the future. For far too long, colleges and universities have relied on the ‘hidden curriculum’ for such skill development. Our students develop interpersonal and collaboration skills because we make them work in groups. Our students develop communication skills because we have them produce written and oral outputs like research papers or presentations. Our students hone their problem-solving skills by simply navigating our complex bureaucracy! Instead, we need to be more intentional and strategic to ensure that, throughout the degree, the student is progressing in these skills and we can validate that progress. After all, it is likely the strength of these skills that will enable our human students to bring added value to an AI world.
What Are the Benchmark Moments Where We Can and Should Administer Secure Assessments?
The answer to this question will differ by discipline and learning outcomes. However, we can provide some general guidance thanks to Bridgeman, Liu, and Weeks (Reference Bridgeman, Liu and Weeks2024), who suggest the following:
incorporate a capstone unit that assesses programmatic learning outcomes;
use common formative assessment tasks across multiple courses;
use a common summative assessment across multiple units of study at third year; and,
at milestones, incorporate a useful hurdle assessment to confirm progression to the next level of the programme, suggest remediation, or validate achievement.
Macro Level
Institutions and their governing and accrediting bodies also shape how teaching, learning, and assessment occur within colleges and universities. So, what are the practical questions facing those at the macro level that should be answered sooner rather than later?
How Do We Measure Student Learning?
Most higher education institutions continue to be structured around a credit-hour based system in which courses are scheduled to begin and end on specific dates, with required measurements of how many hours a student spends ‘learning’. In synchronous classes (whether physical or virtual), the hours spent learning are usually calculated by time in class as well as time spent on assignments/assessments. In asynchronous classes, ‘learning’ is measured by the number of hours of ‘direct faculty instruction’ (think recorded video lectures) and time spent on assignments/assessments. This, of course, is an archaeological artefact of the agricultural and industrial ages, and although it may have made sense at the time, it no longer makes sense in the twenty-first century when: information and knowledge can be accessed 24/7; there is more diversity in the schedule of a ‘day’s work’; and people can remain connected to an online learning environment even as they move quickly between physical spaces (Thelen, Reference Thelen2022). So, fundamentally, institutions of higher education need to move away from measuring learning by time in seats to measuring learning by competencies. Competency-based education is not new, but it has not yet been widely adopted. We think that in an AI-augmented world, adopting it is not only critical but also possible. According to Levine and Patrick (Reference Levine and Patrick2019), a competency-based education requires seven elements:
1. ‘Students are empowered daily to make important decisions about their learning experiences, how they will create and apply knowledge, and how they will demonstrate their learning.
2. Assessment is a meaningful, positive, and empowering learning experience for students that yields timely, relevant, and actionable evidence.
3. Students receive timely, differentiated support based on their individual learning needs.
4. Students progress based on evidence of mastery, not seat time.
5. Students learn actively using different pathways and varied pacing.
6. Strategies to ensure equity for all students are embedded in the culture, structure, and pedagogy of schools and education systems.
7. Rigorous, common expectations for learning (knowledge, skills, and dispositions) are explicit, transparent, measurable, and transferable.’ (p. 3)
Moving from the current dominant model to a competency-based one will not be easy, but it’s far past time.
Is Online Learning Still Viable for Both Educational Access and Integrity?
While there are certainly benefits to online learning – primarily access – there are not many benefits to online, asynchronous assessments (whether used for in-person or online classes). To secure assessments in such a space, there is a stronger requirement to provide authentication, ensure individual responsibility for actions, and assure academic integrity in assessment (Khan et al., Reference Khan, Sivasubramaniam, Anand and Hysaj2021a). While we can deploy pedagogical and other strategies to attempt to make cheating the exception and integrity the norm (Bertram Gallant & Rettinger, Reference Bertram Gallant and Rettinger2025), higher education still has the responsibility to ensure the integrity of assessments. So, the answer to this question is nuanced. Yes, online learning is viable, but we must begin to question – and tease out – whether online, asynchronous assessment is viable for ensuring degree integrity. Newton and Draper (Reference Newton and Draper2025), among others, would argue that it’s not. Therefore, we have to figure out how to both provide access and protect integrity. One way to do that might be to establish and utilise assessment centre networks so that the majority of students live reasonably close to an in-person assessment option. Another way is with the benchmark or milestone assessments recommended by Bridgeman, Liu, and Weeks (Reference Bridgeman, Liu and Weeks2024); a student may get convenience and access through the learning portion of their experience, but when it comes time for assessment at just a few points throughout their programme (rather than in every class), it will be secure and it may have to be inconvenient. We’re always told that colleges and universities have to prepare students for the ‘real world’; what’s more ‘real world’ than having to do things that are challenging and inconvenient?
How Can We Focus on Human-to-Human Learning Experiences?
To avoid the ‘dead university’ where GenAI is used to both author and grade student work, a re-focus on human interaction is needed. After all, students do not necessarily need higher education institutions to learn; people can learn anytime, anywhere. They do need us for certification and usually for opportunities to learn with other humans, and particularly to learn in environments they cannot otherwise access (e.g., laboratories). In other words, they need us for a real (not virtual) learning playbox. So, especially in large institutions with big classes, we need to leverage GenAI to find efficiencies so that we can flip the classrooms into active, engaged learning environments. For example, perhaps we can use GenAI to deliver content and provide just-in-time tutoring so that the instructor can meet with students in smaller discussion or application sections.
How Do We Support Instructor Efforts to Maintain Academic Integrity in the Age of GenAI?
GenAI tools are designed to generate output and to automate cognitive tasks, which can be detrimental to facilitating and validating learning with integrity. Thus, institutions must provide instructors with the time, training, and support to redesign courses and assessments through summer stipends, course releases, instructional designers, or peer mentors. They can also create infrastructure to assist instructors. For example, is it a good use of instructor time to administer and proctor tests? We suggest that it isn’t. Instead, when tests are appropriate, instructors should be able to send their students to a computer-based assessment centre that is staffed by professional proctors and allows mastery-based, secure assessments (Zilles et al., Reference Zilles, West, Herman, Bretl, Lane, Zvacek and Uhomoibhi2019). Institutions can also help instructors design 24/7 learning support for students through customised course or skill-related (e.g., writing) AI tutors. The point is that institutions need to support instructors in this massive change and not expect them to rebuild the proverbial planes as they’re flying them.
Key Takeaways
The ways in which we teach, learn, and assess with integrity must change in the age of GenAI. There are changes that can be made by individual instructors at the level of the classroom (e.g., changing grading rubrics or structures, flipping classrooms, making assessments more meaningful), but this should not be left up to individual instructors alone. Foundational changes also need to occur at the programme or department level, and at the level of institutions. We need to develop in students the durable human skills that they’ll need in an AI-augmented world. We need to move away from the credit-hour system to a competency-based model of education. And we need to reconfigure our curriculum to enable students to pursue a personalised path of learning, the success of which we securely measure at key milestones so that we reduce the frequency of secure assessments while still ensuring degree integrity. These are the changes we must make if we want academic integrity, and by extension higher education, to thrive in the age of AI.
7 Practical Questions for Now: Teaching with and about GenAI with Integrity
The sudden arrival of generative artificial intelligence (GenAI) in classrooms, campuses, and coursework has prompted a critical and urgent re-examination of what it means to teach, learn, and assess with integrity. When is using AI an act of assistance, and when does it become an outsourcing of cognition? Can a student claim ownership of an AI-generated output if they did not fully comprehend or evaluate it? Moreover, institutions face a paradox – simultaneously encouraging innovation and digital literacy while upholding standards of honesty, fairness, and authenticity in academic work (UNESCO, 2024). The issue is no longer whether students are using AI. It is how, why, and to what extent they are doing so, and whether our educational systems are equipped to respond with both integrity and relevance.
This section explores how we can teach with and about AI in ways that promote ethical reasoning, meta-cognitive awareness, and personal responsibility. Drawing on emerging case studies, AI literacy frameworks, and experiences from institutions navigating these shifts, we argue that ethical preparedness must become a central pillar of education in the AI era. The role of the educator as mentor, role model, and designer of learning experiences must evolve to bridge the gap between disciplinary norms and digital realities.
Teaching with and about AI: Building a Culture of Ethical Use
If we are to foster a future where AI is used responsibly, we must integrate AI into learning without compromising student development and explicitly teach ethical reasoning in relation to AI tools. The European Network for Academic Integrity (ENAI) offers guidance on implementing these dimensions in practice, emphasising transparent acknowledgement of AI use, clear rules on acceptable applications, human accountability for outputs, awareness of bias, and education for both students and staff on ethical AI practices (Foltynek et al., Reference Foltynek, Bjelobaba and Glendinning2023). This dual approach – teaching with AI in ways that preserve integrity and teaching about AI to embed ethics across disciplines – forms the backbone of an educational response to generative technologies.
How Do We Teach with AI with Integrity?
The integration of GenAI tools into coursework presents a paradox: it can either erode student learning or amplify it. For educators, the question is not simply ‘Is this cheating?’ but rather ‘How do we ensure meaningful learning with integrity?’ This may sometimes mean banning GenAI and securing assessments if the learning outcomes would be hindered by GenAI use. When it can be used, we should cultivate ethical use by designing assessments that treat GenAI as a thinking partner rather than a shortcut, and explicitly require transparency, reflection, and explanation in and of that use. This means that instead of simply submitting a final product, the student would reflect on how they approached the task, why they made certain decisions, and what role AI played in shaping their process. In this case, AI becomes a cognitive scaffold, not a crutch for thought: a prompt for deeper reflection that also keeps the human at the centre of the learning experience. For instance, rather than only submitting final code, students in a third-year computer science course might be asked to include a process log or reflective statement outlining how they used AI tools, what prompts they issued, and how they validated AI outputs. This not only demystifies the black box of AI-assisted work but also fosters integrity and accountability. Such approaches have already shown promise in recent pilot programmes that embed AI literacy within assessment strategies (Eaton, Reference Eaton2024; Nikolic et al., Reference Nikolic, Daniel and Haque2023; Perkins, Furze & Roe, Reference Perkins, Furze and Roe2024).
To support this shift, Khan (Reference Khan and Eaton2024a) proposes a practical five-part decision-making framework to guide when and how to integrate generative AI into teaching, learning, and assessment:
1. Student Maturity Level: Teaching with AI must be aligned with the learner’s cognitive development, using tools progressively based on students’ ability to reflect and self-regulate.
2. Learning Outcomes and Workplace Readiness: AI should support – not substitute – the acquisition of disciplinary knowledge and future-focused competencies.
3. Assessment Design and Cognitive Offloading: Educators must ask whether AI use enhances or inhibits the development of critical thinking and problem-solving skills.
4. Accessibility and Equity: The framework cautions against reinforcing inequalities by assuming all students have equal access to AI tools or know how to use them responsibly.
5. Ethical Guardrails: Usage must be framed within discussions of bias, misinformation, authorship, and the value of human judgement.
Embedding this framework into curriculum design enables educators to remain agile and intentional about the responsible use of GenAI in education, which must go beyond technical skills to include digital ethics, epistemic awareness, and agency. Students must learn not only how to prompt, but when to pause, recognising when GenAI aids learning and when it obscures it. Educators, meanwhile, must model the same reflective practice, acknowledging GenAI’s affordances and limits while preserving human-centred pedagogy.
As generative technologies evolve, institutions will need to align GenAI use with broader educational aims: not just to streamline outputs, but to nurture inquiry, resilience, and lifelong learning. If done thoughtfully, teaching with GenAI will not undermine academic integrity but reinforce it, anchoring it in the very processes of critical thinking and self-awareness that define meaningful education.
How Do We Teach about GenAI with Integrity?
The ethical implications of GenAI are far too significant to be confined to computer science or philosophy departments. Students across disciplines such as business, education, healthcare, and the arts are already engaging with GenAI tools, often without fully grasping their ethical, legal, or social implications. As GenAI becomes more embedded in educational practice and professional workflows, it is critical to embed structured ethics education across curricula.
This is not just about raising awareness of GenAI’s potential biases or limitations; it is about equipping students with the cognitive tools to interrogate its design, use, and impact. UNESCO’s AI Competency Framework for Teachers underscores the importance of preparing educators and learners with a human-centred, ethics-driven mindset. It calls for embedding five key dimensions that include AI ethics and human agency into teaching strategies to support professional growth and lifelong learning (Miao & Mutlu, Reference Miao and Mutlu2024).
Developing this mindset should be part of cultivating the broader, sustained human skill of ethical reasoning. For example, students could be introduced in their first year to a structured ethical decision-making model (Christensen Hughes & Bertram Gallant, Reference Christensen-Hughes, Bertram Gallant and Bretag2016) and then be guided to apply this model across different courses and disciplines. Such an approach ensures that ethical reasoning becomes a consistent and reinforced practice, rather than an isolated topic. Educators can begin this process by posing questions such as:
What assumptions or biases might this GenAI tool carry?
Who benefits and who is excluded from its design or outputs?
If I use this tool to generate content, who owns the intellectual property?
Does using GenAI enhance or diminish my understanding of the topic?
Does using GenAI misrepresent my knowledge and abilities?
Such questions turn ethical reasoning into a lived practice, encouraging students to explore GenAI not only as a tool but as a sociotechnical system shaped by human values. Empirical research from cyber ethics courses has shown that sustained engagement with ethical theories and real-world dilemmas can significantly shift student perceptions and deepen their understanding of responsibility, fairness, and transparency in GenAI use (Khan, Venugopal & Oroumchian, Reference Khan, Venugopal, Oroumchian, Arabnia, Deligiannidis, Tinetti and Tran2021b).
Moreover, stand-alone ethics modules, when strategically infused into the core of technology and applied sciences programmes, can foster moral agency, improve decision-making, and help students internalise digital responsibility (Khan, Reference Khan2017; Melo & de Sousa, Reference Melo and de Sousa2017). These courses are not just philosophical add-ons but foundational to the development of responsible GenAI practitioners.
Ultimately, as the UNESCO Global Education Agenda 2030 urges, the future of ethical AI education must be inclusive, rights-based, and aligned with sustainable development. Teaching about AI ethics is no longer optional; it is a pedagogical imperative to prepare future developers, designers, and decision-makers for an AI-driven world. These ethical considerations form the philosophical backbone of AI education. Yet to truly empower students as competent and critical users, this ethical grounding must be paired with applied AI literacy – an area explored in the next sections.
The Shift towards Personal Responsibility and Ethical Growth
Traditional models of academic integrity have primarily relied on institutional rules and punitive mechanisms. However, in the age of ubiquitous GenAI, policing every instance of misconduct is no longer feasible. Nor is it pedagogically sound. Instead, a shift is required from an overreliance on external enforcement to a balance between implementing external guardrails (to help people do the right thing) and developing internalised responsibility (so that people might choose the right thing even when the guardrails aren’t there). Ethical engagement must be cultivated as an intrinsic part of the learner’s academic identity, grounded in moral reasoning and reflective practice.
As students increasingly interact with GenAI tools to generate, summarise, or refine content, the concept of personal moral responsibility becomes central to fostering ethical academic behaviour. Viewed through ethical frameworks such as deontology, virtue ethics, and consequentialism, this responsibility is not only about following rules but also about understanding the why behind ethical action. According to deontological ethics (Kant), students have a duty to act ethically regardless of outcome, meaning their use of GenAI should stem from a personal obligation to uphold integrity, not merely a fear of getting caught (Kant, Reference Kant and Timmermann2012). Virtue ethics, as proposed by Aristotle, emphasises cultivating character traits like honesty and responsibility (Aristotle, Reference Brown2009). Consequentialist thinking reminds students to consider the broader impact of their actions, including how GenAI misuse could distort their future learning and harm others (Mill, Reference Mill and Sher2002).
How Do We Shift from Policing to Ethical Empowerment?
Integrity in the age of GenAI must move beyond a reliance on detection and punishment. Integrity is increasingly recognised as a disposition – an ethical commitment to truth, effort, and autonomy. This paradigm shift urges educators to design learning experiences that develop ethical agency and moral character rather than merely enforce compliance.
Reflective practice offers one such pathway. Drawing on evidence from language and teacher education, guided journaling has been shown to build rapport, promote engagement, and encourage honest self-expression, particularly when students are invited to share personal experiences and reflect on their learning processes (Chan & Aubrey, Reference Chan and Aubrey2021; Khan & Venugopal, Reference Khan and Venugopal2019). Inspired by these successes, similar approaches are now being adapted in GenAI-integrated classrooms.
One such adaptation involves the use of ‘Decision Journals’, where students can document their choices across the research and creation process. Typical prompts could include:
When were you tempted to over-rely on GenAI?
How did you verify the accuracy of GenAI-generated content?
What trade-offs did you consider between efficiency and comprehension?
In tandem, institutions are exploring assessment redesigns that encourage transparency rather than prohibition. For instance, a growing number of universities now ask students to submit not only final outputs but also process reports (Black & Tomlinson, Reference Black and Tomlinson2025; Perkins, Furze & Roe, Reference Perkins, Furze and Roe2024) outlining:
When and why they used GenAI tools;
How they evaluated the limitations or biases in the output;
What personal contributions they made in editing or revising the GenAI-generated content.
This method is proving effective in reinforcing student ownership over their work while reducing instances of misconduct such as contract cheating. Early feedback suggests that students feel more confident distinguishing between GenAI assistance and GenAI replacement, and are more engaged in ethical dialogue with instructors.
How Do We Develop Students’ Meta-cognition?
In the years ahead, the ability to reflect on one’s own thinking will be just as essential in education as mastering disciplinary content. In GenAI-rich educational settings, meta-cognition serves as a critical buffer against automation-induced detachment from the learning process. Without it, students risk accepting generative content uncritically, eroding not only academic integrity but also their development as independent thinkers. And this is not just critical for learners. In the world outside academe, the meta-cognitive skills to ‘monitor, regulate, and control one’s own cognitive processes’ will be critical to collaborating effectively with GenAI tools (Sidra & Mason, Reference Sidra and Mason2024). So, meta-cognition is both a necessary skill for students and one of those durable human skills helpful in an AI-augmented world.
To develop this meta-cognition, we can borrow from consequentialist perspectives, such as those of John Stuart Mill, that call on learners to reflect on the broader impacts of their actions. Educators are thus being encouraged to embed structured opportunities for this reflection into coursework. Emerging strategies may include:
assignments that require justification of GenAI use;
journaling that explores how GenAI output was verified or challenged; and,
class discussions where students analyse the ethical implications of GenAI-generated responses in peer work.
Several institutions have incorporated meta-cognitive reflective elements directly into assessment rubrics, transforming ethical reasoning from an abstract discussion into an evaluated learning outcome (An, Yu & James, Reference An, Yu and James2025; Combrinck & Loubser, Reference Combrinck and Loubser2025). The result: students develop not only skills but also judgement and are more likely to engage critically and ethically with GenAI as it continues to evolve.
As GenAI becomes increasingly embedded in all aspects of academia, the educator’s role will be to model, scaffold, and assess responsible use, not to outlaw it. The shift from surveillance to empowerment is already underway (Dawson, Reference Dawson and Eaton2024; Feng et al., Reference Feng, Luxton-Reilly, Wünsche and Denny2025; McKenzie, Reference McKenzie2021). This shift does not mean abandoning traditional approaches such as clear institutional rules, close observation for the certification of knowledge and abilities, and responses to integrity breaches when they occur, because these remain essential for upholding academic integrity. Rather, it reflects a growing recognition, supported by the UNESCO AI Competency Framework for Teachers (Miao & Mutlu, Reference Miao and Mutlu2024) and literature such as Macfarlane, Zhang, and Pun (Reference Macfarlane, Zhang and Pun2012), Christensen Hughes and Bertram Gallant (Reference Christensen-Hughes, Bertram Gallant and Bretag2016), Combrinck and Loubser (Reference Combrinck and Loubser2025), and others, that such measures are most effective when balanced with strategies fostering student agency, reflective capacity, and ethical literacy.
AI Literacy as a Core Competency
While ethical awareness provides the foundation for responsible GenAI engagement, students also need the practical fluency to navigate GenAI tools thoughtfully and effectively. AI literacy goes beyond technical skills; it encompasses the ability to evaluate outputs, question design assumptions, and align tool use with disciplinary norms and ethical principles. As GenAI becomes embedded in learning, working, and creative processes, cultivating AI literacy must become a central objective of higher education.
In an academic future shaped by generative technologies, AI literacy should be treated as a foundational skill on a par with digital literacy or academic writing. It is not merely about knowing how to operate tools like DeepSeek, ChatGPT, or GitHub Copilot, but about developing the critical capacity to assess their outputs, challenge their assumptions, and integrate their use responsibly within disciplinary and ethical boundaries. Graduates who lack AI literacy will fall behind not because they cannot use AI, but because they cannot judge when, how, and if they should.
What Is AI Literacy?
AI literacy encompasses a broad set of interdisciplinary competencies that go beyond technical familiarity. It includes:
understanding how AI models function, including the role of training data, pattern recognition, and algorithmic limitations;
recognising the biases, omissions, and hallucinations embedded in AI-generated content;
evaluating AI outputs for accuracy, credibility, and disciplinary alignment;
knowing when AI use enhances learning – and when it may undermine it; and,
navigating issues of authorship, transparency, and epistemic responsibility in human-AI collaboration.
Students must be able to see AI not as an oracle but as a system shaped by human values, economic incentives, and cultural biases. Empowering students with this understanding fosters agency and autonomy in how they approach their academic and professional work.
How Can We Teach AI Literacy?
AI literacy cannot be confined to computer science or data analytics courses. Its implications stretch across the curriculum – into media studies, law, education, engineering, and beyond. A communications student using AI to generate headlines must understand media bias and political framing. A nursing student reviewing AI-assisted diagnostics must critically assess patient safety and embedded medical bias in the systems to ensure human accountability for determining diagnoses and treatments. A design student working with GenAI tools must evaluate intellectual property rights and creative integrity.
Forward-looking institutions are embedding discipline-specific AI literacy modules into existing coursework (Ngo & Hastie, Reference Ngo and Hastie2025). These could include:
simulated decision-making tasks involving AI-generated content;
assignments that require the student to critically compare their writing to the writing generated by the machine; and
collaborative exercises where students role-play as ethical reviewers of AI outputs.
Such integrations help students ask not just ‘Can I use AI here?’ but ‘Should I?’ – and ‘What are the academic and ethical implications if I do?’
As students progress, more capabilities of AI can be introduced in a scaffolded way, to help them learn without cognitive overload (Long & Magerko, Reference Long and Magerko2020). Students could be challenged to engage with AI through disciplinary and ethical frameworks; for example, applying professional codes of conduct, evaluating AI reliability in subject-specific contexts, or critiquing the role of generative tools in research and communication.
By their final year, students should be capable of defending and justifying their AI use in capstones, portfolios, or thesis projects, demonstrating not only technical fluency but also an internalised sense of academic and professional integrity.
This progression confirms a key message: AI can coexist with academic integrity when its use is grounded in reflective purpose, ethical alignment, and disciplinary growth. The role of educators and institutions is to support that transformation, not by removing the tools, but by building the intellectual and moral scaffolds that help students use them wisely (Long & Magerko, Reference Long and Magerko2020).
By treating AI literacy as a core graduate attribute, universities can ensure students are not only tool users but informed navigators of an evolving digital landscape, prepared to lead with integrity in fields where AI will continue to reshape norms and expectations.
The Educator’s Role: Modelling, Mentoring, Mediating
In a GenAI-enabled academic landscape, educators are not just content experts; they are architects of ethical culture, facilitators of reflective practice, and translators between disciplinary norms and digital tools. As generative technologies reshape how knowledge is produced, consumed, and evaluated, educators must evolve from knowledge transmitters to ethical role models, informed mentors, and reflexive mediators who guide students towards responsible engagement with AI. This includes actively communicating with students and engaging them as partners in defining shared expectations, for instance, by co-creating a statement of values that reflects both institutional principles and the lived experiences of learners.
How Might Educators and Staff Model Ethical Use for Students?
Integrity must be visible to be internalised. Educators have a pivotal role in modelling ethical AI use – not only by setting rules but also by demonstrating reflective, responsible behaviour in their own practice (Khan, Reference Khan2024b). When educators openly disclose how they use AI (e.g., to summarise academic texts, scaffold quiz questions, or brainstorm lesson ideas), and also articulate its ethical and technical limitations (such as copyright violations, labour exploitation, amplification of human biases, factual inaccuracies, and misalignment), they model transparency, critical evaluation, ethical navigation, and humility.
For example, institutions could design live demonstration panels where educators compare AI-graded and human-graded student responses to the same prompt. These sessions can help surface how over-reliance on AI might distort fairness, miss contextual nuance, or reinforce hidden biases. By critically dissecting both outputs in real time, educators model not only transparency and evaluative thinking but also their role as ethical co-learners in a shared digital ecosystem. This kind of public reflection helps students understand that integrity isn’t static; rather, it’s continually negotiated in evolving learning environments.
Importantly, educator modelling reinforces a broader culture of integrity within institutions. When teachers embody ethical principles in their own workflows, they signal that academic integrity is not simply a student obligation, but a shared community value.
How Do We Get All Faculty and Staff Skilled Up in the AI Age?
A critical tension in the AI transition is the fluency gap between students and staff. Many students engage rapidly and experimentally with new tools but may lack AI literacy and ethical grounding. Educators, meanwhile, may hesitate due to unfamiliarity, ethical concerns, or lack of training, inadvertently leaving students without needed guidance.
Addressing this divide requires future-resilient professional development. AI literacy training must be embedded in faculty development frameworks, not as an optional add-on but as a strategic imperative. Workshops, peer mentoring, and scenario-based training can support staff in confidently resisting GenAI or incorporating it into pedagogy, while remaining attuned to risks and ethical boundaries.
Equally powerful are cross-role collaborations, where students and educators co-design assignments, AI-integrated rubrics, or ethical use guidelines. These partnerships foster mutual understanding, flatten hierarchies, and create space for shared innovation rooted in academic values.
Educators must also play an active role in shaping institutional AI governance, ensuring that the deployment of AI tools is aligned with pedagogical priorities, human rights, and academic standards. If faculty are left out of these conversations, AI adoption risks becoming purely administrative, rather than pedagogically meaningful and ethically grounded.
As we move deeper into the AI era, the educator’s role will not diminish; rather, it will become more complex and more critical. To uphold academic integrity, educators must lead with integrity.
Key Takeaways
Artificial intelligence is not inherently good or corrupting. It is a mirror, reflecting the values, intentions, and decisions of its creators and its users. In education, this means AI will not undermine academic integrity unless we fail to design systems, policies, and cultures that support teaching, learning, and assessing with integrity. The responsibility lies with us – not only to respond, but to lead.
The real challenge is not simply to detect or deter misconduct, but to build a culture where ethical, reflective, and responsible AI use becomes second nature. This shift demands more than new rules – it requires new relationships: between students and technology, between educators and emerging tools, and between institutions and the values they choose to uphold. To make this shift, we must move from compliance to conscience, restriction to reflection, and enforcement to empowerment.
Educators are central to this transformation. By modelling transparency, designing for ethical growth, and scaffolding AI literacy across the curriculum, we enable students not just to use AI, but to understand it, question it, and mature through it. When students become co-creators of integrity, rather than passive subjects of surveillance, the promise of AI is no longer a threat, but a catalyst for deeper learning.
If we rise to this challenge, then AI will not diminish academic integrity. It will deepen it, grounding it not in fear of detection, but in the capacity for ethical reasoning, personal responsibility, and intellectual resilience that the future of education demands.
8 Final Takeaways
The concept of academic integrity predates artificial intelligence, as do its fundamental values of honesty, respect, trustworthiness, responsibility, fairness, and courage (ICAI, 2021). And it’s true that academic integrity has come under threat many times before GenAI tools like ChatGPT burst onto the scene – the printing press, distance learning, the calculator, the internet, and the contract cheating industry have all challenged our commitments to academic integrity and our ability to protect degree integrity. GenAI, though, has been the most sweeping and rapid of all disruptors. Created to reduce friction and offload human cognitive effort, these artificial systems can undermine both fundamental purposes of education: to facilitate and to certify learning. And, as GenAI systems evolve from passive assistants to predictive and generative agents, they risk further diminishing student autonomy – subtly shifting learners from active participants to passive recipients or, at best, editors of machine-curated content. Tools that suggest essay structures, predict answers, or personalise feedback may enhance productivity, but productivity is not the goal of education, and such tools blur the line between understanding and outsourcing. Academic integrity is not simply about avoiding misconduct; it requires that students retain meaningful control over their intellectual development, exercising judgement, reflection, and ethical decision-making.
This tension is likely to deepen with the advent of brain–computer interfaces and cognitive augmentation tools that could automate not just outputs but the internal processes of thinking and writing. When such technologies mediate or even co-produce knowledge, questions arise: who is learning, and who is accountable? Preserving student agency – and the power and skill of learning – in these contexts will demand more than policy, although clear communication through national, institutional, and course-level policies is vital. It will require pedagogies that emphasise critical engagement, transparency about technological use, and frameworks that teach students to collaborate ethically with machines. Academic integrity must evolve to support empowered learners who develop durable human skills and the ability to view and use these technologies not as crutches, but as tools for which they, the humans, are ultimately responsible. It seems clear from our current challenges – as well as those facing us ahead – that, fundamentally, higher education needs to figure out how to uphold its many core values (e.g., integrity, access, equity, inclusion) that sometimes appear in tension. If we don’t do that, then it will be impossible for us to tackle the academic integrity crisis that the age of AI is causing.
It’s important to remind ourselves that there is also integrity in resisting what has been introduced to us. The dominance of the general-purpose ‘synthetic media generators’ (Bender & Hanna, Reference Bender and Hanna2025) does not have to be inevitable. We, the educational marketplace, can demand specifically tailored technology that is more accurate, thoughtfully designed, and useful, and less susceptible to the cognitive offloading of learning. A commitment to academic integrity is the countervailing force against anything that undermines learning. Some might label any resistance as ‘anti-progress’ or ‘moral panic’, but such labelling diminishes the very real challenges of responding to change, the unfairness of educators having to respond to inventions created without their input and thrust upon them as inevitable, the dizzying pace of disruption, and the fact that the technology is influencing practices faster than we can research its positive and negative impacts. Is GenAI a threat to teaching, learning, and assessing with integrity? Undoubtedly. Could it also engender positive changes that will benefit all involved? Sure, it could. The point is that it’s our choice.
To respond meaningfully to these evolving conditions, and not end up in the world of Nexus University, we must finally and fully embrace academic integrity not as a student conduct or compliance issue, but as a teaching and learning issue (Bertram Gallant, Reference Bertram Gallant2008). Fundamentally, we need to be clearer in our principles and policies, develop ethical reasoning and other durable human skills in ourselves and our students, and adopt new models of teaching, learning, and assessment that see knowledge as socially constructed and mediated, developed in relationships with others (machines and humans) and evaluated in action and application. When we can offer students a meaningful, engaging, human-to-human learning environment that facilitates their exploration and growth, while guaranteeing society that we have valid measures of learning, then we can continue to exist to teach, learn, and assess with integrity in the age of AI.
Mark Warschauer
University of California, Irvine
Mark Warschauer is Distinguished Professor of Education at the University of California, Irvine, with affiliated faculty appointments in the Departments of Informatics, Language Science, and Psychological Science. He is a member of the National Academy of Education and the director of the UCI Digital Learning Lab. Professor Warschauer is one of the most influential scholars in the world on digital learning, digital literacy, and the use of AI in education. He has published 12 books on these topics, including with MIT Press, Cambridge University Press, Teachers College Press, and Yale University Press, and some 300 scientific articles and papers. His work has been cited more than 48,000 times, making him one of the most cited researchers in the world on educational technology. He previously served as founding editor of Language Learning & Technology and inaugural editor of AERA Open.
Tamara Tate
University of California, Irvine
Tamara Tate is a Project Scientist at the University of California, Irvine, and Associate Director of the Digital Learning Lab. She leads the Lab’s work on digital and online tools to support teaching and learning including generative AI, partnering with school districts, universities, non-profit organizations, media and tech developers, and others in iterative development and evaluation. As the PI of a NSF-funded grant, she is studying the use of generative AI in undergraduate writing courses. She also studies secondary student writing as a member of the IES-funded national WRITE Center. She received her B.A. in English and her Ph.D. in Education at U.C. Irvine and her J.D. at U.C. Berkeley.
Editorial Board
Stephen Aguilar, University of Southern California, US
Maha Bali, American University in Cairo, Egypt
Irene-Angelica Chounta, University of Duisburg-Essen, Germany
Shayan Doroudi, University of California, Irvine, US
María Florencia Ripani, Ceibal Foundation, Uruguay
Bart Rienties, The Open University, UK
Neil Selwyn, Monash University, Australia
Jiahong Su, The University of Hong Kong
Ulrich Trautwein, University of Tübingen, Germany
Ying Xu, Harvard University, US
About the Series
Generative AI is one of the most disruptive technologies in modern history, with the potential to dramatically transform education for better or worse. This series will address cutting-edge topics on the intersection of generative AI with educational research and practice for diverse learners from early childhood to adult.
