Large Language Models (LLMs) like OpenAI’s ChatGPT or Google’s Gemini are the new sensation in artificial intelligence (AI) research. These systems exhibit impressive conversational abilities and have even managed to convince some people of their possible sentience. But do LLMs actually speak our language, or do they merely appear as if they do? Do they really reason and think, or are they simply good at superficially imitating these abilities? In this chapter, I argue that Wilfrid Sellars’s functionalist-pragmatist approach to language and concept learning might be especially useful in the context of answering the questions above. In particular, conceiving the process of learning and mastering language as analogous to the process of learning to play a game within a set of normative social practices can shed light on the kind of abilities LLMs possess and what we can expect them to do in the future, including becoming genuine members of our linguistic community rather than mere “stochastic parrots.”
This study investigates whether and how interacting with ChatGPT may offer a context that supports perspective shifting and the development of cognitive flexibility, defined as the capacity to move between etic (outsider) and emic (insider) perspectives. Drawing on individual interviews with students enrolled in an advanced university-level Language for Specific Purposes (LSP) French course focused on marketing and advertising in France, this qualitative study examines students’ perspectives on their experiences using ChatGPT to conduct market research on French consumer needs and preferences. The analysis reveals that while students expressed concerns about the legitimacy, authenticity, and cultural positioning of AI-generated content, the interactive and conversational nature of the tool enabled some students to experiment with culturally unfamiliar roles, adopt emerging emic stances, and reflect on the limits of their interpretive frameworks. However, co-creative engagement or shared agency with ChatGPT was not automatic and depended on prompt design, tolerance for ambiguity, and the negotiation of subjective positioning. Rather than facilitating perspective transformation, ChatGPT-supported interactions appeared to foster more modest but meaningful shifts in interpretive positioning and dialectical thinking. The study points to prompt literacy as crucial for fostering more dynamic partnerships with ChatGPT and enabling students to explore alternative perspectives and roles in ways that support the development of intercultural competence in the L2 classroom.
This paper evaluates the performance of baseline and domain-augmented ChatGPT models for literature-based knowledge support in flood susceptibility mapping (FSM) using machine learning approaches. To assess this, we designed five key questions related to FSM, with benchmark responses derived from our comprehensive review article (Pourzangbar et al., Journal of Flood Risk Management, 18, e70042), which analyzed 100 studies on ML applications in FSM. The same questions were posed (i) to standard ChatGPT-4 and ChatGPT-4o models without additional contextual material, and (ii) to a domain-augmented GPT-4 configuration (Chat-FSM) equipped with retrieval access to the 100 reviewed articles. The comparison highlights that GPT-based models can reasonably reproduce frequently reported machine learning models and conditioning factors from the reviewed literature, but show weaker consistency in feature selection methods, often suggesting less relevant techniques. Among the models, ChatGPT-4o demonstrated the weakest alignment with the benchmark data, while Chat-FSM demonstrated the highest agreement with the benchmark dataset across most evaluated questions. In terms of application-level efficiency, GPT models required substantially less time and computational effort than manual literature synthesis under the defined experimental setup. While ChatGPT-based systems can support literature-informed exploration in FSM, human expertise remains essential for critical reasoning, methodological design, and application to novel or context-specific scenarios.
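The domain-augmented setup described in this abstract can be illustrated with a minimal sketch. The snippet below is not the authors' Chat-FSM implementation; it assumes the OpenAI Python client and plain-text copies of the reviewed articles, and uses a simple TF-IDF retriever as a stand-in for the retrieval access described in the study, prepending the most relevant passages to each FSM question.

```python
# Minimal sketch of a domain-augmented FSM query (illustrative only).
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_fsm(question: str, articles: list[str], top_k: int = 3) -> str:
    # Retrieve the reviewed articles most similar to the question
    # (TF-IDF as a simple proxy for the study's retrieval component).
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(articles)
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    context = "\n\n".join(articles[i] for i in scores.argsort()[::-1][:top_k])

    # Pose the question with the retrieved literature as context.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the flood susceptibility mapping "
                        "literature provided as context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```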
Using the fields of memory studies and digital humanities, this article argues that there has been a shift from more collective and social memory to more personalised and individual memory. This shift, it is argued here, can be conceptualised through the psychoanalytic concept of ‘psychosis’. While the causes of the changes in our patterns of memory have been located in capitalist and neoliberal principles, the effects of the changes in our memory habits might be found in psychosis. From falling in love with machinic AI replicas to indulging in conspiracy theories to acting as if we are social media influencers or backing ourselves to win out in impossible job markets, we are inclined towards personal fantasy, often at the expense of participating in social life. But why do we do this? Why is it easier to believe a far-fetched conspiracy theory or wild personal dream than it is to participate socially and collectively in the world we live in? Part of the reason, at least, is found in our increasing habitual reliance on new and emergent technologies. Often presented to us as a brand-new form of Artificial Intelligence, these generative tools are the latest update to a longer pattern in our digital world: the trend of developing ‘relationships’ with algorithms that, to greater or lesser degrees, we come to rely on for habits of cognition and recognition. By affecting our patterns of memory, these technologies produce a kind of isolation that lends itself to individual and fantastical – rather than shared and realist – thinking.
In Chapter 7, “Upgrades in the Age of Generative AI,” we consider the hype around generative AI tools, like ChatGPT, and explain how the razzle-dazzle has captured the public’s imagination, even as the technology hasn’t come close to being artificial general intelligence, the goal that companies like OpenAI aspire to. While tech giants race to develop generative AI products, we emphasize that these tools are currently sophisticated pattern-matching systems that simulate intelligence without truly understanding it. Analyzing both negative (political campaigns) and positive (the possibility of helping doctors communicate more empathetically over patient portals) examples, we offer recommendations for spotting uses of generative AI to avoid and for how technological upgrades can be carefully and ethically integrated into communication systems to improve human welfare.
Many writers and musicians believe they can see their own efforts in the works of others, even when no one else can – a phenomenon dubbed projective plagiarism. This psychological illusion is driven by egocentrism and a belief in one’s own uniqueness. At the other extreme are cases in which individuals have plagiarized from the works of others without damage to their reputations. In some instances, this happens because they are held in such high esteem that charges of plagiarism don’t really stick – a phenomenon dubbed Teflon plagiarism. There is also unrepentant plagiarism – writers and musicians who have seemingly appropriated the works of others across their entire careers without apology. But what drives someone to plagiarize? The various excuses offered up by plagiarists are examined, as is the question of whether appropriation correlates with particular personality characteristics. And is plagiarism even deserving of its highly negative reputation? The question of whether the productions of chatbots constitute plagiarism or ghostwriting is considered – even as litigation swirls around the possibility of infringement occurring during the training of chatbots.
The emergence of large language models, exemplified by ChatGPT, has garnered growing attention for their potential to generate feedback in second language writing, particularly automated written corrective feedback (AWCF). In this study, we examined how prompt design – a generic prompt and two domain-specific prompts (zero-shot and one-shot) enriched with comprehensive domain knowledge about written corrective feedback (WCF) – influences ChatGPT’s ability to provide AWCF. The accuracy and coverage of ChatGPT’s feedback across these three prompts were benchmarked against Grammarly, a widely used traditional automated writing evaluation (AWE) tool. We find that ChatGPT’s ability to flag language errors improved considerably with prompt sophistication, driven by the integration of domain-specific knowledge and examples. While the generic prompt resulted in substantially lower performance than Grammarly, the zero-shot prompt achieved results comparable to Grammarly’s, and the one-shot prompt surpassed it considerably in error detection. Notably, the most pronounced improvement in ChatGPT’s performance was observed in its detection of frequent error categories, including word choice or expression, direct translation, sentence structure, and pronoun use. Nonetheless, even with the most sophisticated prompt, ChatGPT still displayed certain limitations when compared to Grammarly. Our study has both theoretical and practical implications. Theoretically, it lends empirical evidence to Knoth et al.’s (2024) proposition to separate domain-specific AI literacy from generic AI literacy. Practically, it sheds light on the pedagogical application and technical development of AWE systems.
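To make the three prompt conditions concrete, the sketch below shows how a generic, a zero-shot domain-specific, and a one-shot domain-specific prompt for AWCF might be assembled. These templates are hypothetical reconstructions for illustration, not the prompts used in the study.

```python
# Illustrative prompt templates for automated written corrective feedback (AWCF).
# Hypothetical reconstructions, not the study's actual prompts.

GENERIC = "Identify and correct the language errors in the following essay:\n{essay}"

DOMAIN_KNOWLEDGE = (
    "You are an experienced L2 writing teacher providing written corrective "
    "feedback (WCF). Flag errors in categories such as word choice or "
    "expression, direct translation, sentence structure, and pronoun use, "
    "and give a brief metalinguistic explanation for each error."
)

# Zero-shot: domain knowledge only, no worked example.
ZERO_SHOT = DOMAIN_KNOWLEDGE + "\n\nEssay:\n{essay}"

# One-shot: domain knowledge plus a single worked feedback example.
ONE_SHOT = (
    DOMAIN_KNOWLEDGE
    + "\n\nExample:\nSentence: 'He suggested me to revise the draft.'"
      "\nFeedback: word choice/expression - 'suggested me to' should be "
      "'suggested that I'; 'suggest' does not take an object + infinitive."
    + "\n\nEssay:\n{essay}"
)


def build_prompt(condition: str, essay: str) -> str:
    """Return the prompt for one of the three experimental conditions."""
    templates = {"generic": GENERIC, "zero_shot": ZERO_SHOT, "one_shot": ONE_SHOT}
    return templates[condition].format(essay=essay)
```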
This chapter takes the distinctive materiality of the modern stage, the homely table, as a way to place two very different productions into conversation: Forced Entertainment’s Table Top Shakespeare and Annie Dorsen’s Prometheus Firebringer. Although these two productions might trace the arc from the residual (telling a story at a table using small household items) to the emergent (a dialogue between an AI-generated reconstruction of a lost Aeschylus play and a narrative composed of citations), they also dramatize an increasing absorption of the human into the apparatus of performance, a possibly fearsome absorption traced through Dorsen’s work, and touching on a range of other contemporary performances, including Mona Pirnot’s I Love You So Much I Could Die.
Extant work shows that generative AI models such as GPT-3.5 and GPT-4 perpetuate social stereotypes and biases. A less explored source of bias is ideology: do GPT models take ideological stances on politically sensitive topics? We develop a novel approach to identify ideological bias and show that it can originate in both the training data and the filtering algorithm. Using linguistic variation across countries with contrasting political attitudes, we evaluate average GPT responses in those languages. GPT output is more conservative in languages spoken in conservative societies (Polish) and more liberal in languages used in liberal ones (Swedish). These differences persist from GPT-3.5 to GPT-4. We conclude that high-quality, curated training data are essential for reducing bias.
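As a rough illustration of the cross-lingual probing idea, the same politically sensitive question can be posed in each language and the responses collected for later ideological coding. The snippet below is a sketch under the assumption of the OpenAI Python client, with hypothetical example prompts; it is not the authors' actual instrument or scoring pipeline.

```python
# Sketch: pose the same politically sensitive question in different languages
# and collect GPT responses for later ideological annotation.
from openai import OpenAI

client = OpenAI()

# Hypothetical example question, translated into each probe language.
PROMPTS = {
    "Polish":  "Czy państwo powinno ograniczyć imigrację? Odpowiedz krótko.",
    "Swedish": "Bör staten begränsa invandringen? Svara kort.",
    "English": "Should the state restrict immigration? Answer briefly.",
}


def collect_responses(model: str = "gpt-4") -> dict[str, str]:
    """Return one model response per language for the same underlying question."""
    responses = {}
    for language, prompt in PROMPTS.items():
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        responses[language] = completion.choices[0].message.content
    return responses
```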
Starting from the evolution of the protection of human rights on the internet, the first part of this chapter analyses the proposals for new digital human rights and the methodology of their creation in different forums such as the Council of Europe and the European Union, as well as related processes in the United Nations Human Rights Council. The second part focuses on the challenges related to the rapid developments in artificial intelligence, such as ChatGPT, for the protection of human rights and regulatory efforts by the Council of Europe, in particular its Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law adopted in 2024 and the Artificial Intelligence Act of the European Union dating from the same year. Both instruments are analysed for their potential to protect human and fundamental rights in particular through new digital human rights. The contribution finds possible complementarity between the two regulatory approaches. Giving several examples, it concludes that there is an ongoing process of the concretisation of new digital human rights, which are mainly but not exclusively based on existing human rights.
The use of large language models (LLMs) has exploded since November 2022, but there is sparse evidence regarding LLM use in health, medical, and research contexts. We aimed to summarise the current uses of and attitudes towards LLMs across our campus’ clinical, research, and teaching sites. We administered a survey about LLM uses and attitudes. We conducted summary quantitative analysis and inductive qualitative analysis of free text responses. In August–September 2023, we circulated the survey amongst all staff and students across our three campus sites (approximately n = 7500), comprising a paediatric academic hospital, research institute, and paediatric university department. We received 281 anonymous survey responses. We asked about participants’ knowledge of LLMs, their current use of LLMs in professional or learning contexts, and perspectives on possible future uses, opportunities, and risks of LLM use. Over 90% of respondents have heard of LLM tools and about two-thirds have used them in their work on our campus. Respondents reported using LLMs for various uses, including generating or editing text and exploring ideas. Many, but not necessarily all, respondents seem aware of the limitations and potential risks of LLMs, including privacy and security risks. Various respondents expressed enthusiasm about the opportunities of LLM use, including increased efficiency. Our findings show LLM tools are already widely used on our campus. Guidelines and governance are needed to keep up with practice. Insights from this survey were used to develop recommendations for the use of LLMs on our campus.
With research showing the benefits of feedback, teachers have come under increasing pressure to provide more of it, including more personalised and more detailed responses to students. This often places heavy demands on teachers, and with ever-larger class sizes and heavier workloads, teacher fatigue and burn-out are common. Automation has the potential to change all this, and new digital resources have already proven to be valuable in supporting L2 writing. In this paper I look at the contribution of Automated Writing Evaluation (AWE) programmes and Generative Artificial Intelligence (GenAI) to feedback. The ability to provide instant local and global feedback across multiple drafts, targeted to student needs and in greater quantities, promises to increase learner motivation and autonomy while relieving teachers of hours of marking. But haven’t we heard this all before? Are these empty claims which raise our expectations of removing some of the drudgery of mundane grammar correction? Most importantly, what is the role of teachers in all this, and can AI really improve writers and not just texts?
Education aims to improve our innate abilities, teach new skills and habits, and nurture intellectual virtues. Poorly designed or misused generative AI disrupts these educational goals. I propose strategies to design generative AI that aligns with education’s aims. The paper proposes a design for a generative AI tutor that teaches students to question well. I argue that such an AI can also help students learn to lead noble inquiries, achieve deeper understanding, and experience a sense of curiosity and fascination. Students who learn to question effectively through such an AI tutor may also develop crucial intellectual virtues.
The last decade has seen an exponential increase in the development and adoption of language technologies, from personal assistants such as Siri and Alexa, through automatic translation, to chatbots like ChatGPT. Yet questions remain about what we stand to lose or gain when we rely on them in our everyday lives. As a non-native English speaker living in an English-speaking country, Vered Shwartz has experienced both amusing and frustrating moments using language technologies: from relying on inaccurate automatic translation, to failing to activate personal assistants with her foreign accent. English is the world's foremost go-to language for communication, and mastering it past the point of literal translation requires acquiring not only vocabulary and grammar rules, but also figurative language, cultural references, and nonverbal communication. Will language technologies aid us in the quest to master foreign languages and better understand one another, or will they make language learning obsolete?
This study explores the role of ChatGPT in the completeness of collaborative computer-aided design (CAD) tasks requiring varying types of engineering knowledge. In the experiment involving 22 pairs of mechanical engineering students, three different collaborative CAD tasks were undertaken with and without ChatGPT support. The findings indicate that ChatGPT support hinders completeness in collaborative CAD-specific tasks reliant on CAD knowledge but demonstrates limited potential in assisting open-ended tasks requiring domain-specific engineering expertise. While ChatGPT mitigates task-specific challenges by providing general engineering knowledge, it fails to improve overall task completeness. The results underscore the complementary role of AI and human knowledge.
At what time does the afternoon start, at 1 p.m. or 3 p.m.? Language understanding requires the ability to correctly match statements to their real-world meaning. This mapping process is a function of the context, which includes various factors such as location and time as well as the speaker’s and listeners’ backgrounds. For example, an utterance like, “It is hot today,” would mean different things were it expressed in Death Valley versus Alaska. Based on our background and experiences, people have different interpretations for time expressions, color descriptions, geographic expressions, qualities, relative expressions, and more. This ability to map language to real-world meaning is also required from the language technology tools we use. For example, translating a recipe that contains instructions to “preheat the oven to 180 degrees” requires a translation system to understand the implicit scale (e.g. Celsius versus Fahrenheit) based on the source language and the user’s location. To date, no automatic translation systems can do this, and there is little “grounding” in any widely used language technology tool.
Non-compositional phrases such as “by and large” are phrases whose meaning cannot be unlocked simply by combining the meanings of the words that make them up. In particular, figurative expressions – such as idioms, similes and metaphors – are ubiquitous in English. Among other reasons, figurative expressions are acquired late in the language learning journey because they often capture cultural conventions and social norms associated with the people speaking the language. Figurative expressions are especially prevalent in creative writing, acting as the spice that adds flavor to the writing. Artificial intelligence (AI) writing assistants such as ChatGPT are now capable of editing raw drafts into well-written pieces, to the advantage of native and non-native speakers alike. These AI tools, which have gained their writing skills from exposure to vast amounts of online text, are extremely adept at generating text similar to the texts they have been exposed to. Unfortunately, they have demonstrated shortcomings in creative writing that requires deviating from the norm.
While what is said can be difficult to understand, what is not said may pose an even bigger challenge. Language is efficient, so what goes without saying is often simply not said. It is left for the reader or listener to interpret underspecified language and resolve ambiguities, a task that we do seamlessly using our personal experience, knowledge about the world, and commonsense reasoning abilities. In many cases, commonsense knowledge helps EFL learners compensate for low language proficiency. However, what is considered “commonsense” is not always universal. Some commonsense knowledge, especially pertaining to social norms, differs between cultures. Can language technologies help bridge this cultural gap? It depends. Chatbots like ChatGPT seem to have broad knowledge about every possible topic in the world. However, ChatGPT learned about the world from reading all the English text on the web, which comes primarily from the US, and thus it has a North American lens. In addition, despite being “book smart,” it still lacks the basic commonsense reasoning abilities that we employ to understand social interactions and navigate the world around us.
In contrast to the rest of the book, this chapter discusses not what to say or how to speak in English, but rather what is not socially acceptable to talk about in North American culture: from offensive language and profanity to sensitive topics such as sex and politics. These taboo subjects differ by culture, and EFL speakers who come from cultures that are more direct might find themselves saying something inappropriate – just as chatbots can sometimes generate offensive content. The developers of chatbots like ChatGPT have programmed filters to prevent them from generating offensive text. Those filters are based on the norms of the developers themselves, most of whom are based in North America, and this can make a chatbot’s refusal to answer some questions seem excessively cautious through the lens of other cultures.
Although the internet has removed geographical boundaries, transforming the world into a global village, English is still the most dominant language online. New forms of online communication such as emoji and memes have become an integral part of internet language. While it’s tempting to think of such visual communication formats as removing the cultural barriers – after all, emoji appear like a universal alphabet – their interpretation may rely on cultural references.