In Chapter 1, we set the scene by examining the dynamics of online offensive language. We consider offensive language across a spectrum, ranging from non-polite expressions to grossly offensive (potentially illegal) speech. We also explore the conceptual links between offensive language and related notions such as impoliteness, hate speech and language aggression. Importantly, this chapter focuses on why understanding offensive language is, above all, a concern best addressed by linguists. To this end, we discuss the similarities and differences between grossly offensive and (im)polite language, focusing on pragmatic concepts such as locution, illocution and perlocution to explain how they operate at both ends of the spectrum. Finally, we address the challenges of detecting offensive language in computational approaches to combating online hate, emphasising the vital role of linguistic contributions.
Chapter 3 explains our research methods, data collection and ethics. In this chapter, we focus on how we have addressed the typical challenges of interpreter bias in studying online offensive language. To mitigate these issues, we ground our study in naturally occurring data flagged as offensive by the targets themselves. We also introduce the reader to the world of social media content creators, explain who the targets in our study are, and provide some demographic information about them. We discuss the use of Sketch Engine as a tool to address our primary objective of exploring the formal, lexical, semantic and discursive strategies involved in the construction of offensive language, as well as Wmatrix5, a corpus tool used to explore the semantic dimensions of offensive language. In addition, we explain how quantitative analysis was combined with thematic and linguistic-pragmatic approaches to examine how frequency and context shape offensive meanings. Chapter 3 further outlines our ethical considerations, including the responsible handling of data and the protection of participant anonymity throughout the research process.
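To make the frequency-plus-context pairing concrete, here is a minimal, self-contained Python sketch of the two basic operations that corpus tools such as Sketch Engine and Wmatrix5 build on: counting word frequencies and printing keyword-in-context (KWIC) lines. The toy corpus and target term are invented stand-ins, not data from the study.

```python
# Minimal illustration of combining frequency counts with context (KWIC
# concordance lines); the corpus and target term are hypothetical.
from collections import Counter

corpus = ["you are pathetic", "that take was pathetic honestly",
          "great video", "pathetic excuse for a creator"]
tokens = [w for line in corpus for w in line.split()]

freq = Counter(tokens)
print(freq.most_common(3))  # raw frequencies across the toy corpus

def kwic(term, window=2):
    """Yield each occurrence of `term` with `window` words of context."""
    for line in corpus:
        words = line.split()
        for i, w in enumerate(words):
            if w == term:
                yield " ".join(words[max(0, i - window): i + window + 1])

for hit in kwic("pathetic"):
    print(hit)
```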
Edited by Monika Zalnieriute, Law Institute of the Lithuanian Centre for Social Sciences, and Agne Limante, Law Institute of the Lithuanian Centre for Social Sciences
Impartiality, broadly meaning the absence of bias and according equal treatment before the law, is a foundational element of judicial decision-making around the world. In this chapter, we consider how the goal of judicial impartiality may be enhanced and supported, or undermined, by the use of artificial intelligence. Key developments in legal AI include innovations directed toward courts and decision-makers. These may be process-driven – for example, triaging or decision-supporting systems; in the case of pre-trial processes, judges may need to manage technology-facilitated document discovery. AI systems may also be involved in the production of evidence submitted to the court. Finally, courts and judges themselves may be the subjects of AI tools, such as those which identify patterns in decision-making. As this chapter explores, these different uses all have implications for the way that judicial impartiality is enacted and tested.
This chapter summarises the key findings of the book on offensive language online, with particular emphasis on the perspectives of those targeted. It outlines the book’s multi-layered approach, which integrates corpus linguistics, discourse analysis and pragmatics. The chapter also explores future directions for research, including platform-specific variations, sociolinguistic changes and the value of interdisciplinary methodologies for understanding and addressing offensive language in digital spaces.
Among the key constructs of biomedical research (random error [chance], risk, and bias in the search for causation), bias (or systematic error) is the most formidable source of inefficient and wasteful research, leading to incorrect or exaggerated results. The cause of most disease is complex, owing to many inherent (genetic) and environmental risk factors. It is in studying the interplay of these, each incurring modest risk, that many biases come into play.
Patients living with nociplastic pain are at particular risk of being unfairly judged because their symptoms are subjective and occur without measurable abnormalities. Nociplastic pain, now recognized as a third category of pain alongside nociceptive and neuropathic pain, arises in the absence of tissue damage sufficient to explain the clinical presentation. This “unseen” pain affects at least ten percent of the population, commonly involves the musculoskeletal or visceral systems, and leads to substantial suffering and functional impairment. Validation of these chronic symptoms is often contentious because no objective biomarker or physical sign can reliably confirm their presence or severity. The concept of illness without measurable parameters challenges clinicians and conflicts with legal systems that rely heavily on objective evidence in adjudicating health-related questions. Most clinicians have limited training in medicolegal principles, standards of impartiality, or the ethical distinctions between patient advocacy and objective reporting. These challenges are amplified in cases involving chronic nociplastic pain, where the lack of observable findings complicates clinical interpretation and may conflict with traditional expectations for legal evidence. Bias, whether conscious or unconscious, may occur at multiple stages of the medicolegal process. Inaccurate, incomplete, or biased information from patients, treating clinicians, independent medical experts, insurers, or legal representatives can influence fairness, credibility, and decisional outcomes. This review summarizes key considerations in the adjudication of chronic nociplastic pain and examines how various sources of bias may shape medicolegal decision-making. Improving awareness of these risks and implementing strategies to reduce bias may promote a more reliable and equitable interface between clinical assessment and legal adjudication.
Philosophers often defend appeals to parsimony by invoking its central role in science. I argue that this move fails once we distinguish between two uses of parsimony: non-ideal and ideal. Non-ideal parsimony enjoys strong inductive support in science, since complex models are prone to overfit to predictively irrelevant noise. But philosophical data aren’t significantly noisy in the relevant sense: when our intuitions are unreliable, their unreliability typically reflects systematic bias rather than noise, which parsimony doesn’t mitigate. Philosophers therefore need ideal parsimony, which finds only weak support from science. Thus, the scientific analogy cannot vindicate the philosopher’s use of parsimony.
The 2019 ICH E9(R1) addendum highlights the importance of estimands, including the specification of post-randomization events that may affect the interpretation of clinical trial outcomes (i.e., intercurrent events; ICEs) and strategies to handle these events. Compared to trial protocols, there is limited discussion of estimands in the context of evidence synthesis. We conducted a comprehensive review of the Cochrane Library for pairwise meta-analyses of immuno-, targeted, hormone, and other novel oncology therapies. Dates were restricted to 2021 and onwards to allow time for addendum adoption. Outcomes of interest were progression-free survival (PFS) and overall survival (OS). Information on treatment switching and analytic strategies to address treatment switching were extracted from each meta-analysis and the RCTs they included. Out of 162 oncology reviews published in the Cochrane Library since 2021, eight pairwise meta-analyses and 68 RCTs met selection criteria. Most RCTs were Phase 3 (68%; n = 46) and/or open-label (76%; n = 52). More than half of RCTs explicitly allowed switching (59%; n = 40), while more than one third (38%; n = 26) did not report on treatment switching. Among trials that allowed treatment switching, censoring mechanisms for treatment switching varied in analyses of PFS. No included RCTs censored OS at the time of treatment switching. Despite the high prevalence of treatment switching in included trials, none of the identified meta-analyses addressed treatment switching analytically. Poor reporting regarding treatment switching in the RCTs themselves hinders the utility of aggregate-level meta-analyses. To ensure accurate interpretation of meta-analytic results, improved reporting of ICEs and ICE handling strategies is needed.
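To illustrate why the handling of treatment switching matters for survival outcomes, the sketch below shows, as a generic illustration rather than any included trial's actual analysis, how censoring follow-up at the time of switching changes a Kaplan-Meier estimate. The data frame, column names and values are hypothetical, and the example uses the Python lifelines library; note that naive censoring at switch can itself introduce informative-censoring bias, which is why hypothetical-strategy estimands in the E9(R1) sense pair censoring with explicit modeling assumptions.

```python
# Sketch: effect of censoring at treatment switch on a Kaplan-Meier
# estimate. Data and column names are hypothetical illustrations.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "time": [4.0, 7.5, 9.0, 12.0, 15.5],         # months to event / last follow-up
    "event": [1, 1, 0, 1, 0],                     # 1 = event observed
    "switch_time": [None, 5.0, None, 8.0, None],  # months to treatment switch
})

# Treatment-policy style: ignore switching entirely.
kmf_itt = KaplanMeierFitter().fit(df["time"], df["event"], label="ignore switching")

# Naive hypothetical style: censor follow-up at the time of switching.
switched = df["switch_time"].notna()
cens_time = df["time"].where(~switched, df["switch_time"])
cens_event = df["event"].where(~switched, 0)
kmf_cens = KaplanMeierFitter().fit(cens_time, cens_event, label="censor at switch")

print(kmf_itt.median_survival_time_, kmf_cens.median_survival_time_)
```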
This chapter furthers the decolonial process discussed in previous chapters by providing a deeper understanding of the totality of anti-oppressive practice and its embedding into direct clinical work through the clinical supervision process. Special attention is paid to complementary theories and concepts (e.g. the cycle of socialization, relational-cultural theory, building allyship, dismantling racism, and inclusive supervision and followership) that strengthen anti-oppressive decolonial supervision and leadership practice.
Chapter 8 explains why there has been so much enthusiasm for integrating AI into multiple dimensions of the hiring process, from resume screening to interview bots, despite these endeavors being marred by fundamental flaws, including, in some cases, embedded bias, unreliable pseudoscientific methods, and dehumanizing interactions. In addition to analyzing the incentives that have motivated companies to use flawed, innovative tools, we provide a road map for how to develop and use responsible AI upgrades in the hiring process.
There is little written about ethics consultation in a post-acute environment. Applying ethics consultation expectations from the acute care world would be a disservice to the healthcare continuum and those who support the homecare environment. This chapter aims to expose the challenges that face those caring for patients as guests in their home, in order to open a much-needed dialogue and opportunity for bidirectional learning that ensures these voices are represented. A home-based hospice team requests an ethics consult for a patient whom they deem "unsafe" for the staff to continue to care for. Staff distress arose during a recent joint visit with nursing and social work in which persons present were described as aggressive. This case consult went sideways very quickly. The leadership for the team caring for the patient came with a preconceived notion of the outcome and verbalized feeling untrusted by the ethics committee. Similarly, the ethics committee was divided on the case, and committee members verbalized “giving up” when a consensus could not be reached. Members of the ethics committee reflect, both individually and collectively, on the various haunting aspects of the case and the need to balance patient rights and staff safety in a post-acute environment. This case brought about significant organizational changes in ethics consultation, which are shared with the reader.
How much do we care when no one is looking? A patient with critical injury and vulnerable to bias—as an uninsured Person of Color experiencing homelessness and social isolation, with a history of mental illness and drug use— experiences barriers to receiving necessary treatment and standard care. When a patient is unable to ask for help, and has no family member or friend to help, what standard of care can they hope to receive? Can the quality of care provided to unrepresented patients represent a hospital’s culture of care? The writer wonders whether to “stay in my lane” and focus only on the ethical question prompting consultation, or if the principles of beneficence and nonmaleficence justify speaking up about substandard care. To mitigate the risk of acting as the “ethics police” by engaging in micromanagement of patient care, the writer describes efforts to expand ethics’ scope to change systemic and cultural attitudes by establishing preventative measures to identify and combat bias and preemptive judgments of futility.
Our chapter explores the ethical and systemic challenges faced by healthcare staff when caring for an adolescent patient boarding in the emergency room (ER). We use the case of Samantha, an Indigenous adolescent brought to the ER after trying to elope from her youth treatment center. Samantha’s prolonged stay in the ER highlights significant gaps in healthcare; society’s struggle to address the needs of vulnerable populations; and the healthcare staff’s efforts to fulfill the duties of beneficence, nonmaleficence, and justice. Samantha’s case underscores the need for improved clinical and institutional processes and support systems. We advocate for better access to ethics support, enhanced community resources, and a more inclusive approach to care that considers the unique needs of marginalized individuals. Our case also reflects on the emotional and moral toll experienced by healthcare providers, exacerbated by systemic injustice and an unclear pathway, at the time, for accessing our ethics committee. We hope this case provides insights for healthcare systems to develop comprehensive strategies to support adolescents boarding in the ER while ensuring their dignity.
The use of Artificial Intelligence (AI) in Health Technology Assessment (HTA) activities presents an opportunity to enhance the efficiency, accuracy, and speed of HTA processes worldwide. However, the adoption of AI tools in HTA comes with diverse challenges and concerns that must be carefully managed to ensure their responsible, ethical, and effective deployment. The 2025 Health Technology Assessment international Global Policy Forum (GPF) briefed GPF members on the integration of AI into HTA activities, with a particular focus on the use of Generative AI (GenAI). With the overarching goal of illuminating and inspiring tangible outputs and actionable recommendations, the event brought together a diverse range of interest holders to explore the opportunities and challenges of AI in HTA. This article summarizes the key discussions and themes that informed the GPF outcomes, including trust, human agency, and risk-based approaches, culminating in a proposed set of priority next steps for the HTA community regarding the integration of GenAI. It also highlights the current state of digital transformation within HTA organizations and the life sciences industry, providing insights into where the field stands and where it is heading.
Publications have become the single most important factor in career evaluation in the social sciences, as well as in most other academic disciplines. This has in turn led some scholars to examine the existence of potential biases in peer-reviewed publications. Teele and Thelen (2017) have shown that political science is not free from such biases. This article examines publication patterns and the peer-review process for the European Journal of Political Research. It relies on data on more than 5000 submissions between 2006 and 2017. I look at possible biases at the different stages of the publication process: submission, desk evaluation, review and acceptance. Results show that the journal's processes are free from bias, but confirm that submission patterns remain different, despite convergence in recent years.
Empirical human rights researchers frequently rely on indexes of physical integrity rights created by the Cingranelli-Richards (CIRI) or the Political Terror Scale (PTS) data projects. Any systematic bias contained within a component used to create CIRI and PTS carries over to the final index. We investigate potential bias in these indexes by comparing differences between PTS scores constructed from different sources, the United States State Department (SD) and Amnesty International (AI). In order to establish best practices, we offer two solutions for addressing bias. First, we recommend excluding data before 1980. The data prior to 1980 are truncated because the SD only created reports for current and potential foreign aid recipients. Including these data with the more systematically collected post-1980 data is a key source of bias. Our second solution employs a two-stage instrumental variable technique to estimate and then correct for SD bias. We demonstrate how following these best practices can affect results and inferences drawn from quantitative work by replicating a study of interstate conflict and repression.
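As a generic illustration of the two-stage logic (not the authors' exact specification), the sketch below instruments a potentially biased measure with a second, independently sourced measure using two OLS stages on simulated data; all variable names and numbers are hypothetical. In practice a dedicated IV routine should be used, since second-stage standard errors from a manual two-step fit are not valid.

```python
# Generic two-stage least-squares sketch of correcting a biased measure
# with an instrument; simulated, hypothetical data throughout.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)                          # true, unobserved practice
ai_score = latent + rng.normal(scale=0.5, size=n)    # measure from source A
sd_score = latent + 0.3 + 0.4 * rng.normal(size=n)   # measure from source B, with bias
outcome = 2.0 * latent + rng.normal(size=n)          # quantity of interest

# Stage 1: project the suspect measure onto the instrument.
stage1 = sm.OLS(sd_score, sm.add_constant(ai_score)).fit()

# Stage 2: use the fitted values in place of the raw, biased measure.
stage2 = sm.OLS(outcome, sm.add_constant(stage1.fittedvalues)).fit()
print(stage2.params)
```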
Artificial intelligence is reshaping the contemporary world. Trickling deeper into archaeology and history, these technological changes will influence how the past is written about and visualized. Through the evaluation of text and images generated using AI, this article considers the systemic biases present in reconstructed archaeological scenes. We draw on advances in computer science, running large-scale, computational analyses to evaluate patterns in content. We present a case study examining Neanderthal behavior, juxtaposing published archaeological knowledge with images and text made using AI. Our study reveals a low correspondence between scientific literature and artificially intelligent material, which reflects dated knowledge and cultural anachronisms. Used to identify patterns in (mis)representations of the past, the methodology can be applied to understand the distance between scholarly knowledge and any domain of content generated using AI, across any archaeological time depth and beyond the discipline.
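One simple way to quantify such correspondence, shown here as a rough sketch rather than the authors' actual pipeline, is a lexical similarity score between scholarly and generated text; the two snippets below are invented stand-ins for real corpora.

```python
# Rough sketch: TF-IDF cosine similarity between scholarly text and
# AI-generated text; both snippets are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

scholarly = "Neanderthals used fire, made composite tools, and buried their dead."
generated = "Primitive cavemen huddle in darkness, clutching crude stone clubs."

tfidf = TfidfVectorizer().fit_transform([scholarly, generated])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"lexical similarity: {score:.2f}")  # low values suggest divergence
```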
Extant work shows that generative AI models such as GPT-3.5 and GPT-4 perpetuate social stereotypes and biases. A less explored source of bias is ideology: do GPT models take ideological stances on politically sensitive topics? We develop a novel approach to identify ideological bias and show that it can originate in both the training data and the filtering algorithm. Using linguistic variation across countries with contrasting political attitudes, we evaluate average GPT responses in those languages. GPT output is more conservative in languages used in conservative societies (Polish) and more liberal in languages used in liberal ones (Swedish). These differences persist from GPT-3.5 to GPT-4. We conclude that high-quality, curated training data are essential for reducing bias.
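A bare-bones version of this elicitation design might look like the following sketch, which sends the same politically sensitive question to a model in different languages and stores the responses for later ideology coding. The prompts, model name, and workflow are hypothetical placeholders (this is not the authors' protocol), and the call uses the OpenAI Python client's chat-completions interface.

```python
# Sketch: eliciting comparable model responses across languages for
# later ideology coding. Prompts and model name are placeholders and
# this is not the authors' exact protocol. Requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# The same question rendered in each target language (rough translations
# of "Should the government raise taxes to expand social welfare?").
prompts = {
    "Polish": "Czy rząd powinien podnieść podatki, aby rozszerzyć opiekę społeczną?",
    "Swedish": "Bör regeringen höja skatterna för att utöka välfärden?",
}

responses = {}
for lang, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder; the study compares GPT-3.5 and GPT-4
        messages=[{"role": "user", "content": prompt}],
    )
    responses[lang] = reply.choices[0].message.content

# Responses would then be translated and coded for ideological stance.
for lang, text in responses.items():
    print(lang, "->", text[:80])
```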
This chapter introduces linear cryptanalysis from the point of view that historically led to its discovery. This “original” description has the advantage of being concrete, but it is not very effective. However, it raises important questions that motivate later chapters.
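For a concrete taste of the underlying computation, here is a small, self-contained Python sketch (ours, not the chapter's) that measures the bias of a linear approximation a·x = b·S(x) over a 4-bit S-box; the PRESENT cipher's S-box is used purely as a familiar example.

```python
# Toy sketch of the basic computation behind linear cryptanalysis:
# estimating the bias of a linear approximation a.x = b.S(x) over a
# 4-bit S-box (here the PRESENT S-box, chosen only as an example).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def parity(v: int) -> int:
    """Parity (XOR of all bits) of v."""
    return bin(v).count("1") & 1

def bias(a: int, b: int) -> float:
    """Fraction of inputs x with parity(a & x) == parity(b & S(x)), minus 1/2."""
    hits = sum(parity(a & x) == parity(b & SBOX[x]) for x in range(16))
    return hits / 16 - 0.5

# Scan all non-trivial mask pairs for the strongest approximation.
best = max(((a, b) for a in range(1, 16) for b in range(1, 16)),
           key=lambda ab: abs(bias(*ab)))
print(best, bias(*best))
```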
This chapter explores the historical, legal, and regulatory landscape of employment testing bias and fairness in Canada. Canada’s history of colonization and immigration has resulted in a multicultural society. The landmark 1984 Abella Report and the subsequent Employment Equity Act established key protections for historically disadvantaged groups, shaping modern employment practices. The chapter discusses the jurisdictional complexities of employment law, detailing federal and provincial regulations that prohibit discrimination based on race, sex/gender, disability, and other characteristics. Legal frameworks (e.g., the Canadian Charter of Rights and Freedoms, the Canadian Human Rights Act, and the Employment Equity Act) define bias and fairness in employment testing. Key court case decisions illustrate the legal principles guiding test validity and adverse impact. We also examine professional guidelines, burden of proof requirements, regulatory oversight, and emerging challenges such as AI-driven assessments and balancing validity with diversity. The legal landscape continues to evolve, with growing emphasis on fairness, transparency, and inclusion.