This Element examines post-apartheid pedagogy in South Africa to uncover the philosophical and epistemological foundations on which it is predicated. The analysis reveals quaint epistemologies and their associated philosophical postulations, espousing solipsistic methodologies that position teachers and their students as passive participants in activities rendered abstract and contemplative – an intellectual odyssey and dispassionate pursuit of knowledge devoid of context and human subjectivity. To counteract the effects of such coercive epistemologies and Western orthodoxies, a decolonising approach, prioritising the ethical grounding of knowledge and pedagogy, is proposed. In this decolonising approach to learning and development, students enact the knowledge they embody, and, through such enactment of their culturally situated knowledge practices, perceive concepts in their process of transformation and, consequently, acquire knowledge as tools for critical engagement with reality – and for the meaningful pursuit of self-knowledge, agency, and identity development. This title is also available as Open Access on Cambridge Core.
This article critically examines the integration of artificial intelligence (AI) into nuclear decision-making processes and its implications for deterrence strategies in the Third Nuclear Age. While realist deterrence logic assumes that the threat of mutual destruction compels rational actors to act cautiously, AI disrupts this by adding speed, opacity and algorithmic biases to decision-making processes. The article focuses on the case of Russia to explore how different understandings of deterrence among nuclear powers could increase the risk of misperceptions and inadvertent escalation in an AI-influenced strategic environment. I argue that AI does not operate in a conceptual vacuum: the effects of its integration depend on the strategic assumptions guiding its use. As such, divergent interpretations of deterrence may render AI-supported decision making more unpredictable, particularly in high-stakes nuclear contexts. I also consider how these risks intersect with broader arms race dynamics. Specifically, the pursuit of AI-enabled capabilities by global powers is not only accelerating military modernisation but also intensifying the security dilemma, as each side fears falling behind. In light of these challenges, this article calls for greater attention to conceptual divergence in deterrence thinking, alongside transparency protocols and confidence-building measures aimed at mitigating misunderstandings and promoting stability in an increasingly automated military landscape.
Menopause is a natural physiological process, but its effects on the brain remain poorly understood. In England, approximately 15% of women use hormone-replacement therapy (HRT) to manage menopausal symptoms. However, the psychological benefits of HRT are not well established. This study aims to investigate the impact of menopause and HRT on mental health, cognitive function, and brain structure.
Methods
We analyzed data from nearly 125,000 participants in the UK Biobank to assess associations between menopause, HRT use, and outcomes related to mental health, cognition, and brain morphology. Specifically, we focused on gray matter volumes in the medial temporal lobe (MTL) and anterior cingulate cortex (ACC).
Results
Menopause was associated with increased levels of anxiety, depression, and sleep difficulties. Women using HRT reported greater mental health challenges than post-menopausal women not using HRT. Post-hoc analyses revealed that women prescribed HRT had higher levels of pre-existing mental health symptoms. In terms of brain structure, MTL and ACC volumes were smaller in post-menopausal women compared to pre-menopausal women, with the lowest volumes observed in the HRT group.
Conclusions
Our findings suggest that menopause is linked to adverse mental health outcomes and reductions in gray matter volume in key brain regions. The use of HRT does not appear to mitigate these effects and may be associated with more pronounced mental health challenges, potentially due to underlying baseline differences. These results have important implications for understanding the neurobiological effects of HRT and highlight the unmet need to address mental health problems during menopause.
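As a rough illustration of the kind of group comparison this study reports, the sketch below fits an ANCOVA-style OLS model testing whether a regional gray matter volume differs by menopause/HRT group after adjusting for covariates. It uses synthetic data; the variable names (mtl_volume, group, age, tiv) and the covariate set are illustrative assumptions, not the authors' actual UK Biobank pipeline.

```python
# Minimal ANCOVA-style sketch: regional volume ~ group + covariates.
# Synthetic data; all names and effect sizes are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["pre", "post_no_hrt", "post_hrt"], size=n)
age = rng.normal(55, 7, n)
tiv = rng.normal(1500, 100, n)  # total intracranial volume, cm^3

# Simulate a small group effect on MTL volume on top of age and TIV effects
effect = {"pre": 0.0, "post_no_hrt": -0.10, "post_hrt": -0.15}
mtl_volume = (0.5 + 0.002 * tiv - 0.01 * age
              + np.array([effect[g] for g in group])
              + rng.normal(0, 0.2, n))

df = pd.DataFrame({"mtl_volume": mtl_volume, "group": group,
                   "age": age, "tiv": tiv})

# ANCOVA: volume by group, adjusting for age and head size,
# with pre-menopausal women as the reference category
model = smf.ols("mtl_volume ~ C(group, Treatment('pre')) + age + tiv",
                data=df).fit()
print(model.summary())
```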
Military decision-making institutions face new challenges and opportunities from increasing artificial intelligence (AI) integration. Military AI adoption is incentivized by competitive pressures and expanding national security needs; thus, we can expect increased complexity due to AI proliferation. Governing this complexity is urgent but lacks clear precedents. This discussion critically re-examines key concerns that AI integration into resort-to-force decision-making organizations introduces. Alongside these concerns, the article draws attention to new, positive affordances that AI proliferation may introduce. I then propose a minimal AI governance standard framework, adapting private sector insights to the defence context. I argue that adopting AI governance standards (e.g., based on this framework) can foster an organizational culture of accountability, combining technical know-how with the cultivated judgment needed to navigate contested governance concepts. Finally, I hypothesize some strategic implications of the adoption of AI governance programmes by military institutions.
Integrating AI into military decision processes on the resort to force raises new moral challenges. A key question is: How can we assign responsibility in cases where AI systems shape the decision-making process on the resort to force? AI systems do not qualify as moral agents, and due to their opaqueness and the “problem of many hands,” responsibility for decisions made by a machine cannot be attributed to any one individual. To address this socio-technical responsibility gap, I suggest establishing “proxy responsibility” relations. Proxy responsibility means that an actor takes responsibility for the decisions made by another actor or synthetic agent who cannot be attributed with responsibility for their decisions. This article discusses the option to integrate an AI oversight body to establish proxy responsibility relations within decision-making processes regarding the resort to force. I argue that integrating an AI oversight body creates the preconditions necessary for attributing proxy responsibility to individuals.
The integration of AI systems into the military domain is changing the way war-related decisions are made. It binds together three disparate groups of actors – developers, integrators, and users – and creates a relationship between these groups and the machine, embedded in the (pre-)existing organisational and system structures. In this article, we focus on the important, but often neglected, group of integrators within such a socio-technical system. In complex human–machine configurations, integrators carry responsibility for linking the disparate groups of developers and users in the political and military system. To act as the mediating group requires a deep understanding of the other groups’ activities, perspectives and norms. We thus ask which challenges and shortcomings emerge from integrating AI systems into resort-to-force (RtF) decision-making processes, and how to address them. To answer this, we proceed in three steps. First, we conceptualise the relationship between different groups of actors and AI systems as a socio-technical system. Second, we identify challenges within such systems for human–machine teaming in RtF decisions. We focus on challenges that arise (a) from the technology itself, (b) from the integrators’ role in the socio-technical system and (c) from the human–machine interaction. Third, we provide policy recommendations to address these shortcomings when integrating AI systems into RtF decision-making structures.
Paleolake coring initiatives result in large datasets from various proxies taken at different resolutions, ranging from continuous scans to samples collected at coarser intervals. Higher-resolution data (e.g., core-scan X-ray fluorescence [XRF]) can detect short-duration changes in the paleolake and help identify unit boundaries with precision; however, interpreting the causes of such changes may require sampling and more intensive laboratory analysis like X-ray diffraction (XRD). This study applies a published wide and deep learning model, developed for the Olduvai Gorge Coring Project (OGCP) 2014 cores from the Pleistocene Olduvai basin, Tanzania, to reconstruct the mineral assemblages from saline-alkaline paleolake Olduvai using core-scan XRF data and core lithology. A classification model (predicting mineral presence or absence) and a regression model (predicting relative abundances of minerals) yielded predictions for two OGCP cores (2A and 3A), which were compared with published XRD mineral data and detailed core sedimentological descriptions. The models were excellent at identifying dolomite-rich layers, carbonate-rich intervals, intervals of sandstone within claystone, and altered tuffs within claystone and at predicting whether illitic or smectitic clays dominate. The models struggled with less-altered tuffs and with zeolites in non-tuff sediments, especially when XRD identified chabazite and erionite (rather than phillipsite) as the dominant, non-analcime zeolite.
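The sketch below is a hedged reconstruction of a generic wide-and-deep architecture of the kind this study applies: a linear ("wide") path over core-scan XRF features combined with a deep MLP path, ending in a multi-label sigmoid head for mineral presence/absence. It is not the OGCP authors' published model; the feature and mineral counts, layer sizes, and synthetic inputs are all placeholders.

```python
# Generic wide-and-deep sketch for multi-label mineral classification.
# Assumed, illustrative dimensions; not the published OGCP model.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20   # e.g., XRF element intensities plus encoded lithology
n_minerals = 12   # target mineral classes (presence/absence)

inputs = keras.Input(shape=(n_features,))
wide = layers.Dense(n_minerals)(inputs)             # linear, memorization path
deep = layers.Dense(64, activation="relu")(inputs)  # nonlinear, generalization path
deep = layers.Dense(32, activation="relu")(deep)
deep = layers.Dense(n_minerals)(deep)
logits = layers.Add()([wide, deep])                 # combine the two paths
outputs = layers.Activation("sigmoid")(logits)      # independent per-mineral probabilities

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on synthetic stand-in data; real inputs would be depth-aligned XRF scans
X = np.random.rand(500, n_features).astype("float32")
y = (np.random.rand(500, n_minerals) > 0.5).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

For the regression variant (relative mineral abundances rather than presence/absence), the same two-path architecture would end in a softmax or non-negative output head trained with a loss such as mean squared error.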
What shapes military attitudes of trust in artificial intelligence (AI) used for strategic-level decision-making? When used in concert with humans, AI is thought to help militaries maintain lethal overmatch of adversaries on the battlefield as well as optimize leaders’ decision-making in the war room. Yet it is unclear what shapes servicemembers’ trust in AI used for strategic-level decision-making. In October 2023, I administered a conjoint survey experiment among an elite sample of officers attending the US Army and Naval War Colleges to assess what shapes servicemembers’ trust in AI used for strategic-level deliberations. I find that their trust in AI used for strategic-level deliberations is shaped by a tightly calibrated set of technical, operational, and oversight considerations. These results provide the first experimental evidence for military attitudes of trust toward AI during crisis escalation, which have important research, policy, and modernization implications.
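Conjoint experiments of the kind described here are typically analyzed by estimating average marginal component effects (AMCEs). The sketch below shows the standard approach on synthetic data: OLS of the trust outcome on dummy-coded profile attributes, with standard errors clustered by respondent. The attribute names and levels (accuracy, oversight, task_type) are illustrative assumptions, not the study's actual design.

```python
# AMCE estimation sketch for a conjoint design, on synthetic data.
# Attribute names and levels are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_resp, tasks = 200, 6
df = pd.DataFrame({
    "resp_id": np.repeat(np.arange(n_resp), tasks),
    "accuracy": rng.choice(["low", "high"], n_resp * tasks),
    "oversight": rng.choice(["none", "human_review"], n_resp * tasks),
    "task_type": rng.choice(["logistics", "targeting", "strategy"], n_resp * tasks),
})
df["trust"] = (0.3 * (df["accuracy"] == "high")
               + 0.4 * (df["oversight"] == "human_review")
               + rng.normal(0, 1, len(df)))

# Because attributes are independently randomized, the OLS coefficients
# estimate AMCEs; clustering accounts for repeated tasks per respondent.
m = smf.ols("trust ~ C(accuracy) + C(oversight) + C(task_type)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["resp_id"]})
print(m.params)
```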
This article considers the responses of the Indian Workers’ Association (Great Britain) (IWA) to food scarcities in India during the late 1960s. It reveals that Maoist optics informed IWA critiques, departing from coexistent appraisals articulated in leftist circles in India. In doing so, the article demonstrates the relevance, among diaspora communities, of worldviews, idioms, and paradigms emanating from global conjunctures beyond places of origin. IWA luminaries were embedded in revolutionary anti-colonial networks shaped by decolonization and the global Cold War, and bestowed substance upon Maoism in these contexts. Ultimately, this informed IWA perceptions of causes and solutions to the food ‘crisis’: in their characterizations of reliance on external aid as indicative of post-1947 India’s semi-colonial status; in portrayals of Soviet ‘social imperialism’ in India during the Sino-Soviet split; or in demands for radical land reform based on a selective rendering of the Chinese model, which downplayed the consequences of the ‘Great Leap Forward’.
In this article, I consider the potential integration of artificial intelligence (AI) into resort-to-force decision-making from a Just War perspective. I evaluate two principles from this tradition: (1) the jus ad bellum principle of “reasonable prospect of success” and (2) the more recent jus ad vim principle of “the probability of escalation.” More than any other principles of Just War, these prudential standards seem amenable to the probabilistic reasoning of AI-driven systems. I argue, however, that this optimism in the potential of AI-optimized decision-making is largely misplaced. We need to cultivate a tragic sensibility in war – a recognition of the inescapable limits of foresight, the permanence of uncertainty and the dangers of unconstrained ambition. False confidence in the efficacy of these systems will blind us to their technical limits. It will also, more seriously, obscure the deleterious impact of AI on the process of resort-to-force decision-making; its potential to suffocate the moral and political wisdom so essential to the responsible exercise of violence on the international stage.
This article investigates the profound impact of artificial intelligence (AI) and big data on political and military deliberations concerning the decision to wage war. By conceptualising AI as part of a broader, interconnected technology ecosystem – encompassing data, connectivity, energy, compute capacity and workforce – the article introduces the notion of “architectures of AI” to describe the underlying infrastructure shaping contemporary security and sovereignty. It demonstrates how these architectures concentrate power within a select number of technology companies, which increasingly function as national security actors capable of influencing state decisions on the resort to force. The article identifies three critical factors that collectively alter the calculus of war: (i) the concentration of power across the architectures of AI, (ii) the diffusion of national security decision making, and (iii) the role of AI in shaping public opinion. It argues that, as technology companies amass unprecedented control over digital infrastructure and information flows, most nation states – particularly smaller or less technologically advanced ones – experience diminished autonomy in decisions to use force. The article specifically examines how technology companies can coerce, influence or incentivise the resort-to-force decision making of smaller states, thereby challenging traditional notions of state sovereignty and international security.
The use of artificial intelligence-driven decision-support systems (AI DSS) to assist human calculations on the resort to military force has raised concerns that automation bias may displace human judgments. Such fears are compounded by the complexities and pathologies of organisational decision making. Discussions of AI often revolve around better training AI models with more copious amounts of technical data, but this article poses research questions that shift the focus to a human-centric and institutional approach. How can governments better train human decision makers and restructure institutional settings within which humans operate to minimise the risks of automation bias and deskilling? This article begins by exploring how governments have invested in AI literacy education and capacity-building. Second, it demonstrates how the need to question groupthink and challenge assumptions in decision making becomes even more relevant as the use of AI DSS becomes more prevalent. Third, human decision makers operate within institutional structures with internal audit trails and organisational cultures, inter-agency networks and intelligence-sharing partnerships that may mitigate the risks of human deskilling. Bolstering these three interlocking, mutually reinforcing elements of education, challenge functions and institutions offers some avenues for managing automation bias in decisions on the resort to force.
As artificial intelligence (AI) plays an increasing role in operations on battlefields, we should consider how it might also be used in the strategic decisions that happen before a military operation even occurs. One such critical decision that nations must make is whether to use armed force. There is often only a small group of political and military leaders involved in this decision-making process. Top military commanders typically play an important role in these deliberations around whether to use force. These commanders are relied upon for their expertise. They provide information and guidance about the military options available and the potential outcomes of those actions. This article asks two questions: (1) how do military commanders make these judgements? and (2) how might AI be used to assist them in their critical decision-making processes? To address the first, I draw on existing literature from psychology, philosophy, and military organizations themselves. To address the second, I explore how AI might augment the judgment and reasoning of commanders deliberating over the use of force. While there is already a robust body of work exploring the risks of using AI-driven decision-support systems, this article focuses on the opportunities, while keeping those risks firmly in view.
La Viña rock shelter is a key archaeological site for understanding late Middle and Upper Palaeolithic cultural development in northern Iberia, as evidenced by the Mousterian, Aurignacian, Gravettian, Solutrean and Magdalenian bone and lithic industries, parietal engravings and human subsistence remains recovered during the 1980s excavations by J. Fortea in the western and central excavation areas. This paper presents 16 new radiocarbon dates which, added to those previously obtained using different analytical methods on bone and charcoal, bring the total to 57. Bayesian models have been applied to assess and discern the chronology of the archaeological sequence in each sector of the rock shelter. The results provide details on the chronostratigraphy of each excavation area, documenting the duration of the different technocultural phases and confirming in-site post-depositional events.
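To illustrate the logic behind Bayesian chronological modelling of a stratified sequence (in practice carried out with dedicated tools such as OxCal), the toy sketch below samples calendar ages under a stratigraphic ordering constraint. It assumes, for simplicity, that each calibrated date is approximately normal on the calendar scale; the dates themselves are invented, not La Viña's.

```python
# Toy Metropolis sampler for an ordered (stratified) sequence of dates.
# Simplifying assumption: normal likelihoods on the calendar scale.
# All numbers are invented placeholders, not the published dates.
import numpy as np

means = np.array([34000., 31000., 30500., 27000.])  # cal BP, bottom to top
sds   = np.array([600.,   500.,   500.,   400.])

def log_lik(t):
    return -0.5 * np.sum(((t - means) / sds) ** 2)

rng = np.random.default_rng(42)
t = means.copy()                      # start at the unmodelled estimates
ll = log_lik(t)
samples = []
for step in range(50000):
    j = rng.integers(len(t))
    prop = t.copy()
    prop[j] += rng.normal(0, 200)
    # Stratigraphic prior: ages must strictly decrease from bottom to top
    if np.all(np.diff(prop) < 0):
        ll_prop = log_lik(prop)
        if np.log(rng.random()) < ll_prop - ll:
            t, ll = prop, ll_prop
    if step > 10000 and step % 10 == 0:
        samples.append(t.copy())

post = np.array(samples)
for i, (m, s) in enumerate(zip(post.mean(0), post.std(0))):
    print(f"date {i}: modelled {m:.0f} +/- {s:.0f} cal BP")
```

The ordering constraint is what makes the modelled ages tighter than the unmodelled ones: dates that overlap stratigraphically inconsistent ranges get pulled apart by the prior.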
Artificial intelligence (AI) is increasingly being incorporated into military decision making in the form of decision-support systems (DSS). Such systems may offer data-informed suggestions to those responsible for making decisions regarding the resort to force. While DSS are not new in military contexts, we argue that AI-enabled DSS are sources of additional complexity in an already complex resort-to-force decision-making process that – by its very nature – presents the dual potential for both strategic stability and harm. We present three categories of complexity relevant to AI – interactive and nonlinear complexity, software complexity, and dynamic complexity – and examine how such categories introduce or exacerbate risks in resort-to-force decision-making. We then provide policy recommendations that aim to mitigate some of these risks in practice.
Depression is often comorbid with alcohol use problems, and sex differences may further complicate this interplay.
Methods
We conducted a longitudinal study using a large European adolescent cohort assessed at ages 14 (baseline, BL), 16 (follow-up 1, FU1), 19 (follow-up 2, FU2), and 23 (follow-up 3, FU3). Depression and alcohol use were measured using standardized behavioral scales. Cross-lagged analysis, improved Mendelian randomization (MR) analysis, and mediation analysis were conducted to infer the causal interplay.
Results
A total of 2110 adolescents were included at baseline (49% male). Depression and alcohol consumption demonstrated a significant positive correlation (r_BL = 0.094, p_BL = 1.58E-05, 95% CI = [0.052, 0.137]), which gradually diminished over time and eventually became significantly negative. Depression and alcohol use problems remained strongly correlated across three timepoints (r > 0.074, p < 6.76E-03). Cross-lagged analysis suggested that depression predicted future alcohol use problems: β_BL-FU1 = 0.058, p = 0.021, 95% CI = [0.009, 0.108]; β_FU2-FU3 = 0.142, p = 8.34E-07, 95% CI = [0.113, 0.263]. MR analyses confirmed this causal interplay (r_mean = 0.043, longitudinal p_permutation < 0.001). Interestingly, MR analyses also indicated that alcohol consumption might alleviate depression (r_mean = −0.022, longitudinal p_permutation = 0.043), particularly in females at FU3, an effect largely mediated by anxiety status and the personality trait neuroticism. These findings were validated in an independent matched sample (N = 562) from the Human Connectome Project.
Conclusions
Depression may predict future alcohol use problems, whereas moderate alcohol consumption might alleviate depressive symptoms, especially in females.
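The core of a two-wave cross-lagged analysis like the one reported above can be sketched with a pair of regressions: each standardized follow-up variable is regressed on both baseline variables, so the coefficient of baseline depression on later alcohol problems (and vice versa) gives the cross-lagged path. The sketch below uses synthetic data, not the study's cohort; full cross-lagged panel models across all four waves would normally be fitted in a structural equation modelling framework.

```python
# Two-wave cross-lagged sketch on synthetic, standardized data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
dep_bl = rng.normal(size=n)
alc_bl = 0.1 * dep_bl + rng.normal(size=n)                  # modest baseline correlation
dep_fu = 0.5 * dep_bl + rng.normal(size=n)                  # stability path
alc_fu = 0.5 * alc_bl + 0.06 * dep_bl + rng.normal(size=n)  # cross-lagged path

df = pd.DataFrame({"dep_bl": dep_bl, "alc_bl": alc_bl,
                   "dep_fu": dep_fu, "alc_fu": alc_fu}).apply(
    lambda s: (s - s.mean()) / s.std())                     # standardize all variables

# Cross-lagged paths: depression -> later alcohol problems, and the reverse
m_alc = smf.ols("alc_fu ~ alc_bl + dep_bl", data=df).fit()
m_dep = smf.ols("dep_fu ~ dep_bl + alc_bl", data=df).fit()
print("depression -> alcohol:", m_alc.params["dep_bl"])
print("alcohol -> depression:", m_dep.params["alc_bl"])
```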
In sociology, aesthetics have become an important lens for exploring the sensory dimensions of political and economic processes, with research on urban aesthetics contributing significantly to this field. However, much of this work focuses on how aesthetic forms serve the interests of political and economic elites, portraying aesthetic value as a direct product of political ideologies. While these approaches have shown that urban aesthetics are shaped by power struggles, they pay limited theoretical attention to less straightforward aspects of aesthetic politics—such as cases where clashing values, imperatives, and commitments meet. This gap is particularly pronounced in places shaped by violent histories, where the value of urban beauty might be inevitably entangled with loss, ambivalence, and co-existence with unwanted materialities. This article proposes an approach that foregrounds the dilemmas and compromises inherent in urban aesthetic politics, focusing on the varied practices through which people negotiate how to care for urban aesthetic value over time. I develop this approach through a case study of Klaipėda, Lithuania—a city shaped by layered aesthetic transformations, from state annexation to socialist modernisation to post-Soviet nation-building and Europeanisation. Using mixed-methods research, the article highlights differences in how people articulate what counts as good and bad aesthetics and which forms of material care—or neglect—are “appropriate” to sustain the city’s desirable aesthetic appeal. In doing so, the article reveals complex gradations of value underlying seemingly coherent aesthetic ideals of Europeanness.
In a resort-to-force setting, what standard of care must a state follow when using AI to avoid international responsibility for a wrongful act? This article develops three scenarios based around a state-owned autonomous system that erroneously resorts to force (the Flawed AI System, the Poisoned AI System, and the Competitive AI System). It reveals that although we know what the substantive jus ad bellum and international humanitarian law rules are, international law says very little about the standards of care to which a state must adhere to meet its substantive obligations under those bodies of law. The article argues that the baseline standard of care under the jus ad bellum today requires a state to act in good faith and in an objectively reasonable way, and it describes measures states should consider taking to meet that standard when deploying AI or autonomy in their resort-to-force systems. It concludes by explaining how clarifying this standard of care will benefit states by reducing the chance of unintended conflicts.
Studies conducted during the COVID-19 pandemic highlighted that confinement reduced access to services and increased caregivers’ responsibilities and isolation.
Objectives
This study examines the longer-term impacts among 83 unpaid caregivers of older adults from four Canadian provinces.
Methods
Participants completed an online questionnaire between October 2021 and February 2022, and again 6 months later, on the assistance provided, support received, language of services, and psychological well-being. Additionally, eight caregivers participated in a qualitative interview.
Findings
Most home support services were maintained during the pandemic – some with restricted staffing – except for respite and transportation services. Caregivers increased their assistance during the lockdowns, and this higher involvement persisted in 2022. They perceived a negative impact of the pandemic on their health and that of the care recipient. Participants from official language minority communities described additional challenges accessing services in their preferred language.
Discussion
Greater recognition of caregivers’ needs will help support their role as partners within health organizations.