In three empirical studies, we compare one syntactic and one semantic approach to agreement preferences in so-called pancake constructions (pcs) in Swedish, as in Senap är starkt ‘Mustard is strong’. pcs are either substance-denoting, naming an inherent property of the subject, or situation-denoting, naming a property of the subject that is linked to some event. These two types were found to differ in predicative agreement patterns when their subjects were modified (e.g. Skånsk senap är … ‘Scanian mustard is’). The studies also indicate that the presence of a modal verb can affect agreement patterns differently in the two types: substance-denoting pcs were affected by modification and modality to a much larger extent than situation-denoting ones. We conclude that the two approaches can explain some patterns, but leave others unexplained, and the results lend partial support to analyses that make a syntactic difference between the two types of pcs.
Creative thinking is a crucial step in the design ideation process, where analogical reasoning plays a vital role in expanding the design concept space. The emergence of Generative AI has brought a significant revolution in co-creative systems, with a growing number of studies on Design-by-Analogy support tools. However, there is a lack of studies investigating the creative performance of Large Language Model (LLM)-generated analogical content and benchmarking of language models in creative tasks such as design ideation. Through this study, we aim to (i) investigate the effect of creativity heuristics by leveraging LLMs to generate analogical stimuli for novice designers in ideation tasks and (ii) evaluate and benchmark language models across analogical creative tasks. We developed a support tool based on the proposed conceptual framework and validated it by conducting controlled ideation experiments with 24 undergraduate design students. Groups assisted with the support tool generated higher-rated ideas, thus validating the proposed framework and the effectiveness of analogical reasoning for augmenting creative output with LLMs. Benchmarking of the models revealed significant differences in the creative performance of analogies across various language models, suggesting that future studies should focus on evaluating language models across creative, subjective tasks.
Obesity and overweight in pregnant women increase pregnancy and neonatal morbidity, with a risk of metabolic syndrome for children in later life. Maternal preconceptional bariatric surgery improves maternal and paediatric outcomes but may induce fetal nutritional deficiencies and intrauterine growth restriction through placental reprogramming. The aim of this study was to describe feto-placental unit modifications induced by obesity, and the effect of bariatric surgery performed before gestation, in a diet-induced obese rat model. One month after surgery, rats in the ‘control’, ‘obese’ and ‘bariatric surgery’ groups were mated and then sacrificed at D19 of gestation. Clinical description, immunohistochemistry and molecular analyses were performed on feto-placental units. Obesity induces placental modifications including lipid accumulation, increased inflammation and oxidative stress. Some of these modifications are partially restored by maternal preconceptional bariatric surgery. On the other hand, a reduction in the expression of markers of glucose transport, insulin function and amino acid transport was observed after bariatric surgery. This phenotype may lead to fetal caloric restriction, adoption of a ‘thrifty phenotype’ and subsequently fetal growth restriction. These preliminary findings highlight the importance of close follow-up of women who have undergone bariatric surgery and of their children.
Our study aimed to explore risk factors for medium–giant coronary artery aneurysms in children with Kawasaki disease.
Methods:
A total of 6,540 eligible children with Kawasaki disease diagnosed in Wuhan Children’s Hospital from January 2011 to December 2023 were retrospectively analysed. Clinical and laboratory data were compared between the medium–giant and non–medium–giant groups.
Results:
A total of 6,540 patients with Kawasaki disease were included, of whom 162 (2.5%) developed medium–giant coronary artery aneurysms, including 56 (0.9%) with giant aneurysms. Univariate analysis showed statistically significant differences between the two groups in 22 variables (P < 0.05). Least absolute shrinkage and selection operator (LASSO) regression analysis revealed that intravenous immunoglobulin resistance, haemoglobin, platelet count, and albumin were the most significant risk factors for medium–giant coronary artery aneurysms. Binary logistic regression analysis showed that intravenous immunoglobulin resistance (OR = 6.474, 95% CI = 4.399–9.528, P < 0.001), platelet count elevation (OR = 1.003, 95% CI = 1.002–1.004, P < 0.001), and albumin reduction (OR = 0.912, 95% CI = 0.879–0.946, P < 0.001) were independent risk factors for medium–giant coronary artery aneurysms. The area under the curve of the regression model was 0.75, with a sensitivity of 62.3% and a specificity of 79.2%.
Conclusions:
Intravenous immunoglobulin resistance, platelet count elevation, and albumin reduction may be significant predictors of medium–giant coronary artery aneurysms and can serve as a reference for their early diagnosis.
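The abstract above reports effect sizes as odds ratios (OR) with confidence intervals, and model discrimination as an area under the curve (AUC). As a minimal, self-contained illustration of how these quantities are computed (not the study's code; the coefficient and the toy risk scores below are hypothetical), a logistic-regression coefficient b converts to an odds ratio via exp(b), and AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one:

```python
import math

def odds_ratio(coef):
    # A logistic-regression coefficient b maps to an odds ratio via exp(b):
    # each one-unit increase in the predictor multiplies the odds by exp(b).
    return math.exp(coef)

def auc(scores_pos, scores_neg):
    # Rank-based AUC: fraction of (positive, negative) pairs where the
    # positive case scores higher, counting ties as 0.5.
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical coefficient chosen so exp(b) reproduces the reported OR of 6.474
b_ivig = math.log(6.474)
print(round(odds_ratio(b_ivig), 3))  # 6.474

# Toy predicted risk scores for illustration only (not study data)
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ≈ 0.889
```

The same rank-comparison definition of AUC underlies the reported 0.75 figure: a model with AUC 0.75 ranks a true aneurysm case above a non-case 75% of the time.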
This article critically examines the integration of artificial intelligence (AI) into nuclear decision-making processes and its implications for deterrence strategies in the Third Nuclear Age. While realist deterrence logic assumes that the threat of mutual destruction compels rational actors to act cautiously, AI disrupts this by adding speed, opacity and algorithmic biases to decision-making processes. The article focuses on the case of Russia to explore how different understandings of deterrence among nuclear powers could increase the risk of misperceptions and inadvertent escalation in an AI-influenced strategic environment. I argue that AI does not operate in a conceptual vacuum: the effects of its integration depend on the strategic assumptions guiding its use. As such, divergent interpretations of deterrence may render AI-supported decision making more unpredictable, particularly in high-stakes nuclear contexts. I also consider how these risks intersect with broader arms race dynamics. Specifically, the pursuit of AI-enabled capabilities by global powers is not only accelerating military modernisation but also intensifying the security dilemma, as each side fears falling behind. In light of these challenges, this article calls for greater attention to conceptual divergence in deterrence thinking, alongside transparency protocols and confidence-building measures aimed at mitigating misunderstandings and promoting stability in an increasingly automated military landscape.
The second Trump administration has shaken the foundations of US leadership in global health, with this column assessing rapid shifts in global health governance. By analyzing how the administration’s anti-science ethos, foreign assistance cuts, and multilateral disengagement have undermined global solidarity, the column considers the destabilizing impacts on global health and examines how other states, regional bodies, and international organizations are responding to this US decline. This examination reveals both strains for global health promotion and resilience within a changed governance landscape.
Menopause is a natural physiological process, but its effects on the brain remain poorly understood. In England, approximately 15% of women use hormone-replacement therapy (HRT) to manage menopausal symptoms. However, the psychological benefits of HRT are not well established. This study aims to investigate the impact of menopause and HRT on mental health, cognitive function, and brain structure.
Methods
We analyzed data from nearly 125,000 participants in the UK Biobank to assess associations between menopause, HRT use, and outcomes related to mental health, cognition, and brain morphology. Specifically, we focused on gray matter volumes in the medial temporal lobe (MTL) and anterior cingulate cortex (ACC).
Results
Menopause was associated with increased levels of anxiety, depression, and sleep difficulties. Women using HRT reported greater mental health challenges than post-menopausal women not using HRT. Post-hoc analyses revealed that women prescribed HRT had higher levels of pre-existing mental health symptoms. In terms of brain structure, MTL and ACC volumes were smaller in post-menopausal women compared to pre-menopausal women, with the lowest volumes observed in the HRT group.
Conclusions
Our findings suggest that menopause is linked to adverse mental health outcomes and to reductions in gray matter volume in key brain regions. The use of HRT does not appear to mitigate these effects and may be associated with more pronounced mental health challenges, potentially due to underlying baseline differences. These results have important implications for understanding the neurobiological effects of HRT and highlight the unmet need to address mental health problems during menopause.
Military decision-making institutions face new challenges and opportunities from increasing artificial intelligence (AI) integration. Military AI adoption is incentivized by competitive pressures and expanding national security needs; thus, we can expect increased complexity due to AI proliferation. Governing this complexity is urgent but lacks clear precedents. This discussion critically re-examines key concerns that AI integration into resort-to-force decision-making organizations introduces. Besides these concerns, this article draws attention to new, positive affordances that AI proliferation may introduce. I then propose a minimal AI governance standard framework, adapting private sector insights to the defence context. I argue that adopting AI governance standards (e.g., based on this framework) can foster an organizational culture of accountability, combining technical know-how with the cultivated judgment needed to navigate contested governance concepts. Finally, I hypothesize some strategic implications of the adoption of AI governance programmes by military institutions.
Integrating AI into military decision processes on the resort to force raises new moral challenges. A key question is: How can we assign responsibility in cases where AI systems shape the decision-making process on the resort to force? AI systems do not qualify as moral agents, and due to their opaqueness and the “problem of many hands,” responsibility for decisions made by a machine cannot be attributed to any one individual. To address this socio-technical responsibility gap, I suggest establishing “proxy responsibility” relations. Proxy responsibility means that an actor takes responsibility for the decisions made by another actor or synthetic agent who cannot be attributed with responsibility for their decisions. This article discusses the option to integrate an AI oversight body to establish proxy responsibility relations within decision-making processes regarding the resort to force. I argue that integrating an AI oversight body creates the preconditions necessary for attributing proxy responsibility to individuals.
The integration of AI systems into the military domain is changing the way war-related decisions are made. It binds together three disparate groups of actors – developers, integrators, and users – and creates a relationship between these groups and the machine, embedded in the (pre-)existing organisational and system structures. In this article, we focus on the important, but often neglected, group of integrators within such a socio-technical system. In complex human–machine configurations, integrators carry responsibility for linking the disparate groups of developers and users in the political and military system. To act as the mediating group requires a deep understanding of the other groups’ activities, perspectives and norms. We thus ask which challenges and shortcomings emerge from integrating AI systems into resort-to-force (RtF) decision-making processes, and how to address them. To answer this, we proceed in three steps. First, we conceptualise the relationship between different groups of actors and AI systems as a socio-technical system. Second, we identify challenges within such systems for human–machine teaming in RtF decisions. We focus on challenges that arise (a) from the technology itself, (b) from the integrators’ role in the socio-technical system and (c) from the human–machine interaction. Third, we provide policy recommendations to address these shortcomings when integrating AI systems into RtF decision-making structures.
Paleolake coring initiatives result in large datasets from various proxies taken at different resolutions, ranging from continuous scans to samples collected at coarser intervals. Higher-resolution data (e.g., core-scan X-ray fluorescence [XRF]) can detect short-duration changes in the paleolake and help identify unit boundaries with precision; however, interpreting the causes of such changes may require sampling and more intensive laboratory analysis like X-ray diffraction (XRD). This study applies a published wide and deep learning model, developed for the Olduvai Gorge Coring Project (OGCP) 2014 cores from the Pleistocene Olduvai basin, Tanzania, to reconstruct the mineral assemblages from saline-alkaline paleolake Olduvai using core-scan XRF data and core lithology. A classification model (predicting mineral presence or absence) and a regression model (predicting relative abundances of minerals) yielded predictions for two OGCP cores (2A and 3A), which were compared with published XRD mineral data and detailed core sedimentological descriptions. The models were excellent at identifying dolomite-rich layers, carbonate-rich intervals, intervals of sandstone within claystone, and altered tuffs within claystone and at predicting whether illitic or smectitic clays dominate. The models struggled with less-altered tuffs and with zeolites in non-tuff sediments, especially when XRD identified chabazite and erionite (rather than phillipsite) as the dominant, non-analcime zeolite.
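The abstract above applies a wide and deep learning model to predict mineral presence (classification) and relative abundance (regression) from core-scan XRF data. As a generic sketch of that architecture only (not the published OGCP model; all weights and feature names here are hypothetical), a "wide" linear path memorizes direct feature–label associations while a "deep" MLP path generalizes over dense inputs, and the two are summed before the output head:

```python
import numpy as np

def wide_and_deep(x_wide, x_deep, w_wide, W1, b1, W2):
    """Forward pass of a generic wide & deep network.

    Sigmoid head suits presence/absence classification; dropping the
    sigmoid gives a regression head for relative abundances.
    """
    wide = x_wide @ w_wide                  # linear "wide" memorization path
    h = np.maximum(x_deep @ W1 + b1, 0.0)   # one hidden ReLU layer ("deep" path)
    deep = h @ W2                           # deep path collapsed to a scalar
    logit = wide + deep                     # paths are summed before the head
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> P(mineral present)

rng = np.random.default_rng(0)
x_wide = rng.normal(size=4)   # e.g. encoded core lithology (hypothetical)
x_deep = rng.normal(size=6)   # e.g. scaled XRF element intensities (hypothetical)
p = wide_and_deep(x_wide, x_deep,
                  rng.normal(size=4),
                  rng.normal(size=(6, 8)), np.zeros(8),
                  rng.normal(size=8))
print(0.0 < p < 1.0)  # output is a valid probability
```

In this setup the wide path can latch onto strong lithology–mineral pairings (e.g. dolomite-rich layers), while the deep path captures subtler combinations of element intensities, consistent with the mixed performance across mineral classes described above.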
What shapes military attitudes of trust in artificial intelligence (AI) used for strategic-level decision-making? When used in concert with humans, AI is thought to help militaries maintain lethal overmatch of adversaries on the battlefield as well as optimize leaders’ decision-making in the war room. Yet it is unclear what shapes servicemembers’ trust in AI used for strategic-level decision-making. In October 2023, I administered a conjoint survey experiment among an elite sample of officers attending the US Army and Naval War Colleges to assess what shapes servicemembers’ trust in AI used for strategic-level deliberations. I find that their trust in AI used for strategic-level deliberations is shaped by a tightly calibrated set of technical, operational, and oversight considerations. These results provide the first experimental evidence for military attitudes of trust toward AI during crisis escalation, which have important research, policy, and modernization implications.
This article considers the responses of the Indian Workers’ Association (Great Britain) (IWA) to food scarcities in India during the late 1960s. It reveals that Maoist optics informed IWA critiques, departing from coexistent appraisals articulated in leftist circles in India. In doing so, the article demonstrates the relevance of worldviews, idioms, and paradigms emanating from global conjunctures beyond places of origin among diaspora. IWA luminaries were embedded in revolutionary anti-colonial networks shaped by decolonization and the global Cold War, and bestowed substance upon Maoism in these contexts. Ultimately, this informed IWA perceptions of causes and solutions to the food ‘crisis’: in their characterizations of reliance on external aid as indicative of post-1947 India’s semi-colonial status; in portrayals of Soviet ‘social imperialism’ in India during the Sino-Soviet split; or in demands for radical land reform based on a selective rendering of the Chinese model, which downplayed the consequences of the ‘Great Leap Forward’.
In this article, I consider the potential integration of artificial intelligence (AI) into resort-to-force decision-making from a Just War perspective. I evaluate two principles from this tradition: (1) the jus ad bellum principle of “reasonable prospect of success” and (2) the more recent jus ad vim principle of “the probability of escalation.” More than any other principles of Just War, these prudential standards seem amenable to the probabilistic reasoning of AI-driven systems. I argue, however, that this optimism in the potential of AI-optimized decision-making is largely misplaced. We need to cultivate a tragic sensibility in war – a recognition of the inescapable limits of foresight, the permanence of uncertainty and the dangers of unconstrained ambition. False confidence in the efficacy of these systems will blind us to their technical limits. It will also, more seriously, obscure the deleterious impact of AI on the process of resort-to-force decision-making; its potential to suffocate the moral and political wisdom so essential to the responsible exercise of violence on the international stage.
This article investigates the profound impact of artificial intelligence (AI) and big data on political and military deliberations concerning the decision to wage war. By conceptualising AI as part of a broader, interconnected technology ecosystem – encompassing data, connectivity, energy, compute capacity and workforce – the article introduces the notion of “architectures of AI” to describe the underlying infrastructure shaping contemporary security and sovereignty. It demonstrates how these architectures concentrate power within a select number of technology companies, which increasingly function as national security actors capable of influencing state decisions on the resort to force. The article identifies three critical factors that collectively alter the calculus of war: (i) the concentration of power across the architectures of AI, (ii) the diffusion of national security decision making, and (iii) the role of AI in shaping public opinion. It argues that, as technology companies amass unprecedented control over digital infrastructure and information flows, most nation states – particularly smaller or less technologically advanced ones – experience diminished autonomy in decisions to use force. The article specifically examines how technology companies can coerce, influence or incentivise the resort-to-force decision making of smaller states, thereby challenging traditional notions of state sovereignty and international security.