Assuming locally quasi-steady behaviour, Duran & Moreau (2013, J. Fluid Mech. 723, 190–231) postulated that, at a critical nozzle throat, the fluctuations of the Mach number vanish for linear perturbations of a quasi-one-dimensional isentropic flow. This appears to be valid only in the quasi-steady-flow limit. Based on the analytical model of Marble & Candel (1977, J. Sound Vib. 55, 225–243), an alternative boundary condition is obtained, which is valid for nozzle geometries with a finite limit of the second spatial derivative of the cross-section on the subsonic side of the throat. When the nozzle geometry does not satisfy this condition, the application of a quasi-one-dimensional theory becomes questionable. The consequences of this for the quasi-one-dimensional modelling of the acoustic response of choked nozzles are discussed for three specific nozzle geometries. Surprisingly, the relative error in the inlet nozzle admittance and acoustic wave transmission coefficient remains below one per cent when the quasi-steady boundary condition is used at the throat. However, the prediction of the acoustic fluctuations assuming quasi-steady critical-throat behaviour is incorrect, because the predicted acoustic field is singular at the throat.
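For readers unfamiliar with the quasi-one-dimensional isentropic framework referred to above, the sketch below evaluates the standard area–Mach relation A/A* for a calorically perfect gas (γ = 1.4 assumed). It only illustrates the critical-throat condition of the underlying steady flow, not the authors' acoustic boundary condition.

```python
import numpy as np

def area_ratio(M, gamma=1.4):
    """Quasi-one-dimensional isentropic area-Mach relation A/A*."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M**2)
    return (1.0 / M) * term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

# The area ratio reaches its minimum (A/A* = 1) at the sonic throat, M = 1:
for M in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(f"M = {M:.1f}  A/A* = {area_ratio(M):.4f}")
```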
Lake-terminating glaciers retreat and thin faster than land-terminating glaciers, yet their long-term dynamics remain underexplored. Using multi-source remote sensing data combined with glacier velocity and elevation change datasets, we investigated their distribution and evolution in the Himalaya and Southeastern Tibet from 1990 to 2020. By 2020, 577 lake-terminating glaciers (2561.5 ± 11.8 km²) had been identified, representing ∼2% of all glaciers by number and ∼10% by area. Of these, 246 glaciers maintained contact with proglacial lakes (Type 1 change), while 331 developed new lakes (Type 2 change). Additionally, 173 glaciers detached from lakes (Type 3 change). Variations in glacier–lake contact strongly modulate glacier dynamics. Type 1 change glaciers experienced the largest area loss (73.8 ± 13.1 km²), whereas Type 2 change glaciers showed the greatest average retreat distance (1.06 ± 0.05 km). Among Type 1 change glaciers (>5 km²) with significant velocity trends, 22% accelerated and 78% decelerated, while all Type 3 change glaciers with significant velocity trends consistently decelerated. These findings underscore the pivotal influence of proglacial lake evolution on glacier dynamics, advancing our understanding of glacier–lake interactions on the Tibetan Plateau and beyond.
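As a minimal illustration of the Type 1/2/3 change classification described above, the sketch below assigns labels from hypothetical lake-contact flags at two epochs; the identifiers and data are invented for illustration and are not drawn from the study's inventory.

```python
# Hypothetical records: (glacier_id, lake_contact_1990, lake_contact_2020)
glaciers = [
    ("G001", True,  True),   # maintained contact      -> Type 1
    ("G002", False, True),   # developed a new lake    -> Type 2
    ("G003", True,  False),  # detached from its lake  -> Type 3
]

def change_type(contact_1990: bool, contact_2020: bool) -> str:
    """Classify glacier-lake contact change between the two epochs."""
    if contact_1990 and contact_2020:
        return "Type 1 (maintained lake contact)"
    if not contact_1990 and contact_2020:
        return "Type 2 (new proglacial lake developed)"
    if contact_1990 and not contact_2020:
        return "Type 3 (detached from proglacial lake)"
    return "land-terminating throughout"

for gid, c90, c20 in glaciers:
    print(gid, change_type(c90, c20))
```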
Sink flow boundary layers on smooth and rough walls were studied experimentally. In all cases a turbulent, zero-pressure-gradient boundary layer was subjected to acceleration with K = 3.2 × 10⁻⁶, which suppressed the turbulence in the outer region and produced conditions similar to those in turbulent sink flow cases with lower K. In the smooth-wall case, once the momentum-thickness Reynolds number had dropped to about 600, the near-wall turbulence also decayed, resulting in relaminarisation. In the rough-wall cases, the near-wall turbulence was sustained in spite of the strong favourable pressure gradient, and relaminarisation did not occur. A temporary equilibrium appears to occur that is similar to that seen with lower K, in spite of the ratio of the boundary-layer thickness to the roughness height dropping to less than 5. Mean velocity and Reynolds stress profiles, quadrant analysis and turbulence spectra are used to show the development of the boundary layer in response to the pressure gradient and the differences between the rough- and smooth-wall cases. This is believed to be the first study to consider the spatial evolution of constant-K rough-wall boundary layers with K large enough to cause relaminarisation in the smooth-wall case.
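The acceleration parameter quoted above is conventionally defined as K = (ν/U²) dU/dx. The sketch below evaluates it for illustrative air-flow values (not the experimental conditions) to show the streamwise velocity gradient implied by K = 3.2 × 10⁻⁶.

```python
def acceleration_parameter(nu, U, dUdx):
    """Pressure-gradient acceleration parameter K = (nu / U**2) * dU/dx."""
    return nu * dUdx / U**2

# Illustrative values (not from the experiment): air at room temperature and
# a free-stream speed of 10 m/s; solve for the gradient giving K = 3.2e-6.
nu = 1.5e-5            # kinematic viscosity of air, m^2/s
U = 10.0               # free-stream velocity, m/s
dUdx_required = 3.2e-6 * U**2 / nu
print(f"dU/dx needed for K = 3.2e-6: {dUdx_required:.1f} 1/s")
print(f"check: K = {acceleration_parameter(nu, U, dUdx_required):.2e}")
```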
Cell death is a defense strategy employed by host cells to combat viral invasion. Viruses can manipulate the host cell death process to facilitate their own dissemination or evade immune surveillance. Ferroptosis, characterized by excessive iron accumulation and lipid peroxidation, is one crucial form of such cell death. Although ferroptosis is primarily associated with tissue/organ damage and tumorigenesis, accumulating evidence suggests that ferroptosis is closely linked to viral infections and their pathogenic mechanisms. This article systematically reviews the metabolic processes associated with ferroptosis, mainly including amino acid metabolism, iron metabolism, lipid peroxidation, and mitochondrial metabolism. It also discusses in detail the interaction between viral infections and ferroptosis, highlighting how viruses exploit the mechanisms of ferroptosis for their own infection and replication. Additionally, the impact of nutritional regulation of ferroptosis on the progression of viral infections is explored. Therefore, understanding the interaction between cellular ferroptosis and viral infections not only provides valuable insights for developing effective antiviral therapeutic strategies but also offers references for the prevention and control of viral infections in animals.
Suicide rates are increasing rapidly among Black children and adolescents, calling for novel approaches to understanding their unique risk factors. The Structural Racism and Suicide Prevention Systems Framework offers a new culturally responsive theory that structural racism is an underlying mechanism for disparities in suicide among ethnoracially marginalized youth. Thus, a deeper analysis of the intersection of racism and systems is imperative to better understand suicide risk and create more effective targeted interventions for Black youth. The current systematic review comprehensively evaluated and synthesized the empirical literature regarding the relationship between structural racism and suicide risk among Black youth. Seventeen studies, identified through three database searches and published between 2013 and 2024, are presented. Results revealed a positive relationship between structural racism and suicidal thoughts and behaviors among Black youth. Systems that particularly facilitate the perpetration of racism toward Black youth include schools, criminal justice, and income inequality. Findings serve as a call to action to incorporate more socioecological models into suicide prevention research focused on Black youth. Understanding the depth and scope of how racism contributes to suicide risk provides key targets for prevention and intervention strategies that are specific to individuals belonging to this group at disparate risk for suicide.
This article investigates female voting behavior in the 2016 US presidential election through the lens of tall poppy syndrome, a theory suggesting that those in less prominent or celebrated roles sometimes seek to undermine individuals who pursue or attain extraordinary public success. Using data from the ANES, VOTER, and CCES surveys and controlling for alternative explanations, I find that women outside the workforce were more likely to vote against Hillary Clinton, indicating that their voting behavior may have been driven by tall poppy syndrome rather than solely by social conservatism. These findings highlight an underexplored factor in voting behavior, suggest widening avenues of partisan polarization, and point to the unique challenges that are faced by women who seek elected office.
Accurate prediction of nondispatchable renewable energy sources is essential for grid stability and price forecasting. Regional power supply forecasts are usually produced indirectly through a bottom-up aggregation of plant-level forecasts, incorporate lagged power values, and do not exploit the potential of spatially resolved data. This study presents a comprehensive methodology for predicting solar and wind power production at a country scale in France using machine learning models trained with spatially explicit weather data combined with spatial information about production sites’ capacity. A dataset is built spanning 2012 to 2023, using daily power production data from Réseau de Transport d’Electricité (the national grid operator) as the target variable, with daily weather data from ECMWF Re-Analysis v5, production site capacity and location, and electricity prices as input features. Three modeling approaches are explored to handle the spatially resolved weather data: spatial averaging over the country, dimension reduction through principal component analysis, and a computer vision architecture that exploits complex spatial relationships. The study benchmarks state-of-the-art machine learning models as well as hyperparameter tuning approaches based on cross-validation methods on daily power production data. Results indicate that cross-validation tailored to time series is best suited to reach low error. We found that neural networks tend to outperform traditional tree-based models, which face challenges in extrapolation due to the increasing renewable capacity over time. Model performance ranges from 4% to 10% in normalized root-mean-squared error for the midterm horizon, achieving error metrics similar to those of local models established at the single-plant level and highlighting the potential of these methods for regional power supply forecasting.
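As a minimal sketch of the time-series-aware cross-validation and normalized error metric mentioned above, the code below uses synthetic placeholder features and a generic gradient-boosting model; it is not the study's pipeline, and all data and model choices are assumptions.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Placeholder stand-ins for the real inputs: daily country-averaged weather
# features and daily national power production (both hypothetical here).
X = rng.normal(size=(4000, 8))
y = X[:, 0] * 3.0 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=4000)

tscv = TimeSeriesSplit(n_splits=5)        # folds respect temporal order
nrmse_scores = []
for train_idx, test_idx in tscv.split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmse = np.sqrt(np.mean((pred - y[test_idx]) ** 2))
    # Normalize by the range of the observed values in the test fold
    nrmse_scores.append(rmse / (y[test_idx].max() - y[test_idx].min()))

print("NRMSE per fold:", np.round(nrmse_scores, 3))
```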
Achieving a first pass recanalization (FPR) improves clinical outcomes in patients with basilar artery strokes, but its association with initial infarct burden is unknown. We aimed to study the benefits of FPR for basilar artery strokes by initial infarct burden using the Posterior Circulation Alberta Stroke Program Early CT score (pc-ASPECTS).
Methods:
We retrospectively analyzed the prospective multicentric Endovascular Treatment of Ischemic Stroke registry and included 194 patients diagnosed with an acute basilar artery occlusion who were treated with thrombectomy. Our primary outcome was a modified Rankin Scale (mRS) score of 0–3 at 90 days, and our secondary outcomes were an mRS of 4–6 and mortality. We compared the 90-day clinical outcomes of achieving an FPR versus multiple thrombectomy passes based on patients’ initial infarct size on pretreatment MRI: small (pc-ASPECTS = 9–10), medium (pc-ASPECTS = 6–8) and large (pc-ASPECTS < 6).
Results:
Patients with a medium or large infarct size had significantly better outcomes (mRS 0–3 at 3 months) if FPR was achieved than if multiple passes were required (RR = 1.61, 95% CI: 1.16–2.24; p-value = 0.005; and RR = 3.41, 95% CI: 1.54–7.57; p-value = 0.003, respectively). No similar difference was seen among patients with small infarcts. Achieving an FPR was also associated with a significantly lower mortality risk among patients with a medium infarct size (RR = 0.36, 95% CI: 0.17–0.79; p-value = 0.010) but not among those with small or large infarcts.
Conclusions:
Achieving an FPR significantly improves clinical outcomes in acute stroke patients with basilar artery occlusions undergoing thrombectomy when their infarcts are medium or large. Ongoing research to develop surgical techniques to achieve FPR is crucial to improving patients’ prognoses.
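For context on effect sizes like those reported in the results above, the sketch below shows how a relative risk and a Wald-type 95% confidence interval can be computed from a 2×2 outcome table; the counts used here are hypothetical and are not taken from the registry.

```python
import math

def relative_risk(events_exposed, n_exposed, events_control, n_control, z=1.96):
    """Relative risk with a Wald-type 95% CI computed on the log scale."""
    p1 = events_exposed / n_exposed
    p0 = events_control / n_control
    rr = p1 / p0
    se_log = math.sqrt(1 / events_exposed - 1 / n_exposed
                       + 1 / events_control - 1 / n_control)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, (lo, hi)

# Hypothetical counts: good outcome in 30/50 FPR patients versus
# 18/48 multiple-pass patients with medium infarcts.
rr, ci = relative_risk(30, 50, 18, 48)
print(f"RR = {rr:.2f}, 95% CI: {ci[0]:.2f}-{ci[1]:.2f}")
```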
Despite significant advances in Building Information Modeling (BIM) and increased adoption, numerous challenges remain. Discipline-specific BIM software tools with file storage have unresolved interoperability issues and do not capture or express interdisciplinary design intent. This hobbles machines’ ability to process design information. The lack of suitable data representation hinders the application of machine learning and other data-centric applications in building design. We propose Building Information Graphs (BIGs) as an alternative modeling method. In BIGs, discipline-specific design models are compiled as subgraphs in which nodes and edges model objects and their relationships. Additional nodes and edges in a meta-graph link the building objects across subgraphs. Capturing both intradisciplinary and interdisciplinary relationships, BIGs provide a dimension of contextual data for capturing design intent and constraints. BIGs are designed for computation and applications. The explicit relationships enable advanced graph functionalities, such as across-domain change propagation and object-level version control. BIGs preserve multimodal design data (geometry, attributes, and topology) in a graph structure that can be embedded into high-dimensional vectors, in which learning algorithms can detect statistical patterns and support a wide range of downstream tasks, such as link prediction and graph generation. In this position article, we highlight three key challenges: encapsulating and formalizing object relationships, particularly design intent and constraints; designing graph learning techniques; and developing innovative domain applications that leverage graph structures and learning. BIGs represent a paradigm shift in design technologies that bridge artificial intelligence and building design to enable intelligent and generative design tools for architects, engineers, and contractors.
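As a toy illustration of the proposed structure (not the authors' schema), the sketch below builds discipline-specific subgraphs plus a meta-graph edge in networkx and propagates a change across disciplines; all object names, attributes, and relations are invented.

```python
import networkx as nx

# A toy Building Information Graph: discipline-specific subgraphs whose nodes
# carry a "discipline" label, plus a meta-graph edge linking objects across
# disciplines. Every identifier and attribute here is hypothetical.
big = nx.MultiDiGraph()

# Architectural subgraph
big.add_node("arch:Wall-01", discipline="architecture", fire_rating="2h")
big.add_node("arch:Door-03", discipline="architecture")
big.add_edge("arch:Wall-01", "arch:Door-03", relation="hosts")

# Structural subgraph
big.add_node("struct:Column-07", discipline="structure", material="concrete")

# Meta-graph edge capturing an interdisciplinary constraint / design intent
big.add_edge("arch:Wall-01", "struct:Column-07",
             relation="must_clear", min_clearance_mm=50)

# Change propagation: find objects in other disciplines affected by a change
changed = "arch:Wall-01"
affected = [n for n in big.successors(changed)
            if big.nodes[n]["discipline"] != big.nodes[changed]["discipline"]]
print("Cross-discipline objects to re-check:", affected)
```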
During the post-harvesting process, coffee berries are dried and separated into green commercial beans and husks. The dynamics of dry matter (DM) accumulation in the berry components along the maturation process are important for defining the most adequate moment for harvest, which is genotype-dependent. The DM accumulation dynamics in the berries, beans, and husks of six Coffea canephora genotypes were studied during the fruit maturation process, with the aim of identifying the fruit harvesting stage at which the highest bean yield can be obtained. Berry samples were collected every two weeks at nine maturation stages starting from 33 weeks after flowering (green berry stage). Second-order polynomial regressions were used to analyse berry and bean DM accumulation over time, while temporal husk DM accumulation was compared using ANOVA and the Tukey test. DM accumulation was highest in the berries and beans following the initial sampling, while the highest husk DM accumulation occurred at the final stages of maturation. In general, the DM accumulation of all components increased as fruit maturation progressed, attaining the highest DM values in the final stages of red berries, although this occurred earlier in genotypes with early/medium and medium maturation cycles. The Beira Rio 8 genotype showed the highest DM accumulation in all components. The Bamburral and P1 genotypes showed the lowest berry fresh mass (FM) to bean DM ratios. The A1 genotype showed the greatest berry FM to bean DM ratio, together with the lowest DM, bean mass, and bean yield performance. Our data reveal that the selection of highly productive genotypes should consider not only the absolute berry and bean yield but also the bean DM dynamics when characterizing commercial coffee yield.
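A minimal sketch of the second-order polynomial regression approach described above is given below, using invented weekly dry-matter values rather than the study's measurements.

```python
import numpy as np

# Hypothetical sampling: weeks after flowering and berry dry matter (g/berry);
# the real study sampled nine stages every two weeks from week 33 onward.
weeks = np.array([33, 35, 37, 39, 41, 43, 45, 47, 49], dtype=float)
berry_dm = np.array([0.42, 0.55, 0.68, 0.80, 0.90, 0.99, 1.05, 1.10, 1.12])

# Second-order polynomial fit of DM accumulation over time
coeffs = np.polyfit(weeks, berry_dm, deg=2)
fit = np.poly1d(coeffs)

# Week at which the fitted (concave) curve peaks: vertex of the parabola
peak_week = -coeffs[1] / (2 * coeffs[0])
print(f"fit:\n{fit}")
print(f"fitted DM peaks at ~week {peak_week:.1f} after flowering")
```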
The study aimed to identify, develop and evaluate the effectiveness of innovative methods, technologies and approaches for the identification of deceased persons during armed conflicts, natural disasters and other emergencies, in order to improve the accuracy, efficiency and ethics of the identification process. For this purpose, innovative criminalistic and forensic medical methods of deceased identification were analysed, including the specifics of each method and its practical application. As a result, the study determined that the accuracy and speed of identification of the deceased are significantly improved by innovative identification methods such as DNA analysis, forensic anthropology, medical record comparison, big data and artificial intelligence. Their use is especially appropriate in situations where the condition of the bodies makes conventional methods, such as fingerprinting or visual recognition, ineffective. The main obstacles to the identification process are mass graves, the destruction of bodies and the lack of centralized databases of the deceased. Modern laboratory technologies, such as mass spectrometry and three-dimensional reconstruction, are needed to address issues related to the condition of the remains, such as decomposition, fragmentation or thermal damage. However, the lack of adequate logistical support remains a serious problem. Innovative approaches require adherence to legal and ethical standards, such as protecting personal information, respecting cultural and religious customs, and providing families with access to information about the deceased. The coordination of specialists’ efforts and the guarantee of the accuracy of the results largely depend on international standards, such as the INTERPOL disaster victim identification protocols. An important step in improving efficiency is the integration of these standards into national identification systems. Joint protocols and international databases ensure effective coordination between states.
How do cyber attacks aid attempts to generate influence? This article argues that cyber-enabled influence operations (CEIO) are more varied in form than is often recognised by scholars. We describe four kinds of CEIO activities – preparatory attacks, manipulative attacks, attacks in parallel, and influence-enabling – and observe that the type most often referenced by scholars (manipulative attacks) is the one whose utility is most substantially constrained by the clashing logics of cyber and influence operations. Our analysis suggests a clear theoretical basis for understanding cyber-influence interactions, namely that the style of cyber operational targeting is inversely tied to the scale of influence outcomes intended by an attacker. Tactics and the conditions that motivate them change as the scale of interference intended by the attacker varies over time, with tools and approaches that offer utility in one phase failing to do so in another as the environment transforms and interacts with attacker interests.
Diabetes is increasingly recognized as a serious, worldwide public health concern. In this paper, an extreme learning machine (ELM) based on time-domain pulses was introduced to achieve noninvasive glucose detection. To validate the method, time-domain signals from glucose solutions of different concentrations were measured and used as model inputs. Considering that the glucose levels of diabetic patients range from 30 to 500 mg/dL, the glucose solution concentration was set to 10–500 mg/dL, with an interval of 10 mg/dL. The received signals were used to train the ELM algorithm, which was able to accurately predict the concentrations of unknown solutions with an average relative error of 1.45%. The proposed method is fast, simple to operate, and highly accurate for noninvasive glucose detection. The results demonstrate that microwave detection technology combined with the ELM algorithm has the potential to become a valuable tool for noninvasive glucose monitoring in clinical settings.
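A minimal sketch of the ELM training step described above is given below: fixed random hidden-layer weights and an analytic least-squares solution for the output weights. The synthetic signals and targets are placeholders, so the 1.45% error reported in the study is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the time-domain pulse features: each row is a signal,
# the target is a glucose concentration in mg/dL (all values hypothetical).
n_samples, n_features, n_hidden = 200, 64, 100
X = rng.normal(size=(n_samples, n_features))
y = 10 + 490 * rng.random(n_samples)          # 10-500 mg/dL range

# Extreme learning machine: random input weights, analytic output weights
W = rng.normal(size=(n_features, n_hidden))   # fixed random hidden-layer weights
b = rng.normal(size=n_hidden)                 # fixed random biases
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta = np.linalg.pinv(H) @ y                  # least-squares output weights

# Predict and report the mean relative error on the training data
pred = H @ beta
rel_err = np.mean(np.abs(pred - y) / y)
print(f"mean relative error: {100 * rel_err:.2f}%")
```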
Words in infant-directed speech (IDS) are often phonetically reduced. This likely renders words harder for infants to learn and recognize. This difficulty might be mitigated by the repetitive nature of IDS, in particular if reduced instances are often preceded by clear instances (i.e., the first-mention effect). To characterize phonetic clarity in American English word repetitions, words were extracted from the IDS of eight mothers and presented to adults (n = 36) who judged their clarity. First mentions of repeated words were found to be clearer than second mentions, though this effect was small. Clarity was rated as greater for less common words and for utterance-final words. Clarity was also greater for words parents thought their child knew. The results help guide intuitions about the phonetic problem infants face when learning their first words.
In recent years, ontological security studies (OSS) have developed an impressive breadth of empirical applications and depth of theoretical advancements. However, despite increasing disciplinary diversity, methodological differences in OSS and the resulting implications have not yet been discussed. Drawing on Jackson’s taxonomy of scientific methodologies, this article outlines that OSS is characterized by considerable methodological diversity cutting across existing distinctions in the field. Greater focus on this diversity is important, as (tacit) underlying methodological assumptions have significant implications concerning the types of knowledge claims that can be advanced. Providing the first systematic discussion of methodological questions in OSS, this article outlines the contours of grounding OSS in neopositivist, critical realist, reflexivist, and analyticist methodologies and provides examples thereof. It then discusses the implications emerging from different methodologies in terms of (1) the production and evaluation of valid knowledge claims about ontological (in)security, (2) the perception of and dealing with ontological and epistemological challenges in the concept of ontological (in)security, and (3) the critical potential of OSS. While highlighting the potential of OSS grounded in analyticism, this article ultimately emphasizes the inherent value of methodological pluralism structured around a common vocabulary enabling meaningful conversations – both within OSS and with International Relations more broadly.
Amnon Rapoport made seminal contributions to research on investment decision-making and individual decision-making under risk. Building on this work, this paper explores the impact of social influence on risk-taking. First, to derive predictions for experimental testing, we modify a standard expected utility model by introducing a social norm variable. Using a standard 10-decision paired lottery choice task, we report the results of three experiments with different manipulations to test whether social influence information affects subjects’ own lottery choices. In Experiment 1, we find that participants are more likely to switch to choosing the risky option earlier if they are told that a large majority (>75%) of a large group (N = 100) of others have also chosen the risky option in the past. In Experiment 2, we find no effect when the social influence prompt is framed as a small group (N = 10) or as the choice of one (N = 1) successful lottery participant, but there is an effect when participants are provided information about the consistently risky choices of one (N = 1) person in the past. In Experiment 3, using an in-person subject pool, we find mixed effects on risk-taking when the social information is framed as a small group (N = 10) of peers (other students). Altogether, this paper demonstrates that social influence can affect risk-taking in line with a socially normed expected utility model.
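One simple way such a socially normed expected-utility model could be written is sketched below, using Holt–Laury-style payoffs, CRRA utility, and an additive norm bonus on the risky option. The functional form and all parameter values are assumptions for illustration, not the authors' specification.

```python
def crra(x, r=0.5):
    """CRRA utility; r is the coefficient of relative risk aversion."""
    return x ** (1 - r) / (1 - r)

def switch_decision(r=0.5, norm_share=0.0, lam=0.15):
    """First decision (1-10) at which the risky option B is chosen, with a
    simple additive social-norm bonus lam * norm_share applied to B."""
    for k in range(1, 11):
        p = k / 10                                            # high-payoff probability
        eu_a = p * crra(2.00, r) + (1 - p) * crra(1.60, r)    # safe lottery A
        eu_b = p * crra(3.85, r) + (1 - p) * crra(0.10, r)    # risky lottery B
        if eu_b + lam * norm_share >= eu_a:
            return k
    return None

print("switch point, no social information:", switch_decision(norm_share=0.0))
print("switch point, 75% of others chose risky:", switch_decision(norm_share=0.75))
```

With these assumed parameters the norm bonus moves the switch point one decision earlier, mirroring the qualitative effect reported in Experiment 1.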
Previous research has shown that second language (L2) learners can learn new words incidentally through contextual clues. However, little is known about whether derivational affixes can be learned in a similar manner. Addressing this gap, the current study examined whether English as a Foreign Language (EFL) learners could acquire knowledge of English derivational noun suffixes through contextualized reading. Forty Chinese EFL learners participated in the study, completing offline pre-tests and post-tests to assess improvements in three aspects of suffix knowledge. Participants’ eye movements during reading were also recorded to investigate the relationships between online processing of derived words and suffix learning. The offline test results showed that the learners made significant progress in three aspects of suffix knowledge. Multilevel logistic regression analyses further indicated that improvements in accuracy were significantly predicted by eye fixation measures, learners’ L2 proficiency, and language-level factors. Findings indicate that incidental learning while reading can effectively supplement intentional learning, particularly for English affixes that occur less frequently.
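As an illustrative, simplified version of the regression analysis described above, the sketch below fits an ordinary logistic regression on synthetic learner-by-item data; the study's multilevel models would additionally include random effects for participants and items, and every variable and value here is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic stand-in data: one row per learner-by-suffix observation.
n = 400
df = pd.DataFrame({
    "total_fixation_ms": rng.normal(600, 150, n),   # eye-fixation measure
    "proficiency":       rng.normal(0, 1, n),       # standardized L2 proficiency
    "suffix_freq":       rng.normal(0, 1, n),       # language-level factor
})
logit_p = (-0.5 + 0.002 * (df["total_fixation_ms"] - 600)
           + 0.8 * df["proficiency"] + 0.4 * df["suffix_freq"])
df["learned"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Ordinary logistic regression of post-test gains on the predictors
model = smf.logit("learned ~ total_fixation_ms + proficiency + suffix_freq",
                  data=df).fit(disp=False)
print(model.summary().tables[1])
```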
The food system, particularly animal agriculture, is a major contributor to environmental degradation, impacting critical Earth system processes such as climate change, freshwater use and biodiversity loss. There is a growing consensus that a shift from animal-based to plant-based diets is essential for both human health and environmental sustainability. This review explores the integration of sustainability competences into nutrition education, emphasising how systems thinking, strategic thinking, values thinking, futures thinking and interpersonal competences can contribute to the production of improved dietary guidelines. By applying these competences to the criticisms of the Planetary Health Diet, the Nordic Nutrition Recommendations and the Mediterranean diet as examples, this review highlights the tactics used by specific stakeholders to undermine sustainable healthy dietary guidelines. The review paper concludes by advocating for future dietary guidelines that are free of financial conflicts of interest, decolonised and developed through participatory processes in order to ensure that they are equitable, sustainable and aligned with the needs of diverse populations.