This research report presents the development and validation of Auto Error Analyzer, a prototype web application designed to automate the calculation of accuracy and its related metrics for measuring second language (L2) production. Building on recent advancements in natural language processing (NLP) and artificial intelligence (AI), Auto Error Analyzer introduces an automated accuracy measurement component, bridging a gap in existing assessment tools, which traditionally require human judgment for accuracy evaluation. By utilizing a state-of-the-art generative AI model (Llama 3.3) for error detection, Auto Error Analyzer analyzes L2 texts efficiently and cost-effectively, producing accuracy metrics (e.g., errors per 100 words). Validation results demonstrate high agreement between the tool’s error counts and human rater judgments (r = .94), with high microaverage precision and recall in error detection (.96 and .94, respectively; F1 = .95); its T-unit and clause counts also matched outputs from established tools such as L2SCA. Developed under open science principles to ensure transparency and replicability, the tool aims to support researchers and educators while emphasizing the complementary role of human expertise in language assessment. The possibilities of Auto Error Analyzer for efficient and scalable error analysis, as well as its limitations in detecting context-dependent and first-language (L1)-influenced errors, are also discussed.
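For readers unfamiliar with the metrics named above, the following minimal Python sketch (not part of Auto Error Analyzer; the counts are illustrative toy values) shows how errors per 100 words and micro-averaged precision, recall and F1 are computed from pooled error-detection counts.

```python
# Minimal sketch of the accuracy metrics mentioned above.
# The counts below are illustrative toy values, not data from the study.

def errors_per_100_words(error_count: int, word_count: int) -> float:
    """Normalised error rate: errors per 100 words of text."""
    return 100 * error_count / word_count

def micro_prf(true_positives: int, false_positives: int, false_negatives: int):
    """Micro-averaged precision, recall and F1 over pooled error decisions."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

if __name__ == "__main__":
    # A hypothetical 250-word learner text with 12 detected errors.
    print(f"errors/100 words: {errors_per_100_words(12, 250):.1f}")
    # Pooled counts across texts (toy numbers): 94 correct detections,
    # 4 spurious detections, 6 missed errors.
    p, r, f1 = micro_prf(94, 4, 6)
    print(f"precision={p:.2f}, recall={r:.2f}, F1={f1:.2f}")
```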
We investigate the dynamics, wake instabilities and regime transitions of inertial flow past a transversely rotating angular particle. We first study the transversely rotating cube with a four-fold rotational symmetry axis (RCF4), elucidating the mechanisms of vortex generation and the merging process on the cube surface during rotation. Our results identify novel vortex shedding structures and reveal that the rotation-enhanced merging of streamwise vortex pairs is the key mechanism driving vortex suppression. The flow inertia and particle rotation are demonstrated to be competing factors that influence wake instability. We further analyse the hydrodynamic forces on the rotating cube, with a focus on the Magnus effect, highlighting the influence of sharp edges on key parameters such as lift, drag, rotation coefficients and the shedding frequency. We note that the lift coefficient is independent of flow inertia at a specific rotation rate. We then examine more general angular particles with different numbers of rotational symmetry folds – RTF3 (three-fold tetrahedron), RCF3 (three-fold cube) and ROF4 (four-fold octahedron) – to explore how particle angularity and rotational symmetry affect wake stability, regime transitions and hydrodynamic forces. We show that the mechanisms of vortex generation and suppression observed in RCF4 apply effectively to other angular particles, with the number of rotational symmetry folds playing a crucial role in driving regime transitions. An increased rotational symmetry fold enhances vortex merging and suppression. Particle angularity has a pronounced influence on hydrodynamic forces, with increased angularity intensifying the Magnus effect. Furthermore, the number of effective faces is demonstrated to have a decisive impact on the shedding frequency of the wake structures. Based on the number of effective faces during rotation, we propose a generic model to predict the Strouhal number, applicable to all the angular particles studied. Our results demonstrate that the particle angularity and rotational symmetry can be effectively harnessed to stabilise the wake flow. These findings provide novel insights into the complex interactions between particle geometry, rotation and flow instability, advancing the understanding of the role sharp edges play in inertial flow past rotating angular particles.
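For orientation, the Strouhal number referenced in the shedding-frequency model above is conventionally defined from the shedding frequency, a characteristic particle length and the incoming flow speed (the paper's specific dependence on the number of effective faces is not reproduced here):

$$ \mathrm{St} = \frac{f\,D}{U_\infty}, $$

where $f$ is the vortex-shedding frequency, $D$ a characteristic length of the particle and $U_\infty$ the free-stream velocity.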
Unmanned surface vehicles (USVs) frequently encounter inadequate energy levels while navigating to their destinations, which complicates their successful berthing in intricate harbor environments. A bacterial foraging optimization (BFO) algorithm that takes energy consumption into account and incorporates multiple constraints (MC-BFO) is proposed. The energy consumption model is redefined for wind environments, enhancing the sensitivity of USVs to wind conditions. Additionally, a reward function is integrated into the algorithm, and the fitness function is reconstructed to improve the goal orientation of the USV. This approach enables the USV to maintain a reasonable path length while pursuing low energy consumption, resulting in more practical navigation. Constraining the USV’s sailing posture for smoother paths and restricting the USV’s heading and speed near the berthage facilitate safe berthing. Finally, three distinct experimental environments are established to compare the paths generated by MC-BFO, BFO, and a genetic algorithm under both downwind and upwind conditions, ensuring consistency in relevant parameters. Data on sailing posture, energy consumption, and path length are collected, generalized, and analyzed. The results indicate that MC-BFO effectively reduces energy consumption while maintaining an acceptable path length, resulting in smoother and more coherent paths compared to traditional segmented planning. In conclusion, this method significantly enhances the quality of the berthing path.
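As an illustration of how a reward term and multiple constraints can be folded into a BFO-style fitness function, the sketch below scores a candidate path by combining path length, a wind-dependent energy estimate and penalties for violating turning and berthing-speed limits. The weights, the toy energy model and the constraint thresholds are assumptions for illustration, not the paper's actual MC-BFO formulation.

```python
import math

# Hypothetical, energy-aware multi-constraint fitness in the spirit of MC-BFO;
# the wind model, weights and thresholds are illustrative assumptions.

def segment_energy(length, heading, speed, wind_speed, wind_dir,
                   k_drag=1.0, k_wind=0.5):
    """Toy energy model: hull drag plus an opposing-wind component."""
    headwind = wind_speed * math.cos(wind_dir - heading)  # > 0 means opposing wind
    return k_drag * length * speed**2 + k_wind * length * max(headwind, 0.0)

def fitness(path, speeds, wind_speed, wind_dir, berth_index,
            max_berth_speed=0.5, max_turn=math.radians(30),
            w_len=1.0, w_energy=2.0, w_penalty=50.0):
    """Lower is better: weighted length + energy + constraint penalties."""
    length = energy = penalty = 0.0
    headings = []
    for (x0, y0), (x1, y1), v in zip(path[:-1], path[1:], speeds):
        seg = math.hypot(x1 - x0, y1 - y0)
        hdg = math.atan2(y1 - y0, x1 - x0)
        headings.append(hdg)
        length += seg
        energy += segment_energy(seg, hdg, v, wind_speed, wind_dir)
    # Smoothness constraint: penalise sharp heading changes (sailing posture).
    for h0, h1 in zip(headings[:-1], headings[1:]):
        turn = abs(math.atan2(math.sin(h1 - h0), math.cos(h1 - h0)))
        if turn > max_turn:
            penalty += turn - max_turn
    # Berthing constraint: restrict speed on segments near the berth.
    for v in speeds[berth_index:]:
        if v > max_berth_speed:
            penalty += v - max_berth_speed
    return w_len * length + w_energy * energy + w_penalty * penalty

if __name__ == "__main__":
    path = [(0, 0), (10, 2), (18, 5), (20, 6)]
    speeds = [2.0, 1.5, 0.4]  # one speed per segment
    print(fitness(path, speeds, wind_speed=3.0, wind_dir=math.pi, berth_index=2))
```

In a BFO-style search, this fitness would be evaluated for each candidate path during chemotaxis, so the swarm is steered jointly by path length, energy and constraint satisfaction rather than by length alone.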
Older adults with treatment-resistant depression (TRD) benefit more from treatment augmentation than switching. It is useful to identify moderators that influence these treatment strategies for personalised medicine.
Aims
Our objective was to test whether age, executive dysfunction, comorbid medical burden, comorbid anxiety or the number of previous adequate antidepressant trials could moderate the superiority of augmentation over switching. A significant moderator would influence the differential effect of augmentation versus switching on treatment outcomes.
Method
We performed a preplanned moderation analysis of data from the Optimizing Outcomes of Treatment-Resistant Depression in Older Adults (OPTIMUM) randomised controlled trial (N = 742). Participants were 60 years old or older with TRD. Participants were either (a) randomised to antidepressant augmentation with aripiprazole (2.5–15 mg), bupropion (150–450 mg) or lithium (target serum drug level 0.6 mmol/L) or (b) switched to bupropion (150–450 mg) or nortriptyline (target serum drug level 80–120 ng/mL). Treatment duration was 10 weeks. The two main outcomes of this analysis were (a) symptom improvement, defined as change in Montgomery–Asberg Depression Rating Scale (MADRS) scores from baseline to week 10 and (b) remission, defined as MADRS score of 10 or less at week 10.
Results
Of the 742 participants, 480 were randomised to augmentation and 262 to switching. The number of adequate previous antidepressant trials was a significant moderator of depression symptom improvement (b = −1.6, t = −2.1, P = 0.033, 95% CI [−3.0, −0.1], where b is the coefficient of the relationship (i.e. effect size), and t is the t-statistic for that coefficient associated with the P-value). The effect was similar across all augmentation strategies. No other putative moderators were significant.
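A moderation analysis of this kind is typically implemented as a regression with a treatment-by-moderator interaction term, where the interaction coefficient corresponds to the moderation effect b reported above. The sketch below uses simulated data and hypothetical variable names purely to illustrate the model structure; it is not the OPTIMUM analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration of a treatment-by-moderator interaction model;
# the data and effect sizes are made up and carry no clinical meaning.
rng = np.random.default_rng(0)
n = 742
df = pd.DataFrame({
    "augment": rng.integers(0, 2, n),        # 1 = augmentation, 0 = switch
    "prior_trials": rng.integers(1, 6, n),   # previous adequate antidepressant trials
})
# MADRS change score: main effects plus an interaction, plus noise.
df["madrs_change"] = (
    -8 - 2.0 * df["augment"] + 0.5 * df["prior_trials"]
    + 1.6 * df["augment"] * df["prior_trials"] + rng.normal(0, 6, n)
)

# The coefficient on augment:prior_trials plays the role of the moderation
# effect (b) reported in the Results above.
model = smf.ols("madrs_change ~ augment * prior_trials", data=df).fit()
print(model.summary().tables[1])
```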
Conclusions
Augmenting was superior to switching antidepressants only in older patients with fewer than three previous antidepressant trials. This suggests that other intervention strategies should be considered following three or more trials.
Critical scholars and intellectuals are often viewed as vanguards of intellectual rigor, moral integrity, and left-leaning/left-liberal politics. In particular, their trajectories tend to be examined from a sympathetic lens: as supporters of lower-class social movements. Unfortunately, this approach overlooks the varied agency of these critical scholars and their complex relationship with the very movements that they often claim to represent. It obscures their potentially unequal socioeconomic status and cultural gap with the movements they engage with. This is not to dismiss their contribution or deny the reality of state repression against some of them, but a more grounded, sober approach to studying these cognitive workers is needed.
This study investigates the value-appropriating, politically moderating, status-seeking tendency in some parts of critical knowledge production and activism. It advances several claims. First, the increasing neoliberalisation of the research sector exacerbates the process of class differentiation among critical scholars and intellectuals. The majority join the swelling ranks of the precarious cognitariat, whereas a select stratum becomes part of the professional managerial class. Second, the latter stratum contains new intellectual actors who enjoy economic, cultural, and political benefits from their advantaged position at the expense of precarious scholar-activists and marginalised communities, as exemplified in their public celebrity status or appointment into policymaking decisions. Lastly, as an illustration, and a form of self-criticism, I interrogate my position as an early-career researcher of Indonesian politics, show my own role and complicity in the neoliberal research industrial complex, and reflect on possible ways out of this politico-intellectual impasse.
This article examines the practice of post-mortem examination in the Royal Navy during the French Revolutionary and Napoleonic Wars (1793–1815). The professional medical logbooks kept by ship’s surgeons as part of their mandated practice reveal that they turned to pathological anatomy to diagnose their patients – a technique typically associated with French anatomy during this period. I show that these post-mortem dissections blended medicine and surgery together by correlating clinical signs and symptoms of disease with pathological manifestations of disease in the bodies after death. This article also considers the medical culture that existed on these ships that enabled this research, specifically how captains, officers and crew responded to, and interpreted, such medical enquiry on board. By resituating the naval ship as a site of medical experimentation and enquiry, I explore how naval surgeons participated in medical research within the Royal Navy and used the ship space to engage in pathological anatomy before their British civilian counterparts flocked to French hospitals after the wars.
This study proposes a machine-learning-based subgrid scale (SGS) model for very coarse-grid large-eddy simulations (vLES). An issue with SGS modelling for vLES is that, because the energy-containing eddies are not accurately resolved by the computational grid, the resolved turbulence deviates from the physically accurate turbulence. This limits the use of supervised machine-learning models commonly trained using pairs of direct numerical simulation (DNS) and filtered DNS data. The proposed methodology utilises both unsupervised learning (cycle-consistency generative adversarial network (GAN)) and supervised learning (conditional GAN) to construct a machine-learning pipeline. The unsupervised learning part of the proposed method first transforms the non-physical vLES flow field to resemble a physically accurate flow field. The second supervised learning part employs super-resolution of turbulence to predict the SGS stresses. The proposed pipeline is trained using a fully developed turbulent channel flow at a friction Reynolds number of approximately 1000. The a priori validation shows that the proposed unsupervised–supervised pipeline successfully learns to predict the accurate SGS stresses, while a typical supervised-only model shows significant discrepancies. In the a posteriori test, the proposed unsupervised–supervised-pipeline SGS model for vLES using a progressively coarse grid yields good agreement of the mean velocity and Reynolds shear stress with the reference data at both the trained Reynolds number 1000 and the untrained higher Reynolds number 2000, showing robustness against varying Reynolds numbers. A budget analysis of the Reynolds stresses reveals that the proposed unsupervised–supervised-pipeline SGS model predicts a significant amount of SGS backscatter, which results in the strengthened near-wall Reynolds shear stress and the accurate prediction of mean velocity.
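To make the two-stage structure concrete, the PyTorch sketch below composes an unsupervised (cycle-GAN-trained) generator that corrects the vLES field with a supervised (conditional-GAN-trained) generator that predicts the SGS stresses. The layer choices, channel counts and field shape are illustrative assumptions rather than the authors' architecture, and training (cycle-consistency and adversarial losses) is omitted; only the inference path is shown.

```python
import torch
import torch.nn as nn

# Structural sketch of the two-stage inference path described above:
# (1) an unsupervised (cycle-GAN-trained) generator maps the under-resolved
#     vLES field towards a physically consistent field, and
# (2) a supervised (conditional-GAN-trained) generator predicts SGS stresses.
# Architecture, channel counts and field size are assumptions for illustration.

def small_cnn(in_ch: int, out_ch: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv3d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(32, out_ch, kernel_size=3, padding=1),
    )

g_cycle = small_cnn(3, 3)   # velocity (u, v, w) -> corrected velocity
g_sgs = small_cnn(3, 6)     # corrected velocity -> 6 independent SGS stress components

vles_field = torch.randn(1, 3, 16, 16, 16)   # toy coarse-grid velocity field
with torch.no_grad():
    corrected = g_cycle(vles_field)          # unsupervised correction stage
    tau_sgs = g_sgs(corrected)               # supervised SGS-stress prediction
print(tau_sgs.shape)                         # (1, 6, 16, 16, 16)
```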
Automatization is the learning process by which controlled, effortful second language (L2) processing becomes automatic, fast, and effortless through practice – a critical transition for L2 development. Achieving automaticity allows learners to progress from laborious language use to fluent, real-time communication by freeing limited cognitive resources. This research timeline synthesizes four decades of laboratory and classroom research on automatization, bridging cognitive learning theories with pedagogical practice. We trace five key research strands: (1) cognitive mechanisms, including the explicit-implicit knowledge interface; (2) skill development trajectories across the phonological, lexical, morphosyntactic, and pragmatic domains; (3) instructional approaches promoting automatization of knowledge and skills through deliberate and systematic practice; (4) methodological advances in measuring automaticity (e.g., reaction time, coefficient of variation, neural measures); and (5) individual differences in long-term memory systems (declarative and procedural memory). This timeline offers a comprehensive perspective on how automatization research has significantly advanced our understanding of L2 learning.
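Among the automaticity measures listed in strand (4), the coefficient of variation is simply the standard deviation of reaction times divided by their mean; a drop in CV alongside a drop in mean RT is commonly read as evidence of automatization rather than mere speed-up. The short Python illustration below uses toy reaction-time data, not values from any study cited in the timeline.

```python
import statistics

# Toy reaction times (ms) for the same task early and late in practice;
# the numbers are illustrative, not from any study cited in the timeline.
early_rt = [812, 930, 765, 1010, 884, 795, 948]
late_rt = [534, 561, 502, 588, 547, 519, 570]

def coefficient_of_variation(rts):
    """CV = standard deviation of reaction times / mean reaction time."""
    return statistics.stdev(rts) / statistics.mean(rts)

for label, rts in [("early", early_rt), ("late", late_rt)]:
    print(f"{label}: mean RT = {statistics.mean(rts):.0f} ms, "
          f"CV = {coefficient_of_variation(rts):.3f}")
```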
Ostrinia furnacalis Guenée (Lepidoptera: Crambidae) is a key lepidopteran pest affecting maize production across Asia. While its general biology has been well studied, the phenomenon of pupal ring formation remains poorly understood. This study examined the factors influencing pupal ring formation under controlled laboratory conditions. Results showed that pupal rings were formed exclusively when larvae were reared on an artificial diet, with no ring formation observed on corn-stalks. Females exhibited a significantly higher tendency to participate in ring formation than males. Additionally, male participation increased proportionally with the number of rings formed, a pattern not observed in females. The size of the rearing arena significantly influenced ring formation, with smaller arenas (6 cm diameter) promoting more frequent pairing, particularly among females. Temperature also played a significant role: lower participation rates were recorded at 22 °C compared to 25 °C and 28 °C, although the number of rings formed did not differ significantly across temperatures. Developmental stage and sex composition further influenced pairing behaviour; pupal rings formed only among individuals of similar maturity, and male participation was significantly reduced in all-male groups compared to mixed-sex groups. These findings suggest that pupal ring formation in O. furnacalis is modulated by dietary substrate, larval sex, environmental conditions, and developmental synchrony, offering new insights into the behavioural ecology of this pest.
Over the past few decades, numerous N-phase incompressible diffuse-interface flow models with non-matching densities have been proposed. Despite aiming to describe the same physics, these models are generally distinct, and an overarching modelling framework is absent. This paper provides a unified framework for N-phase incompressible Navier–Stokes Cahn–Hilliard Allen–Cahn mixture models with a single momentum equation. The framework emerges naturally from continuum mixture theory, exhibits an energy-dissipative structure, and is invariant to the choice of fundamental variables. This opens the door to exploring connections between existing N-phase models and facilitates the computation of N-phase flow models rooted in continuum mixture theory.
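For orientation only, the classical two-phase advective Cahn–Hilliard equation reads as follows; this is the standard textbook form, not the paper's unified N-phase Navier–Stokes Cahn–Hilliard Allen–Cahn framework:

$$ \partial_t \varphi + \mathbf{u}\cdot\nabla\varphi = \nabla\cdot\bigl(M\,\nabla\mu\bigr), \qquad \mu = \Psi'(\varphi) - \epsilon^{2}\,\Delta\varphi, $$

where $\varphi$ is the phase field, $\mathbf{u}$ the mixture velocity, $M$ a mobility, $\mu$ the chemical potential, $\Psi$ a double-well potential and $\epsilon$ the interface-thickness parameter; the framework discussed above generalises models of this type to $N$ phases coupled to a single momentum equation.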
From global tourism and free movement to refugees and climate-related displacement, human mobility is both a driver and an effect of what we think of as globalisation. Yet, the role of international law in constituting human mobility remains critically undervalued. In this contribution, we call for a radical rethinking of the role of international law in shaping our globe through the tenets of the mobilities paradigm in the social sciences. More specifically, we argue for the adoption of a mobile ontology of international law, which pits the constant flow of persons, goods and capital against dominant globalisation narratives predicting the end of place to take a focus on re-territorialisations of power. Taking human mobility as our starting point, we first show how mobility has been central to the foundation of key building blocks of international law. Second, we turn to the example of the global tourism regime to explore how law recursively disperses mobility around the world. Third and finally, we argue that the relationship between international law and human mobility is co-constitutive, as constant shifts in mobilities create unexpected effects, which in turn prompt further evolutions in law. We conclude by reflecting on the space for empirical and critical investigation that may open up by re-imagining (international) law as quintessentially mobile.
Although the role of computed tomography (CT) in vocal fold paralysis is well established, its utility in vocal fold motion impairment remains controversial. We aimed to examine the utility of CT in the aetiological assessment of patients with unexplained vocal fold motion impairment and to identify the underlying pathological causes.
Methods
We retrospectively reviewed the records of consecutive adults with vocal fold motion impairment who underwent neck CT between June 2010 and March 2023. The CT findings were correlated with management and final diagnoses.
Results
Computed tomography helped to identify the cause of vocal fold motion impairment in 119 of 177 patients (diagnostic yield, 67.23 per cent). The accuracy, sensitivity and specificity of CT in detecting the underlying causes of vocal fold motion impairment were 96.05, 99.17 and 89.47 per cent, respectively. The leading cause of vocal fold motion impairment was malignancy, followed by idiopathic disease.
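The reported accuracy, sensitivity and specificity follow the standard confusion-matrix definitions; the brief computation below uses toy counts (not the study's patient-level data) purely to show how the figures are derived.

```python
# Toy confusion-matrix counts for a diagnostic test; these numbers are
# illustrative only and are not the patient-level data from this study.
tp, fn = 90, 3    # underlying causes detected vs missed on CT
tn, fp = 40, 5    # correctly vs incorrectly negative CT examinations

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, "
      f"accuracy={accuracy:.2%}")
```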
Conclusion
Computed tomography is highly recommended in patients with unexplained vocal fold motion impairment because of its high accuracy and high diagnostic yield.
The interaction between the dynamics of a flame front and the acoustic field within a combustion chamber represents an aerothermochemical problem with the potential to generate hazardous instabilities, which limit burner performance by constraining design and operational parameters. The experimental configuration described here involves a laminar premixed flame burning in an open–closed slender tube, which can also be studied through simplified modelling. The constructive coupling of the chamber acoustic modes with the flame front can be disrupted via strategic placement of porous plugs, which serve to dissipate thermoacoustic instabilities. These plugs are lattice-based, 3-D-printed using low-force stereolithography, allowing for complex geometries and optimal material properties. A series of porous plugs was tested, with variations in their porous density and location, in order to assess the effects of these variables on viscous dissipation and acoustic eigenmode variation. Pressure transducers and high-speed cameras are used to measure oscillations of a stoichiometric methane–air flame ignited at the tube’s open end. The findings indicate that the porous medium is effective in dissipating both pressure amplitude and flame-front oscillations, contingent on the position of the plug. A theoretical fluid mechanics model is developed to calculate frequency shifts and energy dissipation as a function of plug properties and positioning. The theoretical predictions show a high degree of agreement with the experimental results, thereby indicating the potential of the model for the design of dissipators of this nature and highlighting the first-order interactions of acoustics, viscous flow in porous media and heat transfer processes.
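For reference, the acoustic eigenfrequencies of an idealised open–closed tube of length $L$ (neglecting the flame, the temperature gradient and the porous plug considered in the study) are the quarter-wave modes

$$ f_n = \frac{(2n-1)\,c}{4L}, \qquad n = 1, 2, \dots, $$

where $c$ is the speed of sound; the porous plugs shift and damp eigenmodes of this kind, which is what the theoretical model above quantifies as a function of plug properties and position.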
We consider stationary configurations of points in Euclidean space that are marked by positive random variables called scores. The scores are allowed to depend on the relative positions of other points and outside sources of randomness. Such models have been thoroughly studied in stochastic geometry, e.g. in the context of random tessellations or random geometric graphs. It turns out that in a neighborhood of a point with an extreme score it is possible to rescale positions and scores of nearby points to obtain a limiting point process, which we call the tail configuration. Under some assumptions on dependence between scores, this local limit determines the global asymptotics for extreme scores within increasing windows in $\mathbb{R}^d$. The main result establishes the convergence of rescaled positions and clusters of high scores to a Poisson cluster process, quantifying, in the point process setting, the idea of the Poisson clumping heuristic of Aldous (1989). In contrast to the existing results, our framework allows for explicit calculation of essentially all extremal quantities related to the limiting behavior of extremes. We apply our results to models based on (marked) Poisson processes where the scores depend on the distance to the kth nearest neighbor and where scores are allowed to propagate through a random network of points depending on their locations.
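As a concrete instance of the kth-nearest-neighbour score model mentioned in the closing sentence, the sketch below simulates a homogeneous Poisson process in a finite window (rather than a stationary configuration on all of $\mathbb{R}^d$) and assigns each point the distance to its kth nearest neighbour as its score; the intensity, dimension and k are arbitrary illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative sketch of the k-th nearest neighbour score model: points of a
# homogeneous Poisson process in a finite window, each scored by the distance
# to its k-th nearest neighbour. Parameters are arbitrary choices.
rng = np.random.default_rng(1)
intensity, window, d, k = 100.0, 1.0, 2, 3

n_points = rng.poisson(intensity * window**d)
points = rng.uniform(0.0, window, size=(n_points, d))

# Query k+1 neighbours because each point is its own nearest neighbour.
dists, _ = cKDTree(points).query(points, k=k + 1)
scores = dists[:, k]          # distance to the k-th nearest neighbour

argmax = scores.argmax()
print(f"largest score {scores[argmax]:.3f} at point {points[argmax]}")
```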