The paper proposes and studies new classical, type-free theories of truth and determinateness with unprecedented features. The theories are fully compositional, strongly classical (namely, their internal and external logics are both classical), and feature a defined determinateness predicate satisfying desirable and widely agreed principles. The theories capture a conception of truth and determinateness according to which the generalizing power associated with the classicality and full compositionality of truth is combined with the identification of a natural class of sentences—the determinate ones—for which clear-cut semantic rules are available. Our theories can also be seen as the classical closures of Kripke–Feferman truth: their $\omega $-models, which we precisely pin down, result from including in the extension of the truth predicate the sentences satisfied by a Kripkean closed-off fixed-point model. We compare our theories to recent theories proposed by Fujimoto and Halbach, which feature a primitive determinateness predicate. In the paper we show that our theories entail all principles of Fujimoto and Halbach’s theories and are proof-theoretically equivalent to Fujimoto and Halbach’s $\mathsf {CD}^{+}$. We also establish some negative results about Fujimoto and Halbach’s theories: these results show that, unlike in our theories, the primitive determinateness predicate prevents one from establishing clear and unrestricted semantic rules for the language with type-free truth.
Generative artificial intelligence (GenAI) has been heralded by some as a transformational force in education. It is argued to have the potential to reduce inequality and democratize the learning experience, particularly in the Global South. Others warn of the dangers of techno-solutionism, dehumanization of learners, and a widening digital divide. The reality, as so often, may be more complicated than this juxtaposition suggests. In our study, we investigated the ways in which GenAI can contribute to independent language learning in the context of Pakistan. We were particularly interested in the roles of five variables that have been shown to be particularly salient in this and similar contexts: learners’ Generative Artificial Intelligence-mediated Informal Digital Learning of English (GenAI-IDLE) participation, AI literacy, Foreign Language Enjoyment (FLE), Foreign Language Boredom (FLB), and second language Willingness to Communicate (L2 WTC). Employing a structural equation modelling approach, we surveyed 359 Pakistani English as a foreign language (EFL) learners to investigate the interrelationships between these variables. The results demonstrate that EFL learners’ GenAI-IDLE activity directly and positively influences AI literacy and FLE. Students’ AI literacy and FLE play a chain-mediating role in the relationship between GenAI-IDLE participation and L2 WTC. However, FLB lacks predictive power over L2 WTC. We discuss the implications of these results for language learning, in particular in low-resource contexts.
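For readers unfamiliar with the method, the sketch below shows how a structural model of the kind just described, including the GenAI-IDLE → AI literacy → FLE → L2 WTC chain, could be specified in lavaan-style syntax with the open-source semopy package. The variable names, path structure, and synthetic data are illustrative assumptions, not the study's instruments or data.

```python
# Hedged sketch (not the study's code or data): specifying the structural
# model implied by the abstract in lavaan-style syntax with semopy.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 359                                   # matches the reported sample size
idle = rng.normal(size=n)                 # GenAI-IDLE participation (assumed scale)
lit = 0.5 * idle + rng.normal(size=n)     # AI literacy
fle = 0.4 * idle + 0.3 * lit + rng.normal(size=n)   # enjoyment
flb = rng.normal(size=n)                  # boredom, no effect built in
wtc = 0.5 * fle + 0.2 * lit + rng.normal(size=n)    # L2 WTC
df = pd.DataFrame(dict(IDLE=idle, LIT=lit, FLE=fle, FLB=flb, WTC=wtc))

# Paths mirror the abstract: IDLE -> LIT -> FLE -> WTC, with FLB as a
# (non-)predictor of WTC.
model = semopy.Model("""
LIT ~ IDLE
FLE ~ IDLE + LIT
WTC ~ LIT + FLE + FLB
""")
model.fit(df)
print(model.inspect())                    # path estimates and standard errors
```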
Complete exploration of design spaces is often computationally prohibitive. Classical search methods offer a solution but are limited by challenges like local optima and an inability to traverse dislocated design spaces. Quantum computing (QC) offers a potential solution by leveraging quantum phenomena to achieve computational speed-ups. However, the practical capability of current QC platforms to deliver these advantages remains unclear. To investigate this, we apply and compare two quantum approaches – gate-based Grover’s algorithm and quantum annealing (QA) – to a generic tile placement problem. We benchmark their performance on real quantum hardware (IBM and D-Wave, respectively) against a classical brute-force search. QA on D-Wave’s hardware successfully produced usable results, significantly outperforming the classical brute-force approach (0.137 s vs 14.8 s) at the largest scale tested. Conversely, Grover’s algorithm on IBM’s gate-based hardware was dominated by noise and failed to yield solutions. While successful, the QA results exhibited a hardware-induced bias, in which equally optimal solutions were not returned with the same probability (coefficient of variation: 0.248–0.463). These findings suggest that for near-term engineering applications, QA shows more immediate promise than current gate-based systems. This study’s contribution is a direct comparison of two physically implemented quantum approaches, offering practical insights, reformulation examples and clear recommendations on the utilisation of QC in engineering design.
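To make the kind of reformulation involved concrete, here is a minimal sketch of how a toy tile placement task can be encoded as a QUBO (quadratic unconstrained binary optimization), the input format quantum annealers such as D-Wave's accept. The instance size, placement costs, and penalty weight are assumptions for illustration; the classical brute force at the end stands in for the baseline, and on hardware the same matrix Q would instead be handed to an annealing sampler.

```python
# Illustrative QUBO encoding of a toy tile placement problem.
# Binary variable x[t,s] = 1 means tile t occupies slot s.
import itertools

n_tiles, n_slots = 3, 3
P = 10.0                                      # penalty weight for constraints
cost = {(t, s): float((t + s) % 3)            # assumed placement costs
        for t in range(n_tiles) for s in range(n_slots)}

def v(t, s):                                  # flatten (tile, slot) to one index
    return t * n_slots + s

Q = {}                                        # sparse QUBO {(i, j): coefficient}
def add(i, j, w):
    key = (min(i, j), max(i, j))
    Q[key] = Q.get(key, 0.0) + w

for t in range(n_tiles):                      # each tile in exactly one slot:
    for s in range(n_slots):                  # expand P * (sum_s x[t,s] - 1)^2
        add(v(t, s), v(t, s), cost[(t, s)] - P)
    for s1, s2 in itertools.combinations(range(n_slots), 2):
        add(v(t, s1), v(t, s2), 2 * P)

for s in range(n_slots):                      # each slot holds at most one tile
    for t1, t2 in itertools.combinations(range(n_tiles), 2):
        add(v(t1, s), v(t2, s), 2 * P)

def energy(x):                                # QUBO objective for a bitstring
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

# Classical brute force over all 2^9 bitstrings (the baseline's role here).
best = min(itertools.product((0, 1), repeat=n_tiles * n_slots), key=energy)
print("best assignment:", best, "energy:", energy(best))
```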
Disability and inclusivity are progressive topics that have evolved in response to societal experiences, as evidenced by the social model of disability, which has been endorsed as a replacement for the conventional individual model of disability. However, many still regard disability as an individual rather than an environmental problem, which fosters stigmatization of people with disabilities. Addressing this requires deeper knowledge to inform experience design that raises awareness of disability and the importance of social inclusion. The authors conducted a co-design experiment focusing on how to fill the communication gap between deaf and hearing people. Six teams, each comprising one deaf and two hearing participants, were observed to identify the salient characteristics of two contrastive approaches: LESS, a deaf-oriented audio environment with decreased auditory stimuli, and MORE, a hearing-oriented audio environment with no reduction in auditory stimuli. The results were obtained by cross-analyzing quantitative and qualitative data with interaction mapping. The analysis found that the LESS approach helps people feel that no barriers exist, while the MORE approach enables them to challenge prior understandings of the issue. This study will contribute to designing an experience-based awareness-raising activity, suggesting where the gap exists and how it should be filled in the context of diversity, equity and inclusion.
In many economies, youth unemployment rates over the past two decades have exceeded 10 percent, highlighting that not all youth transition successfully from schooling to employment. Equally disturbing are the high rates of young adults not in employment, education, or training, a status commonly referred to as “NEET.” There is no single pathway to a successful transition. Understanding these pathways and the influences of geographic location, employment opportunities, and family and community characteristics that contribute to positive transitions is crucial. While abundant data exists to support this understanding, it is often siloed and not easily combined to inform schools, communities, and policymakers about effective strategies and necessary changes. Researchers prefer working with datasets, while many stakeholders favor results presented through storytelling and visualizations. This paper introduces YouthView, an innovative online platform designed to provide comprehensive insights into youth transition challenges and opportunities. YouthView integrates datasets on youth disadvantage indicators, employment, skills demand, and job vacancies at regional levels. The platform features two modes: a guided storytelling mode with selected visualizations, and an open-ended suite of exploratory dashboards for in-depth data analysis. This dual approach enables policymakers, community organizations, and education providers to gain a nuanced understanding of the challenges faced by different communities. By illuminating spatial patterns, socioeconomic disparities, and relationships between disadvantage factors and labor market dynamics, YouthView facilitates informed decision-making and the development of targeted interventions, ultimately contributing to improved youth economic outcomes and expanded opportunities in areas of greatest need.
Extreme precipitation events are projected to increase in both frequency and intensity due to climate change. High-resolution climate projections are essential to effectively model the convective phenomena responsible for severe precipitation and to plan any adaptation and mitigation action. Existing numerical methods either lack the resolution to accurately capture the evolution of convective dynamical systems or are limited by the excessive computational demands required to achieve kilometre-scale resolution. To fill this gap, we propose a novel deep learning regional climate model (RCM) emulator called graph neural networks for climate downscaling (GNN4CD) to estimate high-resolution precipitation. The emulator is innovative in architecture and training strategy, using graph neural networks (GNNs) to learn the downscaling function through a novel hybrid imperfect framework. GNN4CD is initially trained to perform reanalysis-to-observation downscaling and then used for RCM emulation during the inference phase. The emulator is able to estimate precipitation at very high resolution in both space ($3$ km) and time ($1$ h), starting from lower-resolution atmospheric data ($\sim 25$ km). Leveraging the flexibility of GNNs, we tested its spatial transferability in regions unseen during training. The model trained on northern Italy effectively reproduces the precipitation distribution, seasonal diurnal cycles, and spatial patterns of extreme percentiles across all of Italy. When used as an RCM emulator for the historical, mid-century, and end-of-century time slices, GNN4CD shows a remarkable ability to capture the shifts in precipitation distribution, especially in the tail, where changes are most pronounced.
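As a rough illustration of the core building block such an emulator relies on, the sketch below implements a single message-passing layer in plain PyTorch: node features are exchanged along edges and aggregated by mean, the kind of operation a GNN stacks to propagate coarse-resolution atmospheric information across a graph. All dimensions, names, and the toy graph are assumptions, not the GNN4CD architecture.

```python
# Hedged sketch of one message-passing layer (mean aggregation) in PyTorch.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """Compute messages on edges, average them per node, update node states."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.msg = nn.Linear(2 * in_dim, out_dim)       # message from (src, dst)
        self.upd = nn.Linear(in_dim + out_dim, out_dim) # node update

    def forward(self, x, edge_index):
        src, dst = edge_index                           # [2, n_edges] long tensor
        m = torch.relu(self.msg(torch.cat([x[src], x[dst]], dim=-1)))
        agg = torch.zeros(x.size(0), m.size(-1))
        agg.index_add_(0, dst, m)                       # sum messages per target
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        agg = agg / deg.unsqueeze(-1)                   # mean over incoming edges
        return torch.relu(self.upd(torch.cat([x, agg], dim=-1)))

# Toy usage: 4 nodes carrying 8 assumed atmospheric features, a 4-edge cycle.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
layer = MessagePassingLayer(8, 16)
print(layer(x, edge_index).shape)                       # torch.Size([4, 16])
```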
A seminal result of Komlós, Sárközy, and Szemerédi states that any $n$-vertex graph $G$ with minimum degree at least $(1/2+\alpha )n$ contains every $n$-vertex tree $T$ of bounded degree. Recently, Pham, Sah, Sawhney, and Simkin extended this result to show that such graphs $G$ in fact support an optimally spread distribution on copies of a given $T$, which implies, using the recent breakthroughs on the Kahn-Kalai conjecture, the robustness result that $T$ is a subgraph of sparse random subgraphs of $G$ as well. Pham, Sah, Sawhney, and Simkin construct their optimally spread distribution by following closely the original proof of the Komlós-Sárközy-Szemerédi theorem which uses the blow-up lemma and the Szemerédi regularity lemma. We give an alternative, regularity-free construction that instead uses the Komlós-Sárközy-Szemerédi theorem (which has a regularity-free proof due to Kathapurkar and Montgomery) as a black box. Our proof is based on the simple and general insight that, if $G$ has linear minimum degree, almost all constant-sized subgraphs of $G$ inherit the same minimum degree condition that $G$ has.
This paper presents a novel robust control method for the joint module of a hip-assist exoskeleton robot, addressing dynamic performance under variable loads. The proposed approach integrates traditional PID control with robust, model-based strategies, utilizing the system’s dynamic model and a Lyapunov-based robust controller to handle uncertainties. This method not only enhances traditional PID control but also offers practical advantages in implementation. Theoretical analysis confirms the system’s uniform boundedness and ultimate boundedness. A MATLAB prototype was developed for simulation, demonstrating the control scheme’s feasibility and effectiveness. Numerical simulations show that the proposed fractional-order hybrid PD (FHPD) controller significantly reduces tracking error: by 58.70% compared to the traditional PID controller, by 55.41% compared to the MPD controller, and by 32.32% compared to active disturbance rejection control (ADRC), highlighting its superior tracking performance and stability.
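For intuition about what a fractional-order PD law involves, here is a small, self-contained sketch: the fractional derivative of the tracking error is approximated with a truncated Grünwald-Letnikov sum and combined with a proportional term, and the controller is exercised on a toy first-order plant. The gains, fractional order, plant, and step size are all assumptions for illustration and are unrelated to the paper's exoskeleton model.

```python
# Illustrative fractional-order PD law: u = Kp*e + Kd * D^mu(e),
# with D^mu approximated by a Grünwald-Letnikov sum over the error history.

def gl_weights(mu, n):
    """Grünwald-Letnikov coefficients (-1)^k * binom(mu, k) for k = 0..n-1."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (mu + 1.0) / k))
    return w

def fractional_pd(errors, h, Kp=8.0, Kd=2.0, mu=0.7):
    """Control output from the error history `errors` (newest last), step h."""
    w = gl_weights(mu, len(errors))
    d_mu = sum(wk * e for wk, e in zip(w, reversed(errors))) / h ** mu
    return Kp * errors[-1] + Kd * d_mu

# Toy closed loop: first-order plant x' = -x + u tracking a unit step.
h, x, errors = 0.01, 0.0, []
for step in range(500):
    errors.append(1.0 - x)                 # reference r = 1
    u = fractional_pd(errors, h)
    x += h * (-x + u)                      # explicit Euler integration
print(f"state after 5 s: {x:.3f}")         # settles near Kp/(1+Kp) ~ 0.89 here
```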
Pipeline inspection robots play a crucial role in maintaining the integrity of pipeline systems across various industries. In this paper, a novel pipeline inspection robot is designed based on a four degrees-of-freedom (DOF) generalized parallel mechanism (GPM). First, the four-DOF mechanism is introduced, synthesized through numerical and graph-based methods to achieve an ideal symmetric configuration that enhances the robot’s adaptability and mobility. The coupling mid-platform, inspired by parallelogram mechanisms, enables synchronized contraction motion, allowing the robot to adjust to different pipe diameters. Then, the constraints on the pipeline inspection robot in elbows are analyzed based on task requirements. Through kinematic and performance analyses using screw theory, the mechanism’s feasibility in practical applications is confirmed. Theoretical analysis, simulations, and experiments demonstrate the robot’s ability to achieve active steering in T-branches and elbows. Experimental validation in straight and bent pipes shows that the robot meets the expected speed targets and can successfully navigate complex pipeline environments. This research highlights the potential of GPMs in advancing the capabilities of pipeline inspection robots for real-world applications.
This work proposes an optimization approach for the time-consuming parts of Light Detection and Ranging (LiDAR) data processing and IMU-LiDAR data fusion in the LiDAR-inertial odometry (LIO) method. Two key novelties enable faster and more accurate navigation in complex, noisy environments. First, to improve map update and point cloud registration efficiency, we employ a sparse voxel map with a new update function to construct a local map around the mobile robot, and we utilize an improved Generalized Iterative Closest Point (GICP) algorithm based on sparse voxels to associate LiDAR point clouds, thereby boosting both map updating and registration speed. Second, to enhance real-time accuracy, this paper analyzes the residuals and covariances of both IMU and LiDAR data in a tightly coupled manner and estimates the system state by fusing sensor information through the Gauss-Newton method, effectively mitigating localization deviations by appropriately weighting the LiDAR covariances. The performance of our method is evaluated against advanced LIO algorithms using eight open datasets and five self-collected campus datasets. Results show a 24.7–60.1% reduction in average processing time per point cloud frame, along with improved robustness and higher-precision motion trajectory estimation in most cluttered and complex indoor and outdoor environments.
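The sparse voxel map idea is easy to illustrate. In the hedged sketch below, points are hashed to integer voxel keys so that insertion and neighbour lookup cost O(1) per point, which is what speeds up both the map update and the candidate-gathering step of a GICP-style association; the voxel size, per-voxel cap, and data are assumptions, not the paper's implementation.

```python
# Minimal sparse-voxel-map sketch: hash points into integer voxel keys.
import math
from collections import defaultdict

VOXEL_SIZE = 0.5  # metres; assumed resolution

def voxel_key(p):
    """Map a 3-D point to the integer index of its containing voxel."""
    return tuple(math.floor(c / VOXEL_SIZE) for c in p)

class SparseVoxelMap:
    def __init__(self, max_points_per_voxel=20):
        self.voxels = defaultdict(list)
        self.cap = max_points_per_voxel

    def insert(self, points):
        """Update step: append new scan points, capping per-voxel storage."""
        for p in points:
            cell = self.voxels[voxel_key(p)]
            if len(cell) < self.cap:
                cell.append(p)

    def neighbours(self, p):
        """Gather map points in the 3x3x3 voxel block around p, the
        candidate set a GICP-style association step would search."""
        kx, ky, kz = voxel_key(p)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    out.extend(self.voxels.get((kx + dx, ky + dy, kz + dz), ()))
        return out

m = SparseVoxelMap()
m.insert([(0.1, 0.2, 0.3), (0.4, 0.1, 0.2), (2.0, 2.0, 2.0)])
print(len(m.neighbours((0.2, 0.2, 0.2))))  # -> 2 nearby map points
```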
This short research article interrogates the rise of digital platforms that enable ‘synthetic afterlives’, with a focus on how deathbots – AI-driven avatar interactions grounded in personal data and recordings – reshape memory practices. Drawing on socio-technical walkthroughs of four platforms – Almaya, HereAfter, Séance AI, and You, Only Virtual – we analyse how they frame, archive, and algorithmically regenerate memories. Our findings reveal a central tension: between preserving the past as a fixed archive and continually reanimating it through generative AI. Our walkthroughs demonstrate how these services commodify remembrance, reducing memory to consumer-driven interactions designed for affective engagement while obscuring the ethical, epistemological and emotional complexities of digital commemoration. In doing so, they enact reductive forms of memory that are embedded within platform economies and algorithmic imaginaries.
With the growing amount of historical infrastructure data available to engineers, data-driven techniques have been increasingly employed to forecast infrastructure performance. In addition to algorithm selection, the data preprocessing strategy of a machine learning implementation plays an equally important role in ensuring accuracy and reliability. The present study focuses on pavement infrastructure and identifies four categories of strategies for preprocessing data to train machine-learning-based forecasting models. The Long-Term Pavement Performance (LTPP) dataset is employed to benchmark these categories. Employing random forest as the machine learning algorithm, the comparative study examines the impact of data preprocessing strategies, the volume of historical data, and the forecast horizon on the accuracy and reliability of performance forecasts. The strengths and limitations of each implementation strategy are summarized. Multiple pavement performance indicators are also analysed to assess the generalizability of the findings. Based on the results, several findings and recommendations are provided for short- to medium-term infrastructure management and decision-making: (i) in data-scarce scenarios, strategies that incorporate both explanatory variables and historical performance data provide better accuracy and reliability, (ii) to achieve accurate forecasts, the volume of historical data should span a time duration at least comparable to the intended forecast horizon, and (iii) for the International Roughness Index and transverse crack length, a forecast horizon of up to five years is generally achievable, but forecasts beyond a three-year horizon are not recommended for longitudinal crack length. These quantitative guidelines ultimately support more effective and reliable application of data-driven techniques in infrastructure performance forecasting.
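As a concrete, hedged example of recommendation (i), the sketch below trains a random forest on features that combine explanatory variables with a lagged historical performance value, here a previous International Roughness Index (IRI) reading. The synthetic data, column choices, and coefficients are assumptions for illustration; real LTPP fields and preprocessing differ.

```python
# Illustrative preprocessing strategy: explanatory variables + lagged IRI.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(0, 20, n)                   # pavement age, years (assumed)
traffic = rng.uniform(100, 2000, n)           # explanatory variable (assumed)
iri_prev = 0.8 + 0.05 * age + rng.normal(0, 0.05, n)           # last IRI, m/km
iri_next = iri_prev + 0.02 * age + 1e-5 * traffic + rng.normal(0, 0.03, n)

X = np.column_stack([age, traffic, iri_prev])  # explanatory + historical inputs
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:150], iri_next[:150])             # train on 150 sections
pred = model.predict(X[150:])                  # forecast the held-out 50
print(f"mean abs error (m/km): {np.abs(pred - iri_next[150:]).mean():.3f}")
```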
Vibration control in structures is essential to mitigate undesired dynamic responses, thereby enhancing stability, safety, and performance under varying loading conditions. Mechanical metamaterials have emerged as effective solutions, enabling tailored dynamic properties for vibration attenuation. This study introduces a convolutional autoencoder framework for the inverse design of local resonators embedded in mechanical metamaterials. The model learns from the dynamic behaviour of primary structures coupled with ideal absorbers to predict the geometric parameters of resonators that achieve the desired vibration control performance. Unlike conventional approaches requiring full numerical models, the proposed method operates as a data-driven tool: the target frequency to be mitigated is provided as input, and the model directly outputs the resonator geometry. A large dataset, generated through physics-informed simulations of ideal absorber dynamics, supports training while incorporating both spectral and geometric variability. Within the architecture, the encoder maps input receptance spectra to resonator geometries, while the decoder reconstructs the target receptance response, ensuring dynamic consistency. Once trained, the framework predicts resonator configurations that satisfy predefined frequency targets with high accuracy, enabling efficient design of passive controllers of the tuned-mass type. This study specifically demonstrates the application of the methodology to resonators embedded in wind turbine metastructures, a critical context for mitigating structural vibrations and improving operational efficiency. Results confirm strong agreement between predicted and target responses, underscoring the potential of deep learning techniques to support on-demand inverse design of mechanical metamaterials for smart vibration control in wind energy and related engineering applications.
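The encoder/decoder split described above can be sketched compactly. In the illustrative PyTorch model below, the encoder compresses a receptance spectrum into a handful of geometry parameters and the decoder maps those parameters back to a spectrum, so a reconstruction loss can enforce dynamic consistency during training. The spectrum length, number of geometry parameters, and layer sizes are assumptions.

```python
# Hedged sketch of a spectrum-to-geometry convolutional autoencoder.
import torch
import torch.nn as nn

N_FREQ, N_GEOM = 256, 4          # assumed spectrum samples, geometry parameters

class ResonatorAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # receptance spectrum -> geometry
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * N_FREQ // 4, N_GEOM))
        self.decoder = nn.Sequential(            # geometry -> reconstructed spectrum
            nn.Linear(N_GEOM, 128), nn.ReLU(), nn.Linear(128, N_FREQ))

    def forward(self, spectrum):
        geom = self.encoder(spectrum)            # predicted resonator geometry
        recon = self.decoder(geom)               # reconstructed receptance
        return geom, recon

model = ResonatorAutoencoder()
spectrum = torch.randn(2, 1, N_FREQ)             # batch of 2 target spectra
geom, recon = model(spectrum)
print(geom.shape, recon.shape)                   # (2, 4) and (2, 256)
```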
Here we consider the hypergraph Turán problem in uniformly dense hypergraphs, as suggested by Erdős and Sós. Given a $3$-graph $F$, the uniform Turán density $\pi _{\boldsymbol{\therefore }}(F)$ of $F$ is defined as the supremum over all $d\in [0,1]$ for which there is an $F$-free uniformly $d$-dense $3$-graph, where uniformly $d$-dense means that every linearly sized subhypergraph has density at least $d$. Recently, Glebov, Král’, and Volec and, independently, Reiher, Rödl, and Schacht proved that $\pi _{\boldsymbol{\therefore }}(K_4^{(3)-})=\frac {1}{4}$, solving a conjecture by Erdős and Sós. Despite substantial attention, the uniform Turán density is still only known for very few hypergraphs. In particular, the problem due to Erdős and Sós to determine $\pi _{\boldsymbol{\therefore }}(K_4^{(3)})$ remains wide open.
In this work, we determine the uniform Turán density of the $3$-graph on five vertices that is obtained from $K_4^{(3)-}$ by adding an additional vertex whose link forms a matching on the vertices of $K_4^{(3)-}$. Further, we point to two natural intermediate problems on the way to determining $\pi _{\boldsymbol{\therefore }}(K_4^{(3)})$, and solve the first of these.
The adoption of corpus technology in school classroom settings remains limited, largely due to insufficient technological pedagogical content knowledge (TPACK) training for pedagogical corpus use. To address this gap, we investigated how teacher education in corpus-based language pedagogy (CBLP), a subdomain of TPACK for corpus technology tailored to language teachers, influenced student TESOL teachers’ self-efficacy for independent language learning and teaching. Employing a mixed-methods approach, including a CBLP training intervention (n = 120), survey data (n = 96), and interviews (n = 8) with student teachers at a university in Hong Kong SAR, China, the research validates a theoretical model through confirmatory factor analysis and structural equation modelling. Results demonstrate that corpus literacy (CL) is foundational for effective CBLP implementation and for the development of self-efficacy for independent learning, which in turn fosters innovative, resource-rich instructional strategies. CBLP also enhances teachers’ self-efficacy for student engagement, fostering more interactive and motivating classrooms. These findings emphasise the value of embedding CL and CBLP within TESOL teacher-education programmes to equip future language teachers with self-efficacy for dynamic, technology-enhanced classrooms.
Excellent products often carry profound cultural connotations. To improve the quality of cultural products, it is important to study how typical cultural carriers can be identified more promptly and efficiently and incorporated into products through a detailed and easy-to-use design process. In this article, we propose a three-level approach to assist designers in incorporating cultural features into products, comprising: (1) an integrated framework for the composition and division of cultural carriers, (2) a model for extracting and translating cultural carriers and cultural elements into cultural features and (3) a cultural product design process. The proposed approach was applied to a large and complex cultural product case, namely inter-city train design. The evaluation of the recognition of cultural features indicated that the approach contributed to conferring culture on products through thoughtful design and could ensure that product schemes reflect cultural features as well as interesting cultural connotations.
It is of great importance to place human-centered design concepts at the core of both algorithmic research and the implementation of applications. To do so, it is essential to understand human–computer interaction and collaboration from the perspective of the user. To address this issue, this chapter first describes the process of human–AI interaction and collaboration and then proposes a theoretical framework for it. In accordance with this framework, current research hotspots are identified at the levels of interaction quality and interaction mode. At the level of interaction quality, user mental modeling, interpretable AI, trust, and anthropomorphism are currently the subjects of academic interest. The level of interaction mode encompasses a range of topics, including interaction paradigms, role assignment, interaction boundaries, and interaction ethics. To further advance the related research, this chapter identifies three areas for future exploration: cognitive frameworks for human–AI interaction, adaptive learning, and the complementary strengths of humans and AI.
In the technological wave of the twenty-first century, artificial intelligence (AI), as a transformative technology, is rapidly reshaping our society, economy, and daily life. Since the concept of AI was first proposed, the field has experienced many technological innovations and application expansions, developing rapidly through three booms over the past half century. The first boom came in the 1960s, marked by the Turing test and the application of knowledge-reasoning systems and related technologies. Computer scientists at that time began to explore how to let computers simulate human intelligence, and early AI research focused on rule systems and logical reasoning. The rise of expert systems and artificial neural networks brought a second wave of enthusiasm (McDermott, 1982). The third boom is marked by deep learning and big data, especially the widespread application of artificial intelligence-generated content represented by ChatGPT. During this period, AI technology shifted from traditional rule systems to methods that rely on algorithms to learn patterns from data. The rise of deep learning enabled AI to achieve significant breakthroughs in areas such as image recognition and natural language processing.