Data assimilation is a core component of numerical weather prediction systems. The large quantity of data processed during assimilation requires the computation to be distributed across increasingly many compute nodes; yet, existing approaches suffer from synchronization overhead in this setting. In this article, we exploit the formulation of data assimilation as a Bayesian inference problem and apply a message-passing algorithm to solve the spatial inference problem. Since message passing is inherently based on local computations, this approach lends itself to parallel and distributed computation. In combination with a GPU-accelerated implementation, we can scale the algorithm to very large grid sizes while retaining good accuracy and modest compute and memory requirements.
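The local character of message passing can be illustrated with a toy example (our own sketch, unrelated to the authors' implementation): sum-product belief propagation on a chain of discrete variables, where every update reads only a node's immediate neighbours, which is what makes the scheme easy to parallelise and distribute.

```python
def chain_marginals(unary, pairwise):
    """Exact marginals on a chain graphical model via sum-product message passing.

    unary[i][s]    : local evidence for node i in state s
    pairwise[s][t] : coupling between neighbouring states s and t (shared)
    """
    n, k = len(unary), len(unary[0])
    fwd = [[1.0] * k for _ in range(n)]   # fwd[i]: message into node i from the left
    bwd = [[1.0] * k for _ in range(n)]   # bwd[i]: message into node i from the right
    for i in range(1, n):                 # left-to-right sweep
        for t in range(k):
            fwd[i][t] = sum(fwd[i - 1][s] * unary[i - 1][s] * pairwise[s][t]
                            for s in range(k))
    for i in range(n - 2, -1, -1):        # right-to-left sweep
        for s in range(k):
            bwd[i][s] = sum(bwd[i + 1][t] * unary[i + 1][t] * pairwise[s][t]
                            for t in range(k))
    marginals = []
    for i in range(n):                    # combine messages and normalise
        m = [fwd[i][s] * unary[i][s] * bwd[i][s] for s in range(k)]
        z = sum(m)
        marginals.append([v / z for v in m])
    return marginals
```

Each sweep only exchanges short message vectors between neighbouring nodes, so on a spatial grid the same pattern can be partitioned across compute nodes with communication confined to partition boundaries.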
Surrogate models of turbulent diffusive flames could play a strategic role in the design of liquid rocket engine combustion chambers. The present article introduces a method to obtain data-driven surrogate models for coaxial injectors by leveraging an inductive transfer learning strategy over a U-Net with available multifidelity Large Eddy Simulation (LES) data. The resulting models preserve reasonable accuracy while reducing the offline computational cost of data generation. First, a database of about 100 low-fidelity LES simulations of shear-coaxial injectors, operating with gaseous oxygen and gaseous methane as propellants, has been created. The design of experiments explores three variables: the chamber radius, the recess length of the oxidizer post, and the mixture ratio. Subsequently, U-Nets were trained on this dataset to provide reasonable approximations of the time-averaged two-dimensional flow field. Although neural networks are efficient non-linear data emulators, in purely data-driven approaches their quality is directly limited by the precision of the data they are trained on. Thus, a high-fidelity (HF) dataset of about 10 simulations has been created, at a much greater cost per sample. The amalgamation of low- and high-fidelity data during the transfer-learning process improves the surrogate model’s fidelity without excessive additional cost.
Currently, artificial intelligence (AI) is integrated across various segments of the public sector, in a scattered and fragmented manner, aiming to enhance the quality of people’s lives. While AI adoption has proven to have a great impact, there are several aspects that hamper its utilization in public administration. Therefore, a large set of initiatives is designed to play a pivotal role in promoting the adoption of reliable AI, including documentation as a key driver. The AI community has been proactively recommending a variety of initiatives aimed at promoting the adoption of documentation practices. While currently proposed AI documentation artifacts play a crucial role in increasing the transparency and accountability of various facts about AI systems, we propose a code-bound declarative documentation framework that aims to support the responsible deployment of AI-based solutions. Our proposed framework aims to address the need to shift the focus from data and models being considered in isolation to the reuse of AI solutions as a whole. By introducing a formalized approach to describing adaptation and optimization techniques, we aim to enhance existing documentation alternatives. Furthermore, its utilization in the public administration aims to foster the rapid adoption of AI-based applications due to the open access to common use cases in the public sector. We further showcase our proposal with a public sector-specific use case, such as a legal text classification task, and demonstrate how the AI Product Card enables its reuse through the interactions of the formal documentation specifications with the modular code references.
Published in collaboration with The British Universities Industrial Relations Association (BUIRA), this book critically reviews the future of Industrial Relations (IR) in a changing work landscape and traces its historical evolution. Essential for academics, students and trade unions, it explores IR's significant changes over the past decade and its ongoing influence on our lives.
Wind speed at the sea surface is a key quantity for a variety of scientific applications and human activities. Given its importance, many observation techniques exist, ranging from in situ to satellite observations. However, none of these techniques can capture both the spatial and the temporal variability of the phenomenon at the same time. Reanalysis products, obtained from data assimilation methods, represent the state-of-the-art for sea-surface wind speed monitoring but may be biased by model errors, and their spatial resolution is not competitive with satellite products. In this work, we propose a scheme based on both data assimilation and deep learning concepts to process spatiotemporally heterogeneous input sources to reconstruct high-resolution time series of spatial wind speed fields. This method allows us to make the most of the complementary information conveyed by the different sea-surface information typically available in operational settings. We use synthetic wind speed data to emulate satellite images, in situ time series and reanalyzed wind fields. Starting from these pseudo-observations, we run extensive numerical simulations to assess the impact of each input source on the model reconstruction performance. We show that our proposed framework outperforms a deep learning–based inversion scheme and can successfully exploit the spatiotemporal complementary information of the different input sources. We also show that the model can learn the possible bias in reanalysis products and attenuate it in the output reconstructions.
This article addresses a critical gap in international research concerning digital literacies and empowerment among adults who are English as an additional language (EAL) learners. In the Australian context, where digital communication and services are embedded in all aspects of life and work, proficiency in digital literacies, including advanced technologies like generative artificial intelligence (AI), is vital for working and living in Australia. Despite the increasing prevalence and significance of generative AI platforms such as ChatGPT, there is a notable absence of dedicated programs to assist EAL learners in understanding and utilising generative AI, potentially impacting their employability and everyday life. This article presents findings from a larger study conducted within training providers, spanning adult educational institutions nationwide. Through analysis of data gathered from surveys and focus groups, the article investigates the knowledge and attitudes of students, educators, and leaders regarding integrating generative AI into the learning program for adult EAL learners. The results reveal a hesitance among educators, particularly concerning beginning language learners, in incorporating generative AI into educational programs. Conversely, many adult learners demonstrate enthusiasm for learning about its potential benefits despite having limited understanding. These disparities underscore the pressing need for comprehensive professional development for educators and program leaders. The findings also highlight the need to develop the AI literacy of learners to foster their understanding and digital empowerment. The article concludes by advocating for a systemic approach to include generative AI as an important part of learning programs with students often from adult migrant and refugee backgrounds.
Drawing upon Darvin and Norton’s (2015) model of investment, this article examines how Xing and Jimmy (both pseudonyms) as two male Chinese English as a foreign language learners from rural migrant backgrounds negotiate their identities and assemble their social and cultural resources to invest in autonomous digital literacies for language learning and the assertion of a legitimate place in urban spaces. Employing a connective ethnographic design, this study collected data through interviews, reflexive journals, digital artifacts, and on-campus observations. Data were analyzed using an inductive thematic approach as well as within- and cross-case data analysis methods. The findings indicate that Xing and Jimmy experienced a profound sense of alienation and exclusion as they migrated from under-resourced rural spaces to the urban elite field. The unequal power relations in urban classrooms subjected them to marginalized and inadequate rural identities by denying them the right to speak and be heard. However, engaging with digital literacies in the wild allowed these migrant learners to access a wide range of linguistic, cultural, and symbolic resources, empowering them to reframe their identities as legitimate English speakers. The acquisition of such legitimacy enabled them to challenge the prevailing rural–urban exclusionary ideologies to claim the right to speak. This article closes by offering implications for empowering rural migrant students as socially competent members of the Chinese higher education system in the digital age.
This three-year longitudinal case study explored how trilingual Uyghur intranational migrant students utilized digital technologies to learn languages and negotiate their identities in Han-dominant environments during their internal migrations within China, a topic that has been scarcely researched before. Adopting a poststructuralist perspective of identity, the study traced four Uyghur students who migrated from underdeveloped southern Xinjiang to northern Xinjiang for junior high school education, and to more developed cities in eastern and southern China for senior high school education and higher education. A qualitative approach was adopted, utilizing semi-structured interviews, class and campus observations, daily conversations, WeChat conversations, participants’ reflections, and assignments. Findings reveal that Uyghur minority students utilized digital technologies to bridge the English proficiency gap with Han students, negotiate their marginalized identities, integrate into the mainstream education system, and extend the empowerment to other ethnic minority students. This was in sharp contrast to the significant challenges and identity crises they faced when they did not have access to digital technologies to learn Mandarin in boarding secondary schools. An unprecedented finding is that, with digital empowerment, Uyghur minority students could achieve accomplishments that were even difficult for Han students to attain and gain upward social mobility by finding employment in Han-dominant first-tier cities. The implications of utilizing digital technologies to support intranational migrant ethnic minority students’ language learning and identity development are discussed.
The growing demand for global wind power production, driven by the critical need for sustainable energy sources, requires reliable estimation of wind speed vertical profiles for accurate wind power prediction and comprehensive wind turbine performance assessment. Traditional methods relying on empirical equations or similarity theory face challenges due to their restricted applicability beyond the surface layer. Although recent studies have utilized various machine learning techniques to vertically extrapolate wind speeds, they often focus on single levels and lack a holistic approach to predicting entire wind profiles. As an alternative, this study introduces a proof-of-concept methodology utilizing TabNet, an attention-based sequential deep learning model, to estimate wind speed vertical profiles from coarse-resolution meteorological features extracted from a reanalysis dataset. To ensure that the methodology is applicable across diverse datasets, Chebyshev polynomial approximation is employed to model the wind profiles. Trained on the meteorological features as inputs and the Chebyshev coefficients as targets, the TabNet model predicts unseen wind profiles with reasonable accuracy for different wind conditions, such as high shear, low shear/well-mixed, low-level jet, and high wind. Additionally, this methodology quantifies the correlation of wind profiles with prevailing atmospheric conditions through a systematic feature importance assessment.
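The profile-compression step can be sketched as follows (a minimal illustration with a synthetic power-law profile; the heights, shear exponent, and polynomial degree are our own choices, not the study's): a wind profile sampled at many heights is reduced to a handful of Chebyshev coefficients, which can then serve as regression targets.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

heights = np.linspace(10.0, 200.0, 20)      # sampling heights in metres (assumed)
wind = 5.0 * (heights / 10.0) ** 0.14       # synthetic power-law wind profile (m/s)

# Map heights onto [-1, 1], the natural domain of Chebyshev polynomials.
x = 2.0 * (heights - heights.min()) / (heights.max() - heights.min()) - 1.0

coeffs = C.chebfit(x, wind, deg=4)          # 5 coefficients stand in for the profile
recon = C.chebval(x, coeffs)                # reconstruct the profile from them
err = np.max(np.abs(recon - wind))          # approximation error, small vs. profile range
```

A model trained to predict `coeffs` from meteorological features then yields the whole vertical profile in one shot, rather than one extrapolated level at a time.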
The paper presents a novel control method aimed at enhancing the trajectory tracking accuracy of two-link mechanical systems, particularly nonlinear systems that incorporate uncertainties such as time-varying parameters and external disturbances. Leveraging the Udwadia–Kalaba equation, the algorithm employs the desired system trajectory as a servo constraint. First, we use the system’s constraints to construct its dynamic equation and apply the generalized constraint forces derived from the constraint equation to the unconstrained system. Second, we design a robust approximate constraint tracking controller for manipulator control and establish its stability using Lyapunov’s law. Finally, we numerically simulate and experimentally validate the controller on a collaborative platform using model-based design methods.
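For reference, the standard form of the Udwadia–Kalaba equation (quoted from the general literature, not from this paper's notation): for unconstrained dynamics $M\ddot{q} = Q$ subject to constraints $A(q,\dot{q},t)\,\ddot{q} = b(q,\dot{q},t)$, the constrained acceleration is

```latex
\ddot{q} = M^{-1}Q \;+\; M^{-1/2}\left(A M^{-1/2}\right)^{+}\left(b - A M^{-1} Q\right)
```

where $(\cdot)^{+}$ denotes the Moore–Penrose pseudoinverse. The second term is the acceleration contributed by the constraint force; encoding the desired trajectory in $A$ and $b$ as a servo constraint is what lets the tracking problem be posed in this form.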
Treating inertial measurement unit (IMU) measurements as inputs to a motion model and then preintegrating these measurements has almost become a de facto standard in many robotics applications. However, this approach has a few shortcomings. First, it conflates the IMU measurement noise with the underlying process noise. Second, it is unclear how the state will be propagated in the case of IMU measurement dropout. Third, it does not lend itself well to dealing with multiple high-rate sensors such as a lidar and an IMU or multiple asynchronous IMUs. In this paper, we compare treating an IMU as an input to a motion model against treating it as a measurement of the state in a continuous-time state estimation framework. We methodically compare the performance of these two approaches on a 1D simulation and show that they perform identically, assuming that each method’s hyperparameters have been tuned on a training set. We also provide results for our continuous-time lidar-inertial odometry in simulation and on the Newer College Dataset. In simulation, our approach exceeds the performance of an IMU-as-input baseline during highly aggressive motion. On the Newer College Dataset, we demonstrate state-of-the-art results. These results show that continuous-time techniques and the treatment of the IMU as a measurement of the state are promising areas of further research. Code for our lidar-inertial odometry can be found at: https://github.com/utiasASRL/steam_icp.
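A toy 1D sketch of the IMU-as-input approach (our illustration, not the paper's estimator): accelerometer samples drive the motion model directly, so the state can only be propagated while samples keep arriving.

```python
def propagate(pos, vel, accels, dt):
    """Dead reckoning with acceleration treated as a control input.

    Each accelerometer sample advances the state by one step of the
    constant-acceleration kinematic model; with no sample, there is no model
    to fall back on, which is the dropout problem the paper points out.
    """
    for a in accels:
        pos += vel * dt + 0.5 * a * dt * dt   # position under constant accel a
        vel += a * dt                          # velocity update
    return pos, vel
```

Treating the IMU instead as a measurement of a continuous-time state decouples propagation from the sensor stream, which is what makes dropout and multiple asynchronous high-rate sensors easier to handle.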
Some effects are considered to be higher level than others. High-level effects provide expressive and succinct abstraction of programming concepts, while low-level effects allow more fine-grained control over program execution and resources. Yet, often it is desirable to write programs using the convenient abstraction offered by high-level effects, and meanwhile still benefit from the optimizations enabled by low-level effects. One solution is to translate high-level effects to low-level ones.
This paper studies how algebraic effects and handlers allow us to simulate high-level effects in terms of low-level effects. In particular, we focus on the interaction between state and nondeterminism known as the local state, as provided by Prolog. We map this high-level semantics in successive steps onto a low-level composite state effect, similar to that managed by Prolog’s Warren Abstract Machine. We first give a translation from the high-level local-state semantics to the low-level global-state semantics, by explicitly restoring state updates on backtracking. Next, we eliminate nondeterminism altogether in favour of a lower-level state containing a choicepoint stack. Then we avoid copying the state by restricting ourselves to incremental, reversible state updates. We show how these updates can be stored on a trail stack with another state effect. We prove the correctness of all our steps using program calculation where the fusion laws of effect handlers play a central role.
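The flavour of the nondeterminism-to-state translation can be sketched in Python (our own illustration; the paper works with algebraic effects and handlers, not Python): the nondeterminism effect is replaced by an explicit choicepoint stack, and the shared state is rolled back to a recorded depth whenever a branch is abandoned.

```python
def assignments_summing_to_one(n):
    """Enumerate all n-bit assignments with exactly one bit set, using explicit
    backtracking over a choicepoint stack instead of a nondeterminism effect."""
    solutions = []
    state = []                        # single mutable state shared by all branches
    choicepoints = [(0, 0), (0, 1)]   # (prefix length to restore, value to try)
    while choicepoints:
        depth, value = choicepoints.pop()
        del state[depth:]             # restore: undo updates of the abandoned branch
        state.append(value)
        if len(state) == n:
            if sum(state) == 1:       # the "success" condition of this toy query
                solutions.append(tuple(state))
        else:                         # open two new choicepoints for the next bit
            choicepoints.append((len(state), 0))
            choicepoints.append((len(state), 1))
    return solutions
```

Here the state is simply truncated on backtracking; the paper's later refinement instead records incremental, reversible updates on a trail stack so that copying or rebuilding the state can be avoided, in the manner of the Warren Abstract Machine.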
Biped wall-climbing robots (BWCRs) serve as viable alternatives to human workers for inspection and maintenance tasks within three-dimensional (3D) curtain wall environments. However, autonomous climbing in such environments presents significant challenges, particularly related to localization and navigation. This paper presents a pioneering navigation framework tailored for BWCRs to navigate through 3D curtain wall environments. The framework comprises three essential stages: Building Information Model (BIM)-based map extraction, 3D climbing path planning (based on our previous work), and path tracking. An algorithm is developed to extract a detailed 3D map from the BIM, including structural elements such as walls, frames, and ArUco markers. This generated map is input into a proposed path planner to compute a viable climbing motion. For path tracking during actual climbing, an ArUco marker-based global localization method is introduced to estimate the pose of the robot, enabling adjustments to the target foothold by comparing desired and actual poses. The conducted experiments validate the feasibility and efficacy of the proposed navigation framework and associated algorithms, aiming to enhance the autonomous climbing capability of BWCRs.
Risk-based surveillance is now a well-established paradigm in epidemiology, involving the distribution of sampling efforts differentially in time, space, and within populations, based on multiple risk factors. To assess and map the risk of the presence of the bacterium Xylella fastidiosa, we have compiled a dataset that includes factors influencing plant development and thus the spread of this harmful organism. To this end, we have collected, preprocessed, and gathered information and data related to land types, soil compositions, and climatic conditions to predict and assess the probability of risk associated with X. fastidiosa in relation to environmental features. This resource can be of interest to researchers conducting analyses on X. fastidiosa and, more generally, to researchers working on geospatial modeling of risk related to plant infectious diseases.
Both energy performance certificates (EPCs) and thermal infrared (TIR) images play key roles in mapping the energy performance of the urban building stock. In this paper, we developed parametric building archetypes using an EPC database and conducted temperature clustering on TIR images acquired from drones and satellite datasets. We evaluated 1,725 EPCs of existing building stock in Cambridge, UK, to generate energy consumption profiles. We processed drone-based TIR images of individual buildings in two Cambridge University colleges using a machine learning pipeline for thermal anomaly detection and investigated the influence of two specific factors that affect the reliability of TIR for energy management applications: ground sample distance (GSD) and angle of view (AOV). The EPC results suggest that the construction year of the buildings influences their energy consumption. For example, modern buildings were over 30% more energy-efficient than older ones. In parallel, older buildings were found to show almost double the energy savings potential through retrofitting compared to newly constructed buildings. TIR imaging results showed that thermal anomalies can only be properly identified in images with a GSD of 1 m/pixel or less. A GSD of 1–6 m/pixel can detect hot areas of building surfaces. We found that a GSD > 6 m/pixel cannot characterize individual buildings but does help identify urban heat island effects. Additional sensitivity analysis showed that building thermal anomaly detection is more sensitive to AOV than to GSD. Our study informs newer approaches to building energy diagnostics using thermography and supports decision-making for large-scale retrofitting.
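As a back-of-envelope companion to the GSD thresholds above, the standard photogrammetric relation ties GSD to flight height, detector pixel pitch, and focal length (the camera values below are hypothetical examples, not the sensors used in the study):

```python
def gsd_m_per_px(height_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance from the standard pinhole-camera relation:
    GSD = flight height * pixel pitch / focal length."""
    return height_m * pixel_pitch_m / focal_length_m

# e.g. a thermal camera with a 17 um pixel pitch and a 13 mm lens flown at 100 m
gsd = gsd_m_per_px(100.0, 17e-6, 13e-3)   # about 0.13 m/pixel
```

At these (assumed) camera parameters a drone flight stays well under the 1 m/pixel threshold for per-building anomaly detection, whereas coarser satellite-scale GSDs fall into the regimes the study associates with hot-area or urban-heat-island analysis only.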
Recent advancements in Earth system science have been marked by the exponential increase in the availability of diverse, multivariate datasets characterised by moderate to high spatio-temporal resolutions. Earth System Data Cubes (ESDCs) have emerged as one suitable solution for transforming this flood of data into a simple yet robust data structure. ESDCs achieve this by organising data into an analysis-ready format aligned with a spatio-temporal grid, facilitating user-friendly analysis and diminishing the need for extensive technical data processing knowledge. Despite these significant benefits, the completion of the entire ESDC life cycle remains a challenging task. Obstacles are not only of a technical nature but also relate to domain-specific problems in Earth system research. There exist barriers to realising the full potential of data collections in light of novel cloud-based technologies, particularly in curating data tailored for specific application domains. These include transforming data to conform to a spatio-temporal grid with minimum distortions and managing complexities such as spatio-temporal autocorrelation issues. Addressing these challenges is pivotal for the effective application of Artificial Intelligence (AI) approaches. Furthermore, adhering to open science principles for data dissemination, reproducibility, visualisation, and reuse is crucial for fostering sustainable research. Overcoming these challenges offers a substantial opportunity to advance data-driven Earth system research, unlocking the full potential of an integrated, multidimensional view of Earth system processes. This is particularly true when such research is coupled with innovative research paradigms and technological progress.
Automatic license plate recognition (ALPR) systems are increasingly used to solve issues related to surveillance and security. However, these systems assume constrained recognition scenarios, thereby restricting their practical use. Therefore, we address in this article the challenge of recognizing vehicle license plates (LPs) from the video feeds of a mobile security robot by proposing an efficient two-stage ALPR system. Our ALPR system combines the off-the-shelf YOLOv7x model with a novel LP recognition model, called vision transformer-based LP recognizer (ViTLPR). ViTLPR is based on the self-attention mechanism to read character sequences on LPs. To ease the deployment of our ALPR system on mobile security robots and improve its inference speed, we also propose an optimization strategy. As an additional contribution, we provide an ALPR dataset, named PGTLP-v2, collected from surveillance robots patrolling several plants. The PGTLP-v2 dataset has multiple features designed chiefly to cover the in-the-wild scenario. To evaluate the effectiveness of our ALPR system, experiments are carried out on the PGTLP-v2 dataset and five benchmark ALPR datasets collected from different countries. Extensive experiments demonstrate that our proposed ALPR system outperforms state-of-the-art baselines.
Sea Level Anomaly (SLA) is a signature of the mesoscale dynamics of the upper ocean. Sea surface temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focused on the temporal evolution of SLA fields. We explored the capacity of deep learning (DL) methods to predict short-term SLA fields using SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, with a resolution of (1/12)° in the North Atlantic Ocean (26.5–44.42°N, −64.25–41.83°E), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrated that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teacher-forcing method, we managed to improve the SLA forecast at 5 days by using the SST fields as additional information. We obtained SLA-evolution prediction errors of 12 cm (20 cm) for scales smaller than the mesoscale at time scales of 5 days (20 days), respectively. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory.