Machine learning (ML) techniques have emerged as a powerful tool for predicting weather and climate systems. However, much of the progress to date focuses on predicting the short-term evolution of the atmosphere. Here, we look at the potential for ML methodology to predict the evolution of the ocean. The presence of land in the domain is a key difference between ocean modeling and previous work looking at atmospheric modeling. We train a convolutional neural network (CNN) to emulate a process-based General Circulation Model (GCM) of the ocean, in a configuration which contains land. We assess performance on predictions over the entire domain and near to the land (coastal points). Our results show that the CNN replicates the underlying GCM well when assessed over the entire domain. RMS errors over the test dataset are low in comparison to the signal being predicted, and the CNN model gives an order of magnitude improvement over a persistence forecast. When we partition the domain into near land and the ocean interior and assess performance over these two regions, we see that the model performs notably worse over the near land region. Near land, RMS scores are comparable to those from a simple persistence forecast. Our results indicate that ocean interaction with land is something the network struggles with and highlight that this may be an area where advanced ML techniques specifically designed for, or adapted for, the geosciences could bring further benefits.
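A minimal sketch of the kind of CNN emulator and land-masked evaluation described above, assuming PyTorch; the field count, grid size, and land-mask handling are illustrative placeholders, not the authors' configuration.

```python
# Minimal sketch of a CNN emulator for gridded ocean fields (PyTorch assumed).
# Field count, grid size, and land-mask handling are illustrative only.
import torch
import torch.nn as nn

class OceanEmulator(nn.Module):
    def __init__(self, n_fields: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_fields, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, n_fields, kernel_size=3, padding=1),
        )

    def forward(self, state, land_mask):
        # Predict the increment to the current state and zero out land points.
        return (state + self.net(state)) * land_mask

# Toy usage: a batch of 8 states on a 64x128 grid with a binary land mask.
state = torch.randn(8, 4, 64, 128)
land_mask = (torch.rand(1, 1, 64, 128) > 0.2).float()
pred = OceanEmulator()(state * land_mask, land_mask)

# Evaluation pattern: masked RMSE of the (untrained) CNN vs a persistence forecast.
truth = state + 0.1 * torch.randn_like(state)
def masked_rmse(a, b):
    return torch.sqrt(((a - b) ** 2 * land_mask).sum() / (land_mask.sum() * 4 * 8))
print("CNN RMSE:", masked_rmse(pred, truth).item(),
      "persistence RMSE:", masked_rmse(state, truth).item())
```

The same masked RMSE can be computed separately for near-land and interior points by, for example, dilating the land mask to define a coastal band.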
Nature-based solutions are becoming increasingly recognized as effective tools for addressing various environmental problems. This study presents a novel approach to selecting optimal blue–green infrastructure (BGI) solutions tailored to the unique environmental and climatic challenges of Istanbul, Türkiye. The primary objective is to utilize a Bayesian Belief Network (BBN) model for assisting in the identification of the most effective BGI solutions, considering the city’s distinct environmental conditions and vulnerabilities to climate change. Our methodology integrates comprehensive data collection, including meteorological and land use data, and employs a BBN model to analyze and weigh the complex network of factors influencing BGI suitability. Key findings reveal the model’s capacity to effectively predict BGI applicability across diverse climate scenarios. Quantitative results demonstrate a significant improvement in decision-making for urban sustainability, with a predictive accuracy rate of 82% in identifying suitable BGI solutions for various urban scenarios. This enhancement is particularly notable in densely populated districts, where our model predicted a 25% greater efficiency in stormwater management and urban heat island mitigation compared to traditional planning methods. The study also acknowledges limitations, such as data scarcity and the need for further model refinement. The results highlight the model’s potential for application in other complex urban areas, making it a valuable tool for improving urban sustainability and climate change adaptation. This study shows the importance of incorporating detailed meteorological and local climate zones data into urban planning processes and suggests that similar methodologies could be beneficial for addressing environmental challenges in diverse urban settings.
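To illustrate the basic BBN mechanics, here is a hand-rolled toy network; the two-parent structure, node names, and all probability tables are invented placeholders, not the network or data used in the study.

```python
# Minimal hand-rolled Bayesian Belief Network sketch for BGI suitability.
# Node names and all conditional probability tables are invented placeholders.
import numpy as np

p_rain = np.array([0.6, 0.4])    # P(heavy_rain):   [no, yes]
p_dense = np.array([0.5, 0.5])   # P(dense_urban):  [no, yes]

# P(bgi_suitable | heavy_rain, dense_urban), indexed [rain, dense, suitable]
p_bgi = np.array([[[0.7, 0.3], [0.5, 0.5]],
                  [[0.4, 0.6], [0.2, 0.8]]])

# Posterior P(bgi_suitable = yes | dense_urban = yes), summing out heavy_rain.
evidence_dense = 1
joint = p_rain[:, None] * p_bgi[:, evidence_dense, :]   # shape (rain, suitable)
posterior = joint.sum(axis=0)
posterior /= posterior.sum()
print("P(BGI suitable | dense urban):", posterior[1])
```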
This article addresses the challenges of assessing pedestrian-level wind conditions in urban environments using a deep learning approach. The influence of large buildings on urban wind patterns has significant implications for thermal comfort, pollutant transport, pedestrian safety, and energy usage. Traditional methods, such as wind tunnel testing, are time-consuming and costly, leading to a growing interest in computational methods like computational fluid dynamics (CFD) simulations. However, CFD still requires a significant time investment for such studies, limiting the available time for design modification prior to design lockdown. This study proposes a deep learning surrogate model based on an MLP-mixer architecture to predict mean flow conditions for complex arrays of buildings. The model is trained on a diverse dataset of synthetic geometries and corresponding CFD simulations, demonstrating its effectiveness in capturing intricate wind dynamics. The article discusses the model architecture and data preparation and evaluates its performance qualitatively and quantitatively. Results show promising capabilities in replicating key wind features with a mean error of 0.3 m/s and rarely exceeding 0.75 m/s, making the proposed model a valuable tool for early-stage urban wind modelling.
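The sketch below shows a single MLP-Mixer block of the kind such a surrogate can be built from, assuming PyTorch; the patch count, channel width, and hidden sizes are illustrative and not the paper's architecture.

```python
# Sketch of one MLP-Mixer block: token mixing across spatial patches followed
# by channel mixing. Sizes are illustrative placeholders.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, n_tokens: int, channels: int, token_hidden: int = 256, ch_hidden: int = 512):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(nn.Linear(n_tokens, token_hidden), nn.GELU(),
                                       nn.Linear(token_hidden, n_tokens))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(nn.Linear(channels, ch_hidden), nn.GELU(),
                                         nn.Linear(ch_hidden, channels))

    def forward(self, x):                        # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)        # mix information across geometry patches
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix information across channels
        return x

# Toy usage: 16x16 = 256 geometry patches, 64 channels per patch.
x = torch.randn(4, 256, 64)
print(MixerBlock(n_tokens=256, channels=64)(x).shape)   # torch.Size([4, 256, 64])
```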
Comprehensive housing stock information is crucial for informing the development of climate resilience strategies aiming to reduce the adverse impacts of extreme climate hazards in high-risk regions like the Caribbean. In this study, we propose an end-to-end workflow for rapidly generating critical baseline exposure data using very high-resolution drone imagery and deep learning techniques. Specifically, our work leverages the segment anything model (SAM) and convolutional neural networks (CNNs) to automate the generation of building footprints and roof classification maps. We evaluate the cross-country generalizability of the CNN models to determine how well models trained in one geographical context can be adapted to another. Finally, we discuss our initiatives for training and upskilling government staff, community mappers, and disaster responders in the use of geospatial technologies. Our work emphasizes the importance of local capacity building in the adoption of AI and Earth Observation for climate resilience in the Caribbean.
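A sketch of the footprint-then-classify pattern described above: SAM proposes building masks and a small CNN labels each roof crop. The checkpoint path, roof classes, and classifier are illustrative placeholders, not the models trained in the study.

```python
# SAM for mask proposals plus a placeholder roof-type CNN (PyTorch assumed).
import numpy as np
import torch.nn as nn
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")   # assumed local checkpoint
mask_generator = SamAutomaticMaskGenerator(sam)

def building_masks(image: np.ndarray):
    """image: HxWx3 uint8 drone orthophoto tile; returns binary candidate masks."""
    return [m["segmentation"] for m in mask_generator.generate(image)]

roof_classifier = nn.Sequential(                 # placeholder roof-classification CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 4),              # e.g. metal / concrete / shingle / other
)
```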
High-resolution simulations such as the ICOsahedral Non-hydrostatic Large-Eddy Model (ICON-LEM) can be used to understand the interactions among aerosols, clouds, and precipitation processes that currently represent the largest source of uncertainty involved in determining the radiative forcing of climate change. Nevertheless, due to the exceptionally high computing cost required, this simulation-based approach can only be employed for a short period within a limited area. Despite the potential of machine learning to alleviate this issue, the associated model and data uncertainties may impact its reliability. To address this, we developed a neural network (NN) model powered by evidential learning, which is easy to implement, to assess both data (aleatoric) and model (epistemic) uncertainties applied to satellite observation data. By differentiating whether uncertainties stem from data or the model, we can adapt our strategies accordingly. Our study focuses on estimating autoconversion rates, the process by which small droplets (cloud droplets) collide and coalesce into larger droplets (raindrops). This process is one of the key contributors to the precipitation formation of liquid clouds, crucial for a better understanding of cloud responses to anthropogenic aerosols and, subsequently, climate change. We demonstrate that incorporating evidential regression enhances the model’s credibility by accounting for uncertainties without compromising performance or requiring additional training or inference. Additionally, the uncertainty estimation shows good calibration and provides valuable insights for future enhancements, potentially encouraging more open discussions and exploration, especially in the field of atmospheric science.
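As a sketch of the evidential regression machinery, the block below implements the output head, negative log-likelihood, and uncertainty decomposition of deep evidential regression (Amini et al., 2020), which the abstract's approach builds on; the layer sizes are illustrative, and applying it to autoconversion rates would only change the inputs and targets.

```python
# Deep evidential regression head, NLL loss, and uncertainty split (PyTorch assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.out = nn.Linear(in_dim, 4)   # parameters mu, nu, alpha, beta

    def forward(self, h):
        mu, log_nu, log_alpha, log_beta = self.out(h).chunk(4, dim=-1)
        nu = F.softplus(log_nu)
        alpha = F.softplus(log_alpha) + 1.0   # keep alpha > 1 so variances are finite
        beta = F.softplus(log_beta)
        return mu, nu, alpha, beta

def evidential_nll(y, mu, nu, alpha, beta):
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(torch.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - mu) ** 2 + omega)
            + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5)).mean()

def aleatoric(nu, alpha, beta):   # data uncertainty
    return beta / (alpha - 1.0)

def epistemic(nu, alpha, beta):   # model uncertainty
    return beta / (nu * (alpha - 1.0))
```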
Edge AI is the fusion of edge computing and artificial intelligence (AI). It promises responsiveness, privacy preservation, and fault tolerance by moving parts of the AI workflow from centralized cloud data centers to geographically dispersed edge servers, which are located at the source of the data. The scale of edge AI can vary from simple data preprocessing tasks to the whole machine learning stack. However, most edge AI implementations so far are limited to urban areas, where the infrastructure is highly dependable. This work instead focuses on a class of applications involved in environmental monitoring in remote, rural areas such as forests and rivers. Such applications face additional challenges, including proneness to failure and limited access to the electricity grid and communication networks. We propose neuromorphic computing as a promising solution to the energy, communication, and computation constraints in such scenarios and identify directions for future research in neuromorphic edge AI for rural environmental monitoring. Proposed directions are distributed model synchronization, edge-only learning, aerial networks, spiking neural networks, and sensor integration.
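For readers unfamiliar with spiking neural networks, the basic unit is the leaky integrate-and-fire (LIF) neuron; the toy simulation below is a generic sketch, with time constants and thresholds that are illustrative and not tied to any particular neuromorphic hardware.

```python
# Minimal leaky integrate-and-fire neuron: a noisy sensor reading becomes a spike train.
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)   # leaky integration of the input current
        if v >= v_thresh:            # fire and reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
current = 1.5 + 0.5 * rng.standard_normal(1000)   # placeholder sensor signal
print("spikes emitted:", lif_simulate(current).sum())
```

Because information is carried by sparse spikes rather than dense activations, such neurons can be run on event-driven neuromorphic hardware at very low energy cost, which is the appeal for off-grid monitoring.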
Atmospheric models used for weather and climate prediction are traditionally formulated in a deterministic manner. In other words, given a particular state of the resolved scale variables, the most likely forcing from the subgrid scale processes is estimated and used to predict the evolution of the large-scale flow. However, the lack of scale separation in the atmosphere means that this approach is a large source of error in forecasts. Over recent years, an alternative paradigm has developed: the use of stochastic techniques to characterize uncertainty in small-scale processes. These techniques are now widely used across weather, subseasonal, seasonal, and climate timescales. In parallel, recent years have also seen significant progress in replacing parametrization schemes using machine learning (ML). This has the potential to both speed up and improve our numerical models. However, the focus to date has largely been on deterministic approaches. In this position paper, we bring together these two key developments and discuss the potential for data-driven approaches for stochastic parametrization. We highlight early studies in this area and draw attention to the novel challenges that remain.
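A minimal sketch of what a data-driven stochastic parametrization could look like, assuming PyTorch: instead of a single deterministic subgrid tendency, the network predicts a mean and spread and a tendency is sampled at each call. The architecture and state sizes are illustrative, not a specific scheme from the literature surveyed here.

```python
# Stochastic subgrid-tendency network: sample a tendency rather than return one value.
import torch
import torch.nn as nn

class StochasticParametrization(nn.Module):
    def __init__(self, n_in: int, n_out: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, n_out)
        self.log_std = nn.Linear(hidden, n_out)

    def forward(self, resolved_state):
        h = self.body(resolved_state)
        mu, sigma = self.mean(h), torch.exp(self.log_std(h))
        return mu + sigma * torch.randn_like(mu)   # one stochastic realization

# Two calls with the same resolved state give different subgrid tendencies,
# which is what drives ensemble spread.
model = StochasticParametrization(n_in=60, n_out=60)
state = torch.randn(1, 60)
print(torch.allclose(model(state), model(state)))   # False
```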
Stochastic generators are useful for estimating climate impacts on various sectors. Projecting climate risk in various sectors, e.g. energy systems, requires generators that are accurate (statistical resemblance to ground-truth), reliable (do not produce erroneous examples), and efficient. Leveraging data from the North American Land Data Assimilation System, we introduce TemperatureGAN, a Generative Adversarial Network conditioned on months, regions, and time periods, to generate 2 m above ground atmospheric temperatures at an hourly resolution. We propose evaluation methods and metrics to measure the quality of generated samples. We show that TemperatureGAN produces high-fidelity examples with good spatial representation and temporal dynamics consistent with known diurnal cycles.
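The sketch below shows a conditional generator in the spirit of TemperatureGAN: noise plus embeddings of month, region, and time period are mapped to an hourly temperature sequence. Embedding sizes, the number of regions, and the sequence length are illustrative, not the published architecture.

```python
# Conditional GAN generator sketch for hourly 2 m temperatures (PyTorch assumed).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=64, n_months=12, n_regions=100, n_periods=4, hours=24):
        super().__init__()
        self.month = nn.Embedding(n_months, 8)
        self.region = nn.Embedding(n_regions, 16)
        self.period = nn.Embedding(n_periods, 4)
        self.net = nn.Sequential(nn.Linear(z_dim + 8 + 16 + 4, 256), nn.ReLU(),
                                 nn.Linear(256, 256), nn.ReLU(),
                                 nn.Linear(256, hours))

    def forward(self, z, month, region, period):
        cond = torch.cat([self.month(month), self.region(region), self.period(period)], dim=-1)
        return self.net(torch.cat([z, cond], dim=-1))   # 24 hourly temperature values

g = ConditionalGenerator()
sample = g(torch.randn(5, 64), torch.tensor([6] * 5), torch.tensor([42] * 5), torch.tensor([1] * 5))
print(sample.shape)   # torch.Size([5, 24])
```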
Airborne radar sensors capture the profile of snow layers present on top of an ice sheet. Accurate tracking of these layers is essential to calculate their thicknesses, which are required to investigate the contribution of polar ice cap melt to sea-level rise. However, automatically processing the radar echograms to detect the underlying snow layers is a challenging problem. In our work, we develop wavelet-based multi-scale deep learning architectures for these radar echograms to improve snow layer detection. These architectures estimate the layer depths with a mean absolute error of 3.31 pixels and 94.3% average precision, achieving higher generalizability as compared to state-of-the-art snow layer detection networks. These depth estimates also agree well with physically drilled stake measurements. Such robust architectures can be used on echograms from future missions to efficiently trace snow layers, estimate their individual thicknesses, and thus support sea-level rise projection models.
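To illustrate the wavelet front-end, the sketch below runs a 2D multi-level decomposition of an echogram whose coefficient maps could feed a layer-tracking CNN; the wavelet family, number of levels, and the random placeholder echogram are assumptions, not the paper's setup.

```python
# Multi-scale wavelet decomposition of a radar echogram (PyWavelets assumed).
import numpy as np
import pywt

echogram = np.random.rand(256, 512)     # placeholder echogram (depth x along-track)
coeffs = pywt.wavedec2(echogram, wavelet="db2", level=3)

approx = coeffs[0]                      # coarse-scale approximation
details = coeffs[1:]                    # (horizontal, vertical, diagonal) maps per level
print("approximation:", approx.shape)
for lvl, (cH, cV, cD) in enumerate(details, start=1):   # coarsest level first
    print(f"detail maps at scale {lvl}:", cH.shape)
```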
Climate models are biased with respect to real-world observations. They usually need to be adjusted before being used in impact studies. The suite of statistical methods that enable such adjustments is called bias correction (BC). However, BC methods currently struggle to adjust temporal biases, because they mostly disregard the dependence between consecutive time points. As a result, climate statistics with long-range temporal properties, such as the number of heatwaves and their frequency, cannot be corrected accurately. This makes it more difficult to produce reliable impact studies on such climate statistics. This article offers a novel BC methodology to correct temporal biases. This is made possible by rethinking the philosophy behind BC. We introduce BC as a time-indexed regression task with stochastic outputs. Rethinking BC in this way enables us to adapt state-of-the-art machine learning (ML) attention models and thereby learn different types of biases, including temporal asynchronicities. With a case study of the number of heatwaves in Abuja, Nigeria, and Tokyo, Japan, we show more accurate results than current climate model outputs and alternative BC methods.
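A minimal sketch of BC framed as time-indexed regression with a stochastic output, assuming PyTorch: a transformer encoder maps a window of climate-model values plus a time index to a mean and spread for the observed series. The window length, features, and model sizes are illustrative, not the architecture of the paper.

```python
# Attention-based bias correction with a stochastic (mean, spread) output.
import torch
import torch.nn as nn

class AttentionBC(nn.Module):
    def __init__(self, d_model=64, window=90):
        super().__init__()
        self.embed = nn.Linear(2, d_model)                 # (model value, time index)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)                  # mean and log-std per time step

    def forward(self, x):                                  # x: (batch, window, 2)
        h = self.encoder(self.embed(x))
        mu, log_std = self.head(h).chunk(2, dim=-1)
        return mu + torch.exp(log_std) * torch.randn_like(mu)   # one corrected realization

x = torch.stack([torch.randn(8, 90), torch.linspace(0, 1, 90).repeat(8, 1)], dim=-1)
print(AttentionBC()(x).shape)   # torch.Size([8, 90, 1])
```

Because the correction attends over the whole window and is sampled rather than deterministic, statistics that depend on consecutive days, such as heatwave counts, can in principle be corrected.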
Bias correction is a critical aspect of data-centric climate studies, as it aims to improve the consistency between observational data and simulations by climate models or estimates by remote sensing. Satellite-based estimates of climatic variables like precipitation often exhibit systematic bias when compared to ground observations. To address this issue, the application of bias correction techniques becomes necessary. This research work examines the use of deep learning to reduce the systematic bias of satellite estimations at each grid location while maintaining the spatial dependency across grid points. More specifically, we try to calibrate daily precipitation values of the Tropical Rainfall Measuring Mission (TRMM) based TRMM_3B42_Daily precipitation data over the Indian landmass with ground observations recorded by the India Meteorological Department (IMD). We have focused on the precipitation estimates of the Indian Summer Monsoon Rainfall (ISMR) period (June–September) since India gets more than 75% of its annual rainfall in this period. We have benchmarked these deep learning methods against standard statistical methods like quantile mapping and quantile delta mapping on the above datasets. The comparative analysis shows the effectiveness of the deep learning architecture in bias correction.
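For reference, the empirical quantile-mapping baseline mentioned above works as in the sketch below: satellite values are mapped through the quantiles of the gauge observations. The arrays here are random placeholders, not TRMM or IMD data.

```python
# Empirical quantile mapping: monotone transfer from satellite to observed quantiles.
import numpy as np

def quantile_mapping(satellite, obs_ref, sat_ref, n_quantiles=100):
    """Correct `satellite` using reference satellite and observed climatologies."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    sat_q = np.quantile(sat_ref, q)
    obs_q = np.quantile(obs_ref, q)
    return np.interp(satellite, sat_q, obs_q)

rng = np.random.default_rng(1)
obs_ref = rng.gamma(shape=2.0, scale=5.0, size=5000)        # gauge-like reference
sat_ref = 1.3 * obs_ref + rng.normal(0, 2, size=5000)       # biased satellite reference
corrected = quantile_mapping(sat_ref[:100], obs_ref, sat_ref)
print("mean bias before:", round(float((sat_ref[:100] - obs_ref[:100]).mean()), 2),
      "after:", round(float((corrected - obs_ref[:100]).mean()), 2))
```

Quantile mapping corrects the marginal distribution at each grid point but, unlike the deep learning models studied here, does not use information from neighbouring grid points.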
Engineering machines are becoming increasingly complex and possess more control variables, increasing the complexity and versatility of the control systems. Different configurations of the control system, named a policy, can result in similar output behavior but with different resource or component life usage. There is therefore an opportunity to find optimal policies with respect to economic decisions. While many solutions have been proposed to find such economic policy decisions at the asset level, we consider this problem at the fleet level. In this case, the optimal operation of each asset is affected by the state of all other assets in the fleet. Challenges introduced by considering multiple assets include the construction of economic multi-objective optimization criteria, handling rare events such as failures, application of fleet-level constraints, and scalability. The proposed solution presents a framework for economic fleet optimization. The framework is demonstrated for economic criteria relating to resource usage, component lifing, and maintenance scheduling, but is generically extensible. Direct optimization of lifetime distributions is considered in order to avoid the computational burden of discrete event simulation of rare events. Results are provided for a real-world case study targeting the optimal economic operation of a fleet of aerospace gas turbine engines.
It is impossible to view the news at present without hearing talk of crisis: the economy, the climate, the pandemic. This book asks how these larger societal issues lead to a crisis with work, making it ever more precarious, unequal and intense. Experts diagnose the nature of the problem and offer a programme for transcending the crises.
This paper proposes to solve the vortex gust mitigation problem on a 2D, thin flat plate using onboard measurements. The objective is to solve the discrete-time optimal control problem of finding the pitch rate sequence that minimizes the lift perturbation, quantified by a criterion on the lift coefficient obtained by the unsteady vortex lattice method. The controller is modeled as an artificial neural network, and it is trained to minimize this criterion using deep reinforcement learning (DRL). To be optimal, we show that the controller must take as inputs the locations and circulations of the gust vortices, but these quantities are not directly observable from the onboard sensors. We therefore propose to use a Kalman particle filter (KPF) to estimate the gust vortices online from the onboard measurements. The reconstructed input is then used by the controller to calculate the appropriate pitch rate. We evaluate the performance of this method for gusts composed of one to five vortices. Our results show that (i) controllers deployed with full knowledge of the vortices are able to efficiently mitigate the lift disturbance induced by the gusts, (ii) the KPF performs well in reconstructing gusts composed of fewer than three vortices, but shows more mixed results in the reconstruction of gusts composed of more vortices, and (iii) adding a KPF to the controller recovers a significant part of the performance loss due to the unobservable gust vortices.
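A sketch of the estimate-then-control idea: a particle filter tracks the unseen gust state from noisy onboard lift measurements, and its mean estimate would be fed to the controller. The one-vortex state, dynamics, and measurement model below are simplified placeholders, not the unsteady vortex lattice model or the KPF of the paper.

```python
# Toy particle filter for a single gust vortex observed through lift measurements.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 500
particles = rng.normal([0.0, 1.0], [0.5, 0.3], size=(n_particles, 2))  # [x-position, circulation]
weights = np.full(n_particles, 1.0 / n_particles)

def measure(state):
    # Toy lift perturbation induced by a vortex at x with circulation gamma.
    x, gamma = state[..., 0], state[..., 1]
    return gamma / (1.0 + x ** 2)

for lift_obs in [0.8, 0.7, 0.65]:                  # placeholder sensor readings
    particles[:, 0] += 0.1 + rng.normal(0, 0.02, n_particles)   # vortex advects downstream
    weights *= np.exp(-0.5 * ((measure(particles) - lift_obs) / 0.05) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)       # resample
    particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print("estimated vortex state [x, circulation]:", particles.mean(axis=0))
```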
This paper presents a climbing robot (CR) designed for pipeline maintenance, with the capability to avoid the risks inherent in manual operations. In the design process, a three degree of freedom (DOF) parallel mechanism coupled with a remote center of motion (RCM) linkage mechanism was designed to serve as the CR’s climbing mechanism, meeting the specific demands of climbing movements. The modified Kutzbach–Grübler formula and screw theory were applied to calculate the DOFs of the CR. Then, the inverse and forward position analyses for the CR were derived. Furthermore, velocity and acceleration analyses of the parallel mechanism were conducted to derive the Jacobian matrix, through which the singularity of the parallel mechanism was analyzed. To evaluate the kinematic performance of the parallel mechanism, the motion/force transmission index (LTI) over the workspace was calculated, which guided the subsequent dimensional optimization. According to the optimization result, a prototype was constructed and a series of motion experiments were carried out to validate its climbing capability.
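For orientation, the modified Kutzbach–Grübler mobility count has the form M = d(n − g − 1) + Σf_i + v − ζ; the tiny sketch below evaluates it for an illustrative 3-RPS-like spatial parallel mechanism, not the CR's actual topology.

```python
# Modified Kutzbach-Gruebler mobility count; example counts are placeholders.
def kutzbach_grubler(d, n, g, joint_dofs, v=0, zeta=0):
    """d: order of the task space (6 spatial, 3 planar); n: links incl. ground;
    g: number of joints; joint_dofs: DOF of each joint; v: redundant constraints;
    zeta: passive (local) DOF."""
    return d * (n - g - 1) + sum(joint_dofs) + v - zeta

# 3-RPS-like example: 8 links, 9 joints (3 revolute, 3 prismatic, 3 spherical).
print(kutzbach_grubler(d=6, n=8, g=9, joint_dofs=[1] * 6 + [3] * 3))   # -> 3 DOF
```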
In practice, nondestructive testing (NDT) procedures tend to consider experiments (and their respective models) as distinct, conducted in isolation, and associated with independent data. In contrast, this work looks to capture the interdependencies between acoustic emission (AE) experiments (as meta-models) and then use the resulting functions to predict the model hyperparameters for previously unobserved systems. We utilize a Bayesian multilevel approach (similar to deep Gaussian Processes) where a higher-level meta-model captures the inter-task relationships. Our key contribution is how knowledge of the experimental campaign can be encoded between tasks as well as within tasks. We present an example of AE time-of-arrival mapping for source localization, to illustrate how multilevel models naturally lend themselves to representing aggregate systems in engineering. We constrain the meta-model based on domain knowledge, then use the inter-task functions for transfer learning, predicting hyperparameters for models of previously unobserved experiments (for a specific design).
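The sketch below illustrates the two-level idea in a simplified form: fit a GP per AE experiment, then regress the fitted (log) hyperparameters on a task descriptor so they can be predicted for an unobserved experiment. scikit-learn is assumed, and the tasks, descriptors, and data are synthetic placeholders rather than the Bayesian multilevel model of the paper.

```python
# Per-task GPs plus a meta-level regression over their fitted hyperparameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)
task_descriptors, task_thetas = [], []

for plate_thickness in [2.0, 4.0, 6.0]:                      # one AE experiment per task
    X = rng.uniform(0, 1, size=(40, 2))                      # sensor-to-source positions
    y = np.sin(3 * X[:, 0]) / plate_thickness + 0.01 * rng.standard_normal(40)
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True).fit(X, y)
    task_descriptors.append([plate_thickness])
    task_thetas.append(gp.kernel_.theta)                     # fitted log-hyperparameters

# Higher-level regression: predict hyperparameters for an unseen 5 mm plate.
meta_gp = GaussianProcessRegressor(ConstantKernel() * RBF()).fit(task_descriptors,
                                                                 np.array(task_thetas))
print("predicted log-hyperparameters for the new task:", meta_gp.predict([[5.0]]))
```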
In this paper, we analyze a polling system on a circle. Random batches of customers arrive at a circle, where each customer, independently, obtains a location that is uniformly distributed on the circle. A single server cyclically traverses the circle to serve all customers. Using mean value analysis, we derive the expected number of waiting customers within a given distance of the server. We exploit this to obtain closed-form expressions for both the mean batch sojourn time and the mean time to delivery.
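A small event-driven simulation of this polling model can serve as a sanity check on the closed-form results; in the sketch below the arrival rate, batch size, server speed, and service times are illustrative, and an idle server parks in place, which is a simplification relative to a continuously cycling server.

```python
# Monte Carlo estimate of the mean time to delivery for a polling system on a circle.
import numpy as np

rng = np.random.default_rng(3)
rate, batch_size, speed, service = 0.5, 2, 1.0, 0.2   # placeholder parameters
horizon = 10_000.0

t, pos = 0.0, 0.0                  # current time and server position on the unit circle
next_arrival = rng.exponential(1 / rate)
waiting, delays = [], []           # waiting (location, arrival time) pairs; delivery delays

while t < horizon:
    if waiting:
        # Nearest waiting customer in the direction of travel.
        i = min(range(len(waiting)), key=lambda k: (waiting[k][0] - pos) % 1.0)
        reach = t + ((waiting[i][0] - pos) % 1.0) / speed
    else:
        reach = np.inf               # idle server parks in place (a simplification)
    if next_arrival <= reach:        # next event: a batch of customers arrives
        t = next_arrival
        waiting += [(rng.uniform(), t) for _ in range(batch_size)]
        next_arrival = t + rng.exponential(1 / rate)
    else:                            # next event: travel to and serve that customer
        loc, arrived = waiting.pop(i)
        pos, t = loc, reach + service
        delays.append(t - arrived)   # time to delivery
        while next_arrival <= t:     # batches arriving during the service period
            waiting += [(rng.uniform(), next_arrival) for _ in range(batch_size)]
            next_arrival += rng.exponential(1 / rate)

print("estimated mean time to delivery:", round(float(np.mean(delays)), 2))
```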
Gradual typing provides a model for when a legacy language with less precise types interacts with a newer language with more precise types. Casts mediate between types of different precision, allocating blame when a value fails to conform to a type. The blame theorem asserts that blame always falls on the less-precisely typed side of a cast. One instance of interest is when a legacy language (such as Java) permits null values at every type, while a newer language (such as Scala or Kotlin) explicitly indicates which types permit null values. Nieto et al. in 2020 introduced a gradually typed calculus for just this purpose. The calculus requires three distinct constructors for function types and a non-standard proof of the blame theorem; it can embed terms from the legacy language into the newer language (or vice versa) only when they are closed. Here, we define a simpler calculus that is more orthogonal, with one constructor for function types and one for possibly nullable types, and with an entirely standard proof of the blame theorem; it can embed terms from the legacy language into the newer language (and vice versa) even if they are open. All results in the paper have been mechanized in Coq.
This commentary explores MENA’s AI governance, addressing gaps, showcasing successful strategies, and comparing national approaches. It emphasizes current deficiencies, highlights regional contributions to global AI governance, and offers insights into effective frameworks. The study reveals distinctions and trends in MENA’s national AI strategies, serving as a concise resource for policymakers and industry stakeholders.