Solving a decision theory problem usually involves finding the actions, among a set of possible ones, that optimize the expected reward, while possibly accounting for the uncertainty of the environment. In this paper, we introduce the possibility of encoding decision theory problems with Probabilistic Answer Set Programming under the credal semantics via decision atoms and utility attributes. To solve the task, we propose an algorithm based on three layers of Algebraic Model Counting, which we test on several synthetic datasets against an algorithm that adopts answer set enumeration. Empirical results show that our algorithm can manage non-trivial instances of programs in a reasonable amount of time.
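The three-layer Algebraic Model Counting algorithm is not spelled out in the abstract, but the underlying task can be illustrated with a brute-force toy sketch: enumerate truth assignments to hypothetical decision atoms, weight utilities by world probabilities, and keep the assignment with the highest expected utility. All atoms, worlds, and utilities below are invented for illustration and are not the paper's encoding.

```python
from itertools import product

# Toy illustration (not the paper's three-layer AMC algorithm): choose truth
# values for two hypothetical decision atoms so that expected utility is maximal.

# Hypothetical uncertain "worlds" with their probabilities.
worlds = [({"rain": True}, 0.3), ({"rain": False}, 0.7)]

decision_atoms = ["take_umbrella", "take_car"]

def utility(decision, world):
    """Illustrative utility attributes for a (decision, world) pair."""
    u = 0.0
    if world["rain"] and decision["take_umbrella"]:
        u += 10          # staying dry
    if world["rain"] and not decision["take_umbrella"]:
        u -= 20          # getting wet
    if decision["take_car"]:
        u -= 5           # cost of driving
        if world["rain"]:
            u += 15      # but very useful in the rain
    return u

best = None
for values in product([False, True], repeat=len(decision_atoms)):
    decision = dict(zip(decision_atoms, values))
    expected = sum(p * utility(decision, w) for w, p in worlds)
    if best is None or expected > best[1]:
        best = (decision, expected)

print("best decision:", best[0], "expected utility:", best[1])
```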
Forecasting international migration is a challenge that, despite its political and policy salience, has seen limited success so far. In this proof-of-concept paper, we employ a range of macroeconomic data to represent different drivers of migration. We also take into account the relatively consistent set of migration policies within the European Common Market, with its constituent freedom of movement of labour. Using panel vector autoregressive (VAR) models for mixed-frequency data, we forecast migration in the short- and long-term horizons for 26 of the 32 countries within the Common Market. We demonstrate how the methodology can be used to assess the possible responses of other macroeconomic variables to unforeseen migration events—and vice versa. Our results indicate reasonable in-sample performance of migration forecasts, especially in the short term, although with varying levels of accuracy. They also underline the need for taking country-specific factors into account when constructing forecasting models, with different variables being important across the regions of Europe. For the longer term, the proposed methods, despite high prediction errors, can still be useful as tools for setting coherent migration scenarios and analysing responses to exogenous shocks.
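As a rough illustration of the forecasting machinery (a plain reduced-form VAR, not the mixed-frequency panel specification used in the paper), a model can be fitted and iterated forward with statsmodels; the series below are synthetic stand-ins for macroeconomic drivers.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic quarterly series for one country: net migration plus two macro drivers.
rng = np.random.default_rng(0)
n = 80
gdp_growth = rng.normal(0.5, 1.0, n)
unemployment = 6 + np.cumsum(rng.normal(0, 0.2, n))
migration = (0.3 * gdp_growth
             - 0.4 * np.diff(unemployment, prepend=unemployment[0])
             + rng.normal(0, 0.5, n))

data = pd.DataFrame({"migration": migration,
                     "gdp_growth": gdp_growth,
                     "unemployment": unemployment})

# Fit a reduced-form VAR with 4 lags and forecast 8 quarters ahead.
model = VAR(data)
results = model.fit(4)
forecast = results.forecast(data.values[-results.k_ar:], steps=8)
print(pd.DataFrame(forecast, columns=data.columns))
```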
Cable-guiding mechanisms (CGMs) and their stiffness characteristics directly influence the dynamic features of the cable-driven upper limb rehabilitation robot (PCUR), and therefore its performance. This paper introduces a novel CGM design. Given the precision and movement stability considerations of the mechanism, an analytical model is developed. Using this model, we analyze the error of the CGM and derive velocity and acceleration mappings from the moving platform to the cables. Continuity of cable trajectory and tension is rigorously demonstrated. Subsequently, a mathematical model for PCUR stiffness is formulated. Utilizing MATLAB/Simscape Multibody, simulation models for the CGM and stiffness characteristics are constructed. The feasibility of the proposed CGM design is validated through simulation and experimentation, while the influence of stiffness characteristics on PCUR motion stability is comprehensively analyzed.
This paper proposes an options pricing model that incorporates stochastic volatility, stochastic interest rates, and stochastic jump intensity. Market shocks are modeled using a jump process, with each jump governed by an asymmetric double-exponential distribution. The model also integrates a Markov regime-switching framework for volatility and the risk-free rate, allowing the market to alternate between a finite number of distinct economic states. A closed-form solution for European option pricing is derived. To demonstrate the significance of the proposed model, a comparison with various other models is performed, and the sensitivity of option prices to the various model parameters is illustrated.
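The paper derives a closed-form price; as a simplified illustration of just the jump component, a Monte Carlo sketch of a European call under a constant-volatility, constant-rate double-exponential (Kou-type) jump-diffusion, without regime switching or stochastic intensity, might look like the following. All parameter values are placeholders.

```python
import numpy as np

def kou_mc_call(s0, k, t, r, sigma, lam, p_up, eta1, eta2,
                n_paths=100_000, seed=0):
    """Monte Carlo price of a European call under a simplified Kou
    jump-diffusion: constant vol/rate, asymmetric double-exponential jumps.
    (No regime switching or stochastic intensity, unlike the full model.)"""
    rng = np.random.default_rng(seed)
    # Mean relative jump size, needed to keep the discounted price a martingale.
    kappa = p_up * eta1 / (eta1 - 1) + (1 - p_up) * eta2 / (eta2 + 1) - 1
    drift = (r - 0.5 * sigma**2 - lam * kappa) * t

    z = rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * t, n_paths)
    jump_sum = np.zeros(n_paths)
    for i in np.nonzero(n_jumps)[0]:
        up = rng.random(n_jumps[i]) < p_up
        sizes = np.where(up,
                         rng.exponential(1 / eta1, n_jumps[i]),
                         -rng.exponential(1 / eta2, n_jumps[i]))
        jump_sum[i] = sizes.sum()

    st = s0 * np.exp(drift + sigma * np.sqrt(t) * z + jump_sum)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

# Placeholder parameters: eta1 > 1 is required for a finite mean upward jump.
print(kou_mc_call(s0=100, k=100, t=1.0, r=0.03, sigma=0.2,
                  lam=0.5, p_up=0.4, eta1=10.0, eta2=5.0))
```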
Effective enforcement of laws and regulations hinges heavily on robust inspection policies. While data-driven approaches to testing the effectiveness of these policies are gaining popularity, they suffer significant drawbacks, particularly a lack of explainability and generalizability. This paper proposes an approach to crafting inspection policies that combines data-driven insights with behavioral theories to create an agent-based simulation model that we call a theory-infused phenomenological agent-based model (TIP-ABM). Moreover, this approach outlines a systematic process for combining theories and data to construct a phenomenological ABM, beginning with defining macro-level empirical phenomena. Illustrated through a case study of the Dutch inland shipping sector, the proposed methodology enhances explainability by illuminating inspectors’ tacit knowledge while iterating between statistical data and underlying theories. The broader generalizability of the proposed approach beyond the inland shipping context requires further research.
Currently, methods for mapping agricultural crops have been predominantly developed for a number of the most important and popular crops. These methods are often based on remote sensing data, scarce information about the location and boundaries of fields of a particular crop, and involve analyzing phenological changes throughout the growing season by utilizing vegetation indices, e.g., the normalized difference vegetation index. However, this approach encounters challenges when attempting to distinguish fields with different crops growing in the same area or crops that share similar phenology. This complicates the reliable identification of the target crops based solely on vegetation index patterns. This research paper aims to investigate the potential of advanced techniques for crop mapping using satellite data and qualitative information. These advanced approaches involve interpreting features in satellite images in conjunction with cartographic, statistical, and climate data. The study focuses on data collection and mapping of three specific crops: lavender, almond, and barley, and relies on various sources of information for crop detection, including satellite image characteristics, regional statistical data detailing crop areas, and phenological information, such as flowering dates and the end of the growing season in specific regions. As an example, the study attempts to visually identify lavender fields in Bulgaria and almond orchards in the USA. We test several state-of-the-art methods for semantic segmentation (U-Net, UNet++, ResUnet). The best result was achieved by a ResUnet model (96.4%). Furthermore, the paper explores how vegetation indices can be leveraged to enhance the precision of crop identification, showcasing their advanced capabilities for this task.
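For reference, the vegetation index mentioned above is a simple band ratio; a minimal sketch of computing NDVI from red and near-infrared reflectance arrays (synthetic values here) is:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance.
    Values range from -1 to 1; dense green vegetation is typically > 0.5."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Synthetic 2 x 3 reflectance patches standing in for satellite bands.
nir = np.array([[0.60, 0.55, 0.20], [0.62, 0.18, 0.15]])
red = np.array([[0.10, 0.12, 0.18], [0.08, 0.16, 0.14]])
print(np.round(ndvi(nir, red), 2))
```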
In this paper, we propose a network model to explain the implications of the pressure to share resources. Individuals use the network to establish social interactions that allow them to increase their income. They also use the network as a safety net and to ask for assistance in case of need. The network is therefore a system characterized by social pressure to share and redistribute surplus resources among members. The main result is that the potential redistributive pressure from other network members causes individuals to behave inefficiently. The number of social interactions used to employ workers displays a non-monotonic pattern with respect to the number of neighbors (degree): it increases for intermediate degree and decreases for high degree. Relative to a benchmark case without social pressure, individuals with few (many) network members interact more (less). Finally, we show that these predictions are consistent with the results obtained in a set of field experiments run in rural Tanzania.
Deep learning (DL) has become the most effective machine learning solution for addressing and accelerating complex problems in various fields, from computer vision and natural language processing to many more. Training well-generalized DL models requires large amounts of data, which allow the model to learn the complexity of the task it is being trained to perform. Consequently, performance optimization of deep-learning models has concentrated on complex architectures with a large number of tunable model parameters, in other words, model-centric techniques. To enable training such large models, significant effort has also gone into high-performance computing and big-data handling. However, adapting DL to tackle specialized domain-related data and problems in real-world settings presents unique challenges that model-centric techniques alone cannot address. In this paper, we tackle the problem of developing DL models for seismic imaging using complex seismic data. We specifically address developing and deploying DL models for salt interpretation using seismic images. Most importantly, we discuss how looking beyond model-centric techniques and leveraging data-centric strategies for optimization of DL model performance was crucial to significantly improve salt interpretation. This approach was also key to developing production-quality, robust, and well-generalized models.
A liquefied natural gas (LNG) facility often incorporates replicate liquefaction trains. The performance of equivalent units across trains, designed using common numerical models, might be expected to be similar. In this article, we discuss statistical analysis of real plant data to validate this assumption. Analysis of operational data for end flash vessels from a pair of replicate trains at an LNG facility indicates that one train produces 2.8%–6.4% more end flash gas than the other. We then develop statistical models for train operation, facilitating reduced flaring and hence a reduction of up to 45% in CO2 equivalent flaring emissions, noting that flaring emissions for a typical LNG facility account for ~4%–8% of the overall facility emissions. We recommend that operational data-driven models be considered generally to improve the performance of LNG facilities and reduce their CO2 footprint, particularly when replica units are present.
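A minimal sketch of the kind of between-train comparison implied here, using synthetic data rather than the plant records analysed in the article: a bootstrap confidence interval for the relative difference in mean end flash gas production between two replicate trains.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly end flash gas rates for two nominally identical trains.
train_a = rng.normal(100.0, 8.0, 2000)
train_b = rng.normal(104.0, 8.0, 2000)

def relative_diff(a, b):
    """Percentage by which the mean of b exceeds the mean of a."""
    return (b.mean() - a.mean()) / a.mean() * 100

boot = np.array([
    relative_diff(rng.choice(train_a, train_a.size, replace=True),
                  rng.choice(train_b, train_b.size, replace=True))
    for _ in range(5000)
])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Train B produces {relative_diff(train_a, train_b):.1f}% more "
      f"(95% bootstrap CI: {lo:.1f}% to {hi:.1f}%)")
```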
This study introduces an advanced reinforcement learning (RL)-based control strategy for heating, ventilation, and air conditioning (HVAC) systems, employing a soft actor-critic agent with a customized reward mechanism. This strategy integrates time-varying outdoor temperature-dependent weighting factors to dynamically balance thermal comfort and energy efficiency. Our methodology has undergone rigorous evaluation across two distinct test cases within the building optimization testing (BOPTEST) framework, an open-source virtual simulator equipped with standardized key performance indicators (KPIs) for performance assessment. Each test case is strategically selected to represent distinct building typologies, climatic conditions, and HVAC system complexities, ensuring a thorough evaluation of our method across diverse settings. The first test case is a heating-focused scenario in a residential setting. Here, we directly compare our method against four advanced control strategies: an optimized rule-based controller inherently provided by BOPTEST, two sophisticated RL-based strategies leveraging BOPTEST’s KPIs as reward references, and a model predictive control (MPC)-based approach specifically tailored for the test case. Our results indicate that our approach outperforms the rule-based and other RL-based strategies and achieves outcomes comparable to the MPC-based controller. The second scenario, a cooling-dominated environment in an office setting, further validates the versatility of our strategy under varying conditions. The consistent performance of our strategy across both scenarios underscores its potential as a robust tool for smart building management, adaptable to both residential and office environments under different climatic challenges.
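The exact reward shaping is specific to the paper; a minimal sketch of the general idea of an outdoor-temperature-dependent weighting between comfort and energy terms (all thresholds and scales below are illustrative assumptions) could look like:

```python
import numpy as np

def comfort_weight(t_out, t_low=0.0, t_high=25.0):
    """Illustrative time-varying weight: prioritise thermal comfort more when
    the outdoor temperature is far from mild conditions (heating or cooling season)."""
    mid = 0.5 * (t_low + t_high)
    span = 0.5 * (t_high - t_low)
    return float(np.clip(0.5 + 0.5 * abs(t_out - mid) / span, 0.5, 1.0))

def reward(t_in, t_set, power_kw, t_out, power_scale=10.0):
    """Negative weighted sum of discomfort and energy use (toy scales)."""
    w = comfort_weight(t_out)
    discomfort = abs(t_in - t_set)       # deviation from setpoint, in kelvin
    energy = power_kw / power_scale      # normalised HVAC power
    return -(w * discomfort + (1.0 - w) * energy)

# A cold winter hour: comfort dominates; a mild hour: energy matters more.
print(reward(t_in=19.0, t_set=21.0, power_kw=5.0, t_out=-5.0))
print(reward(t_in=19.0, t_set=21.0, power_kw=5.0, t_out=14.0))
```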
One can perform equational reasoning about computational effects with a purely functional programming language thanks to monads. Even though equational reasoning for effectful programs is desirable, it is not yet mainstream. This is partly because it is difficult to maintain pencil-and-paper proofs of large examples. We propose a formalization of a hierarchy of effects using monads in the Coq proof assistant that makes monadic equational reasoning practical. Our main idea is to formalize the hierarchy of effects and algebraic laws as interfaces, as is done when formalizing hierarchies of algebras in dependent type theory. Thanks to this approach, we clearly separate equational laws from models. We can then take advantage of the sophisticated rewriting capabilities of Coq and build libraries of lemmas to achieve concise proofs of programs. We can also use the resulting framework to leverage Coq’s mathematical theories and formalize models of monads. In this article, we explain how we formalize a rich hierarchy of effects (nondeterminism, state, probability, etc.), how we mechanize examples of monadic equational reasoning from the literature, and how we apply our framework to the design of equational laws for a subset of ML with references.
Numerical solutions of partial differential equations require expensive simulations, limiting their application in design optimization, model-based control, and large-scale inverse problems. Surrogate modeling techniques aim to decrease computational expense while retaining dominant solution features and characteristics. Existing frameworks based on convolutional neural networks and snapshot-matrix decomposition often rely on lossy pixelization and data-preprocessing, limiting their effectiveness in realistic engineering scenarios. Recently, coordinate-based multilayer perceptron networks have been found to be effective at representing 3D objects and scenes by regressing volumetric implicit fields. These concepts are leveraged and adapted in the context of physical-field surrogate modeling. Two methods toward generalization are proposed and compared: design-variable multilayer perceptron (DV-MLP) and design-variable hypernetworks (DVH). Each method utilizes a main network which consumes pointwise spatial information to provide a continuous representation of the solution field, allowing discretization independence and a decoupling of solution and model size. DV-MLP achieves generalization through the use of a design-variable embedding vector, while DVH conditions the main network weights on the design variables using a hypernetwork. The methods are applied to predict steady-state solutions around complex, parametrically defined geometries on non-parametrically-defined meshes, with model predictions obtained in less than a second. The incorporation of random Fourier features greatly enhanced prediction and generalization accuracy for both approaches. DVH models have more trainable weights than a similar DV-MLP model, but an efficient batch-by-case training method allows DVH to be trained in a similar amount of time as DV-MLP. A vehicle aerodynamics test problem is chosen to assess the method’s feasibility. Both methods exhibit promising potential as viable options for surrogate modeling, being able to process snapshots of data that correspond to different mesh topologies.
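One concrete ingredient mentioned above, random Fourier features, maps low-dimensional coordinates to a higher-frequency encoding before the main network; a minimal NumPy sketch follows, where the frequency scale and the tiny untrained MLP are placeholders rather than the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, b_matrix):
    """Map coordinates x of shape (n, d) to [cos(2*pi*xB), sin(2*pi*xB)] features."""
    proj = 2.0 * np.pi * x @ b_matrix
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

def mlp_forward(features, weights, biases):
    """Tiny fully connected network with tanh hidden layers (illustrative only)."""
    h = features
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ w + b)
    return h @ weights[-1] + biases[-1]

# 2D spatial coordinates -> 64 random Fourier features -> scalar field value.
x = rng.uniform(-1.0, 1.0, size=(5, 2))           # 5 query points
b_matrix = rng.normal(0.0, 3.0, size=(2, 32))      # frequency scale is a placeholder
feats = fourier_features(x, b_matrix)              # shape (5, 64)

sizes = [64, 32, 32, 1]
weights = [rng.normal(0, 0.3, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
print(mlp_forward(feats, weights, biases).ravel())  # untrained predictions
```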
Rapid urbanization poses several challenges, especially when faced with an uncontrolled urban development plan. Therefore, it often leads to anarchic occupation and expansion of cities, resulting in the phenomenon of urban sprawl (US). To support sustainable decision-making in urban planning and policy development, a more effective approach to addressing this issue through US simulation and prediction is essential. Although work has been published on the use of deep learning (DL) methods to simulate US indicators, almost no work has assessed what has already been done, its potential, the issues, and the challenges ahead. By synthesising existing research, we aim to assess the current landscape of the use of DL in modelling US. This article elucidates the complexities of US, focusing on its multifaceted challenges and implications. Through an examination of DL methodologies, we aim to highlight their effectiveness in capturing the complex spatial patterns and relationships associated with US. This work begins by demystifying US, highlighting its multifaceted challenges. In addition, the article examines the synergy between DL and conventional methods, highlighting their respective advantages and disadvantages. It emerges that the use of DL in the simulation and forecasting of US indicators is increasing, and its potential is very promising for guiding strategic decisions to control and mitigate this phenomenon. Of course, this is not without major challenges, both in terms of data and models and in terms of strategic city planning policies.
We introduce a comprehensive data-driven framework aimed at enhancing the modeling of physical systems, employing inference techniques and machine-learning enhancements. As a demonstrative application, we pursue the modeling of cathodic electrophoretic deposition, commonly known as e-coating. Our approach illustrates a systematic procedure for enhancing physical models by identifying their limitations through inference on experimental data and introducing adaptable model enhancements to address these shortcomings. We begin by tackling the issue of model parameter identifiability, which reveals aspects of the model that require improvement. To address generalizability, we introduce modifications, which also enhance identifiability. However, these modifications do not fully capture essential experimental behaviors. To overcome this limitation, we incorporate interpretable yet flexible augmentations into the baseline model. These augmentations are parameterized by simple fully-connected neural networks, and we leverage machine-learning tools, particularly neural ordinary differential equations, to learn these augmentations. Our simulations demonstrate that the machine-learning-augmented model more accurately captures observed behaviors and improves predictive accuracy. Nevertheless, we contend that while the model updates offer superior performance and capture the relevant physics, we can reduce off-line computational costs by eliminating certain dynamics without compromising accuracy or interpretability in downstream predictions of quantities of interest, particularly film thickness predictions. The entire process outlined here provides a structured approach to leverage data-driven methods by helping us comprehend the root causes of model inaccuracies and by offering a principled method for enhancing model performance.
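A minimal sketch of the augmentation idea, not the e-coating model itself: a baseline right-hand side plus a small neural correction term, integrated with a fixed-step scheme. In practice the correction weights would be trained with neural-ODE tooling; here they are left random purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random "network" standing in for a trained correction term.
w1, b1 = rng.normal(0, 0.1, (1, 8)), np.zeros(8)
w2, b2 = rng.normal(0, 0.1, (8, 1)), np.zeros(1)

def nn_correction(y):
    """Small fully connected augmentation g(y; theta); weights are untrained here."""
    h = np.tanh(np.atleast_2d(y) @ w1 + b1)
    return (h @ w2 + b2).ravel()

def baseline_rhs(y, k=0.5):
    """Placeholder baseline physics: simple first-order decay."""
    return -k * y

def augmented_rhs(y):
    """Baseline model plus learnable correction."""
    return baseline_rhs(y) + nn_correction(y)

def integrate(rhs, y0, t_end, dt=0.01):
    """Explicit Euler integration of dy/dt = rhs(y)."""
    y, t = np.array([y0], dtype=float), 0.0
    while t < t_end:
        y = y + dt * rhs(y)
        t += dt
    return y[0]

print("baseline :", integrate(baseline_rhs, 1.0, 2.0))
print("augmented:", integrate(augmented_rhs, 1.0, 2.0))
```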
The global number of individuals experiencing forced displacement has reached its highest level in the past decade. In this context, the provision of services for those in need requires timely and evidence-based approaches. How can mobile phone data (MPD) based analyses address the knowledge gap on mobility patterns and needs assessments in forced displacement settings? To answer this question, in this paper, we examine the capacity of MPD to function as a tool for anticipatory analysis, particularly in response to natural disasters and conflicts that lead to internal or cross-border displacement. The paper begins with a detailed review of the processes involved in acquiring, processing, and analyzing MPD in forced displacement settings. Following this, we critically assess the challenges associated with employing MPD in policy-making, with a specific focus on issues of user privacy and data ethics. The paper concludes by evaluating the potential benefits of MPD analysis for targeted and effective policy interventions and discusses future research avenues, drawing on recent studies and ongoing collaborations with mobile network operators.
For precision-required robot operations, the robot’s positioning accuracy, repeatability, and stiffness characteristics should be considered. If the mechanism has the desired repeatability performance, a kinematic calibration process can enhance the positioning accuracy. However, for robot operations where high accelerations are needed, the compliance characteristics of the mechanism affect the trajectory-tracking accuracy adversely. In this paper, a novel approach is proposed to enhance the trajectory-tracking accuracy of a robot operating at high accelerations by predicting the compliant displacements when there is no physical contact of the robot with its environment. Also, this case study compares the trajectory-tracking characteristics of an over-constrained and a normal-constrained 2-degrees-of-freedom (DoF) planar parallel mechanism during high-acceleration operations up to 5 g accelerations. In addition, the influence of the end-effector’s center of mass (CoM) position along the normal of the plane is investigated in terms of its effects on the proposed trajectory-enhancing algorithm.
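A minimal sketch of the compensation idea described above, not the paper's actual compliance model: estimate the elastic lag from the commanded acceleration through an assumed Cartesian compliance matrix and pre-compensate the reference trajectory. The stiffness values and sign conventions are placeholders.

```python
import numpy as np

# Placeholder planar parameters: end-effector mass (kg) and a 2x2 Cartesian
# stiffness matrix (N/m) for the mechanism at the current pose.
mass = 2.0
stiffness = np.array([[8.0e4, 5.0e3],
                      [5.0e3, 6.0e4]])
compliance = np.linalg.inv(stiffness)

def compensated_reference(p_desired, a_desired):
    """Shift the commanded position to cancel the predicted elastic lag caused
    by inertial loading at high acceleration (no contact with the environment)."""
    inertial_force = mass * np.asarray(a_desired)   # F = m * a
    lag = compliance @ inertial_force               # predicted deflection, K^-1 F
    return np.asarray(p_desired) + lag              # command ahead of the lag

# A 5 g acceleration along x: the reference is shifted by the predicted deflection.
print(compensated_reference(p_desired=[0.10, 0.05], a_desired=[5 * 9.81, 0.0]))
```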
In 2020, the COVID-19 pandemic resulted in a rapid response from governments and researchers worldwide, but information-sharing mechanisms were variable, and many early efforts were insufficient for the purpose. We conducted semi-structured interviews with fifteen data professionals located around the world who work with COVID-19-relevant data types. Interviews covered both challenges and positive experiences with data in multiple domains and formats, including medical records, social deprivation, hospital bed capacity, and mobility data. We analyze this qualitative corpus of experiences for content and themes and identify four sequential barriers a researcher may encounter. These are: (1) knowing data exists, (2) being able to access that data, (3) data quality, and (4) ability to share data onwards. A fifth barrier, (5) human throughput capacity, is present throughout all four stages. Examples of these barriers range from challenges faced by single individuals to non-existent records of historic mingling/social-distancing laws, and up to systemic geopolitical data suppression. Finally, we recommend that governments and local authorities explicitly create machine-readable temporal “law as code” for changes in laws, such as mobility/mingling laws, and changes in geographical regions.
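The recommended machine-readable temporal “law as code” could take many forms; one hypothetical sketch is a dated record of mobility rules that can be queried programmatically (all rules, dates, and field names below are invented for illustration).

```python
from datetime import date

# Hypothetical machine-readable record of mobility/mingling rules over time.
# Each entry: validity window plus structured restriction parameters.
MOBILITY_RULES = [
    {"from": date(2020, 3, 23), "to": date(2020, 5, 10),
     "max_gathering_size": 2, "travel_radius_km": 5, "region": "EXAMPLE-REGION"},
    {"from": date(2020, 5, 11), "to": date(2020, 7, 3),
     "max_gathering_size": 6, "travel_radius_km": None, "region": "EXAMPLE-REGION"},
]

def rules_in_force(on_date, region):
    """Return the restriction parameters applicable on a given date and region."""
    return [r for r in MOBILITY_RULES
            if r["region"] == region and r["from"] <= on_date <= r["to"]]

print(rules_in_force(date(2020, 4, 15), "EXAMPLE-REGION"))
```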
This paper proposes a recovery mechanism for low-cost mobile robotic systems that experience vision sensor failure during vSLAM. The approach takes advantage of the ROS architecture and adopts the Shannon–Nyquist sampling theorem to selectively sample path parameters that will be used for back travel in case of vision sensor failure. As opposed to the point clouds normally used to store vSLAM data, this paper proposes to store and use lightweight variables, namely the distance between sampled points, velocity combinations (linear and angular velocities), the sampling period, and yaw angle values, to describe the robot path and reduce the memory space required to store these variables. In this study, low-cost robotic systems typically using cameras aided by proprioceptive sensors such as an IMU during vSLAM activities are investigated. A demonstration is made of how the ROS architecture can be used in a scenario where vision sensing is adversely affected, resulting in mapping failure. Additionally, a recommendation is made for adoption of the approach on vSLAM platforms implemented on both ROS1 and ROS2. Furthermore, a proposal is made to add an additional layer to vSLAM systems that will be exclusively used for back travel in case of vision loss during vSLAM activities resulting in mapping failure.
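A minimal sketch of the lightweight path record described above; the field names and replay logic are illustrative, not the authors' implementation. Sampled segments are stored compactly and emitted in reverse order to drive a back-travel routine after vision loss.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PathSample:
    """One sampled path segment: far lighter than storing a point cloud."""
    distance_m: float      # distance travelled since the previous sample
    linear_v: float        # commanded linear velocity (m/s)
    angular_v: float       # commanded angular velocity (rad/s)
    period_s: float        # duration of the sampled segment
    yaw_rad: float         # heading at the end of the segment

def back_travel_commands(samples: List[PathSample]):
    """Yield (linear_v, angular_v, period) commands that retrace the path in
    reverse after a vision sensor failure; velocities are negated to back up."""
    for s in reversed(samples):
        yield (-s.linear_v, -s.angular_v, s.period_s)

path = [
    PathSample(0.50, 0.25, 0.00, 2.0, 0.00),
    PathSample(0.40, 0.20, 0.30, 2.0, 0.60),
]
for cmd in back_travel_commands(path):
    print(cmd)
```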
Data assimilation is a core component of numerical weather prediction systems. The large quantity of data processed during assimilation requires the computation to be distributed across increasingly many compute nodes; yet, existing approaches suffer from synchronization overhead in this setting. In this article, we exploit the formulation of data assimilation as a Bayesian inference problem and apply a message-passing algorithm to solve the spatial inference problem. Since message passing is inherently based on local computations, this approach lends itself to parallel and distributed computation. In combination with a GPU-accelerated implementation, we can scale the algorithm to very large grid sizes while retaining good accuracy and compute and memory requirements.
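The paper's message-passing formulation is considerably more sophisticated, but the appeal of local computation can be illustrated with a toy Gauss-Markov smoothing problem on a one-dimensional grid, solved by Jacobi-style sweeps in which each cell reads only its two neighbours; this neighbour-only update pattern is what makes such schemes easy to parallelise and distribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "grid": a smooth field observed with noise at every cell.
n = 200
truth = np.sin(np.linspace(0, 4 * np.pi, n))
obs = truth + rng.normal(0, 0.5, n)

obs_prec = 4.0       # observation precision (1 / 0.5**2)
smooth_prec = 20.0   # prior precision coupling neighbouring cells

# Posterior mean solves (obs_prec*I + smooth_prec*L) x = obs_prec * y, with L
# the chain Laplacian.  A Jacobi sweep updates every cell from its own
# observation and its two neighbours only, so the computation is purely local.
deg = np.full(n, 2.0)
deg[0] = deg[-1] = 1.0
x = obs.copy()
for _ in range(500):
    neighbour_sum = np.zeros(n)
    neighbour_sum[1:] += x[:-1]     # left neighbour
    neighbour_sum[:-1] += x[1:]     # right neighbour
    x = (obs_prec * obs + smooth_prec * neighbour_sum) / (obs_prec + smooth_prec * deg)

print(f"raw RMSE      {np.sqrt(np.mean((obs - truth) ** 2)):.3f}")
print(f"smoothed RMSE {np.sqrt(np.mean((x - truth) ** 2)):.3f}")
```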