The loading and unloading operations of smart logistics robots depend largely on their perception systems. However, there is a paucity of studies evaluating Lidar maps and their SLAM algorithms in navigation systems for complex environments. In the proposed work, the Lidar information is refined using a binary occupancy grid approach, and an Improved Self-Adaptive Learning Particle Swarm Optimization (ISALPSO) algorithm is implemented for path prediction. The approach uses 2D Lidar mapping to determine the most efficient route for a mobile robot in logistics applications. The Hector SLAM method is used on the Robot Operating System (ROS) platform to perform real-time localization and map building, and the resulting map is then transformed into a binary occupancy grid. To demonstrate the path navigation results of the proposed methodologies, a navigational model has been created in a MATLAB 2D virtual environment using 2D Lidar mapping point data. The ISALPSO algorithm adapts its parameters (inertia weight, acceleration coefficients, learning coefficients, mutation factor, and swarm size) based on the performance of the generated path. Compared with five other PSO variants, ISALPSO yields a considerably shorter path, converges faster, and requires less time to compute the distance between the transporting and unloading locations, as shown by the simulation results and their validation in a 2D Lidar environment. The efficiency and effectiveness of path planning for mobile robots in logistics applications are further validated using Quanser hardware interfaced with a 2D Lidar and operated in environment 3, where the proposed algorithm produces the optimal path.
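To make the self-adaptive idea concrete, here is a minimal sketch of an adaptive PSO path planner over a binary occupancy grid. It is not the paper's ISALPSO (whose update rules also adapt acceleration coefficients, mutation factor, and swarm size); the grid, cost function, and per-particle inertia rule below are illustrative assumptions.

```python
import numpy as np

# Minimal self-adaptive PSO sketch for 2D waypoint path planning.
# The occupancy grid, cost function, and adaptation rule are illustrative
# placeholders, not the ISALPSO rules defined in the paper.

GRID = np.zeros((50, 50))          # 0 = free, 1 = occupied (binary occupancy grid)
GRID[20:30, 10:40] = 1             # a wall-like obstacle
START, GOAL = np.array([5.0, 5.0]), np.array([45.0, 45.0])
N_WAYPOINTS, N_PARTICLES, N_ITERS = 4, 30, 200

def path_cost(waypoints):
    """Path length plus a heavy penalty for waypoints inside obstacles."""
    pts = np.vstack([START, waypoints.reshape(-1, 2), GOAL])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    cells = np.clip(waypoints.reshape(-1, 2).astype(int), 0, 49)
    penalty = 1000.0 * GRID[cells[:, 0], cells[:, 1]].sum()
    return length + penalty

rng = np.random.default_rng(0)
dim = 2 * N_WAYPOINTS
pos = rng.uniform(0, 49, (N_PARTICLES, dim))
vel = np.zeros((N_PARTICLES, dim))
w = np.full(N_PARTICLES, 0.9)      # per-particle inertia, adapted online
pbest, pbest_cost = pos.copy(), np.array([path_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(N_ITERS):
    r1, r2 = rng.random((2, N_PARTICLES, dim))
    vel = w[:, None] * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 49)
    cost = np.array([path_cost(p) for p in pos])
    improved = cost < pbest_cost
    # Self-adaptation: shrink inertia for improving particles (exploit),
    # grow it for stagnating ones (explore).
    w = np.clip(np.where(improved, w * 0.95, w * 1.05), 0.4, 0.9)
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best path cost:", pbest_cost.min())
```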
Invertibility is a fundamental concept in computer science, with various manifestations in software development (serializer/deserializer, parser/printer, redo/undo, compressor/decompressor, and so on). Full invertibility necessarily requires bijectivity, but the direct approach of composing bijective functions to develop invertible programs is too restrictive to be useful. In this paper, we take a different approach by focusing on partially invertible functions—functions that become invertible if some of their arguments are fixed. The simplest such example is addition, which becomes invertible when one of the operands is fixed. More involved examples include entropy-based compression methods (e.g., Huffman coding), which carry the occurrence frequencies of input symbols (in some format, such as a Huffman tree); fixing this frequency information makes the compression methods invertible.
We develop a language Sparcl for programming such functions in a natural way, where partial invertibility is the norm and bijectivity is a special case, hence gaining significant expressiveness without compromising correctness. The challenge in designing such a language is to allow ordinary programming (the “partially” part) to interact with the invertible part freely, and yet guarantee invertibility by construction. The language Sparcl is linear-typed and has a type constructor to distinguish data that are subject to invertible computation from those that are not. We present the syntax, type system, and semantics of the language and prove that Sparcl correctly guarantees invertibility for its programs. We demonstrate the expressiveness of Sparcl with examples including tree rebuilding from preorder and inorder traversals, Huffman coding, arithmetic coding, and LZ77 compression.
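Sparcl itself is linear-typed and guarantees invertibility by construction; the plain-Python sketch below only mirrors the abstract's addition example, packaging a partially invertible function as a forward/backward pair once one operand is fixed.

```python
# A partially invertible function, illustrated in plain Python rather than
# Sparcl: addition is not injective in both arguments, but fixing one operand
# yields a bijection with an exact inverse.

def fix_addend(n):
    """Fix one operand of addition, returning a (forward, backward) pair."""
    forward = lambda x: x + n
    backward = lambda y: y - n
    return forward, backward

add5, sub5 = fix_addend(5)
assert sub5(add5(42)) == 42      # round-trip: backward . forward == id
assert add5(sub5(42)) == 42      # forward . backward == id
```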
Social network analysis is known to provide a wealth of insights relevant to many aspects of policymaking. Yet, the social data needed to construct social networks are not always available. Furthermore, even when they are, interpreting such networks often relies on extraneous knowledge. Here, we propose an approach to infer social networks directly from the texts produced by actors and the terminological similarities that these texts exhibit. This approach relies on fitting a topic model to the texts produced by these actors and measuring topic profile correlations between actors. This reveals what can be called “hidden communities of interest,” that is, groups of actors sharing similar semantic contents but whose social relationships with one another may be unknown or underlying. Network interpretation follows from the topic model. Diachronic perspectives can also be built by modeling the networks over different time periods and mapping genealogical relationships between communities. As a case study, the approach is deployed over a working corpus of academic articles (domain of philosophy of science; N=16,917).
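A minimal sketch of the pipeline described above, using scikit-learn's LDA as the topic model; the toy corpus, number of topics, and correlation threshold are placeholder assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: one concatenated text per actor (placeholder data).
actors = ["alice", "bob", "carol"]
texts = [
    "inference model topic corpus semantics",
    "inference topic model probability corpus",
    "policy network governance actors institutions",
]

# Fit a topic model and get each actor's topic profile.
dtm = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
profiles = lda.fit_transform(dtm)            # rows: actors, cols: topic weights

# Draw an edge between two actors when their topic profiles correlate strongly,
# revealing a "hidden community of interest".
corr = np.corrcoef(profiles)
THRESHOLD = 0.8                               # placeholder cut-off
edges = [(actors[i], actors[j])
         for i in range(len(actors)) for j in range(i + 1, len(actors))
         if corr[i, j] > THRESHOLD]
print(edges)
```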
Leading experts in the field ask what digital justice looks like in a time of pandemic across various interdisciplinary contexts and spheres in science, technology, and society, from public health to education, politics, and everyday life.
This chapter establishes an explicit link between foreign aid inflows and development indicators classified in the multidimensional setting of the SDGs. This linkage is not a black box, as it takes advantage of the model’s causal chains describing budget allocations and indicator performance. First, we create counterfactuals by removing aid flows. Hence, we can estimate aid impacts and assess their statistical significance at the indicator or country level during the first decade of the 21st century. Second, we produce a validation exercise comparing our results with econometric evidence found in a well-known sector-level study (access to water and sanitation) using a subset of our data.
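The counterfactual logic can be sketched as follows; `simulate` is a hypothetical stand-in for the book's agent-based model, and the payoff structure is invented purely for illustration.

```python
import numpy as np

# Hedged sketch of the counterfactual logic: run the (hypothetical) model
# Monte Carlo with and without aid flows and compare indicator outcomes.
# `simulate` stands in for the chapter's agent-based model, not shown here.

def simulate(domestic_budget, aid, seed):
    """Placeholder model: indicator responds noisily to total spending."""
    r = np.random.default_rng(seed)
    return 0.1 * (domestic_budget + aid) + r.normal(0, 1)

runs = 500
with_aid = np.array([simulate(100.0, 20.0, s) for s in range(runs)])
without = np.array([simulate(100.0, 0.0, s + runs) for s in range(runs)])

impact = with_aid.mean() - without.mean()
se = np.sqrt(with_aid.var(ddof=1) / runs + without.var(ddof=1) / runs)
print(f"estimated aid impact: {impact:.2f} (z = {impact / se:.1f})")
```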
Coreference resolution is the task of identifying and clustering mentions that refer to the same entity in a document. Based on state-of-the-art deep learning approaches, end-to-end coreference resolution considers all spans as candidate mentions and tackles mention detection and coreference resolution simultaneously. Recently, researchers have attempted to incorporate document-level context using higher-order inference (HOI) to improve end-to-end coreference resolution. However, HOI methods have been shown to have marginal or even negative impact on coreference resolution. In this paper, we reveal the reasons for the negative impact of HOI on coreference resolution. Contextualized representations (e.g., those produced by BERT) for building span embeddings have been shown to be highly anisotropic. We show that HOI actually increases, and thus worsens, the anisotropy of span embeddings, making it difficult to distinguish between related but distinct entities (e.g., pilots and flight attendants). Instead of using HOI, we propose two methods, Less-Anisotropic Internal Representations (LAIR) and Data Augmentation with Document Synthesis and Mention Swap (DSMS), to learn less-anisotropic span embeddings for coreference resolution. LAIR uses a linear aggregation of the first layer and the topmost layer of contextualized embeddings. DSMS generates more diversified examples of related but distinct entities by synthesizing documents and by mention swapping. Our experiments show that less-anisotropic span embeddings significantly improve performance (+2.8 F1 on the OntoNotes benchmark), reaching new state-of-the-art performance on the GAP dataset.
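The LAIR aggregation itself is a one-liner; the PyTorch sketch below illustrates it with random tensors standing in for the first and topmost contextualized layers, and a simple mean-pooled span embedding (the actual model uses richer span representations).

```python
import torch

# LAIR's core idea as stated in the abstract: build span representations from
# a linear aggregation of the first and topmost contextualized layers. Shapes
# and the mixing parameterization are illustrative, not the paper's exact setup.

seq_len, hidden = 128, 768
first_layer = torch.randn(seq_len, hidden)     # stand-in for layer-1 outputs
top_layer = torch.randn(seq_len, hidden)       # stand-in for final-layer outputs

alpha = torch.nn.Parameter(torch.tensor(0.5))  # learnable mixing coefficient
tokens = alpha * first_layer + (1 - alpha) * top_layer

# A span embedding over tokens [i, j) -- here a simple mean; real coreference
# models typically concatenate endpoint and attention-weighted head vectors.
i, j = 10, 15
span_embedding = tokens[i:j].mean(dim=0)
print(span_embedding.shape)                    # torch.Size([768])
```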
In this study, a fuzzy reinforcement learning control (FRLC) is proposed to achieve trajectory tracking of a differential drive mobile robot (DDMR). The proposed FRLC approach designs fuzzy membership functions to fuzzify the relative position and heading between the current position and a prescribed trajectory. Instead of fuzzy inference rules, the relationship between the fuzzy inputs and actuator voltage outputs is built using a reinforcement learning (RL) agent. Herein, the deep deterministic policy gradient (DDPG) methodology, consisting of actor and critic neural networks, is employed in the RL agent. Simulations are conducted considering varying slip-ratio disturbances, different initial positions, and two different trajectories in the testing environment, and a comparison with the classical DDPG model is presented. The results show that the proposed FRLC successfully tracks different trajectories under varying slip-ratio disturbances and outperforms the classical DDPG model. Moreover, experimental results validate that the proposed FRLC is also applicable to real mobile robots.
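A minimal sketch of the fuzzification stage, assuming triangular membership functions and placeholder breakpoints (the paper's actual membership design may differ); the resulting fuzzy state vector is what the DDPG agent would consume as its observation.

```python
import numpy as np

# Sketch of the fuzzification stage: triangular membership functions turn the
# robot's distance and heading errors into fuzzy degrees fed to a DDPG agent.
# Breakpoints and linguistic labels are placeholders, not the paper's design.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify(distance_err, heading_err):
    return np.array([
        tri(distance_err, -0.1, 0.0, 0.5),       # "near"
        tri(distance_err, 0.0, 0.5, 2.0),        # "medium"
        tri(distance_err, 0.5, 2.0, 5.0),        # "far"
        tri(heading_err, -np.pi, -np.pi/2, 0.0), # "left"
        tri(heading_err, -np.pi/2, 0.0, np.pi/2),# "ahead"
        tri(heading_err, 0.0, np.pi/2, np.pi),   # "right"
    ])

state = fuzzify(distance_err=0.8, heading_err=0.3)
print(state)  # fuzzy state vector consumed by the actor network
```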
This chapter introduces a model in which a government allocates financial resources across several policy issues (development dimensions) and a set of public servants (or agencies) transforms public spending into policy outcomes through government programmes. We start by describing the macro-level dynamics and the relevant equations involved. Then, we introduce a political economy game between the government and its officials (or public servants). First, we describe the public servants’ decision making in an environment of uncertainty through reinforcement learning. Second, we elaborate on the problem of the government (or central authority) and how we can specify its heuristic strategy. Finally, we provide an overview of the entire structure of the model.
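As a hedged illustration of reinforcement learning under uncertainty (not the book's exact behavioural rule), the sketch below has a public servant learn the value of diligence versus diversion from noisy, monitored payoffs.

```python
import numpy as np

# Generic reinforcement-learning sketch of a public servant choosing between
# being efficient or diverting funds under monitoring uncertainty. Payoffs,
# monitoring probability, and update rule are illustrative placeholders,
# not the chapter's exact behavioural specification.

rng = np.random.default_rng(2)
q = np.zeros(2)                 # action values: 0 = diligent, 1 = divert
alpha, epsilon, p_caught = 0.1, 0.1, 0.3

for step in range(2000):
    a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q))
    if a == 0:
        reward = 1.0                                        # steady payoff
    else:
        reward = -2.0 if rng.random() < p_caught else 1.5   # risky diversion
    q[a] += alpha * (reward - q[a])  # incremental action-value update

print("learned action values:", q)   # diligence wins at these payoffs
```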
Current research on data in policy has primarily focused on street-level bureaucrats, neglecting the changes in the work of policy advisors. This research fills this gap by presenting an explorative theoretical understanding of the integration of data, local knowledge and professional expertise in the work of policy advisors. The theoretical perspective we develop builds upon Vickers’s (1995, The Art of Judgment: A Study of Policy Making, Centenary Edition, SAGE) judgments in policymaking. Empirically, we present a case study of a Dutch law enforcement network for preventing and reducing organized crime. Based on interviews, observations, and documents collected in a 13-month ethnographic fieldwork period, we study how policy advisors within this network make their judgments. In contrast with the idea of data as a rationalizing force, our study reveals that how data sources are selected and analyzed for judgments is very much shaped by the existing local and expert knowledge of policy advisors. The weight given to data is highly situational: we found that policy advisors welcome data in scoping the policy issue, but for judgments more closely connected to actual policy interventions, data are given limited value.
This chapter elaborates on the calibration and validation procedures for the model. First, we describe our calibration strategy in which a customised optimisation algorithm makes use of a multi-objective function, preventing the loss of indicator-specific error information. Second, we externally validate our model by replicating two well-known statistical patterns: (1) the skewed distribution of budgetary changes and (2) the negative relationship between development and corruption. Third, we internally validate the model by showing that public servants who receive more positive spillovers tend to be less efficient. Fourth, we analyse the statistical behaviour of the model through different tests: validity of synthetic counterfactuals, parameter recovery, overfitting, and time equivalence. Finally, we make a brief reference to the literature on estimating SDG networks.
Deep reinforcement learning (DRL) is promising for solving control problems in fluid mechanics, but it is a new field with many open questions. Possibilities are numerous and guidelines are rare concerning the choice of algorithms or best formulations for a given problem. Moreover, DRL algorithms learn a control policy by collecting samples from an environment, which may be very costly when used with Computational Fluid Dynamics (CFD) solvers. Algorithms must therefore minimize the number of samples required for learning (sample efficiency) and generate a usable policy from each training (reliability). This paper aims to (a) evaluate three existing algorithms (DDPG, TD3, and SAC) on a fluid mechanics problem with respect to reliability and sample efficiency across a range of training configurations, (b) establish a fluid mechanics benchmark of increasing data collection cost, and (c) provide practical guidelines and insights for the fluid dynamics practitioner. The benchmark consists of controlling an airfoil to reach a target. The problem is solved with either a low-cost low-order model or with a high-fidelity CFD approach. The study found that DDPG and TD3 have learning-stability issues that depend strongly on DRL hyperparameters and the reward formulation, and therefore require significant tuning. In contrast, SAC is shown to be both reliable and sample efficient across a wide range of parameter setups, making it well suited to solve fluid mechanics problems and set up new cases without tremendous effort. In particular, SAC is resistant to small replay buffers, which could be critical if full-flow fields were to be stored.
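To give a feel for the kind of setup compared here, the sketch below trains SAC with a deliberately small replay buffer via stable-baselines3; Pendulum-v1 is a cheap continuous-control stand-in for the paper's airfoil environment, and the hyperparameters are illustrative, not the paper's tuned values.

```python
import gymnasium as gym
from stable_baselines3 import SAC

# Stand-in environment: a cheap continuous-control task in place of the
# paper's airfoil benchmark, which wraps a CFD or low-order solver.
env = gym.make("Pendulum-v1")

model = SAC(
    "MlpPolicy",
    env,
    buffer_size=10_000,     # deliberately small replay buffer (stress test)
    learning_rate=3e-4,
    verbose=0,
)
model.learn(total_timesteps=20_000)

# Evaluate the learned policy for one episode.
obs, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += float(reward)
    done = terminated or truncated
print(f"episode return: {total_reward:.1f}")
```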
This chapter introduces the reader to the problem of policy prioritisation and why quantitative/computational analytic frameworks are much needed. We explain the various academic- and policy-oriented motivations for developing the Policy Priority Inference research programme. We apply this computational framework in the study of the SDGs and the feasibility of the 2030 Agenda of sustainable development.
This chapter formulates an analytical toolkit that incorporates an intricate – yet realistic – chain of causal mechanisms to explain the expenditure–development relationship. First, we explain several reasons why we take a complexity perspective for modelling the expenditure–development link and why we choose agent-based modelling as a suitable tool for assessing policy impacts in sustainable development. Second, we introduce the concept of social mechanisms and explain how we apply them to measure the impact of budgetary allocations when systemic effects are relevant. Third, we compare different concepts of causality and explain the advantages of an account that simulates counterfactual scenarios where policy interventions are absent.
This chapter provides a comprehensive framework to understand and quantify structural bottlenecks in a setting of multidimensional sustainable development. First, we formalise the idea of an idiosyncratic bottleneck by considering a hypothetical situation in which a government has all the necessary resources to guarantee the success of its existing programmes (i.e., the budgetary frontier). Second, we compare the development gaps between the baseline and counterfactual outputs to assess how sensitive the different indicators are when operating at the budgetary frontier. Third, we combine this information with the historical performance of indicators to develop a methodology that identifies idiosyncratic bottlenecks. Finally, we elaborate on a flagging system to differentiate between idiosyncratic bottlenecks according to the ‘urgency’ to unblock them.