Currently, most wheel-legged robots must switch between wheel and leg modes while stationary, and existing algorithms for static wheel-leg mode switching cannot meet the control requirements of dynamic multimodal switching. In this paper, to achieve efficient wheel-leg mode switching for a quadruped robot, a novel transformable mechanism is designed. A multimodal coordinated operation control framework based on multiple algorithms is then presented, incorporating the minimum foot force distribution method (algorithm No.1), the minimum joint torque distribution method (algorithm No.2), and a method combining a single rigid body dynamic model with quadratic programming (algorithm No.3). During dynamic wheel-leg mode switching, the existing algorithm No.3 is prone to producing incorrect optimal forces because the whole-body rotational inertia changes. Therefore, improved versions of algorithm No.1 and algorithm No.2, which do not depend on the body’s inertia, are proposed. The control effects of the three algorithms are compared and analyzed in simulation. The results show that, compared with the other two algorithms, algorithm No.3 maintains small attitude-angle and speed-tracking errors whether the robot is under multilegged support or walking omnidirectionally. However, the proposed algorithms No.1 and No.2 track the target speed more accurately when the robot walks while raising and lowering its wheels. Finally, a multi-algorithm combination control scheme formulated from these results is demonstrated to be effective for dynamic wheel-leg mode switching.
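The minimum foot force distribution idea behind algorithm No.1 can be illustrated with a minimal sketch: given the stance-foot positions and a desired net force and torque on the body, choose the contact forces of smallest total magnitude that realize them. The function below is a generic illustration (equality constraints only, solved via the pseudoinverse), not the paper's actual controller, which also handles inequality constraints such as friction cones.

```python
import numpy as np

def min_force_distribution(contacts, f_des, tau_des):
    """Distribute a desired net force/torque over the stance feet by
    minimizing the sum of squared contact forces.

    contacts: (n, 3) foot positions relative to the body CoM
    f_des:    (3,) desired net force on the body
    tau_des:  (3,) desired net torque about the CoM
    Returns an (n, 3) array of per-foot forces.
    """
    n = len(contacts)
    # Grasp matrix: force balance (identity blocks) stacked on torque
    # balance (cross-product matrices), one block per contact.
    A = np.zeros((6, 3 * n))
    for i, r in enumerate(contacts):
        A[0:3, 3 * i:3 * i + 3] = np.eye(3)
        rx = np.array([[0.0, -r[2], r[1]],
                       [r[2], 0.0, -r[0]],
                       [-r[1], r[0], 0.0]])
        A[3:6, 3 * i:3 * i + 3] = rx
    b = np.concatenate([f_des, tau_des])
    # Minimum-norm solution of A F = b via the pseudoinverse.
    F = np.linalg.pinv(A) @ b
    return F.reshape(n, 3)
```

For a symmetric four-foot stance supporting pure weight, this yields an even split of the vertical load across the feet.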
There are now an estimated 114 million forcibly displaced people worldwide, some 88% of whom are in low- and middle-income countries. For governments and international organizations to design effective policies and responses, they require comparable and accessible socioeconomic data on those affected by forced displacement, including host communities. Such data is required to understand needs, as well as interactions between complex drivers of displacement and barriers to durable solutions. However, high-quality data of this kind takes time to collect and is costly. Can the ever-increasing volume of open data and evolving innovative techniques accelerate and enhance its generation? Are there applications of alternative data sources, advanced statistics, and machine learning that could be adapted for forced displacement settings, considering their specific legal and ethical dimensions? As a catalytic bridge between the World Bank and UNHCR, the Joint Data Center on Forced Displacement convened a workshop to answer these questions. This paper summarizes the emergent messages from the workshop and recommendations for future areas of focus and ways forward for the community of practice on socioeconomic data on forced displacement. Three recommended areas of future focus are: enhancing and optimizing household survey sampling approaches; estimating forced displacement socioeconomic indicators from alternative data sources; and amplifying data accessibility and discoverability. Three key features of the recommended approach are: strong complementarity with the existing data-collection-to-use pipeline; data responsibility built in and tailored to forced displacement contexts; and iterative assessment of operational relevance to ensure continuous focus on improving outcomes for those affected by forced displacement.
This paper assembles a unifying framework encompassing a wide variety of mathematical instruments used to compare different theories. The main theme will be the idea that theory comparison techniques are most easily grasped and organized through the lens of category theory. The paper develops a table of different equivalence relations between theories and then answers many of the questions about how those equivalence relations are themselves related to each other. We show that Morita equivalence fits into this framework and provide answers to questions left open in Barrett and Halvorson [4]. We conclude by setting up a diagram of known relationships and leave open some questions for future work.
A broad gap exists between the “God’s eye” transit maps that experts draw from above and the street-level way domestic workers map their commutes in Bogotá and Medellín, Colombia. Based on fieldwork conducted in both cities between 2017 and 2018, including interviews, participant observation, and social cartography, this translational article sets domestic workers’ understanding of the city they traverse daily against how experts conceive modern and rational public transportation systems. Drawing on the literature on cartography, the Right to the City (RtC), and feminist geography, the study analyzes this gap and shows how it limits an effective RtC for this massive group of female commuters. It further provides public policy recommendations to address the gap and ensure the RtC for all.
Monads prove to be useful mathematical tools in theoretical computer science, notably in denoting different effects of programming languages. In this paper, we investigate a type of monads which arise naturally from Keimel and Lawson’s $\mathbf{K}$-ification.
A subcategory of $\mathbf{TOP}_{\mathbf{0}}$ is said to be of type $\mathrm{K}^{*}$ if it consists of monotone convergence spaces and is of type $\mathrm{K}$ in the sense of Keimel and Lawson. Each such category induces a canonical monad $\mathcal K$ on the category $\mathbf{DCPO}$ of dcpos and Scott-continuous maps, which we call the order-$\mathbf{K}$-ification monad. First, for each category of type $\mathrm{K}^{*}$, we characterize the algebras of the corresponding monad $\mathcal K$ as k-complete posets and the algebra homomorphisms as k-continuous maps; it follows that the order-$\mathbf{K}$-ification monad gives the free k-complete poset construction over the category $\mathbf{POS}_{\mathbf{d}}$ of posets and Scott-continuous maps. In addition, we show that k-complete posets and Scott-continuous maps form a Cartesian closed category. Moreover, we consider the strength of the order-$\mathbf{K}$-ification monad and conclude that every order-$\mathbf{K}$-ification monad is commutative.
Answer set programming (ASP) is a popular declarative programming language for solving hard combinatorial problems. Although ASP has gained widespread acceptance in academic and industrial contexts, there are certain user groups who may find it more advantageous to employ a higher-level language that closely resembles natural language when specifying ASP programs. In this paper, we propose a novel tool, called CNL2ASP, for translating English sentences expressed in a controlled natural language (CNL) form into ASP. In particular, we first provide a definition of the type of sentences allowed by our CNL and their translation as ASP rules and then exemplify the usage of the CNL for the specification of both synthetic and real-world combinatorial problems. Finally, we report the results of an experimental analysis conducted on the real-world problems to compare the performance of automatically generated encodings with the ones written by ASP practitioners, showing that our tool can obtain satisfactory performance on these benchmarks.
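The general idea of translating controlled English into ASP rules can be sketched with a toy, single-pattern translator. The pattern and output below are hypothetical illustrations of the approach, not the grammar or API of the CNL2ASP tool itself.

```python
import re

def cnl_to_asp(sentence):
    """Translate one controlled-English pattern into an ASP rule.

    Supports only the fragment 'Every X is a/an Y', which becomes the
    rule y(X) :- x(X). (a toy stand-in for a full CNL grammar).
    """
    m = re.fullmatch(r"Every (\w+) is an? (\w+)\.?", sentence.strip())
    if m:
        sub, obj = m.group(1).lower(), m.group(2).lower()
        return f"{obj}(X) :- {sub}(X)."
    raise ValueError("sentence not in the supported CNL fragment")
```

For example, "Every student is a person." would be rendered as the ASP rule `person(X) :- student(X).`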
We consider bond percolation on high-dimensional product graphs $G=\square _{i=1}^tG^{(i)}$, where $\square$ denotes the Cartesian product. We call the $G^{(i)}$ the base graphs and the product graph $G$ the host graph. Very recently, Lichev (J. Graph Theory, 99(4):651–670, 2022) showed that, under a mild requirement on the isoperimetric properties of the base graphs, the component structure of the percolated graph $G_p$ undergoes a phase transition when $p$ is around $\frac{1}{d}$, where $d$ is the average degree of the host graph.
In the supercritical regime, we strengthen Lichev’s result by showing that the giant component is in fact unique, with all other components of order $o(|G|)$, and determining the sharp asymptotic order of the giant. Furthermore, we answer two questions posed by Lichev (J. Graph Theory, 99(4):651–670, 2022): firstly, we provide a construction showing that the requirement of bounded degree is necessary for the likely emergence of a linear order component; secondly, we show that the isoperimetric requirement on the base graphs can be, in fact, super-exponentially small in the dimension. Finally, in the subcritical regime, we give an example showing that in the case of irregular high-dimensional product graphs, there can be a polynomially large component with high probability, very much unlike the quantitative behaviour seen in the Erdős-Rényi random graph and in the percolated hypercube, and in fact in any regular high-dimensional product graphs, as shown by the authors in a companion paper (Percolation on high-dimensional product graphs. arXiv:2209.03722, 2022).
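The setting above can be made concrete with a small simulation on the simplest high-dimensional product graph, the hypercube $Q_t$ (the Cartesian product of $t$ single edges): keep each edge independently with probability $p$ and inspect the component sizes. This is only an illustrative experiment, not the paper's proof technique.

```python
import itertools
import random

def percolate_hypercube(t, p, rng=None):
    """Bond percolation on the t-dimensional hypercube Q_t.

    Keeps each edge independently with probability p and returns the
    component sizes of the percolated graph, largest first, using a
    union-find over the 2^t vertices.
    """
    rng = rng or random.Random(0)  # deterministic default seed
    vertices = list(itertools.product((0, 1), repeat=t))
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for v in vertices:
        for i in range(t):
            w = v[:i] + (1 - v[i],) + v[i + 1:]  # flip coordinate i
            if v < w and rng.random() < p:       # each edge tried once
                parent[find(v)] = find(w)

    sizes = {}
    for v in vertices:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)
```

Sweeping p around 1/t (the hypercube has degree d = t) reproduces the phase transition qualitatively: below it the largest component is small, above it a unique giant emerges.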
For years, computer-assisted language learning (CALL) has thrived as an interdisciplinary subfield, linking applied linguistics and educational technology. Despite its significance and a number of syntheses, CALL research has not yet undergone a comprehensive scientometric synthesis. This study synthesizes CALL research over a period of 42 years by employing a scientometric analysis of sources and document co-citation analyses. Scopus was used to retrieve original articles with a timespan limit from 1980 to 2021. Our records identified 4,631 articles representing CALL-based research, which were published in 63 peer-reviewed journals and collectively contained 186,589 references. The findings indicate that CALL research is supported by robust theoretical frameworks, grounded in socio-cultural and second language acquisition theories. Our research findings have revealed several significant clusters of interest within the realm of CALL, with a pronounced focus on writing among CALL scholars. Additionally, the study identified emerging research areas such as mobile-assisted language learning, synchronous computer-mediated communication, and data-driven learning in CALL literature. Notably, “CALL-core” journals exhibited high productivity, with Language Learning & Technology, Computer Assisted Language Learning, and Computers & Education standing out as top-ranked journals in terms of the Hirsch index (h-index) and co-citation. Suggestions for future research are outlined in the conclusion.
Social media is not a neutral channel. The visibility of information posted online depends on many factors, such as the network structure, the emotional volatility of the content, and the design of the social media platform. In this paper, we use formal methods to study the visibility of agents and information in a social network, as well as how vulnerable the network is to exploitation. We introduce a modal logic to reason about a social network of agents that can follow each other, post, and share information. We show that by imposing some simple rules on the system, a potentially malicious agent can take advantage of the network construction to post an unpopular opinion that may reach many agents. The network is presented in both static and dynamic forms. We prove completeness, expressivity, and model-checking complexity results for the corresponding logical systems.
Over decades of robotics research, cooperative object transportation has been studied as a meaningful model problem for robotic networks because it possesses a variety of crucial challenges. Although these challenges are demanding, the cooperation of multiple robots has the potential to solve automation problems that are beyond the scope of an individual robot. So far, the model problem has mostly been addressed by explicitly controlling the robots’ positions. However, the position-based approach suffers from some intrinsic detriments, for example, the lack of explicit feedback between robots and object. Moreover, it remains an open question how many robots shall be employed to ensure a successful transportation. This paper’s purpose is to overcome these challenges using a novel force-based approach taking into account the robots’ actual manipulation capabilities, that is, the exerted forces. Using cost-efficient hardware, the interaction forces are measured and, what is more, explicitly controlled by a highly responsive onboard controller. Employing a tailored software architecture, the novel force-based scheme, useful for robotic manipulation beyond the benchmark problem, is probably the most flexible of its kind regarding the number of robots and the object’s shape. The controller’s functionality and performance as well as the scheme’s versatility are demonstrated by several hardware experiments.
We investigate the reengineering of interbank networks with a specific focus on capital increase. We consider a scenario where all other components of the network’s infrastructure remain stable (a practical assumption for short-term situations). Our objective is to assess the impact of raising capital on the network’s robustness and to address the following key aspects. First, given a predefined target for network robustness, our aim is to achieve this goal optimally, minimizing the required capital increase. Second, in cases where a total capital increase has been determined, the central challenge lies in distributing this increase among the banks in a manner that maximizes the stability of the network. To tackle these challenges, we begin by developing a comprehensive theoretical framework. Subsequently, we formulate an optimization model for the network’s redesign. Finally, we apply this framework to practical examples, highlighting its applicability in real-world scenarios.
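The second question, distributing a fixed capital increase to maximize stability, can be sketched with a deliberately simplified model: suppose each bank has a known capital shortfall below which it is deemed fragile, and count a bank as stabilized once its shortfall is covered. Under that toy objective, funding the smallest shortfalls first maximizes the number of stabilized banks. The threshold model and greedy rule here are illustrative assumptions; the paper's actual optimization accounts for network contagion effects.

```python
def allocate_capital(shortfalls, budget):
    """Allocate a fixed capital increase across banks.

    shortfalls[i] is the extra capital bank i needs to be deemed stable.
    Funds the smallest shortfalls first, which maximizes the number of
    fully stabilized banks in this toy threshold model.
    Returns the per-bank allocation.
    """
    order = sorted(range(len(shortfalls)), key=lambda i: shortfalls[i])
    alloc = [0.0] * len(shortfalls)
    for i in order:
        if shortfalls[i] <= budget:   # fund a bank only in full
            alloc[i] = shortfalls[i]
            budget -= shortfalls[i]
    return alloc
```

With shortfalls (5, 2, 8) and a budget of 7, the rule covers the banks needing 2 and 5 and leaves the bank needing 8 unfunded.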
Global warming will cause unprecedented changes to the world. Predicting events such as food insecurity in specific regions of the world is a valuable way to respond to them with adequate policies. Existing food insecurity prediction models are based on handcrafted features such as population counts, food prices, or rainfall measurements. However, finding useful features is a challenging task, and data scarcity hinders accuracy. We leverage unsupervised pre-training of neural networks to automatically learn useful features from widely available Landsat-8 satellite images. We train neural feature extractors to predict whether pairs of images come from spatially close or distant regions, on the assumption that close regions should have similar features. We also integrate a temporal dimension into our pre-training to capture the temporal trends of satellite images with improved accuracy. We show that with unsupervised pre-training on a large set of satellite images, neural feature extractors achieve a macro F1 of 65.4% on the Famine Early Warning Systems Network dataset—a 24% improvement over handcrafted features. We further show that our pre-training method leads to better features than supervised learning and previous unsupervised pre-training techniques. We demonstrate the importance of the proposed time-aware pre-training and show that the pre-trained networks can predict food insecurity with limited availability of labeled data.
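The self-supervised labeling step described above (close pairs vs. distant pairs of image locations) can be sketched without any imagery: label each pair of tile centers by a distance threshold. The 50 km threshold and the planar distance metric here are illustrative assumptions, not the paper's exact setup.

```python
import math

def make_pair_labels(coords, close_km=50.0):
    """Label every pair of image locations for the pretext task.

    coords: list of (x_km, y_km) tile centers in a common projection.
    Returns {(i, j): 1 if close, 0 if distant} for all i < j, so a
    network can be trained to predict the label from the image pair.
    """
    labels = {}
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            labels[(i, j)] = 1 if d <= close_km else 0
    return labels
```

A feature extractor trained on these free labels is then fine-tuned on the small amount of labeled food-insecurity data.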
The past two decades have seen an explosion both in the volume of data we use, and our understanding of its management. However, while techniques and technology for manipulating data have advanced rapidly in this time, the concepts around the value of our data have not. This lack of progress has made it increasingly difficult for organisations to understand the value in their data, the value of their data, and how to exploit that value.
Halo Data proposes a paradigm shift in methodology for organisations to properly appreciate and leverage the value of their data. Written by an author team with many years' experience in data strategy, management and technology, the book will first review the current state of our understanding of data. This opening will demonstrate the limitations of this status quo, including a discussion on metadata and its limitations, data monetisation and data-driven business models. Following this, the book will present a new concept and framework for understanding and quantifying value in an organisation's data and a practical methodology for using this in practice.
Ideal for data leaders and executives who are looking to leverage the data at their fingertips.
Based on the authors' extensive teaching experience, this hands-on graduate-level textbook teaches how to carry out large-scale data analytics and design machine learning solutions for big data. With a focus on fundamentals, this extensively class-tested textbook walks students through key principles and paradigms for working with large-scale data, frameworks for large-scale data analytics (Hadoop, Spark), and explains how to implement machine learning to exploit big data. It is unique in covering the principles that aspiring data scientists need to know, without detail that can overwhelm. Real-world examples, hands-on coding exercises and labs combine with exceptionally clear explanations to maximize student engagement. Well-defined learning objectives, exercises with online solutions for instructors, lecture slides, and an accompanying suite of lab exercises of increasing difficulty in Jupyter Notebooks offer a coherent and convenient teaching package. An ideal teaching resource for courses on large-scale data analytics with machine learning in computer/data science departments.
Although several studies have revealed that fractional order controllers usually outperform conventional integer-order control solutions, fractional order controllers are not yet widely applied in industrial applications due to their complex mathematical background. In this paper, further improvements of a simple weighted sum feedback design are introduced that imitates the behavior of a fractional order controller but is free from its various formal restrictions. The proposed control solution has the main characteristics of a fractional order controller, such as finite memory length, excellent transient response with no overshoot, and robust behavior, but it is placed in a much simpler mathematical framework. In the current paper, a simple derivative term was incorporated into the design, which made the controller’s output more stable by completely eliminating output chattering. The proposed control method was developed for a general second-order system. It was tested in a fixed point iteration-based adaptive control scenario, through simulations using a robotic example and on an experimental basis as well, utilizing a simple one-degree-of-freedom electromechanical system. The presented experiments are the first systematic investigations of the fixed point iteration-based adaptive control method.
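The "finite memory length" weighted sum that such a controller imitates has a standard textbook form: the truncated Grünwald–Letnikov approximation of a fractional derivative, whose power-law weights decay with the sample's age. The sketch below shows those textbook weights; they illustrate the general idea only and are not necessarily the weights chosen by the authors.

```python
def gl_weights(alpha, memory):
    """Truncated Grünwald–Letnikov coefficients for a fractional
    derivative of order alpha, computed by the standard recursion
    c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j).
    """
    c = [1.0]
    for j in range(1, memory):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def fractional_diff(signal, alpha, dt, memory):
    """Approximate the alpha-order derivative of the latest sample as a
    weighted sum over only the last `memory` samples (finite memory)."""
    w = gl_weights(alpha, memory)
    window = signal[-memory:][::-1]  # newest sample first
    return sum(wi * xi for wi, xi in zip(w, window)) / dt ** alpha
```

The scheme degrades gracefully at the integer ends: alpha = 1 reduces to the ordinary first difference, and alpha = 0 returns the signal itself.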
Feature selection is an important component of machine learning for researchers who are confronted with high-dimensional data. In the field of economics, researchers are often faced with high-dimensional data, particularly in studies that aim to understand the channels through which climate change affects the welfare of countries. This work reviews the current literature introducing various feature selection algorithms that may be useful for applications in this area of study. The article first outlines the specific problems researchers face in understanding the effects of climate change on countries’ macroeconomic outcomes, and then discusses different categories of feature selection. Emphasis is placed on two main approaches: the Least Absolute Shrinkage and Selection Operator (LASSO) and causality-based feature selection. I demonstrate an application of feature selection to discover the optimal heatwave definition for economic outcomes, enhancing our understanding of extreme temperatures’ impact on the economy. I argue that the computer science literature can provide useful insights for studies concerned with climate change as well as its economic outcomes.
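The LASSO mentioned above selects features by shrinking irrelevant coefficients exactly to zero. A compact numpy sketch of the standard cyclic coordinate-descent solver with soft-thresholding is shown below; it illustrates the textbook algorithm rather than any specific implementation used in the article.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO estimate via cyclic coordinate descent.

    Minimizes (1/(2n)) * ||y - X b||^2 + lam * ||b||_1 by updating one
    coefficient at a time with the soft-thresholding operator.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove feature j's own contribution.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            # Soft threshold: coefficients with |rho| <= lam become 0.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b
```

When the response depends on only one of several candidate features, the irrelevant coefficients are driven exactly to zero, which is what makes the LASSO usable as a feature selector.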
We study three classes of shock models governed by an inverse gamma mixed Poisson process (IGMP), namely a mixed Poisson process with an inverse gamma mixing distribution. In particular, we analyze (1) the extreme shock model, (2) the δ-shock model, and (3) the cumulative shock model. For the latter, we assume a constant and an exponentially distributed random threshold and consider different choices for the distribution of the amount of damage caused by a single shock. For all the treated cases, we obtain the survival function, together with the expected value and the variance of the failure time. Some properties of the inverse gamma mixed Poisson process are also disclosed.
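For reference, the defining identity of a mixed Poisson process with an inverse gamma mixing density, and the standard extreme-shock survival expression built from it, can be written as follows (these are textbook formulations; the paper's specific results are not reproduced here):

```latex
% Mixed Poisson marginal with mixing density f:
P\{N(t)=k\} \;=\; \int_0^{\infty} e^{-\lambda t}\,\frac{(\lambda t)^k}{k!}\,
  f(\lambda)\,\mathrm{d}\lambda,
\qquad
f(\lambda) \;=\; \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,
  \lambda^{-\alpha-1}e^{-\beta/\lambda}, \quad \lambda>0.
% Extreme shock model: if each shock is independently non-fatal with
% probability q, the failure time T satisfies
P\{T>t\} \;=\; \sum_{k=0}^{\infty} q^{k}\,P\{N(t)=k\}.
```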
Providing a graduate-level introduction to discrete probability and its applications, this book develops a toolkit of essential techniques for analysing stochastic processes on graphs, other random discrete structures, and algorithms. Topics covered include the first and second moment methods, concentration inequalities, coupling and stochastic domination, martingales and potential theory, spectral methods, and branching processes. Each chapter expands on a fundamental technique, outlining common uses and showing them in action on simple examples and more substantial classical results. The focus is predominantly on non-asymptotic methods and results. All chapters provide a detailed background review section, plus exercises and signposts to the wider literature. Readers are assumed to have undergraduate-level linear algebra and basic real analysis, while prior exposure to graduate-level probability is recommended. This much-needed broad overview of discrete probability could serve as a textbook or as a reference for researchers in mathematics, statistics, data science, computer science and engineering.
The purpose of this article is to investigate the interactional strategies that secondary school EFL learners use during telecollaboration with interlocutors from a different cultural and linguistic background. Its novelty lies in the educational level and cultural contexts explored as well as in the goal of examining whether frequency of strategy use may depend on the interlocutor and, if so, what the relevant factors are for such variation. The study focuses on two projects in which 10 participants from one Bulgarian school and 18 partners from two schools in Spain took part in dyadic synchronous videoconferences discussing culture-related topics. Data were collected via video recordings, field observations, semi-structured interviews, and questionnaires. The article presents both descriptive statistics and qualitative analysis of the communicative behavior of three case study students interacting with different telecollaborative partners. We found that the participants demonstrate differences in interactional behavior and utilize strategies with varying frequency depending on their interlocutor. The results also reveal crucial factors in the design of telecollaborative educational projects if learners are to successfully implement skills of discovery and interaction in real time.
Given a conditional sentence “${\varphi}\Rightarrow \psi$” (if ${\varphi}$ then $\psi$) and respective facts, four different types of inferences are observed in human reasoning: Affirming the antecedent (AA) (or modus ponens) reasons $\psi$ from ${\varphi}$; affirming the consequent (AC) reasons ${\varphi}$ from $\psi$; denying the antecedent (DA) reasons $\neg\psi$ from $\neg{\varphi}$; and denying the consequent (DC) (or modus tollens) reasons $\neg{\varphi}$ from $\neg\psi$. Among them, AA and DC are logically valid, while AC and DA are logically invalid and often called logical fallacies. Nevertheless, humans often perform AC or DA as pragmatic inference in daily life. In this paper, we realize AC, DA and DC inferences in answer set programming. Eight different types of completion are introduced, and their semantics are given by answer sets. We investigate formal properties and characterize human reasoning tasks in cognitive psychology. Those completions are also applied to commonsense reasoning in AI.
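The four inference schemas can be made concrete with a plain Python sketch: given the rule "antecedent ⇒ consequent" and one observed fact, identify which schema applies and what it concludes. This illustrates the schemas themselves, not the paper's ASP completions.

```python
def conditional_inferences(antecedent, consequent, fact):
    """Classify an inference over the rule 'antecedent => consequent'.

    Literals are strings; negation is a '-' prefix. Returns a pair
    (schema, conclusion), where schema is one of 'AA', 'AC', 'DA', 'DC'
    (or None if the fact does not match the rule's literals).
    """
    neg = lambda l: l[1:] if l.startswith("-") else "-" + l
    if fact == antecedent:        # AA / modus ponens (logically valid)
        return ("AA", consequent)
    if fact == consequent:        # AC (invalid, pragmatic inference)
        return ("AC", antecedent)
    if fact == neg(antecedent):   # DA (invalid, pragmatic inference)
        return ("DA", neg(consequent))
    if fact == neg(consequent):   # DC / modus tollens (logically valid)
        return ("DC", neg(antecedent))
    return (None, None)
```

For the rule "rain ⇒ wet", observing "wet" triggers AC and pragmatically concludes "rain", while observing "-wet" triggers DC and validly concludes "-rain".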