Search-optimization problems are plentiful in scientific and engineering domains. Artificial intelligence (AI) has long contributed to the development of search algorithms and declarative programming languages geared toward solving and modeling search-optimization problems. Automated reasoning and knowledge representation are the subfields of AI that are particularly vested in these developments. Many popular automated reasoning paradigms provide users with languages supporting optimization statements: integer linear programming, MaxSAT, optimization satisfiability modulo theories, and (constraint) answer set programming, among others. These paradigms differ significantly in the ways their languages express quality conditions on computed solutions. Here we propose a unifying framework of so-called extended weight systems that eliminates syntactic distinctions between paradigms. These systems allow us to see the essential similarities and differences between the optimization statements provided by distinct automated reasoning languages. We also study formal properties of the proposed systems that immediately translate into formal properties of any paradigm that can be captured within our framework.
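To make the shared idea concrete, the sketch below (not the paper's formalism; the atoms, weights, and clauses are invented for illustration) shows how a MaxSAT-style soft-constraint objective and an ILP-style linear objective both reduce to summing weights selected by an assignment, which is the intuition a unifying weight system abstracts over.

```python
# Illustrative only: both a MaxSAT objective and an ILP objective can be
# read as "sum the weights selected by the assignment" -- the common core
# that a weight system abstracts. Atoms and weights here are invented.

def maxsat_cost(assignment, soft_clauses):
    """Sum the weights of soft clauses falsified by the assignment.
    Each soft clause is (weight, [literals]); a literal ('a', True)
    is satisfied when assignment['a'] is True."""
    cost = 0
    for weight, literals in soft_clauses:
        if not any(assignment[var] == sign for var, sign in literals):
            cost += weight  # clause falsified: pay its weight
    return cost

def ilp_objective(assignment, coefficients):
    """Linear objective sum_i c_i * x_i over 0/1 variables."""
    return sum(c * int(assignment[var]) for var, c in coefficients.items())

assignment = {"a": True, "b": False, "c": True}
soft = [(3, [("a", False), ("b", True)]),   # weight 3: (not a) or b
        (1, [("c", True)])]                 # weight 1: c
print(maxsat_cost(assignment, soft))        # 3 (first clause falsified)
print(ilp_objective(assignment, {"a": 2, "b": 5, "c": 1}))  # 3
```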
Research that examines the impact of economic, social, and political factors on political corruption uses experts' and citizens' perceptions to measure corruption and test arguments. Scholars argue that the perception of corruption is a good proxy for actual corruption because data on actual corruption are limited and not entirely trustworthy. However, perception indexes do not make it possible to separate the mechanisms driving citizens' perceptions of corruption from actual levels of corruption in different government branches. To address this issue, I introduce a new index based on Latin American countries to measure the risk of corruption in political parties. Using a de jure analysis of laws and regulations, the Risk of Corruption (ROC) index evaluates the likelihood of political parties engaging in corrupt activities. Instead of measuring corrupt activities or perceptions directly, the ROC measures the risk of involvement in corruption. The index has important implications for academics and practitioners working on anti-corruption issues. First, it allows us to test arguments about the role of political parties and legislatures in reducing political corruption. Second, it helps us understand how political parties could improve their internal organization to decrease the risk of corrupt activities. Finally, it is a valuable instrument for cross-national studies in the diverse fields that study political parties.
Biped robots with dynamic motion control have shown strong robustness in complex environments. However, many motion planning methods rely on models and therefore have difficulty dynamically modifying the walking cycle, height, and other gait parameters to cope with environmental changes. In this study, a heuristic, model-free gait template planning method with dynamic motion control is proposed. The gait trajectory can be generated, without a model, from the desired speed, walking cycle, and support height. Stable walking of the biped robot is then realized through foothold adjustment and whole-body dynamics control. The gait template can be changed in real time, giving the biped robot gait flexibility. Finally, the effectiveness of the method is verified through simulations and experiments on the biped robot BHR-B2. The research presented here helps improve the gait transition ability of biped robots in dynamic locomotion.
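As a rough illustration of what a model-free gait template can look like (a generic sinusoidal swing-foot profile with invented parameters, not the BHR-B2 planner itself), the trajectory below is parameterized only by desired speed, walking cycle, and step height, so any of the three can be changed online by regenerating the template:

```python
import math

def gait_template(v_des, cycle, step_height, duty=0.6, dt=0.01):
    """Generic swing-foot trajectory from desired speed, walking cycle,
    and step height. duty is the fraction of the cycle spent in stance.
    Returns (t, x, z) samples for one swing phase. Illustrative only."""
    step_length = v_des * cycle          # distance covered per cycle
    t_swing = (1.0 - duty) * cycle       # duration of the swing phase
    samples = []
    t = 0.0
    while t <= t_swing:
        phase = t / t_swing              # 0 -> 1 over the swing
        x = step_length * phase          # forward progress of the foot
        z = step_height * math.sin(math.pi * phase)  # lift-off and touch-down at z = 0
        samples.append((t, x, z))
        t += dt
    return samples

# Changing speed or cycle online simply regenerates the template:
for t, x, z in gait_template(v_des=0.5, cycle=0.8, step_height=0.06)[:3]:
    print(f"t={t:.2f}s x={x:.3f}m z={z:.3f}m")
```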
We propose a hierarchical cognitive navigation model (HCNM) to improve the self-learning and self-adaptive ability of mobile robots in unknown and complex environments. The HCNM adopts a divide-and-conquer approach, dividing the path planning task in complex environments into sub-tasks at different levels and solving each sub-task in a smaller state subspace to reduce the dimensionality of the state space. The HCNM imitates the asymptotic properties of animals through the study of thermodynamic processes and uses a purpose-designed cognitive learning algorithm to achieve online optimal search strategies. We prove that the designed learning algorithm ensures that the cognitive model converges to the optimal behavior path with probability one. Robot navigation is studied on the basis of this cognitive process. The experimental results show that the HCNM adapts well to unknown environments, producing clearer navigation paths and faster convergence. In particular, the convergence time of the HCNM is 25 s, 86.5% lower than that of the HRLM. The hierarchical structure adopted by the HCNM reduces the learning difficulty and accelerates learning in unknown environments.
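The toy sketch below (invented grid, regions, and planner; not the HCNM algorithm) illustrates the divide-and-conquer idea: plan first over coarse regions, then search only the cells belonging to regions on the coarse path, so each search runs in a much smaller state subspace.

```python
from collections import deque

SIZE, BLOCK = 16, 4          # 16x16 grid of cells, partitioned into 4x4 regions

def bfs(start, goal, neighbors):
    """Plain breadth-first search; returns a path or None."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return None

def region(cell):
    return (cell[0] // BLOCK, cell[1] // BLOCK)

def region_neighbors(r):
    for nx, ny in ((r[0]+1, r[1]), (r[0]-1, r[1]), (r[0], r[1]+1), (r[0], r[1]-1)):
        if 0 <= nx < SIZE // BLOCK and 0 <= ny < SIZE // BLOCK:
            yield (nx, ny)

def cell_neighbors(cell, allowed_regions):
    x, y = cell
    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
        if 0 <= nx < SIZE and 0 <= ny < SIZE and region((nx, ny)) in allowed_regions:
            yield (nx, ny)

start, goal = (0, 0), (15, 15)
coarse = bfs(region(start), region(goal), region_neighbors)   # level 1: tiny region graph
corridor = set(coarse)                                        # level 2 searches only this subspace
fine = bfs(start, goal, lambda c: cell_neighbors(c, corridor))
print(len(coarse), "regions on the coarse path;", len(fine), "cells on the fine path")
```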
Model-based systems engineering (MBSE) aims at creating a model of a system under development, covering the complete system at a level of detail that makes it possible to define and understand its behavior and to derive any interface and work package from the model. Once the model is established, further benefits can be reaped, such as the analysis of complex technical correlations within the system. Various insights can be gained by representing the model as a formal graph and querying it. Such queries require a graph schema that allows the model to be transferred into a graph database. In this paper, we discuss the design of a graph schema and an MBSE modeling approach that enable in-depth system analysis of complex embedded systems, with a focus on testing and anomaly resolution. The schema and modeling approach are designed to answer questions such as: What happens if there is an electrical short in a component? Which other components are then offline, and which data can no longer be gathered? If a component becomes unresponsive, which alternative routes can be established to obtain the data it processes? We build on the use case of qualification and operations of a small spacecraft. Structural elements of the MBSE model are transferred to a graph database, where analyses are conducted on the system. The schema is implemented by means of an adapter from MagicDraw to Neo4j. A selection of complex analyses is shown using the example of the MOVE-II space mission.
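As a sketch of what such a query could look like (the node label `Component`, relationship type `POWERS`, property names, and connection details below are assumptions for illustration, not the schema published with the paper), the Neo4j Python driver can traverse power distribution edges to list everything downstream of a shorted component:

```python
# Hypothetical schema: (:Component)-[:POWERS]->(:Component).
# Connection details and labels are placeholders, not from the paper.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

QUERY = """
MATCH (failed:Component {name: $name})-[:POWERS*1..]->(downstream:Component)
RETURN DISTINCT downstream.name AS name
"""

def components_offline_after_short(component_name):
    """All components reachable via POWERS edges from the failed one,
    i.e. everything that loses power if it shorts out."""
    with driver.session() as session:
        result = session.run(QUERY, name=component_name)
        return [record["name"] for record in result]

print(components_offline_after_short("PowerDistributionUnit"))
driver.close()
```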
While many Latin American countries, including those selected as case studies, have a tradition of receiving migrants, there are no institutionalized mechanisms for the integration and settlement of migrants. The objective of this article is to explore how to improve migration data collection and management in a region that has few migrant integration policies in place. I assess the state of migration data collection and management in three case studies: the city of Cucuta in Colombia, the North Huetar Region in Costa Rica, and the city of Monterrey in Mexico. All three countries publish data exclusively at the national level, rather than the local or municipal level. Although each case study has a variety of administrative data, mainly in the form of entries and exits by nationality, these data are not enough to properly identify the sociodemographic characteristics of migrant populations in a country, much less in specific cities. I make recommendations, divided into three main themes, to improve migration data in Latin America.
Does digitalization reduce corruption? What are the integrity benefits of government digitalization? While the correlation between digitalization and corruption is well established, there is less actionable evidence on the integrity dividends of specific digitalization reforms for different types of corruption and on the policy channels through which they operate. These linkages are especially relevant in environments with high corruption risk. This article unbundles the integrity dividends of the digital reforms undertaken by governments around the world and accelerated by the pandemic. It analyzes the rise of data-driven integrity analytics as promising tools deployed in the anticorruption space by tech-savvy integrity actors. It also assesses the broader integrity benefits of the digitalization of government services and the automation of bureaucratic processes, which reduce the risk of bribe solicitation by front-office bureaucrats. In particular, it analyzes the impact of digitalization on social transfers. It argues that government digitalization can be an implicit yet effective anticorruption strategy, with subtler yet deeper effects, but that greater synergies are needed between digital reforms and anticorruption strategies.
In recent years, the ‘living lab (LL)’, a design approach that actively involves users as partners from the early stages of the design process, has been attracting much attention. Compared with traditional participatory design or co-design approaches, one of the distinctive features of the LL approach is that the process of, and opportunity for, user participation tends to be long-term and complex. Thus, LL practitioners must appropriately plan and design the integration of user participation into the design process to promote co-creation with users. In other words, LL practitioners are required to ‘configure user participation’ for the effective promotion of co-creation. However, to date, knowledge on how to properly configure long-term and complex user participation in LLs has not been systematically clarified, nor have methodologies for doing so been developed. This study develops a novel framework for configuring user participation in LLs. Through a literature review and an analysis of LL case studies, we identified 11 key elements, in five categories, that should be considered when configuring user participation in LLs. On the basis of these elements, we developed a novel framework for configuring user participation in LLs, called the participation blueprint. We demonstrate its use and discuss its theoretical and practical contributions to the LL and co-design research community.
Corruption has pervasive effects on economic development and the well-being of the population. Despite being crucial and necessary, fighting corruption is not an easy task, because corruption is a difficult phenomenon to measure and detect. However, recent advances in the field of artificial intelligence may help in this quest. In this article, we propose the use of machine-learning models to predict municipality-level corruption in a developing country. Using data from disciplinary prosecutions conducted by an anti-corruption agency in Colombia, we trained four canonical models (Random Forests, Gradient Boosting Machines, Lasso, and Neural Networks) and ensembled their predictions to predict whether or not a mayor will commit acts of corruption. Our models achieve acceptable levels of performance on metrics such as precision and the area under the receiver-operating characteristic curve, demonstrating that these tools are useful for predicting where misbehavior is most likely to occur. Moreover, our feature-importance analysis shows which groups of variables are most important in predicting corruption.
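A minimal sketch of this kind of pipeline (with synthetic data standing in for the Colombian prosecution records, default hyperparameters, and L1-penalized logistic regression as the classification analogue of the Lasso) averages the predicted probabilities of the four model families and scores the ensemble by precision and AUC:

```python
# Sketch with synthetic data; the real study uses disciplinary
# prosecution records, which are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, precision_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    RandomForestClassifier(n_estimators=300, random_state=0),
    GradientBoostingClassifier(random_state=0),
    # L1-penalized logistic regression as a Lasso-style classifier.
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
]

# Ensemble by averaging the predicted probabilities of the four models.
probas = np.mean([m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models], axis=0)
print("AUC:      ", roc_auc_score(y_te, probas))
print("Precision:", precision_score(y_te, probas > 0.5))
```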
Climate emulators are a powerful instrument for climate modeling, especially for reducing the computational load of simulating the spatiotemporal processes associated with climate systems. The most important type of emulator is the statistical emulator, trained on the output of an ensemble of simulations from various climate models. However, such emulators often fail to capture the “physics” of a system, which can be detrimental for unveiling critical processes that lead to climate tipping points. Historically, statistical mechanics emerged as a tool for resolving physical constraints by statistical means. We discuss how climate emulators rooted in statistical mechanics and machine learning can give rise to new climate models that are more reliable and require fewer observational and computational resources. Our goal is to stimulate discussion on how statistical climate emulators can be further improved with the help of statistical mechanics, which, in turn, may reignite the statistical community's interest in the statistical mechanics of complex systems.
Let $V_{(r,n,\tilde {m}_n,k)}^{(p)}$ and $W_{(r,n,\tilde {m}_n,k)}^{(p)}$ be the $p$-spacings of generalized order statistics based on absolutely continuous distribution functions $F$ and $G$, respectively. Imposing some conditions on $F$ and $G$ and assuming that $m_1=\cdots =m_{n-1}$, Hu and Zhuang (2006. Stochastic orderings between p-spacings of generalized order statistics from two samples. Probability in the Engineering and Informational Sciences 20: 475) established $V_{(r,n,\tilde {m}_n,k)}^{(p)} \leq _{{\rm hr}} W_{(r,n,\tilde {m}_n,k)}^{(p)}$ for $p=1$ and left the case $p\geq 2$ as an open problem. In this article, we not only resolve this open problem but also establish the result for unequal $m_i$'s. It is worth mentioning that this result had not previously been proved even for ordinary order statistics.
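For readers unfamiliar with the notation, the hazard rate order $\leq_{\rm hr}$ used above has the standard definition (stated here as background, as in Shaked and Shanthikumar's Stochastic Orders, not taken from the article): for random variables $X \sim F$ and $Y \sim G$ with survival functions $\bar {F}=1-F$ and $\bar {G}=1-G$, one writes $X \leq _{{\rm hr}} Y$ if and only if $\bar {G}(t)/\bar {F}(t)$ is nondecreasing in $t$; equivalently, when densities $f$ and $g$ exist, the hazard rates satisfy $f(t)/\bar {F}(t) \geq g(t)/\bar {G}(t)$ for all $t$.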
This book proves some important new theorems in the theory of canonical inner models for large cardinal hypotheses, a topic of central importance in modern set theory. In particular, the author 'completes' the theory of Fine Structure and Iteration Trees (FSIT) by proving a comparison theorem for mouse pairs parallel to the FSIT comparison theorem for pure extender mice, and then using the underlying comparison process to develop a fine structure theory for strategy mice. Great effort has been made to make the book accessible to non-experts, so that it may also serve as an introduction to the higher reaches of inner model theory. It contains a good deal of background material, some of it unpublished folklore, and includes many references to the literature to guide further reading. An introductory essay serves to place the new results in their broader context. This is a landmark work in inner model theory that should be in every set theorist's library.
As the queue becomes exhausted, different maintenance tasks can be performed according to the fatigue load and degree of wear of the service equipment. At the same time, considering customers' sensitivity to delay, the service facility does not remain completely inactive during the maintenance period. To describe this objectively existing phenomenon in waiting-line systems, we consider a hyper-exponential working vacation queue with a batch renewal arrival process. Through the calculation of the well-structured roots of the associated characteristic equation, the shift operator method from the theory of difference equations and the supplementary variable technique for stochastic modeling play a central role in the analysis of the queue-length distribution. Compared with other ways of analyzing queueing models, the advantage of our approach is that it avoids deriving the complex transition probability matrix of the queue-length process embedded at input points. The feasibility of the approach is verified by extensive numerical examples.
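To illustrate the general shift-operator idea (on a generic constant-coefficient difference equation with invented coefficients, not the paper's model), one finds the roots of the characteristic polynomial and builds the solution from the roots inside the unit disk, so that the queue-length probabilities remain summable:

```python
# Generic illustration of the shift-operator method: a difference equation
# a0*p(n) + a1*p(n+1) + a2*p(n+2) = 0 (coefficients invented) is solved via
# the roots of a0 + a1*z + a2*z^2 = 0; only roots with |z| < 1 can appear
# in a probability sequence, since sum_n p(n) must converge.
import numpy as np

a = [0.3, -1.0, 0.5]                       # a0, a1, a2 (illustrative values)
roots = np.roots(a[::-1])                  # np.roots wants the highest degree first
stable = [z for z in roots if abs(z) < 1]  # admissible roots
print("roots:", roots, "-> stable:", stable)

# p(n) = C * z0**n with C fixed by the normalization sum_n p(n) = 1:
z0 = stable[0].real
C = 1 - z0                                 # geometric normalization
p = [C * z0**n for n in range(5)]
print("first queue-length probabilities:", p)
```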
We provide a framework for probabilistic reasoning in Vadalog-based Knowledge Graphs (KGs) that satisfies the requirements of ontological reasoning: full recursion, powerful existential quantification, and the expression of inductive definitions. Vadalog is a Knowledge Representation and Reasoning (KRR) language based on Warded Datalog+/–, a logical core language of existential rules with a good balance between computational complexity and expressive power. Handling uncertainty is essential for reasoning with KGs. Yet Vadalog and Warded Datalog+/– are not covered by the existing probabilistic logic programming and statistical relational learning approaches for several reasons, including insufficient support for recursion with existential quantification and the impossibility of expressing inductive definitions. In this work, we introduce Soft Vadalog, a probabilistic extension of Vadalog satisfying these desiderata. A Soft Vadalog program induces what we call a Probabilistic Knowledge Graph (PKG), which consists of a probability distribution over a network of chase instances, structures obtained by grounding the rules over a database using the chase procedure. We exploit PKGs for probabilistic marginal inference. We discuss the theory and present the MCMC-chase, a Monte Carlo method for using Soft Vadalog in practice. We apply our framework to data management and industrial problems and evaluate it experimentally in the Vadalog system.
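To give a flavor of the chase procedure mentioned above (a toy Python rendering with an invented rule and database; real Vadalog programs are written in the Vadalog language, not Python), an existential rule such as person(X) -> exists Y. supervisedBy(X, Y) is grounded by inventing fresh labeled nulls for the existential variable:

```python
# Toy chase step for one existential rule, illustrative only:
# person(X) -> exists Y. supervisedBy(X, Y)
import itertools

null_ids = itertools.count()            # generator of fresh labeled nulls

def chase_step(facts):
    """Apply the rule to every person lacking a supervisor, inventing
    a fresh labeled null for the existential variable Y."""
    new_facts = set(facts)
    for pred, args in facts:
        if pred == "person":
            x = args[0]
            if not any(p == "supervisedBy" and a[0] == x for p, a in facts):
                y = f"_:n{next(null_ids)}"          # fresh labeled null
                new_facts.add(("supervisedBy", (x, y)))
    return new_facts

db = {("person", ("alice",)), ("person", ("bob",))}
print(sorted(chase_step(db)))
# adds supervisedBy(alice, _:n0) and supervisedBy(bob, _:n1) (null numbering may vary)
```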
A novel concept, contact-based landing on a mobile platform, is proposed in this paper. An adaptive backstepping controller is designed to deal with the unknown disturbances in the interaction process, and the contact-based landing mission is implemented under a hybrid force/motion control framework. A rotorcraft aerial vehicle system and a ground mobile platform are designed to conduct flight experiments that evaluate the feasibility of the proposed landing scheme and control strategy. To the best of our knowledge, this is the first time a rotorcraft unmanned aerial vehicle has been used to conduct a contact-based landing. To improve system autonomy in future applications, vision-based recognition and localization methods are studied, enabling the detection of a cooperative object that is partially occluded or viewed at close range. The proposed recognition algorithms are tested on a ground platform and evaluated in several simulated scenarios, demonstrating their effectiveness.
Although an accurate reliability assessment is essential for building resilient infrastructure, it usually requires time-consuming computation. To reduce the computational burden, machine learning-based surrogate models have been used extensively to predict the probability of failure of structural designs. Nevertheless, a surrogate model still needs to compute and assess a certain number of training samples to achieve sufficient prediction accuracy. This paper proposes a new surrogate method for reliability analysis called Adaptive Hyperball Kriging Reliability Analysis (AHKRA). The AHKRA method revolves around a hyperball-based sampling region whose radius represents the precision of the reliability analysis and is iteratively adjusted based on the number of samples required to evaluate the probability of failure with a target coefficient of variation. Sampling in a hyperball instead of an n-sigma rule-based sampling region avoids the curse of dimensionality. The application of AHKRA to ten mathematical and two practical cases verifies its accuracy, efficiency, and robustness, as it outperforms previous Kriging-based methods.
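The standard way to draw uniform samples inside an n-dimensional hyperball (a generic, well-known technique, shown here only to illustrate the kind of sampling region AHKRA relies on; the center, radius, and dimension are placeholders) is to normalize a Gaussian direction and scale the radius by a factor of $U^{1/n}$:

```python
# Uniform sampling in an n-dimensional hyperball: a Gaussian vector gives a
# uniform direction, and scaling the radius by U**(1/n) makes the radial
# density match the volume growth r**(n-1). Center and radius are placeholders.
import numpy as np

def sample_hyperball(n_samples, dim, center, radius, rng=None):
    rng = np.random.default_rng(rng)
    directions = rng.standard_normal((n_samples, dim))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * rng.random(n_samples) ** (1.0 / dim)
    return center + directions * radii[:, None]

# e.g. candidate points for a Kriging surrogate in a 10-dimensional space:
pts = sample_hyperball(1000, dim=10, center=np.zeros(10), radius=3.0, rng=0)
print(pts.shape, np.linalg.norm(pts, axis=1).max())  # all norms <= 3.0
```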