Abstraction is a well-known approach to simplifying a complex problem by over-approximating it with a deliberate loss of information. It has, however, not been considered so far in Answer Set Programming (ASP), a convenient tool for problem solving. We introduce a method to automatically abstract ASP programs that preserves their structure by reducing the vocabulary while ensuring an over-approximation (i.e., each original answer set maps to some abstract answer set). This allows for generating partial answer set candidates that can help to approximate reasoning. Computing the abstract answer sets is intuitively easier due to a smaller search space, at the cost of encountering spurious answer sets. Faithful (non-spurious) abstractions may be used to represent projected answer sets and to guide solvers in answer set construction. To deal with spurious answer sets, we employ an ASP debugging approach that supports abstraction refinement: it determines atoms as badly omitted and adds them back to the abstraction. As a showcase, we apply abstraction to explain the unsatisfiability of ASP programs in terms of blocker sets, which are sets of atoms such that abstraction to them preserves unsatisfiability. Their usefulness is demonstrated by experimental results.
As a ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words, but the vectors corresponding to the words are only meaningful relative to each other; neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related to a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related along predefined concepts, and thereby impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen to be Roget’s Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the thesaurus and extends to other related words as well. We quantify the extent of interpretability and assignment of meaning from our experimental results. Manual human evaluation results are also presented to further verify that the proposed method increases interpretability. We also demonstrate the preservation of semantic coherence of the resulting vector space using word-analogy/word-similarity tests and a downstream task. These tests show that the interpretability-imparted word embeddings obtained by the proposed framework do not sacrifice performance on common benchmark tests.
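The additive modification can be illustrated with a minimal sketch. The function name, the form of the penalty term (a linear reward `lam * v_w[dim]`), and the toy vocabulary below are assumptions for illustration only; in the paper this term is trained jointly with the full embedding objective, which is omitted here.

```python
def interpretability_update(vecs, concept_words, dim, lam=1.0, lr=0.1):
    """One gradient-ascent step for a hypothetical additive alignment term
    lam * v_w[dim]: words related to a predefined concept are pushed toward
    larger values on dimension `dim`, while all other coordinates (and all
    unrelated words) are left untouched."""
    for w in concept_words:
        if w in vecs:
            vecs[w][dim] += lr * lam  # d/dv_w[dim] of lam * v_w[dim] is lam
    return vecs

# Toy vocabulary: dimension 2 is reserved for a "vehicle" concept.
vecs = {"car": [0.0, 0.0, 0.0, 0.0], "tree": [0.0, 0.0, 0.0, 0.0]}
interpretability_update(vecs, {"car"}, dim=2)
```

Repeating such steps alongside the usual embedding loss is what lets concept-related words grow along their assigned dimension without disturbing the rest of the vector space.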
Blockchain is an emerging technology framework for creating and storing transactions in distributed ledgers with a high degree of security and reliability. In this paper, we present a blockchain-based platform to create and store contracts between students and their higher education sponsors, facilitated by intermediary brokers denoted as fundraisers. The sponsorship may take any form, such as a scholarship, donation, or loan. The funds will be arranged and managed by a group of competitive fundraisers who hold the distributed ledgers and act as the miners in the blockchain network.
As popularity of algebraic effects and handlers increases, so does a demand for their efficient execution. Eff, an ML-like language with native support for handlers, has a subtyping-based effect system on which an effect-aware optimising compiler could be built. Unfortunately, in our experience, implementing optimisations for Eff is overly error-prone because its core language is implicitly typed, making code transformations very fragile. To remedy this, we present an explicitly typed polymorphic core calculus for algebraic effect handlers with a subtyping-based type-and-effect system. It reifies appeals to subtyping in explicit casts with coercions that witness the subtyping proof, quickly exposing typing bugs in program transformations. Our typing-directed elaboration comes with a constraint-based inference algorithm that turns an implicitly typed Eff-like language into our calculus. Moreover, all coercions and effect information can be erased in a straightforward way, demonstrating that coercions have no computational content. Additionally, we present a monadic translation from our calculus into a pure language without algebraic effects or handlers, using the effect information to introduce monadic constructs only where necessary.
We study structural properties of graphs with bounded clique number and high minimum degree. In particular, we show that there exists a function L = L(r,ɛ) such that every Kr-free graph G on n vertices with minimum degree at least ((2r–5)/(2r–3)+ɛ)n is homomorphic to a Kr-free graph on at most L vertices. It is known that the required minimum degree condition is approximately best possible for this result.
For r = 3 this result was obtained by Łuczak (2006) and, more recently, Goddard and Lyle (2011) deduced the general case from Łuczak’s result. Łuczak’s proof was based on an application of Szemerédi’s regularity lemma and, as a consequence, it only gave rise to a tower-type bound on L(3, ɛ). The proof presented here replaces the application of the regularity lemma by a probabilistic argument, which yields a bound for L(r, ɛ) that is doubly exponential in poly(ɛ).
Existential rules are a positive fragment of first-order logic that generalizes function-free Horn rules by allowing existentially quantified variables in rule heads. This family of languages has recently attracted significant interest in the context of ontology-mediated query answering. Forward chaining, also known as the chase, is a fundamental tool for computing universal models of knowledge bases, which consist of existential rules and facts. Several chase variants have been defined, which differ in the way they handle redundancies. A set of existential rules is bounded if it ensures the existence of a bound on the depth of the chase, independently of any set of facts. Deciding if a set of rules is bounded is an undecidable problem for all chase variants. Moreover, when computing universal models, knowing that a set of rules is bounded for some chase variant does not help much in practice if the bound remains unknown or is very large. Hence, we investigate the decidability of the k-boundedness problem, which asks whether the depth of the chase for a given set of rules is bounded by an integer k. We identify a general property which, when satisfied by a chase variant, leads to the decidability of k-boundedness. We then show that the main chase variants satisfy this property, namely the oblivious, semi-oblivious (aka Skolem), and restricted chase, as well as their breadth-first versions.
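The notions of chase and chase depth can be made concrete with a toy oblivious chase, run in breadth-first rounds. The encoding (facts as tuples, variables as `'?'`-prefixed strings) and the exact depth convention are illustrative assumptions, not the paper's formal definitions.

```python
import itertools

def match(body, facts, subst=None):
    """Enumerate substitutions (variable -> constant) mapping every body
    atom into `facts`. Variables are strings starting with '?'."""
    if subst is None:
        subst = {}
    if not body:
        yield dict(subst)
        return
    pred, *args = body[0]
    for fact in facts:
        if fact[0] != pred or len(fact) - 1 != len(args):
            continue
        s, ok = dict(subst), True
        for a, c in zip(args, fact[1:]):
            if isinstance(a, str) and a.startswith("?"):
                if s.get(a, c) != c:
                    ok = False
                    break
                s[a] = c
            elif a != c:
                ok = False
                break
        if ok:
            yield from match(body[1:], facts, s)

def chase_depth(facts, rules, k):
    """Breadth-first toy oblivious chase: every trigger (rule + body match)
    fires exactly once; head variables absent from the body become fresh
    nulls.  Returns the number of rounds after which no trigger fires,
    or None if that exceeds k."""
    facts, fired = set(facts), set()
    fresh = itertools.count()
    for depth in range(1, k + 1):
        new, any_fired = set(), False
        for rid, (body, head) in enumerate(rules):
            for subst in list(match(body, facts)):
                key = (rid, tuple(sorted(subst.items())))
                if key in fired:
                    continue
                fired.add(key)
                any_fired = True
                s = dict(subst)
                for pred, *args in head:
                    inst = [pred]
                    for a in args:
                        if isinstance(a, str) and a.startswith("?"):
                            if a not in s:                  # existential variable:
                                s[a] = f"_n{next(fresh)}"   # invent a fresh null
                            inst.append(s[a])
                        else:
                            inst.append(a)
                    new.add(tuple(inst))
        if not any_fired:
            return depth - 1
        facts |= new
    return None

# Transitive closure terminates at depth 2 on a two-edge path.
rules = [([("edge", "?x", "?y")], [("path", "?x", "?y")]),
         ([("path", "?x", "?y"), ("edge", "?y", "?z")], [("path", "?x", "?z")])]
facts = {("edge", "a", "b"), ("edge", "b", "c")}
```

With an existential rule such as `person(?x) -> parent(?x, ?p)`, the head variable `?p` is instantiated with a fresh null, which is exactly what makes chase termination (and hence k-boundedness) non-trivial in general.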
Many real-world domains can benefit from adaptable decentralized task allocation through emergent specialization, especially in large teams of non-communicating agents. We begin with an existing bio-inspired response threshold reinforcement approach for decentralized task allocation and extend it to handle hierarchical task domains. We test the extension on self-deployment of a large team of non-communicating agents to patrolling a hierarchically defined set of areas. Results show near-ideal performance across all areas, while minimizing wasteful task switching through the development of specializations and subsequent respecializations when area demands change. A genetic algorithm is then used to evolve even more adaptable and stable task allocation behavior, by incorporating weight and power coefficients into agents’ response threshold reinforcement action probability calculations.
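The response-threshold mechanism underlying this approach can be sketched with the classic sigmoid rule. The placement of the weight and power coefficients below is a guess standing in for the evolved coefficients mentioned in the abstract; the paper's exact formula may differ.

```python
def response_probability(stimulus, threshold, w=1.0, n=2.0):
    """Classic response-threshold rule P = s^n / (s^n + theta^n): an agent
    with threshold theta responds to task stimulus s with this probability.
    The weight w and power n are hypothetical tunable coefficients of the
    kind a genetic algorithm could evolve."""
    sn = w * stimulus ** n
    return sn / (sn + threshold ** n)
```

Reinforcement then lowers the threshold of an agent that performs a task (specialisation) and raises it for tasks it ignores, so agents with low thresholds respond to weaker stimuli.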
We consider a friendship model in which each member of a community has a latent value such that the probability that any two individuals are friends is a function of their latent values. We consider such questions as whether information that i and j are both friends with k makes it more likely that i and j are themselves friends. Among other things, we show that for fixed sets S and T, the number of friends i has in T is stochastically increasing in the number of friends i has in S. We consider how a variation of the friendship paradox applies to our model. We also study the distribution of the number of friendless individuals in the community and derive a bound on the total variation distance between it and a Poisson with the same mean.
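A minimal simulation of this model is easy to write. The uniform distribution of the latent values and the function names are assumptions for illustration; the paper's results hold for general latent distributions and link functions f.

```python
import random

def sample_friendship_graph(n, f, rng=None):
    """Sample a friendship graph: each person i gets a latent value x_i drawn
    (here, as an illustrative choice) uniformly from [0, 1], and each pair
    i, j becomes friends independently with probability f(x_i, x_j)."""
    rng = rng or random.Random()
    x = [rng.random() for _ in range(n)]
    friends = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < f(x[i], x[j]):
                friends[i].add(j)
                friends[j].add(i)
    return x, friends

def friendless_count(friends):
    """Number of individuals with no friends at all."""
    return sum(1 for s in friends.values() if not s)
```

For example, `f = lambda a, b: a * b` makes high-latent-value individuals popular; repeatedly sampling and tallying `friendless_count` gives an empirical view of the distribution that the paper compares to a Poisson.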
In advertising, slogans are used to enhance the recall of the advertised product by consumers and to distinguish it from others in the market. Creating effective slogans is a resource-consuming task for humans. In this paper, we describe a novel method for automatically generating slogans, given a target concept (e.g., car) and an adjectival property to express (e.g., elegant) as input. Additionally, a key component in our approach is a novel method for generating nominal metaphors, using a metaphor interpretation model, to allow generating metaphorical slogans. The method for generating slogans extracts skeletons from existing slogans. It then fills a skeleton in with suitable words by utilizing multiple linguistic resources (such as a repository of grammatical relations, and semantic and language models) and genetic algorithms to optimize multiple objectives such as semantic relatedness, language correctness, and usage of rhetorical devices. We evaluate the metaphor and slogan generation methods by running crowdsourced surveys. On a five-point Likert scale, we ask online judges to evaluate whether the generated metaphors, along with three other metaphors generated using different methods, highlight the intended property. The slogan generation method is evaluated by asking crowdsourced judges to rate generated slogans from five perspectives: (1) how well is the slogan related to the topic, (2) how correct is the language of the slogan, (3) how metaphoric is the slogan, (4) how catchy, attractive, and memorable is it, and (5) how good is the slogan overall. Similarly, we evaluate existing expert-made slogans. Based on the evaluations, we analyze the method and provide insights regarding existing slogans. The empirical results indicate that our metaphor generation method is capable of producing apt metaphors. Regarding the slogan generator, the results suggest that the method has successfully produced at least one effective slogan for every evaluated input.
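The skeleton-filling step can be sketched in a few lines. The word-class tags, the tiny vocabulary, and the skeleton below are hypothetical; the actual system extracts skeletons from existing slogans and scores candidate fillings with semantic/language models inside a genetic algorithm, none of which is reproduced here.

```python
import random

def fill_skeleton(skeleton, vocab, rng):
    """Fill a slogan skeleton: tokens that name a word class (here just ADJ
    and NOUN) are replaced by a word drawn from the vocabulary for that
    class; all other tokens are kept verbatim."""
    return " ".join(rng.choice(vocab[tok]) if tok in vocab else tok
                    for tok in skeleton.split())

# Hypothetical input: target concept "car", property to express "elegant".
vocab = {"ADJ": ["elegant"], "NOUN": ["car"]}
slogan = fill_skeleton("Drive the ADJ NOUN", vocab, random.Random(0))
```

In the full method the vocabulary would be populated from linguistic resources (including generated metaphors), and many such fillings would compete under the multi-objective fitness described above.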
This paper introduces a new RGBD-Simultaneous Localization And Mapping (RGBD-SLAM) system based on a revisited keyframe SLAM. This solution improves localization by combining visual and depth data in a local bundle adjustment. It then presents an extension of this RGBD-SLAM that takes advantage of partial knowledge of the scene. This solution allows using prior knowledge of a 3D model of the environment when the latter is available, which drastically improves localization accuracy. The proposed solutions, called RGBD-SLAM and Constrained RGBD-SLAM, are evaluated on several public benchmark datasets and on real scenes acquired by a Kinect sensor. The system works in real time on a standard central processing unit and can be useful for applications such as localization of lightweight robots, UAVs, and VR helmets.
In this paper, we review the different definitions that multi-state k-out-of-n systems have received in the literature and study them in a unified way using the algebra of monomial ideals. We thus obtain formulas and algorithms to compute their reliability and bounds for it. We provide formulas and computer experiments for simple and generalized multi-state k-out-of-n systems and for binary k-out-of-n systems with multi-state components.
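For the binary special case, reliability admits a short dynamic-programming computation, sketched below under the assumption of independent components; the monomial-ideal machinery the paper develops for the multi-state generalizations is not shown.

```python
def k_out_of_n_reliability(k, probs):
    """Reliability of a binary k-out-of-n:G system (works iff at least k of
    the n independent components work), where probs[i] is the probability
    that component i works.  Standard DP over the distribution of the number
    of working components."""
    # dist[j] = P(exactly j of the components processed so far work)
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            new[j] += q * (1 - p)      # this component fails
            new[j + 1] += q * p        # this component works
        dist = new
    return sum(dist[k:])
```

For instance, a 2-out-of-3 system of components that each work with probability 0.5 has reliability 3(0.25)(0.5) + 0.125 = 0.5.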
This paper studies parameter estimation for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. We propose computationally efficient estimators based on the method of moments that are robust to model misspecification. We develop an analytical framework that enables closed-form representation of model parameters in terms of the moments and autocorrelations of observed underlying processes. Under moderate assumptions, which are typically much weaker than those for likelihood methods, we establish large-sample properties of our proposed estimators, including strong consistency and asymptotic normality. Our estimators achieve the canonical square-root convergence rate and are shown through numerical experiments to outperform likelihood-based methods.
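The moment-matching idea can be illustrated on the simplest related example, a Gaussian Ornstein–Uhlenbeck process observed on a regular grid. This toy fit, and the function name, are assumptions for illustration only; the paper's Lévy-driven stochastic-volatility estimators are considerably more involved.

```python
import math

def ou_moment_fit(xs, dt):
    """Toy method-of-moments fit for a Gaussian OU process
    dX = -theta (X - mu) dt + sigma dW observed at spacing dt:
    match the sample mean, variance, and lag-1 autocorrelation to
    their closed-form counterparts."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    acov1 = sum((xs[t] - mu) * (xs[t + 1] - mu) for t in range(n - 1)) / n
    rho1 = acov1 / var
    theta = -math.log(rho1) / dt      # lag-1 autocorr of OU is exp(-theta*dt)
    sigma2 = 2.0 * theta * var        # stationary variance is sigma^2/(2*theta)
    return mu, theta, sigma2
```

Because each parameter is recovered in closed form from empirical moments, the estimator needs no numerical likelihood optimization, which is the source of the computational advantage the abstract refers to.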
In this paper, hybrid control of central pattern generators (CPGs), along with an adaptive super-twisting sliding mode (ASTSM) control based on a super-twisting state observer, is proposed to guard against disturbances and uncertainties. Rhythmic and coordinated signals are generated using CPGs. In addition, to overcome the chattering of conventional sliding mode control, super-twisting sliding mode has been applied. The ASTSM method drives the sliding variables and their derivatives to zero continuously in the presence of the uncertainties. Moreover, to acquire maximum stability, the desired trajectory of the upper limb is designed based on the zero-moment-point criterion.
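The textbook (non-adaptive) super-twisting algorithm can be sketched on a toy first-order plant. The plant, disturbance, gains, and step sizes below are illustrative assumptions; the paper's adaptive variant and its state observer are not reproduced here.

```python
import math

def super_twisting_run(s0=1.0, k1=1.5, k2=1.0, dt=1e-3, steps=10000):
    """Textbook super-twisting algorithm on the toy plant s' = u + d(t)
    with matched disturbance d(t) = 0.3 sin t (so |d'| <= 0.3 < k2):
        u = -k1 * sqrt(|s|) * sign(s) + v,    v' = -k2 * sign(s).
    The continuous integral term v is what suppresses the chattering of a
    conventional (discontinuous) sliding-mode law.  Returns the final s."""
    s, v = s0, 0.0
    for i in range(steps):
        t = i * dt
        sign = (s > 0) - (s < 0)
        u = -k1 * math.sqrt(abs(s)) * sign + v
        s += dt * (u + 0.3 * math.sin(t))   # Euler step of s' = u + d(t)
        v += dt * (-k2 * sign)
    return s
```

Because the discontinuity appears only in the derivative of the control (inside `v`), the applied control `u` is continuous, which is the standard argument for the reduced chattering mentioned above.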
The process of software testing usually involves the correction of a detected bug immediately upon detection. In this article, in contrast, we discuss continuous-time testing of software with periodic debugging, in which bugs are corrected not at the instants of their detection but at some pre-specified time points. Under the assumption of a renewal distribution for the times between successive occurrences of a bug, maximum-likelihood estimation of the initial number of bugs in the software is considered, when the renewal distribution belongs to a general parametric family or is arbitrary. The asymptotic properties of the estimated model parameters are also discussed. Finally, we investigate the finite-sample properties of the estimators, especially that of the initial number of bugs, through simulation.
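The flavour of such an estimator can be conveyed with a deliberately simplified model: each of the N initial bugs is, independently, detected by the end of testing with a known probability p, and k bugs were found. This binomial-thinning toy (and the grid-search MLE below) is an illustration only, not the renewal-process estimator of the article.

```python
import math

def mle_initial_bugs(k, p, n_max=10000):
    """Toy MLE of the initial number of bugs N: maximise the binomial
    likelihood C(N, k) * p^k * (1 - p)^(N - k) over integer N >= k by grid
    search, using log-gamma for numerical stability."""
    def loglik(n):
        return (math.lgamma(n + 1) - math.lgamma(k + 1)
                - math.lgamma(n - k + 1)
                + k * math.log(p) + (n - k) * math.log(1.0 - p))
    return max(range(k, n_max + 1), key=loglik)
```

A ratio argument shows the maximiser is essentially floor(k / p): with 50 bugs found and detection probability 0.6, the estimate is 83.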
In software architectures, architectural design decisions (ADDs) strongly influence the quality of the resulting software system. Wrong decisions lead to low-quality systems and are difficult to repair later on in the development process. As of today, little is known about the impact of certain ADDs on the development of architectures for blockchain-based systems. Thus, it is difficult to predict the outcome of certain ADDs when developing architectures for such systems. In the following, we propose a simulation-based approach for blockchain architectures in which the impact of certain ADDs on certain quality attributes can be simulated. To this end, we first implemented a simulation environment for blockchain architectures. The simulation environment was then used to execute a series of experiments from which we derived a set of hypotheses about the impact of certain ADDs on quality attributes for blockchain architectures. Finally, we tested the hypotheses using statistical analyses and derived an empirical model for blockchain architectures based on the outcome of the analysis. The model can be used by architects to predict the effect of certain decisions in the design of blockchain architectures before implementing them.
Cloud robotics (CR) is currently a growing area in the robotics community. Indeed, the use of cloud computing to share data and resources of distributed robotic systems leads to the design and development of cloud robotic systems (CRS), which constitute useful technologies for a wide range of applications such as smart manufacturing and aid-and-rescue missions. However, in order to get coherent agent-to-cloud communications and efficient agent-to-agent collaboration within these CRS, there is a need to formalize knowledge representation in CR. Here, the use of ontologies provides a means to define formal concepts and their relations in an interoperable way. This paper presents standard robotic ontologies and their extension to the CR domain, as well as their possible implementations in the case of a real-world CR scenario.
The blockchain is an emerging technology which has the potential to improve many information systems. In this regard, the applications and the platforms they are built on must be able to connect and communicate with each other. However, current blockchain platforms have several limitations, such as a lack of interoperability among different systems. The existing platforms of blockchain applications operate only within their own networks. Even though the underlying technology is similar, each platform relies on centralized third-party mediators to exchange or retrieve information from other blockchain networks. These third-party intermediaries establish trust and security by preserving a centralized ledger to track ‘account balances’ and vouch for a transaction’s authenticity. The inability of independent blockchains to communicate with one another is an inherent problem of decentralized systems. The lack of appropriate inter-blockchain communication puts a strain on the mainstream adoption of blockchain. It is evident that blockchain technology has the potential to become a suitable solution for some systems if it can scale and is able to cross-communicate with other systems. For the multisystem blockchain concept to become a reality, a mechanism is required that would connect and communicate with multiple entities’ blockchain systems in a distributed fashion (without any intermediary), while maintaining the properties of trust and integrity built by the individual blockchains. In this article, we propose a mechanism that provides cross-chain interoperability using transactions.
This paper presents how numerical optimization methods, like topology optimization, and new design possibilities through additive manufacturing (AM) can be used for structural improvements of the load introduction points in sandwich structures. A new design approach is presented, which allows a direct load-path-optimized integration of the load introduction point into the sandwich core. The corresponding methodical procedure is shown, and the application is demonstrated exemplarily for a sandwich structure with a honeycomb core. The advantage for design science is that the new design possibilities of AM can be considered and used when designing the load introduction points. Thus, the additional reinforcements of the sandwich structure needed to absorb locally introduced forces in the lightweight structure can be minimized. This enables a meaningful technical comparison, so that it can be decided in the future whether such a design is economically viable for sandwich structures. In addition, the influence of the initial and boundary conditions on the design is presented and discussed in this paper. The challenges of optimizing multiple load introduction points simultaneously, as well as the special aspects to be considered when transferring the design approach to larger sandwich structures, are also highlighted.
The paper deals with prognosis estimation for industrial systems in a series configuration, modeled by superimposed renewal processes (SRP), when the cause of failures is not available. In the presence of missing information, an SRP is commonly approximated by a Poisson process or a virtual age model. The performance of the approximations was assessed in the ideal configuration where all parameters of the models are known. The current article adopts a practitioner's perspective by assuming that the parameters of the models are unknown and must be estimated. In addition to inference procedures, the assessment of the prognosis indicators, such as the remaining useful life, is discussed. Finally, we investigate a fleet of infrastructure components of the Norwegian railway network operated by Bane NOR.
Recent work studying triadic closure in undirected graphs has drawn attention to the distinction between measures that focus on the “center” node of a wedge (i.e., length-2 path) versus measures that focus on the “initiator,” a distinction with considerable consequences. Existing measures in directed graphs, meanwhile, have all been center-focused. In this work, we propose a family of eight directed closure coefficients that measure the frequency of triadic closure in directed graphs from the perspective of the node initiating closure. The eight coefficients correspond to different labeled wedges, where the initiator and center nodes are labeled, and we observe dramatic empirical variation in these coefficients on real-world networks, even in cases when the induced directed triangles are isomorphic. To understand this phenomenon, we examine the theoretical behavior of our closure coefficients under a directed configuration model. Our analysis illustrates an underlying connection between the closure coefficients and moments of the joint in- and out-degree distributions of the network, offering an explanation of the observed asymmetries. We also use our directed closure coefficients as predictors in two machine learning tasks. We find interpretable models with AUC scores above 0.92 in class-balanced binary prediction, substantially outperforming models that use traditional center-focused measures.
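One member of such a family of initiator-centred measures can be sketched directly. The "out-out" naming and the particular choice of closing edge below are an illustrative convention, not necessarily the paper's definitions of its eight coefficients.

```python
def out_out_closure(edges):
    """Initiator-centred closure for one labelled wedge type: among directed
    wedges i -> j -> k (i != k) initiated at i, the fraction that are closed
    by an edge i -> k."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, set()).add(v)
    wedges = closed = 0
    for i, out_i in succ.items():
        for j in out_i:
            for k in succ.get(j, ()):
                if k == i:
                    continue
                wedges += 1
                closed += k in out_i
    return closed / wedges if wedges else float("nan")
```

Varying which of the two wedge edges point toward or away from the initiator and the center yields the eight labelled-wedge variants, and it is the differences among these fractions on real networks that the degree-distribution analysis explains.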