Programming environments have evolved from purely text-based to using graphical user interfaces, and now we see a move toward web-based interfaces, such as Jupyter. Web-based interfaces allow for the creation of interactive documents that consist of text and programs, as well as their output. The output can be rendered using web technology as, for example, text, tables, charts, or graphs. This approach is particularly suitable for capturing data analysis workflows and creating interactive educational material. This article describes SWISH, a web front-end for Prolog that consists of a web server implemented in SWI-Prolog and a client web application written in JavaScript. SWISH provides a web server where multiple users can manipulate and run the same material, and it can be adapted to support Prolog extensions. In this article we describe the architecture of SWISH and present two case studies of Prolog extensions, namely Probabilistic Logic Programming and the Logic Production System, which have used SWISH to provide tutorial sites.
We generalise the Blok–Jónsson account of structural consequence relations, later developed by Galatos, Tsinakis and other authors, in such a way as to naturally accommodate multiset consequence. While Blok and Jónsson admit, in place of sheer formulas, a wider range of syntactic units to be manipulated in deductions (including sequents or equations), these objects are invariably aggregated via set-theoretical union. Our approach is more general in that nonidempotent forms of premiss and conclusion aggregation, including multiset sum and fuzzy set union, are considered. In their abstract form, thus, deductive relations are defined as additional compatible preorderings over certain partially ordered monoids. We investigate these relations using categorical methods and provide analogues of the main results obtained in the general theory of consequence relations. Then we focus on the driving example of multiset deductive relations, providing variations of the methods of matrix semantics and Hilbert systems in Abstract Algebraic Logic.
Suszko’s problem is the problem of finding the minimal number of truth values needed to semantically characterize a syntactic consequence relation. Suszko proved that every Tarskian consequence relation can be characterized using only two truth values. Malinowski showed that this number can equal three if some of Tarski’s structural constraints are relaxed. By so doing, Malinowski introduced a case of so-called mixed consequence, allowing the notion of a designated value to vary between the premises and the conclusions of an argument. In this article we give a more systematic perspective on Suszko’s problem and on mixed consequence. First, we prove general representation theorems relating structural properties of a consequence relation to their semantic interpretation, uncovering the semantic counterpart of substitution-invariance, and establishing that (intersective) mixed consequence is fundamentally the semantic counterpart of the structural property of monotonicity. We use those theorems to derive maximum-rank results proved recently in a different setting by French and Ripley, as well as by Blasio, Marcos, and Wansing, for logics with various structural properties (reflexivity, transitivity, none, or both). We strengthen these results into exact rank results for nonpermeable logics (roughly, those which distinguish the role of premises and conclusions). We discuss the underlying notion of rank, and the associated reduction proposed independently by Scott and Suszko. As emphasized by Suszko, that reduction fails to preserve compositionality in general, meaning that the resulting semantics is no longer truth-functional. We propose a modification of that notion of reduction, allowing us to prove that over compact logics with what we call regular connectives, rank results are maintained even if we request the preservation of truth-functionality and additional semantic properties.
With the rise of machine learning, and more recently the overwhelming interest in deep learning, knowledge representation and reasoning (KRR) approaches struggle to maintain their position within the wider Artificial Intelligence (AI) community. Often considered as part of the good old-fashioned AI (Haugeland 1985) – like a memory of glorious old days that have come to an end – many consider KRR as no longer applicable (on its own) to the problems faced by AI today (Blackwell 2015; Garnelo et al. 2016). What they see are logical languages with symbols incomprehensible to most, inference mechanisms that even experts have difficulty tracing and debugging, and the incapability to process unstructured data like text.
Standard reasoning about Kripke semantics for modal logic is almost always based on a background framework of classical logic. Can proofs for familiar definability theorems be carried out using a nonclassical substructural logic as the metatheory? This article presents a semantics for positive substructural modal logic and studies the connection between frame conditions and formulas, via definability theorems. The novelty is that all the proofs are carried out with a noncontractive logic in the background. This sheds light on which modal principles are invariant under changes of metalogic, and provides (further) evidence for the general viability of nonclassical mathematics.
Recent years have witnessed an explosion in the volume and variety of data collected in all scientific disciplines and industrial settings. Such massive data sets present a number of challenges to researchers in statistics and machine learning. This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level. It includes chapters focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices - as well as chapters devoted to in-depth exploration of particular model classes - including sparse linear models, matrix models with rank constraints, graphical models, and various types of non-parametric models. With hundreds of worked examples and exercises, this text is intended both for courses and for self-study by graduate students and researchers in statistics, machine learning, and related fields who must understand, apply, and adapt modern statistical methods suited to large-scale data.
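As a concrete instance of the tail bounds the book covers, Hoeffding's inequality (a standard result, stated here for reference rather than taken from the text itself) bounds the deviation of a sum of bounded independent random variables:

```latex
% Hoeffding's inequality: for independent X_1,\dots,X_n with
% X_i \in [a_i, b_i] almost surely, and S_n = \sum_{i=1}^n X_i,
\Pr\bigl(\lvert S_n - \mathbb{E}[S_n] \rvert \ge t\bigr)
  \le 2 \exp\!\left( \frac{-2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right)
  \quad \text{for all } t > 0.
```

Bounds of this form underlie many of the high-dimensional guarantees the book develops, since they control how fast empirical averages concentrate around their means.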
Wireless power transmission (WPT) systems with movable mechanical parts have attracted more and more attention during the past decade. However, because the transmitting and receiving coils can move, misalignment leads to extra power loss, reduced efficiency, increased control complexity, and unwanted performance degradation of the whole system. Moreover, misalignment occurs more frequently than in traditional planar-coil systems. The motivation of this paper is to develop a quantitative understanding of the relationship between the mutual inductance of ball-shaped coils and misalignment. With such an understanding, engineers would know in more detail how coil position relates to mutual inductance, so that an optimized design can be achieved. To this end, this paper presents a WPT system with a ball-shaped coil for robot joints. A mutual inductance calculation based on the filament method, aimed at ball-shaped coils, is proposed. On this basis, nine different ball-shaped coil solutions are calculated, and the model with the smallest rate of change of mutual inductance against angular misalignment is chosen as the optimized design. Circuit analysis of the WPT system with the series–series resonant topology is conducted to choose a proper working frequency and load. Finally, an experimental platform is established; it demonstrates the feasibility of both the proposed calculation method and the WPT prototype.
In this paper, a path planning algorithm for robotic systems with excess degrees of freedom (DOF) for the welding of intersecting pipes is presented. In the first step, a procedure for solving the inverse kinematics that accounts for the system's kinematic redundancy is developed. The robotic system consists of a 6-DOF manipulator installed on a railed base with linear motion, while the main pipe can simultaneously rotate about its longitudinal axis. The system redundancy is employed to improve weld quality. Three simulation studies are performed to show the effect of the kinematic redundancy on planning a better path for the welding of intersecting pipes. In the first case, the manipulator base and the main pipe are assumed fixed, and the path is planned using only the manipulator joints. In the second case, only the robot base is free to move while the main pipe is fixed, and in the third case, the main pipe is free to rotate together with the motion of the manipulator base. The results show that exploiting the system's redundancy through appropriate kinematic constraints helps to plan a more efficient path for the welding of complex pipe joints.
In this work a simple method to solve the kinematics of the 5-R$\underbar{P}$UR parallel manipulator is introduced. For the displacement analysis, the kinematic constraint equations required to address the forward and inverse displacement analyses are established from linear combinations of two vectors attached to the moving platform. Then, besides the solution of the inverse displacement analysis, two strategies are proposed to solve the forward position analysis. Finally, the input–output equations of velocity and acceleration are systematically obtained by resorting to reciprocal-screw theory. Numerical examples are provided to illustrate the proposed method, and the numerical results obtained by means of screw theory are confirmed with the aid of commercially available software.
Unlike English and other Western languages, many Asian languages, such as Chinese and Japanese, do not delimit words by spaces. Word segmentation and new word detection are therefore key steps in processing these languages. Chinese word segmentation can be considered a part-of-speech (POS)-style tagging problem: a corpus is segmented by assigning each character a label that indicates its position in a word (e.g., “B” for the beginning of a word and “E” for the end). Chinese word segmentation seems to be well studied; machine learning models such as conditional random fields (CRFs) and bi-directional long short-term memory (LSTM) networks have shown outstanding performance on this task. However, segmentation accuracy drops significantly when the same approaches are applied to out-of-domain cases in which high-quality in-domain training data are not available. An example of such an out-of-domain application is new word detection in Chinese microblogs, for which the availability of high-quality corpora is limited. In this paper, we focus on out-of-domain Chinese new word detection. We first design a new method, Edge Likelihood (EL), for Chinese word boundary detection. We then propose a domain-independent Chinese new word detector (DICND); in the proposed framework, each Chinese character is represented as a low-dimensional vector whose values are segmentation-related features of the character.
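The character-position labelling scheme mentioned above can be sketched as follows. This is a minimal illustration using the common four-tag B/M/E/S convention (B = begin, M = middle, E = end, S = single-character word); the exact tag set used in the paper may differ.

```python
def bmes_tags(words):
    """Convert a segmented sentence (a list of words) into a list of
    (character, position-tag) pairs under the B/M/E/S scheme."""
    pairs = []
    for word in words:
        if len(word) == 1:
            pairs.append((word, "S"))  # single-character word
        else:
            for i, ch in enumerate(word):
                if i == 0:
                    pairs.append((ch, "B"))  # word beginning
                elif i == len(word) - 1:
                    pairs.append((ch, "E"))  # word end
                else:
                    pairs.append((ch, "M"))  # word middle
    return pairs
```

Under this encoding, segmentation becomes a per-character sequence-labelling task, which is what allows CRF- and LSTM-style taggers to be applied directly.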
hex-programs are an extension of answer set programs (ASP) with external sources. To this end, external atoms provide a bidirectional interface between the program and an external source. The traditional evaluation algorithm for hex-programs is based on guessing truth values of external atoms and verifying them by explicit calls to the external source. The approach has been optimized by techniques that reduce the number of necessary verification calls or speed them up, but the remaining external calls are still expensive. In this paper, we present an alternative evaluation approach based on inlining of external atoms, motivated by existing but less general approaches for specialized formalisms such as DL-programs. External atoms are then compiled away such that no verification calls are necessary. The approach is implemented in the dlvhex reasoner, and experiments show a significant performance gain. Beyond performance improvements, we further exploit inlining to extend previous (semantic) characterizations of program equivalence from ASP to hex-programs, including strong equivalence, uniform equivalence, and $\langle\mathcal{H},\mathcal{B}\rangle$-equivalence. Finally, based on these equivalence criteria, we also characterize inconsistency of programs w.r.t. extensions. Since well-known ASP extensions (such as constraint ASP) are special cases of hex, the results are interesting beyond the particular formalism.
Implicit programming (IP) mechanisms infer values by type-directed resolution, making programs more compact and easier to read. Examples of IP mechanisms include Haskell’s type classes, Scala’s implicits, Agda’s instance arguments, Coq’s type classes and Rust’s traits. The design of IP mechanisms has led to heated debate: proponents of one school argue for the desirability of strong reasoning properties, while proponents of another school argue for the power and flexibility of local scoping or overlapping instances. The current state of affairs seems to indicate that the two goals are at odds with one another and cannot easily be reconciled. This paper presents COCHIS, the Calculus Of CoHerent ImplicitS, an improved variant of the implicit calculus that offers flexibility while preserving two key properties: coherence and stability of type substitutions. COCHIS supports polymorphism, local scoping, overlapping instances, first-class instances and higher-order rules, while remaining type-safe, coherent and stable under type substitution. We introduce a logical formulation of how to resolve implicits, which is simple but ambiguous and incoherent, and a second formulation, which is less simple but unambiguous, coherent and stable. Every resolution of the second formulation is also a resolution of the first, but not conversely. Parts of the second formulation bear a close resemblance to a standard technique for proof search called focusing. Moreover, key for its coherence is a rigorous enforcement of determinism.
Considering the Gaussian noise channel, Costa [4] investigated the concavity of the entropy power when the input signal and noise components are independent. His argument was connected to the first-order derivative of the Fisher information. In real situations, however, the noise can be highly dependent on the main signal. In this paper, we suppose that the input signal and noise variables are dependent, and some well-known copula functions are used to define their dependence structure. The first- and second-order derivatives of the Fisher information of the model are obtained. Using these derivatives, we generalize two inequalities based on the Fisher information and a functional closely associated with the Fisher information to the case when the input signal and noise variables are dependent. We also show that the previous results for the independent case are recovered as special cases of our result. Several applications are provided to support the usefulness of our results. Finally, the channel capacity of the Gaussian noise channel model with dependent signal and noise is studied.
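For reference, the quantities involved can be stated as follows. These are the standard definitions for the independent case (the dependent case studied in the paper modifies the derivatives via the copula structure):

```latex
% Entropy power of a random variable X with differential entropy h(X):
N(X) = \frac{1}{2\pi e}\, e^{2h(X)}.
% de Bruijn's identity links entropy and Fisher information J along
% Y_t = X + \sqrt{t}\,Z, with Z standard Gaussian independent of X:
\frac{\mathrm{d}}{\mathrm{d}t}\, h(Y_t) = \tfrac{1}{2}\, J(Y_t).
% Costa's concavity theorem: t \mapsto N(Y_t) is concave in t.
```

Costa's original argument proceeds by differentiating $N(Y_t)$ twice in $t$, which is why control of the derivatives of the Fisher information is central to the generalization described above.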
The t-test is a workhorse of much statistical analysis in HCI. There are many myths about how robust it is to deviations from normality and other assumptions. However, when faced with practical data, particularly data coming from usability studies, the claims of robustness do not stand up. This chapter reevaluates the t-test as a test for an effect on the location of data. This leads to considering robust measures of location, such as trimmed or Winsorized means, and the associated Yuen–Welch test as a robust alternative to the traditional t-test.
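The two robust location estimates mentioned above can be sketched as follows. This is a minimal pure-Python illustration (in practice one would use a statistics package's built-in routines, which also handle ties and fractional trimming more carefully):

```python
def trimmed_mean(xs, prop=0.2):
    """Mean after discarding the lowest and highest `prop` fraction of values."""
    xs = sorted(xs)
    k = int(len(xs) * prop)
    kept = xs[k:len(xs) - k]
    return sum(kept) / len(kept)

def winsorized_mean(xs, prop=0.2):
    """Mean after replacing the lowest and highest `prop` fraction of values
    with the nearest remaining values, rather than discarding them."""
    xs = sorted(xs)
    k = int(len(xs) * prop)
    w = xs[k:len(xs) - k]
    w = [w[0]] * k + w + [w[-1]] * k  # clamp the extremes inward
    return sum(w) / len(w)
```

Both estimators limit the influence of outliers; the Yuen–Welch test builds a t-like statistic on the trimmed mean together with a Winsorized variance.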