The present chapter introduces the Semantic Web and its philosophy. This involves two main ideas. The first is to associate meta-information with Internet-based resources. The second is to reason about this type of information. We show how these two ideas can help in solving the problems mentioned in the previous chapter.
Having introduced the main concepts, we continue the chapter by describing technologies that can be used for representing meta-information in a uniform way. First we introduce the XML language, which forms the basis of the Semantic Web as a standard information exchange format. Then we describe the RDF language; this has an XML notation as well as other representations and can be used to associate meta-information with an arbitrary resource. By doing this we can extend web content with computer-processable semantics.
Subsequently, we introduce the RDF schema language, which provides the background knowledge essential for reasoning about meta-information. We discuss the similarities and differences between RDF schemas and traditional object-oriented modelling paradigms.
We conclude the chapter by presenting several applications that directly or indirectly use RDF descriptions during their operation.
Introduction
The Semantic Web approach was originated by Tim Berners-Lee, the father of the World Wide Web and related technologies (URI, HTTP, HTML etc.). The approach is based on two fundamental ideas.
Okasaki introduced the canonical formulation of functional red-black trees when he gave a concise, elegant method of persistent element insertion. Persistent element deletion, on the other hand, has not enjoyed the same treatment. For this reason, many functional implementations simply omit persistent deletion. Those that include deletion typically take one of two approaches. The more common approach is a superficial translation of the standard imperative algorithm. The resulting algorithm has functional airs but remains clumsy and verbose, characteristic of its imperative heritage. (Indeed, even the term insertion is a holdover from imperative origins, but it is now established in functional contexts. Accordingly, we use the term deletion, which has the same connotation.) The less common approach leverages the features of advanced type systems, which obscures the essence of the algorithm. Nevertheless, foreign-language implementors reference such implementations and, apparently unable to tease apart the algorithm and its type specification, transliterate the entirety unnecessarily. Our goal is to provide for persistent deletion what Okasaki did for insertion: a succinct, comprehensible method that will liberate implementors. We conceptually simplify deletion by temporarily introducing a “double-black” color into Okasaki's tree type. This third color, with its natural interpretation, significantly simplifies the preservation of invariants during deletion.
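To make the idea concrete, here is a minimal Haskell sketch (ours, not the paper's full algorithm) of the extended tree type: Okasaki's two colors are joined by a transient double-black color that records a missing unit of black height while a deletion is being rebalanced away.

```haskell
-- Sketch only: Okasaki's red-black tree type, extended with a transient
-- "double-black" color used during deletion.  Names are illustrative.
data Color = R | B | BB              -- red, black, double-black (transient)

data Tree a = E                      -- ordinary (black) leaf
            | EE                     -- double-black leaf (transient)
            | T Color (Tree a) a (Tree a)

-- Removing a black node leaves a double-black node (or leaf) behind;
-- rebalancing then pushes the surplus blackness towards the root until
-- it can be absorbed, e.g. by darkening a red node:
blacker :: Color -> Color
blacker R  = B
blacker B  = BB
blacker BB = error "cannot darken a double-black node"
```

The rebalancing rules themselves are the paper's contribution; the point of the sketch is simply that one extra constructor suffices to express the invariant deficit that deletion creates.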
The Semantic Web is a new area of computer science that is being developed with the main aim of making it easier for computers to intelligently process the huge amount of information on the web. In other words, as the common slogan of the Semantic Web says: computers should not only read but also understand the information on the web. To achieve this, it is necessary to associate metadata with web-based information. For example, in the case of a picture one should formally provide information regarding its author, title and contents. Furthermore, computers should be able to perform reasoning tasks. For example, if it is known that a river appears in a picture, the computer should be able to deduce that water can also be found in the picture.
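As an illustration only (this example is ours, not the book's), the picture metadata and the "river implies water" deduction can be captured with subject-predicate-object triples and a single hand-written rule:

```haskell
-- Illustrative sketch: metadata as subject-predicate-object triples,
-- with one piece of background knowledge applied as an inference rule.
-- All resource and property names here are made up for the example.
type Triple = (String, String, String)

metadata :: [Triple]
metadata =
  [ ("picture1", "author",   "Alice")
  , ("picture1", "title",    "Danube at dawn")
  , ("picture1", "contains", "river")
  ]

-- Background knowledge: whatever contains a river also contains water.
infer :: [Triple] -> [Triple]
infer ts = ts ++ [ (s, "contains", "water")
                 | (s, "contains", "river") <- ts ]
```

Evaluating `infer metadata` adds the triple ("picture1", "contains", "water"), which is exactly the kind of deduction the Semantic Web expects machines to perform.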
Research into hierarchical terminology systems, i.e. ontologies, is strongly connected to the area of the Semantic Web. Ontologies are formal systems that allow the description of concrete knowledge about objects of interest as well as of general background knowledge. The description logic formalism is the most widespread approach providing the mathematical foundation of this field. It is not a coincidence that both OWL and its second edition, OWL 2, which are Semantic Web languages standardised by the World Wide Web Consortium (W3C), are based on Description Logic.
In this chapter we discuss the family of description logic (DL) languages. Following an introduction in Section 4.1, we present informally the most important language elements of Description Logic (Section 4.2). Next, we give the exact syntax and semantics of each language (Sections 4.3–4.5). Section 4.6 gives an overview of reasoning tasks for Description Logic while Section 4.7 deals with the simplification of reasoning tasks. Section 4.8 introduces the so-called assertion boxes (ABoxes). In Section 4.9 we explain the links between Description Logic and first-order logic, while Section 4.10 gives an overview of advanced features of DL languages.
In the present chapter and the rest of the book we follow a common notation: names in DL formulae are typeset using the grotesque font.
Introduction
Description Logic allows us to build a mathematical model describing the notions used in a specific area of interest, or in common knowledge [4]. Description Logic deals with concepts representing sets of individuals: for instance the concept “human” describes the set of all human beings. Furthermore, one can also describe relationships between individuals. In Description Logic, as in RDF, only binary (i.e. two-argument) relationships can be used, which here are referred to as roles. For instance, the role “has child” holds between a parent and a child individual.
This chapter presents an implementation of a description logic reasoning engine. Building on the theoretical foundations discussed in the previous chapters, we present a program, written in the Haskell functional programming language, which is able to answer concept satisfiability queries over an arbitrary TBox using the ALCN language.
Following the introduction we give some examples illustrating the use of the program to be described. We then present the data structures of the program. Next, we describe the transformation of ALCN concepts into negation normal form and present the entry point of the tableau algorithm. The bulk of the chapter deals with the implementation of the main components of the tableau algorithm: the transformation rules and blocking. Finally, we describe the auxiliary functions used in the preceding sections and discuss possible improvements to the reasoning engine.
Introduction
The ALCN tableau algorithm considered in this chapter is implemented in Haskell, a functional programming language with lazy evaluation [65]. Haskell has been chosen for this task because it allows us to present the inference engine in a concise, simple and easily understandable way which is quite close to the mathematical notation employed in the previous chapters. No prior knowledge of Haskell is required to understand this chapter, since the various features of the Haskell programming language are explained at their first use.
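As a foretaste of the kind of code the chapter develops, the following hedged sketch (our own; the book's actual definitions may differ in naming and detail) shows an ALCN concept type and the negation-normal-form transformation that pushes negations down to atomic concepts:

```haskell
-- A sketch of the kind of definitions such a reasoner needs.
data Concept = Top | Bottom
             | Atom String
             | Not Concept
             | And Concept Concept
             | Or  Concept Concept
             | Exists String Concept      -- exists R . C
             | Forall String Concept      -- forall R . C
             | AtLeast Int String         -- >= n R
             | AtMost  Int String         -- <= n R
  deriving (Eq, Show)

-- Negation normal form: push negation inwards until it applies only to
-- atomic concepts, using De Morgan's laws and the ALCN dualities
--   not (exists R . C) = forall R . not C,
--   not (>= n R)       = <= (n-1) R   (for n >= 1),
--   not (<= n R)       = >= (n+1) R.
nnf :: Concept -> Concept
nnf (Not Top)          = Bottom
nnf (Not Bottom)       = Top
nnf (Not (Not c))      = nnf c
nnf (Not (And c d))    = Or  (nnf (Not c)) (nnf (Not d))
nnf (Not (Or  c d))    = And (nnf (Not c)) (nnf (Not d))
nnf (Not (Exists r c)) = Forall r (nnf (Not c))
nnf (Not (Forall r c)) = Exists r (nnf (Not c))
nnf (Not (AtLeast n r))
  | n >= 1             = AtMost (n - 1) r
  | otherwise          = Bottom           -- not (>= 0 R) is unsatisfiable
nnf (Not (AtMost n r)) = AtLeast (n + 1) r
nnf (And c d)          = And (nnf c) (nnf d)
nnf (Or  c d)          = Or  (nnf c) (nnf d)
nnf (Exists r c)       = Exists r (nnf c)
nnf (Forall r c)       = Forall r (nnf c)
nnf c                  = c                 -- atoms and negated atoms
```

The tableau algorithm described in the chapter assumes its input concepts are in this form.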
In this chapter we deal with the way in which RDF descriptions are stored, processed and queried, as well as with the applications and languages involved in the process.
In Section 3.1 we describe how to make RDF meta-information on the web available for search engines. Next, in Section 3.2, we give an overview of development tools which can be used to parse and manage RDF-based sources. In Section 3.3 we describe RDF query languages and show why XML query languages are not suitable for this purpose. Subsequently, in Section 3.4, we discuss the possible reasoning tasks involved in answering RDF queries. Finally, in Section 3.5, we describe problems which arise in the course of optimising RDF queries and outline possible solutions.
RDF descriptions on the web
The RDF language is a generic framework that helps to associate meta-information with resources in a uniform way. RDF is by no means limited to the web, because anything that is identified by a URI can be used in RDF statements; and, as we saw in the previous chapter, practically anything can have a URI: a person, a rucksack, a house etc. This allows us to use RDF in environments other than the web, for example traditional databases, information integration and other knowledge-intensive systems.
In this paper we consider the control of flexible-joint manipulators while explicitly avoiding actuator saturation. The controllers investigated are composed of a bounded proportional control term and a Hammerstein strictly positive real angular rate control term. This control structure ensures that the total torque demanded of each actuator is bounded by a value less than the maximum torque that each actuator can provide, thereby preventing actuator saturation. The proposed controllers are shown to render the closed-loop system asymptotically stable, even in the presence of modeling uncertainties. The performance of the controllers is demonstrated experimentally and in simulation.
Particle swarm optimization (PSO) is a heuristic optimization algorithm commonly used for tuning PD/PID-type controllers. In this paper, PSO is applied to the gain tuning of a position domain PID controller in order to improve the contour tracking performance of a serial multi-DOF robotic manipulator on both linear and nonlinear contours. A new fitness function based on the statistics of the contour error is proposed for gain tuning, and pre-existing fitness functions are also applied in the optimization process, followed by comparison studies. The PSO tuning technique demonstrated the same effectiveness for position domain controllers as for time domain controllers, yielding satisfyingly low contour errors for both linear and nonlinear contours, and the proposed fitness function proved to be on par with the pre-existing ones.
Recently, small-sized compact electric vehicles have been in demand for short-distance travel in urban areas, although battery charging in the electric vehicles currently on the market remains problematic. Borrowing from the concept of a mobile inverted pendulum system, in this paper a two-wheel robotic vehicle system, called the TransBOT, is implemented and controlled as a future personal transportation device. The TransBOT has two driving modes: a regular vehicle mode, in which stable contact with the ground is maintained by two wheels and two casters, and a balancing mode, which maintains a stable posture with only the two wheels on the ground. The two-wheel balancing mechanism allows the vehicle to be used for transportation in narrow and busy urban areas. Gain scheduling control methods based on linear controllers are used for different drivers. In addition, desired balancing angles are specified for drivers of different sizes in order to achieve stable balancing control performance; these desired balancing angle values were found by empirical studies. Experimental studies with drivers of different weights, as well as indoor and outdoor driving tasks, were conducted to verify the feasibility of the TransBOT.
Although proprioceptive impairment is likely to significantly affect the capacity of stroke patients to recover upper-limb functionality, the clinical assessment methods currently in use are rather crude, with a low level of reliability and a limited capacity to discriminate the relevant features of this severe deficit. In the present paper, we describe a new technique based on robot technology, whose goal is to provide a reliable, accurate and quantitative evaluation of kinesthetic acuity that can be integrated into robot therapy. The proposed technique, based on a pulsed assistance paradigm, has been evaluated on a group of healthy subjects.
This paper presents a probabilistic paraconsistent theory of belief revision. The theory is based on a very general theory of probability, one that fits a wide range of classical and nonclassical logics. The theory incorporates a version of Jeffrey conditionalisation as its method of updating. A Dutch book argument is given, and the theory is applied to the problem of choosing a logical system.
We have begun a theory of measurement in which an experimenter and his or her experimental procedure are modeled by algorithms that interact with physical equipment through a simple abstract interface. The theory is based upon using models of physical equipment as oracles to Turing machines. This allows us to investigate the computability and computational complexity of measurement processes. We examine eight different experiments that make measurements and, by introducing the idea of an observable indicator, we identify three distinct forms of measurement process and three types of measurement algorithm. We give axiomatic specifications of three forms of interface that enable the three types of experiment to be used as oracles to Turing machines, and lemmas that help certify that an experiment satisfies the axiomatic specifications. For experiments that satisfy our axiomatic specifications, we give lower bounds on the computational power of Turing machines in polynomial time using nonuniform complexity classes. These lower bounds break the barrier defined by the Church-Turing Thesis.
We consider statistical properties of random integer partitions. In order to compute means, variances and higher moments of various partition statistics, one often has to study generating functions of the form P(x)F(x), where P(x) is the generating function for the number of partitions. In this paper, we show how asymptotic expansions can be obtained in a quasi-automatic way from expansions of F(x) around x = 1, which parallels the classical singularity analysis of Flajolet and Odlyzko in many ways. Numerous examples from the literature, as well as some new statistics, are treated via this methodology. In addition, we show how to compute further terms in the asymptotic expansions of previously studied partition statistics.
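For reference, P(x) here is Euler's classical partition generating function (a standard fact, not specific to this paper). In the typical setup, with F(x) encoding a statistic s, the mean of s over a random partition of n is extracted from a product of exactly this form; the second identity below is a hedged, typical instance rather than the paper's general statement:

```latex
\[
P(x) \;=\; \prod_{k \ge 1} \frac{1}{1 - x^{k}}
      \;=\; \sum_{n \ge 0} p(n)\, x^{n},
\qquad
\mathbb{E}_{n}[s] \;=\; \frac{[x^{n}]\, P(x)\,F(x)}{p(n)} .
\]
```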
Compound Poisson population models are particular conditional branching process models. A formula for the transition probabilities of the backward process for general compound Poisson models is verified. Symmetric compound Poisson models are defined in terms of a parameter θ ∈ (0, ∞) and a power series φ with positive radius of convergence r. It is shown that the asymptotic behaviour of symmetric compound Poisson models is mainly determined by the characteristic value θrφ′(r−). If θrφ′(r−) ≥ 1, then the model is in the domain of attraction of the Kingman coalescent. If θrφ′(r−) < 1, then under mild regularity conditions a condensation phenomenon occurs which forces the model to be in the domain of attraction of a discrete-time Dirac coalescent. The proofs are partly based on the analytic saddle point method. They draw heavily on local limit theorems and on results of S. Janson on simply generated trees, conditioned Galton-Watson trees, random allocations and condensation. Several examples of compound Poisson models are provided and analysed.
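Displayed, the abstract's dichotomy reads as follows (a restatement for readability, with φ′(r−) denoting the left-hand derivative of φ at its radius of convergence):

```latex
\[
\theta\, r\, \varphi'(r^{-}) \;\ge\; 1
  \;\Longrightarrow\; \text{domain of attraction of the Kingman coalescent},
\]
\[
\theta\, r\, \varphi'(r^{-}) \;<\; 1
  \;\Longrightarrow\; \text{domain of attraction of a discrete-time Dirac coalescent}.
\]
```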