In the previous chapters of this book, we have illustrated the use of the ingredients in our methodology for the description and analysis of reactive systems by means of simple but, it is hoped, illustrative examples. As we have mentioned repeatedly, the difficulty in understanding and reasoning reliably about even the simplest reactive systems has long been recognized. Apart from the intrinsic scientific and intellectual interest of a theory of reactive computation, this realization has served as a powerful motivation for the development of the theory we have presented so far and its associated verification techniques.
In order to offer you further evidence for the usefulness of the theory you have learned so far in the modelling and analysis of reactive systems, we shall now use it to model and analyse some well-known mutual exclusion algorithms. These algorithms are amongst the most classic ones in the theory of concurrent algorithms and have been investigated by many authors using a variety of techniques; see, for instance, the classic papers Dijkstra (1965), Knuth (1966) and Lamport (1986). Here, they will give us the opportunity to introduce some modelling and verification techniques that have proved their worth in the analysis of many different kinds of reactive system.
In order to illustrate concretely the steps that have to be taken in modelling and verification problems, we shall consider a very elegant solution to the mutual exclusion problem proposed by Peterson and discussed in Peterson and Silberschatz (1985).
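To make the object of study concrete before the modelling work begins, here is a minimal sketch of Peterson's two-process algorithm in Python. The variable names `flag` and `turn` follow the standard presentation of the algorithm; this is an illustration only, not the CCS model to be developed. It relies on CPython's global interpreter lock to make the plain shared variables behave sequentially consistently, which the busy-waiting entry protocol requires.

```python
import threading

# Shared state of Peterson's two-process mutual exclusion algorithm.
flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # which process defers when both are interested
counter = 0            # shared resource touched only inside the critical section

def process(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # Entry protocol: announce interest, then give priority to the other.
        flag[i] = True
        turn = other
        while flag[other] and turn == other:
            pass  # busy-wait until it is safe to enter
        # Critical section: mutual exclusion guarantees exclusive access here.
        counter += 1
        # Exit protocol: withdraw interest.
        flag[i] = False

threads = [threading.Thread(target=process, args=(i, 1000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: no increment is lost if mutual exclusion holds
```

In a language with a weaker memory model, `flag` and `turn` would need to be volatile or atomic for this protocol to be correct.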
In most natural language processing applications, Description Logics have been used to encode in a knowledge base some syntactic, semantic, and pragmatic elements needed to drive the semantic interpretation and the natural language generation processes. More recently, Description Logics have been used to fully characterize the semantic issues involved in the interpretation phase. In this chapter we analyze the various proposals that have appeared in the literature on the use of Description Logics for natural language processing.
Introduction
Since the early days of the KL-ONE system, one of the main applications of Description Logics has been for semantic interpretation in natural language processing [Brachman et al., 1979]. Semantic interpretation is the process of deriving, from the syntactic analysis of an utterance, its logical form – intended here as the representation of its literal, deep, and context-dependent meaning. Typically, Description Logics have been used to encode in a knowledge base both syntactic and semantic elements needed to drive the semantic interpretation process. One part of the knowledge base constitutes the lexical semantics knowledge, relating words and their syntactic properties to concept structures, while the other part describes the contextual and domain knowledge, giving a deep meaning to concepts. Developing this idea further, a considerable part of the research effort has been devoted to the development of linguistically motivated ontologies, i.e., large knowledge bases in which both concepts closely related to lexemes and domain concepts coexist.
This introduction presents the main motivations for the development of Description Logics (DLs) as a formalism for representing knowledge, as well as some important basic notions underlying all systems that have been created in the DL tradition. In addition, we provide the reader with an overview of the entire book and some guidelines for reading it.
We first address the relationship between Description Logics and earlier semantic network and frame systems, which represent the original heritage of the field. We delve into some of the key problems encountered with the older efforts. Subsequently, we introduce the basic features of DL languages and related reasoning techniques.
DL languages are then viewed as the core of knowledge representation systems, considering both the structure of a DL knowledge base and its associated reasoning services. The development of some implemented knowledge representation systems based on Description Logics and the first applications built with such systems are then reviewed.
Finally, we address the relationship of Description Logics to other fields of Computer Science. We also discuss some extensions of the basic representation language machinery; these include features proposed for incorporation in the formalism that originally arose in implemented systems, and features proposed to cope with the needs of certain application domains.
Introduction
Research in the field of knowledge representation and reasoning is usually focused on methods for providing high-level descriptions of the world that can be effectively used to build intelligent applications.
In Sections 11.1 and 11.2, we introduced some notions of behavioural equivalence over real-time systems specified by means of timed automata. These equivalences are based on various adaptations to the timed setting of the classic notions of trace equivalence and bisimilarity over LTSs – as presented in Sections 3.2 and 3.3 of this book – and may be used to perform implementation verification for real-time systems. This is useful because, at least in principle, a formalism like that of timed automata can be used to describe both actual systems and their specifications and, as we saw in Section 11.6, these notions of behavioural equivalence are decidable over (networks of) timed automata, with the notable exception of timed trace equivalence.
However, as we have already noted in the setting of modelling and verification for classic untimed reactive systems, when establishing the correctness of our system with respect to a specification using the methodology of implementation verification, we are forced to specify in some way the overall behaviour of the system under consideration. In a real-time setting, this often means that our specifications need to take into account many details pertaining to the timing behaviour of the implementation under analysis. This may lead to overly complex and subtle specifications. Moreover, sometimes we are interested only in specifying the expected behaviour of the system in certain specific circumstances.
This appendix describes three selected student projects. All these projects involve the use of software tools for verification and validation. In our lecture courses we have usually introduced the students to the Concurrency Workbench (CWB) and to Uppaal, but other tools could be used just as well. Further information on the following projects and more suggestions for student projects are available from the web page for the book at www.cs.aau.dk/rsbook/.
Alternating-bit protocol
In this project you are asked to model the alternating-bit protocol in the CCS language and verify your model using the CWB. The alternating-bit protocol is a simple yet effective protocol for managing the retransmission of lost messages. Consider a sender S and a receiver R, and assume that the communication medium from S to R is initialized, so that there are no messages in transit. The alternating-bit protocol works as follows.
Each message sent by S contains an additional protocol bit, 0 or 1.
When S sends a message, it does so repeatedly (with its corresponding bit) until it receives an acknowledgment (ACK) from R that contains the same protocol bit as the message being sent.
When R receives a message, it sends an acknowledgment ACK to S and includes the protocol bit of the received message. When a message is received for the first time, the receiver delivers it for processing, while subsequent messages with the same bit are simply acknowledged.
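Before writing the CCS model, it can help to see the protocol steps above as a small executable simulation. The following Python sketch is purely illustrative (the function names and the loss probability are our assumptions, not CCS or CWB syntax): an unreliable medium may drop frames and acknowledgments, yet every message is eventually delivered exactly once and in order.

```python
import random

random.seed(1)

def lossy(x, p_loss=0.3):
    """Model an unreliable medium: returns None when the item is lost."""
    return None if random.random() < p_loss else x

def run_abp(messages):
    """Transfer `messages` from sender S to receiver R using the alternating-bit protocol."""
    delivered = []
    send_bit = 0   # protocol bit S attaches to the current message
    expected = 0   # bit R expects on a *new* (not yet delivered) message
    for msg in messages:
        acked = False
        while not acked:
            # S (re)transmits the message together with its protocol bit.
            frame = lossy((msg, send_bit))
            if frame is not None:
                m, b = frame
                # R acknowledges every frame with that frame's bit, but
                # delivers the message only the first time it sees the bit.
                if b == expected:
                    delivered.append(m)
                    expected = 1 - expected
                ack = lossy(b)
            else:
                ack = None
            # S accepts only an ACK carrying the bit of the current message.
            if ack == send_bit:
                acked = True
                send_bit = 1 - send_bit
    return delivered

print(run_abp(["a", "b", "c"]))  # ['a', 'b', 'c'] despite losses
```

Note how the alternating bit lets R distinguish a retransmitted duplicate from a genuinely new message, and lets S distinguish a stale ACK from the one it is waiting for.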
We present lower bounds on the computational complexity of satisfiability and subsumption in several Description Logics. We interpret these lower bounds as coming from different “sources of complexity”, which we isolate one by one. We consider both reasoning with simple concept expressions and reasoning with an underlying TBox. We also discuss the complexity of instance checking in simple ABoxes. We have tried to enhance clarity and ease of presentation, sometimes sacrificing exhaustiveness for reasons of space.
Introduction
Complexity of reasoning has been one of the major issues in the development of Description Logics. This is because such logics are conceived [Brachman and Levesque, 1984] as the formal specification of subsystems for representing knowledge, to be used in larger knowledge-based systems. Since using knowledge also means deriving implicit facts from the given ones, the implementation of derivation procedures should take into account the optimality of reasoning algorithms. The study of optimal algorithms starts from the elicitation of the computational complexity of the problem the algorithm should solve. Initially, studies about the complexity of reasoning problems in Description Logics were more focused on polynomial-time versus intractable (NP- or coNP-hard) problems. The idea was that a knowledge representation system based on a Description Logic with polynomial-time inference problems would guarantee timely answers to the rest of the system. However, once systems based on very expressive Description Logics with exponential-time reasoning problems were implemented [Horrocks, 1998b], it was recognized that knowledge bases of realistic size could be processed in reasonable time.
The model of timed automata, introduced by Alur and Dill (1990, 1994), has by now established itself as a classical formalism for modelling real-time systems with a dense representation of time. The development of the timed-automata formalism was carried out largely in parallel with – and independently of – the work on timed extensions of process algebras. Roughly speaking, whereas the development of timed process algebras was driven by their (relative) expressiveness, their revealing of new behavioural equivalences and their axiomatizations, the development of the timed-automata formalism was largely driven by the goal of obtaining decidability results for several important properties (Dill, 1989). By now, real-time model checking tools such as Uppaal (Behrmann, David and Larsen, 2004) and Kronos (Bozga et al., 1998) are based on the timed-automata formalism and on the substantial body of research on this model that has been targeted towards transforming the early decidability results into practically efficient algorithms.
Motivation
Timed automata are essentially nondeterministic finite automata equipped with a finite number of real-valued clocks, so that transitions can be guarded by conditions on clock values and performing a particular transition can reset selected clocks. We shall now introduce the formalism intuitively, showing how the light switch from the start of the previous chapter can be described using the formalism of timed automata without recourse to assumptions such as the urgency of some actions or maximal progress. Graphically, we could model the light switch as in Figure 10.1.
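The intuition can also be rendered in code. The following Python sketch interprets a light-switch timed automaton with one clock; the location names and the 1.4 time-unit threshold are illustrative assumptions on our part, not necessarily the constants of Figure 10.1.

```python
# Illustrative interpreter for a light-switch timed automaton with three
# locations (off, light, bright) and one real-valued clock x.  The 1.4
# time-unit threshold and the location names are our own assumptions.

class LightSwitch:
    def __init__(self):
        self.location = "off"
        self.x = 0.0  # real-valued clock

    def delay(self, d):
        """Time passes: all clocks advance by d."""
        self.x += d

    def press(self):
        """Discrete transition; some edges are guarded by clock constraints."""
        if self.location == "off":
            self.location = "light"
            self.x = 0.0              # this edge resets the clock
        elif self.location == "light" and self.x <= 1.4:
            self.location = "bright"  # guard x <= 1.4 is satisfied
        else:                         # light with x > 1.4, or bright
            self.location = "off"

sw = LightSwitch()
sw.press(); sw.delay(0.5); sw.press()
print(sw.location)  # bright: second press came within 1.4 time units
sw.delay(2.0); sw.press()
print(sw.location)  # off: pressing in bright switches the light off
```

The two ingredients the sketch exhibits – delay steps that advance all clocks uniformly, and discrete steps guarded by clock constraints that may reset clocks – are exactly what distinguishes timed automata from ordinary finite automata.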
This chapter considers, on the one hand, extensions of Description Logics by features not available in the basic framework, but considered important for using Description Logics as a modeling language. In particular, it addresses the extensions concerning: concrete domain constraints; modal, epistemic, and temporal operators; probabilities and fuzzy logic; and defaults.
On the other hand, it considers non-standard inference problems for Description Logics, i.e., inference problems that – unlike subsumption or instance checking – are not available in all systems, but have turned out to be useful in applications. In particular, it addresses the non-standard inference problems: least common subsumer and most specific concept; unification and matching of concepts; and rewriting.
Introduction
Chapter 2 introduces the language ALCN as a prototypical Description Logic, defines the most important reasoning tasks (like subsumption, instance checking, etc.), and shows how these tasks can be realized with the help of tableau-based algorithms. For many applications, the expressive power of ALCN is not sufficient to express the relevant terminological knowledge of the application domain. Some of the most important extensions of ALCN by concept and role constructs have already been briefly introduced in Chapter 2; these and other extensions have then been treated in more detail in Chapter 5. All these extensions are “classical” in the sense that their semantics can easily be defined within the model-theoretic framework introduced in Chapter 2. However, although such extensions may increase the complexity of reasoning (in some cases even to undecidability), all the Description Logics obtained this way can only be used to represent time-independent, objective, and certain knowledge.
I've just come back from the 45th Annual Meeting of the Association for Computational Linguistics (ACL) in Prague; this was the biggest ever ACL conference, with, for the first time, more than 1,000 people attending. Attendance at ACL conferences has been growing year on year, and that is a sign of a healthy field. Another sign of health is industry sponsorship. For this year's conference, the Gold Sponsor was Google, and Microsoft and Yahoo! were Silver Sponsors, along with a few companies we have not seen as ACL sponsors before: Textkernel, NewsTin, and – a name that seems now to pop up regularly in this column – Powerset. There are all sorts of reasons why companies sponsor conferences like this, but clearly a major purpose is to make themselves visible to potential employees. And, if companies are hiring, that's good news across the board: it gives us a way of attracting more students into the field, and more generally, it speaks to the industrial and commercial relevance of what we do. There is nothing like external validation – especially commercial validation – to wash away those niggling self-doubts about the utility of your research endeavours. I remember attending MT Summit VII in Singapore in 1999, when Jo Lernout, then of Lernout & Hauspie, gave an invited talk in which (I'm sure I'm remembering this correctly) he said his ambition was to hire everyone in the hall. There were around 250 attendees – big for an NLP conference at the time – so that created quite a buzz. Jo wanted to hire everyone, not just cherry-pick those with the near-to-product big ideas; in his vision of the future, every teensy-weensy tightly-focused research contribution had a role to play. For just a moment, everybody felt wanted.
This article provides series expansions of the stationary distribution of a finite Markov chain. This leads to an efficient numerical algorithm for computing the stationary distribution of a finite Markov chain. Numerical examples are given to illustrate the performance of the algorithm.
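The article's series-expansion algorithm is not reproduced here, but as a point of reference, the quantity it computes – the stationary distribution of a finite Markov chain – can be obtained by plain power iteration on the transition matrix. This is a generic baseline sketch under the assumption of an ergodic chain, not the paper's method.

```python
def stationary(P, tol=1e-12, max_iter=100_000):
    """Stationary distribution of a finite Markov chain with row-stochastic
    transition matrix P (list of lists), computed by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(max_iter):
        # One step of pi <- pi P.
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# Two-state chain; solving pi = pi P by hand gives pi = (2/3, 1/3).
P = [[0.9, 0.1],
     [0.2, 0.8]]
print(stationary(P))
```

For large or ill-conditioned chains, a direct linear solve of π(I − P) = 0 with the normalization Σπᵢ = 1, or a specialized method such as the one the article develops, will generally be preferable.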
We study stochastic optimal control for an assemble-to-order system with multiple products and components that arrive at the system in random batches according to renewal reward processes. Our purpose is to maximize expected infinite-horizon discounted profit by selecting product prices, component production rates, and a dynamic sequencing rule for assembly. We refine the solution of a static planning problem and a discrete-review policy to the batch-arrival environment and develop an asymptotically optimal policy for the system operating under heavy traffic, in which regime the system can be approximated by a diffusion process and exhibits a state-space collapse property.
Suppose that there are n types of coupons and that each new coupon collected is of type i with probability pi. Suppose, further, that there are m subsets of coupon types and that coupons are collected until all the types in at least one of these subsets have been collected. When these subsets are disjoint, we derive expressions for the mean and variance of the number of coupons needed. In the general case, where the subsets may overlap, we derive the mean of the number needed. We also note that this number is an increasing-failure-rate-on-average (IFRA) random variable, and we present a conjecture on a sufficient condition for it to be an increasing-failure-rate (IFR) random variable.
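The setup can be illustrated with a short Monte Carlo sketch (the function name is ours). In the classical special case of a single subset containing all n types with uniform probabilities, the expected number of coupons is n·H_n, e.g. 3·(1 + 1/2 + 1/3) = 5.5 for n = 3, which the simulation recovers.

```python
import random

random.seed(0)

def coupons_needed(p, subsets):
    """Draw coupons (type i with probability p[i]) until every type in at
    least one of the given subsets has been collected; return the count."""
    collected = set()
    draws = 0
    types = list(range(len(p)))
    while not any(s <= collected for s in subsets):
        collected.add(random.choices(types, weights=p)[0])
        draws += 1
    return draws

# Classical coupon collector: one subset containing all n = 3 types,
# uniform probabilities; the theoretical mean is 3 * (1 + 1/2 + 1/3) = 5.5.
n = 3
p = [1 / n] * n
subsets = [set(range(n))]
est = sum(coupons_needed(p, subsets) for _ in range(20_000)) / 20_000
print(est)  # Monte Carlo estimate; theoretical mean is 5.5
```

With overlapping subsets the same simulation applies unchanged, since the stopping condition only asks whether *some* subset is fully collected.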
This article investigates reliability properties of a flexible extended linear failure-rate family of distributions generated using the relation F̄(x) = α F̄₀(x) / [1 − ᾱ F̄₀(x)], where α > 0, ᾱ = 1 − α, and F̄₀(x) is the reliability function of the linear failure-rate distribution.
The concept of generalized order statistics was introduced as a unified approach to a variety of models of ordered random variables. The purpose of this article is to establish the usual stochastic and the likelihood ratio orderings of conditional distributions of generalized order statistics from one sample or two samples, strengthening and generalizing the main results in Khaledi and Shaked [15], and Li and Zhao [17]. Some applications of the main results are also given.