In the next several chapters we discuss the application of our methodology for representing knowledge about dynamic domains, together with ASP programming techniques, to the design of intelligent agents capable of acting in a changing environment. The design is based on the agent architecture from Section 1.1. In this chapter we address planning — one of the most important and well-studied tasks that an intelligent agent should be able to perform (see step 3 of the agent loop from Section 1.1).
Classical Planning with a Given Horizon
We start with classical planning in which a typical problem is defined as follows:
• A goal is a set of fluent literals that the agent wants to become true.
• A plan for achieving a goal is a sequence of agent actions that takes the system from the current state to one that satisfies this goal.
• Problem: Given a description of a deterministic dynamic system, its current state, and a goal, find a plan to achieve this goal.
A sequence α of actions is called a solution to a classical planning problem if the problem's goal becomes true at the end of the execution of α.
In this chapter we show how to use ASP programming techniques to solve a special case of the classical planning problem in which the agent has a limit on the length of the allowed plans. The limit is often referred to as the horizon of the planning problem.
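To make the horizon-bounded setting concrete, here is a minimal sketch in plain Python (not ASP): a brute-force planner that searches all action sequences up to a given horizon in a toy two-switch domain. The domain, fluent names, and function names are all illustrative, not taken from the text.

```python
from itertools import product

# Toy deterministic domain: two switches the agent can toggle.
# A state is a frozenset of the fluents that hold in it; an action
# is a function mapping a state to its successor state.

def toggle(switch):
    def act(state):
        fluent = f"on({switch})"
        return state - {fluent} if fluent in state else state | {fluent}
    return act

ACTIONS = {"toggle(s1)": toggle("s1"), "toggle(s2)": toggle("s2")}

def plan(initial, goal, horizon):
    """Search all action sequences of length 0..horizon; return the
    first whose final state satisfies the goal, or None."""
    for length in range(horizon + 1):
        for seq in product(ACTIONS, repeat=length):
            state = frozenset(initial)
            for name in seq:
                state = ACTIONS[name](state)
            if goal <= state:  # every goal literal holds
                return list(seq)
    return None  # no plan within the horizon

print(plan(frozenset(), {"on(s1)", "on(s2)"}, horizon=3))
```

An ASP planner delegates this combinatorial search to a solver instead of enumerating sequences explicitly, but the problem being solved — find a bounded action sequence whose execution makes the goal true — is the same.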
So far, we have limited our attention to static domains — no attempt was made to represent a domain's evolution in time. Recall from the introduction that we are interested in agents that are intended to populate dynamic, changing domains and should therefore be able to plan, explain unexpected observations, and do other types of reasoning requiring the ability to predict effects of series of complex actions. This can be done only if the agent has sufficient knowledge about actions and their effects. In this chapter we discuss one of several current approaches to representing and reasoning with such knowledge. We start by looking at an extended example that illustrates some of the issues that arise when we attempt to represent actions and their effects on the world. Once some of these issues become clear, we present a general, formal theory of actions and change, with further examples on how to apply it to various domains. The theory views the world as a dynamic system whose states are changed by actions, and provides an “action language” for describing such systems. This language allows concise and mathematically accurate descriptions of the system's states and of possible state-action-state transitions; it allows us to represent dynamic domains and their laws. Such representations can be translated into ASP programs that are used to give the precise semantics of the language. Later, we show how this and similar translations can be used to answer queries about the effects of actions in a given situation.
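To make the dynamic-system view concrete, here is a minimal sketch (all names are illustrative) of a transition diagram given explicitly as a set of state-action-state triples, together with the "result of executing a sequence of actions" that such a diagram determines:

```python
# A dynamic system as an explicit transition diagram: states are truth
# assignments to the single fluent "open"; TRANSITIONS maps a
# (state, action) pair to the successor state.

TRANSITIONS = {
    (frozenset(), "open_door"): frozenset({"open"}),
    (frozenset({"open"}), "close_door"): frozenset(),
}

def result(state, actions):
    """Follow a sequence of actions through the diagram; return None if
    some action is not executable in the state reached so far."""
    for a in actions:
        nxt = TRANSITIONS.get((state, a))
        if nxt is None:
            return None
        state = nxt
    return state

print(result(frozenset(), ["open_door", "close_door"]))
```

An action language lets us describe such diagrams concisely by causal laws instead of enumerating every transition, which is essential once the number of states grows.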
So far we have used our ASP knowledge bases to get information about the truth or falsity of some statements or to find objects satisfying some simple properties. These types of tasks are normally performed by database systems. Even though the language's ability to express recursive definitions and the methodology of representing defaults and various forms of incomplete information gave us additional power and allowed us to construct rich and elaboration-tolerant knowledge bases, the types of queries essentially remained the same as in databases.
In this chapter we illustrate how significantly different computational problems can be reduced to finding answer sets of logic programs. The method of solving computational problems by reducing them to finding the answer sets of ASP programs is often called the answer-set programming (ASP) paradigm. It has been used for finding solutions to a variety of programming tasks, ranging from building decision support systems for the Space Shuttle and computer system configuration to solving problems arising in bio-informatics, zoology, and linguistics. In principle, any NP-complete problem can be solved in this way using programs without disjunction. Even more complex problems can be solved if disjunctive programs are used. In this chapter we illustrate the ASP paradigm by several simple examples. More advanced examples involving larger knowledge representation components are discussed in later chapters.
There are currently several ASP inference engines, called ASP solvers, capable of computing answer sets of programs with millions of ground rules.
Mappings are logical specifications of the relationship between schemas. In data exchange, one typically restricts the kind of dependencies allowed in mappings, either to be able to find more efficient procedures for constructing solutions and answering target queries, or to make mappings have desirable properties, such as closure under composition. These two goals can be at odds. For instance, the mapping language of SO tgds ensures closure under composition, but such mappings include a form of second-order quantification that can be difficult to handle in practice. Thus, it is desirable to replace an SO tgd by an equivalent set of st-tgds whenever possible.
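To fix notation, the contrast just drawn can be seen in two schematic dependencies (the relation names Emp and Dept are illustrative, not from the text):

```latex
% An st-tgd: a first-order sentence whose premise is a conjunction of
% source atoms and whose conclusion existentially quantifies over values.
\forall e\,\forall d\,\bigl(\mathit{Emp}(e,d)\rightarrow\exists m\;\mathit{Dept}(d,m)\bigr)

% An SO tgd: existentially quantified function symbols (here f) --
% the form of second-order quantification mentioned above.
\exists f\,\forall e\,\forall d\,\bigl(\mathit{Emp}(e,d)\rightarrow\mathit{Dept}(d,f(e,d))\bigr)
```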
In this chapter, we consider the problem of simplifying schema mappings by providing characterizations of the most common classes of mappings in terms of the structural properties they satisfy. The main goal for studying these properties is to isolate the features that different classes of mappings satisfy, and to understand what one can lose or gain by switching from one class of mappings to another. We present basic structural properties and then we use them to characterize the class of mappings specified by st-tgds, both generally, and in LAV and GAV scenarios. We also show that the structural characterizations can be used to derive complexity-theoretical results for testing definability of a mapping into some class of mappings.
So far we have tacitly assumed that one uses a native XML DBMS for performing data exchange tasks. However, this is not the only (and perhaps not even the most common) route: XML documents are often stored in relational DBMSs. Thus, it is natural to ask whether relational data exchange techniques, developed in Part Two, can be used to perform XML data exchange tasks.
In XML terminology, translations from XML to relations are referred to as shredding of documents, whereas translations going the other way, from relations to XML, are referred to as publishing. Thus, to use relational technology for XML data exchange tasks, we can employ a two-step approach:
shred XML data into relations;
then apply a relational data-exchange engine (and publish the result back as an XML document if necessary).
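As an illustration of the first step, here is a minimal, hedged sketch of a generic "edge relation" shredding in Python; real systems typically use schema-aware shreddings, and all names below are our own:

```python
import xml.etree.ElementTree as ET
from itertools import count

# Generic shredding: store each XML node as a relational tuple
# (node_id, parent_id, label, text), preserving the tree via parent_id.

def shred(xml_string):
    root = ET.fromstring(xml_string)
    rows, ids = [], count()

    def visit(node, parent_id):
        node_id = next(ids)
        rows.append((node_id, parent_id, node.tag, (node.text or "").strip()))
        for child in node:
            visit(child, node_id)

    visit(root, None)
    return rows

for row in shred("<db><book><title>Logic</title></book></db>"):
    print(row)
```

Publishing is the inverse step: reassembling a document from such tuples by following the parent links.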
The approach seems very natural, but the key question is whether it will work correctly. That is, are we guaranteed to get the same result as we would have gotten had we implemented a native XML data-exchange system? This is what we investigate in this chapter. It turns out that we need to impose restrictions on XML schema mappings to enable this approach, and the restrictions are similar to those we needed to ensure tractability of data exchange tasks in the previous chapters.
Translations and correctness
We now describe what we mean by correctness of translations that enable a relational data exchange system to perform XML data exchange tasks.
In data exchange, we are interested in computing certain answers to a query. However, we do not yet know when such a computation is feasible. The goal of this chapter is to answer this question.
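For reference, the quantity whose computation is at issue is the standard one: the certain answers of a query over a source instance are the tuples returned in every solution, that is,

```latex
\mathrm{certain}_{\mathcal{M}}(Q,S)\;=\;\bigcap\,\bigl\{\,Q(T)\;:\;T\ \text{is a solution for}\ S\ \text{under}\ \mathcal{M}\,\bigr\}
```

so computing them in principle quantifies over an infinite set of solutions; the results below identify when this can be done effectively.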
The bad news is that the problem of computing certain answers for relational calculus (equivalently, relational algebra or FO) queries is undecidable, even in the absence of target dependencies. But the good news is that the problem becomes decidable, and, indeed, tractable, for unions of conjunctive queries over mappings with a weakly acyclic set of tgds. Conjunctive queries, as was already mentioned several times, play a very important role and are very common, and mappings with weakly acyclic sets of tgds, as we have seen, are the ones behaving particularly well when it comes to materializing solutions.
The positive result, however, breaks down when we extend conjunctive queries with inequalities. But we can still find a meaningful class of queries that extends conjunctive queries with a limited amount of negation, is capable of expressing interesting properties in the data exchange context, and shares most of their good properties for data exchange.
Finally, we study the notion of query rewriting, i.e., when certain answers to a query Q can be computed by posing a possibly different query Q′ over a materialized solution. Such rewritings are easy for unions of conjunctive queries; our study concentrates on rewritings of relational algebra queries.
So far we have concentrated on handling data in data exchange, i.e., transforming source databases into target ones, and answering queries over them. We now look at manipulating information about schemas and schema mappings, known as metadata, i.e., we deal with metadata management. In this short chapter we outline the key problems that need to be addressed in the context of metadata management. These fall into two groups: the first concerns reasoning about mappings, and the second concerns manipulating mappings, i.e., building new mappings from existing ones.
Reasoning about schema mappings
As we have seen, mappings are logical specifications of the relationship between schemas, both in the relational and XML scenarios. In particular, we have seen many different logical languages that are used to specify mappings. Thus, a first natural problem that one would like to study in the context of metadata management is to characterize the properties that a mapping satisfies depending on the logical formulae that are used to define it. More precisely, one would like, in the first place, to understand whether the logical formulae used to specify a mapping are excessively restrictive in the sense that no source instance admits a solution, or at least restrictive in the sense that some source instances do not admit solutions. Note that this is different from the problem of checking for the existence of solutions, studied in Chapter 5.
Data exchange, as the name suggests, is the problem of exchanging data between different databases that have different schemas. One often needs to exchange data between existing legacy databases, whose schemas cannot be easily modified, and thus one needs to specify rules for translating data from one database to the other. These rules are known as schema mappings. Once a source database and a schema mapping are given, one needs to transfer data to the target, i.e., construct a target database. And once the target database is constructed, one needs to answer queries against it.
This problem is quite old; it has been studied, and systems have been built, but it was done in a rather ad hoc way. A systematic study of the problem of data exchange commenced with the 2003 paper “Data exchange: semantics and query answering” by Fagin, Kolaitis, Miller, and Popa, published in the proceedings of the International Conference on Database Theory. A large number of follow-up papers appeared, and for a while data exchange was one of the most active research topics in databases. Foundational questions related to data exchange largely revolved around three key problems:
how to build a target solution;
how to answer queries over target solutions; and
how to manipulate schema mappings themselves.
The last question is also known under the name of metadata management, since mappings represent metadata, rather than data in the database.