Planning with Incomplete Information in Quantified Answer Set Programming

We present a general approach to planning with incomplete information in Answer Set Programming (ASP). More precisely, we consider the problems of conformant and conditional planning with sensing actions and assumptions. We represent planning problems using a simple formalism where logic programs describe the transition function between states, the initial states and the goal states. For solving planning problems, we use Quantified Answer Set Programming (QASP), an extension of ASP with existential and universal quantifiers over atoms that is analogous to Quantified Boolean Formulas (QBFs). We define the language of quantified logic programs and use it to represent the solutions to different variants of conformant and conditional planning. On the practical side, we present a translation-based QASP solver that converts quantified logic programs into QBFs and then executes a QBF solver, and we evaluate experimentally the approach on conformant and conditional planning benchmarks. Under consideration for acceptance in TPLP.


Introduction
We propose a general and uniform framework for planning in Answer Set Programming (ASP; Lifschitz 2002). Apart from classical planning, that is, planning with deterministic actions and complete initial states, our focus lies on conformant and conditional planning. While the former extends the classical setting by incomplete initial states, the latter adds sensing actions and conditional plans. Moreover, we allow for making assumptions to counterbalance missing information.
To illustrate this, let us consider the following example. There is a cleaning robot in a corridor going through adjacent rooms that may be occupied by people. The robot can go to the next room and can sweep its current room to clean it, but it should not sweep a room if it is occupied. We assume that nothing changes if the robot tries to go further than the last room, or if it sweeps a room that is already clean. The goal of the robot is to clean all rooms that are not occupied by people. We present a solution for any number of rooms, but in our examples we consider only two.
Classical planning. Consider an initial situation where the robot is in the first room, only the first room is clean, and no room is occupied. In this case, the classical planning problem is to find a plan that, applied to the initial situation, achieves the goal. The plan, where the robot goes to the second room and then sweeps it, solves this problem.
Conformant planning. Consider now that the robot initially does not know whether the rooms are clean or not. There are four possible initial situations, depending on the state of cleanliness of the two rooms. In this case, the conformant planning problem is to find a plan that, applied to all possible initial situations, achieves the goal. The plan, where the robot sweeps, goes to the second room and sweeps it, solves that problem.
So far rooms were unoccupied. Now consider that initially the robot knows that exactly one of the rooms is occupied, but does not know which. Combining the previous four options about cleanliness with the two new options about occupancy, there are eight possible initial situations. It turns out that there is no conformant plan for this problem. The robot would have to leave the occupied room as it is and sweep the other, but there is no way of doing that without knowing which is the occupied room.
Assumption-based planning. At this point, the robot can try to make assumptions about the unknown initial situation, and find a plan that at least works under these assumptions, hoping that they will indeed hold in reality. In this case, a conformant planning problem with assumptions is to find a plan and a set of assumptions such that the assumptions hold in some possible initial situation, and the plan, applied to all possible initial situations satisfying the assumptions, achieves the goal. Assuming that room one is occupied, the plan where the robot goes to room two and then sweeps it, solves the problem. Another solution is to assume that the second room is occupied and simply sweep the first room.
Conditional planning. The robot has a safer approach, if it can observe the occupancy of a room and prepare a different subplan for each possible observation. This is similar to conformant planning, but now plans have actions to observe the world and different subplans for different observations. In our example, there is a conditional plan where the robot first observes if the first room is occupied, and if so, goes to the second room and sweeps it, otherwise it simply sweeps the first room. The robot could also make some assumptions about the initial situation, but this is not needed in our example.
Unfortunately, the expressiveness of regular ASP is insufficient to capture this variety of planning problems. While bounded classical and conformant planning are expressible, since their corresponding decision problems lie at the first and second level of the polynomial hierarchy, respectively, bounded conditional planning is PSPACE-complete (Turner 2002).
To match this level of complexity, we introduce a quantified extension of ASP, called Quantified Answer Set Programming (QASP), in analogy to Quantified Boolean Formulas (QBFs). More precisely, we start by capturing the various planning problems within a simple uniform framework centered on the concept of transition functions, mainly by retracing the work of Son et al. (2005; 2007). The core of this framework consists of a general yet simplified fragment of logic programs that aims at representing transition systems, similar to action languages (Gelfond and Lifschitz 1998) and (present-centered) temporal logic programs (Cabalar et al. 2018). We then extend the basic setting of ASP with quantifiers and define quantified logic programs, in analogy to QBFs. Although we apply QASP to planning problems, it is of a general nature, and the same holds for its implementation, obtained via a translation of quantified logic programs to QBFs. This allows us to represent the above spectrum of planning problems by quantified logic programs and to compute their solutions with our QASP solver. Interestingly, the core planning problems are described by means of the aforementioned simple language fragment, while the actual type of planning problem is more or less expressed via quantification. Finally, we empirically evaluate our solver on conformant and conditional planning benchmarks.

Background
We consider normal logic programs over a set A of atoms with choice rules and integrity constraints. A rule r has the form H ← B where B is a set of literals, and H is either an atom p, and we call r a normal rule, or {p} for some atom p, making r a choice rule, or ⊥, making r an integrity constraint. We usually drop braces from rule bodies B, and also use l, B instead of {l} ∪ B for a literal l. We also abuse notation and identify sets of atoms X with sets of facts {p ← | p ∈ X}. A (normal) logic program is a set of (normal) rules. As usual, rules with variables are viewed as shorthands for the set of their ground instances. We explain further practical extensions of ASP, like conditional literals and cardinality constraints, in passing. Semantically, we identify a body B with the conjunction of its literals, the head of a choice rule {p} with the disjunction p ∨ ¬p, a rule H ← B with the implication B → H, and a program with the conjunction of its rules. A set of atoms X ⊆ A is a stable model of a logic program P if it is a subset-minimal model of the formula that results from replacing in P any literal by ⊥ if it is not satisfied by X. We let SM (P ) stand for the set of stable models of P .
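As a sanity check of this definition, the subset-minimality test can be spelled out directly. The following is a brute-force sketch in Python under our own encoding of rules as triples (head, positive body, negative body); all names are ours, the enumeration is exponential, and it is meant only for tiny programs.

```python
from itertools import combinations

# A rule is (head, pos, neg): head is an atom, "choice:p" for the choice
# rule {p}, or None for an integrity constraint; pos/neg are sets of atoms.

def is_model(program, X):
    """Classical satisfaction: every rule whose body holds in X has a
    satisfied head ({p} heads are always satisfied)."""
    for head, pos, neg in program:
        if pos <= X and not (neg & X):        # body holds in X
            if head is None:
                return False                  # violated constraint
            if not head.startswith("choice:") and head not in X:
                return False
    return True

def is_stable(program, X):
    """X is stable iff X models the program and no strict subset of X
    models the simplification of the program w.r.t. X."""
    if not is_model(program, X):
        return False
    reduct = []
    for head, pos, neg in program:
        if neg & X:                           # negative literal falsified by X
            continue
        if head and head.startswith("choice:"):
            p = head[len("choice:"):]
            if p not in X:                    # head p or not p trivially true
                continue
            head = p
        reduct.append((head, pos, set()))
    for k in range(len(X)):                   # check subset-minimality
        if any(is_model(reduct, set(Y)) for Y in combinations(sorted(X), k)):
            return False
    return True
```

On the program P1 = {{a} ←; {b} ←; c ← a; c ← b; ⊥ ← ¬c} used later in the paper, this test accepts exactly the stable models {a, c}, {b, c}, and {a, b, c}.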
The dependency graph of a logic program P has nodes A, a positive edge p → q if there is a rule whose head is either q or {q} and whose body contains p, and a negative edge p → q if there is a rule with head q or {q} and ¬p in its body. A logic program is stratified if its dependency graph has no cycle involving a negative edge. Note that stratified normal programs have exactly one stable model, unlike more general stratified programs.
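The stratification test can likewise be sketched by building the dependency graph and asking whether a negative edge lies on a cycle; a negative edge p → q is on a cycle exactly when q reaches p. The rule encoding as (head, pos, neg) triples is again our own.

```python
def is_stratified(rules):
    """True iff no cycle of the dependency graph uses a negative edge.
    Rules are (head, pos, neg) triples; head None is a constraint."""
    pos_edges, neg_edges = set(), set()
    for head, pos, neg in rules:
        if head is None:                  # constraints add no head atom
            continue
        h = head.removeprefix("choice:")
        pos_edges |= {(p, h) for p in pos}
        neg_edges |= {(p, h) for p in neg}
    edges = pos_edges | neg_edges

    def reaches(x, y):                    # is there a path x -> ... -> y?
        seen, todo = set(), [x]
        while todo:
            u = todo.pop()
            for a, b in edges:
                if a == u and b not in seen:
                    seen.add(b)
                    todo.append(b)
        return y in seen

    # a negative edge p -> q lies on a cycle iff q reaches back to p
    return not any(reaches(q, p) for p, q in neg_edges)
```

For instance, the program {p ← ¬q; q ← ¬p} is not stratified, while {q ← p; r ← ¬q; {p} ←} is.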
ASP rests on a Generate-define-test (GDT) methodology (Lifschitz 2002). Accordingly, we say that a logic program P is in GDT form if it is stratified and all choice rules in P are of the form {p} ← such that p does not occur in the head of any other rule in P . In fact, GDT programs constitute a normal form because every logic program can be translated into GDT form by using auxiliary atoms (Niemelä 2008; Fandinno et al. 2020).
Quantified Boolean formulas (QBFs; Giunchiglia et al. 2009) extend propositional formulas by existential (∃) and universal (∀) quantification over atoms. We consider QBFs over A of the form

Q 0 X 0 . . . Q n X n φ (1)

where n ≥ 0, X 0 , . . . , X n are pairwise disjoint subsets of A, every Q i is either ∃ or ∀, and φ is a propositional formula over A in conjunctive normal form (CNF). QBFs as in (1) are in prenex conjunctive normal form. More general QBFs can be transformed to this form in a satisfiability-preserving way (Giunchiglia et al. 2009). Atoms in X i are existentially (universally) quantified if Q i is ∃ (∀). Sequences of quantifiers and sets Q 0 X 0 . . . Q n X n are called prefixes, and abbreviated by Q. With it, a QBF as in (1) can be written as Qφ. As usual, we represent CNF formulas as sets of clauses, and clauses as sets of literals.
For sets X and Y of atoms such that X ⊆ Y , we define fixbf (X, Y ) as the set of clauses {{p} | p ∈ X} ∪ {{¬p} | p ∈ Y \ X} that selects models containing the atoms in X and no other atom from Y . That is, if φ is a formula then the models of φ ∪ fixbf (X, Y ) are {M | M is a model of φ and M ∩ Y = X}. Given that a CNF formula is satisfiable if it has some model, the satisfiability of a QBF can be defined as follows: a QBF φ with an empty prefix is satisfiable if φ has some model; ∃X Qφ is satisfiable if Q(φ ∪ fixbf (Y, X)) is satisfiable for some Y ⊆ X; and ∀X Qφ is satisfiable if Q(φ ∪ fixbf (Y, X)) is satisfiable for every Y ⊆ X. The formula φ in Qφ generates a set of models, while the prefix Q can be interpreted as a kind of query over them. Consider φ 1 = {{a, b, ¬c}, {c}} and its models {{a, c}, {a, b, c}, {b, c}}. Adding the prefix Q 1 = ∃{a}∀{b} amounts to querying if there is some subset Y 1 of {a} such that for all subsets Y 2 of {b} there is some model of φ 1 that contains the atoms in Y 1 ∪ Y 2 and no other atoms from {a, b}. The answer is yes, for Y 1 = {a}, hence Q 1 φ 1 is satisfiable. One can check that letting Q 2 be ∃{b, c}∀{a} it holds that Q 2 φ 1 is satisfiable, while letting Q 3 be ∃{a}∀{b, c} we have that Q 3 φ 1 is not.
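The recursive definition above can be executed directly once the finitely many models of the matrix are given. A small Python sketch, with our own representation of prefixes and models; restricting the models to those M with M ∩ X = Y plays the role of adding fixbf (Y, X):

```python
from itertools import combinations

def subsets(xs):
    """All subsets of xs, as frozensets."""
    xs = sorted(xs)
    return [frozenset(c) for k in range(len(xs) + 1)
            for c in combinations(xs, k)]

def sat_qbf(prefix, models):
    """Satisfiability of a QBF given the set of models of its CNF matrix
    (each model a frozenset of atoms); prefix is a list of ('E'|'A', X)."""
    if not prefix:
        return bool(models)               # a formula is satisfiable if it
    (q, X), rest = prefix[0], prefix[1:]  # has some model
    sub = (sat_qbf(rest, {m for m in models if m & frozenset(X) == Y})
           for Y in subsets(X))
    return any(sub) if q == "E" else all(sub)

# models of phi1 = {{a, b, not c}, {c}}
M1 = {frozenset({"a", "c"}), frozenset({"a", "b", "c"}),
      frozenset({"b", "c"})}
```

Running it on φ 1 reproduces the verdicts above: Q 1 φ 1 and Q 2 φ 1 are satisfiable, Q 3 φ 1 is not.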

Planning Problems
In this section, we define different planning problems with deterministic and non-concurrent actions using a transition function approach building on the work of Tu et al. (2007). The domain of a planning problem is described in terms of fluents, i.e. properties changing over time, normal actions, and sensing actions for observing fluent values. We represent them by disjoint sets F , A n , and A s of atoms, respectively, let A be the set A n ∪A s of actions, and assume that F and A are non-empty. For clarity, we denote sensing actions in A s by a f for some f ∈ F , indicating that a f observes fluent f . To simplify the presentation, we assume that sets F , A n , A s and A are fixed. A state s is a set of fluents, s ⊆ F , that represents a snapshot of the domain. To describe planning domains, we have to specify what is the next state after the application of actions. Technically, this is done by a transition function Φ, that is, a function that takes as arguments a state and an action, and returns either one state or the bottom symbol ⊥. Formally, Φ : P(F ) × A → P(F ) ∪ {⊥}, where P(F ) denotes the power set of F . The case where Φ(s, a) = ⊥ represents that action a is not executable in state s.
Example. Let R be the set of rooms {1, . . . , r}. We represent our example domain with the fluents F = {at(x), clean(x), occupied(x) | x ∈ R}, normal actions A n = {go, sweep}, and sensing actions A s = {sense(occupied(x)) | x ∈ R}. For r = 2, s 1 = {at(1), clean(1)} is the state representing the initial situation of our classical planning example. The transition function Φ e can be defined as follows: Φ e (s, go) is (s \ {at(x)}) ∪ {at(x + 1)} if at(x) ∈ s for some x < r, and is s if at(r) ∈ s; Φ e (s, sweep) is s ∪ {clean(x)} if at(x) ∈ s and occupied(x) ∉ s for some x ∈ R, and is ⊥ otherwise; and, for all x ∈ R, Φ e (s, sense(occupied(x))) is s if at(x) ∈ s and is ⊥ otherwise.
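The transition function of the example can be written down operationally. A Python sketch with states as frozensets of fluent strings and None encoding ⊥ (the representation is ours):

```python
def phi(s, a, r=2):
    """One-step transition of the robot domain (our sketch); None is bottom.
    Sensing actions are encoded as pairs ("sense", fluent)."""
    x = next(i for i in range(1, r + 1) if f"at({i})" in s)  # current room
    if a == "go":                               # no effect in the last room
        return s if x == r else (s - {f"at({x})"}) | {f"at({x + 1})"}
    if a == "sweep":                            # forbidden if occupied
        return None if f"occupied({x})" in s else s | {f"clean({x})"}
    return s if a == ("sense", f"occupied({x})") else None
```

For instance, from s 1 = {at(1), clean(1)} the action go leads to {at(2), clean(1)}, a second go changes nothing, and sweep is inapplicable in a state containing at(1) and occupied(1).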
Once we have fixed the domain, we can define a planning problem as a tuple ⟨Φ, I, G⟩ where Φ is a transition function, and I, G ⊆ P(F ) are non-empty sets of initial and goal states. A conformant planning problem is a planning problem with no sensing actions, viz. A s = ∅, and a classical planning problem is a conformant planning problem where I is a singleton. A planning problem with assumptions is then a tuple ⟨Φ, I, G, As⟩ where ⟨Φ, I, G⟩ is a planning problem and As ⊆ F is a set of possible assumptions.
Example. The initial situation is I 1 = {s 1 } for our classical planning problem, I 2 = {{at(1)} ∪ X | X ⊆ {clean(1), clean(2)}} for the first conformant planning problem and I 3 = {X ∪ Y | X ∈ I 2 , Y ∈ {{occupied(1)}, {occupied(2)}}} for the second one. All problems share the goal states G e = {s ⊆ F | for all x ∈ R either occupied(x) ∈ s or clean(x) ∈ s}. Let PP 1 be ⟨Φ e , I 1 , G e ⟩, PP 2 be ⟨Φ e , I 2 , G e ⟩, and PP 3 be ⟨Φ e , I 3 , G e ⟩. If we disregard sensing actions (and adapt Φ e accordingly) these problems correspond to our examples of classical and conformant planning, respectively. The one of assumption-based planning is given by PP 4 = ⟨Φ e , I 3 , G e , {occupied(1), occupied(2)}⟩, and the one of conditional planning by PP 3 with sensing actions. Our next step is to define the solutions of a planning problem. For this, we specify what is a plan and extend transition functions to apply to plans and sets of states. A plan and its length are defined inductively as follows:
• [] is a plan, denoting the empty plan of length 0.
• If a ∈ A n is a non-sensing action and p is a plan, then [a; p] is a plan of length one plus the length of p.
• If a f ∈ A s is a sensing action, and p f , p ¬f are plans, then [a f ; (p f , p ¬f )] is a plan of length one plus the maximum of the lengths of p f and p ¬f .
We extend the transition function Φ to a set of states S as follows: Φ(S, a) is ⊥ if there is some s ∈ S such that Φ(s, a) = ⊥, and is ∪ s∈S Φ(s, a) otherwise. In our example, Φ e (I 1 , go) = {{at(2), clean(1)}}, Φ e (I 2 , sweep) = {{at(1), clean(1)}, {at(1), clean(1), clean(2)}}, Φ e (I 3 , sweep) = ⊥, and Φ e (Φ e (I 3 , go), sweep) = ⊥. With this, we can extend the transition function Φ to plans as follows. Let p be a plan, and S a set of states. If p = [] then Φ(S, p) = S. If p = [a; q], where a is a non-sensing action and q is a plan, then Φ(S, p) = Φ(Φ(S, a), q) if Φ(S, a) ≠ ⊥, and Φ(S, p) = ⊥ otherwise. If p = [a f ; (q f , q ¬f )], where a f is a sensing action and q f , q ¬f are plans, then Φ(S, p) = Φ(S f , q f ) ∪ Φ(S ¬f , q ¬f ), where S f = {s ∈ Φ(S, a f ) | f ∈ s} and S ¬f = {s ∈ Φ(S, a f ) | f ∉ s}, if Φ(S, a f ), Φ(S f , q f ) and Φ(S ¬f , q ¬f ) are all different from ⊥, and Φ(S, p) = ⊥ otherwise. In our example, for the plans p 1 = [go; [sweep; []]], p 2 = [sweep; [go; [sweep; []]]] and p 3 = [sense(occupied(1)); ([go; [sweep; []]], [sweep; []])], we have Φ(I 1 , p 1 ) = {{at(2), clean(1), clean(2)}}, Φ(I 2 , p 2 ) = {{at(2), clean(1), clean(2)}}, and Φ(I 3 , q) = ⊥ for any plan q without sensing actions that involves some sweep action. We can now define the solutions of planning problems: a plan p is a solution to planning problem ⟨Φ, I, G⟩ if Φ(I, p) ≠ ⊥ and Φ(I, p) ⊆ G. In our example, plan p 1 solves PP 1 , p 2 solves PP 2 , and p 3 solves PP 3 . There is no plan without sensing actions solving PP 3 . For assumption-based planning, a tuple ⟨p, T, F ⟩, where p is a plan and T, F ⊆ As, is a solution to a planning problem with assumptions ⟨Φ, I, G, As⟩ if (1) J = {s | s ∈ I, T ⊆ s, s ∩ F = ∅} is not empty, and (2) p solves the planning problem ⟨Φ, J, G⟩. Condition (1) guarantees that the true assumptions T and the false assumptions F are consistent with some initial state, and condition (2) checks that p achieves the goal starting from all initial states consistent with the assumptions. For example, the planning problem with assumptions PP 4 is solved by ⟨p 1 , {occupied(1)}, {}⟩ and by ⟨[sweep; []], {occupied(2)}, {}⟩.
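The extension of Φ to sets of states and to possibly branching plans can be sketched in the same style. The list-based plan encoding below ([], [a, q], and [("sense", f), (qt, qf)]) and all names are ours; p1, p2 and p3 are the plans of the running example.

```python
def phi(s, a, r=2):
    """One-step transition of the robot domain (our sketch); None is bottom.
    Sensing actions are encoded as pairs ("sense", fluent)."""
    x = next(i for i in range(1, r + 1) if f"at({i})" in s)
    if a == "go":
        return s if x == r else (s - {f"at({x})"}) | {f"at({x + 1})"}
    if a == "sweep":
        return None if f"occupied({x})" in s else s | {f"clean({x})"}
    return s if a == ("sense", f"occupied({x})") else None

def phi_set(S, a):
    """Phi on a set of states: bottom as soon as one transition is bottom."""
    nxt = {phi(s, a) for s in S}
    return None if None in nxt else nxt

def phi_plan(S, p):
    """Apply a plan to a set of states.  Plans are [] (empty), [a, q] for a
    normal action, or [("sense", f), (qt, qf)] with subplan qt followed in
    the states where f holds and qf in the others."""
    if S is None or not p:
        return S
    a, rest = p
    Sa = phi_set(S, a)
    if Sa is None:
        return None
    if isinstance(a, tuple):                    # branch on the observation
        f, (qt, qf) = a[1], rest
        St = phi_plan({s for s in Sa if f in s}, qt)
        Sf = phi_plan({s for s in Sa if f not in s}, qf)
        return None if St is None or Sf is None else St | Sf
    return phi_plan(Sa, rest)

cleans = [frozenset(), frozenset({"clean(1)"}), frozenset({"clean(2)"}),
          frozenset({"clean(1)", "clean(2)"})]
I1 = {frozenset({"at(1)", "clean(1)"})}
I2 = {frozenset({"at(1)"}) | c for c in cleans}
I3 = {s | frozenset({o}) for s in I2 for o in ("occupied(1)", "occupied(2)")}
p1 = ["go", ["sweep", []]]
p2 = ["sweep", ["go", ["sweep", []]]]
p3 = [("sense", "occupied(1)"), (["go", ["sweep", []]], ["sweep", []])]
```

Executing it reproduces the values quoted in the text, e.g. Φ(I 2 , p 2 ) = {{at(2), clean(1), clean(2)}} and Φ e (I 3 , sweep) = ⊥.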

Representing Planning Problems in ASP
In this section we present an approach for representing planning problems using logic programs. Let F , A n , A s and A be sets of atoms as before, and let F ′ = {f ′ | f ∈ F } be a set of atoms that we assume to be disjoint from the others. We use the atoms in F ′ to represent the values of the fluents in the previous situation. We represent planning problems by planning descriptions, that consist of dynamic rules to represent transition functions, initial rules to represent initial states, and goal rules to represent goal states. Formally, a dynamic rule is a rule whose head atoms belong to F and whose body atoms belong to A ∪ F ∪ F ′ , an initial rule is a rule whose atoms belong to F , and a goal rule is an integrity constraint whose atoms belong to F . Then a planning description D is a tuple ⟨DR, IR, GR⟩ of dynamic rules DR, initial rules IR, and goal rules GR. By D(D), I(D) and G(D) we refer to the elements DR, IR and GR, respectively.
Example. We represent the transition function Φ e by the following dynamic rules DR e :

at(X) ← go, at ′ (X−1), X > 1      clean(X) ← sweep, at(X)
at(r) ← go, at ′ (r)              clean(X) ← clean ′ (X)
at(X) ← at ′ (X), ¬go             occupied(X) ← occupied ′ (X)
⊥ ← sweep, at(X), occupied(X)     ⊥ ← sense(occupied(X)), ¬at(X)

On the left column, the normal rules describe the position of the robot depending on its previous position and the action go, while the integrity constraint below forbids the robot to sweep if it is in a room that is occupied. On the right column, the first two rules state when a room is clean, the third one expresses that rooms remain occupied if they were before, and the last integrity constraint forbids the robot to observe a room if it is not at it. For the initial states, I 1 is represented by the initial rules IR 1 = {at(1) ←; clean(1) ←}, I 2 is represented by IR 2 = {at(1) ←; {clean(R)} ← R = 1..r}, and I 3 by IR 3 , that contains the rules in IR 2 and also these ones:

{occupied(R)} ← R = 1..r      ⊥ ← {occupied(R) : R = 1..r} ≠ 1

The choice rules generate the different possible initial states, and the integrity constraint enforces that exactly one room is occupied. The goal states G e are represented by GR e = {⊥ ← ¬occupied(R), ¬clean(R), R = 1..r} that forbids states where some room is not occupied and not clean. Next we specify formally the relation between planning descriptions and planning problems. We extract a transition function Φ(s, a) from a set of dynamic rules DR by looking at the stable models of the program s ′ ∪ {a ←} ∪ DR for every state s and action a, where s ′ is {f ′ | f ∈ s}. The state s is represented as s ′ to stand for the previous situation, and the action and the dynamic rules generate the next states. Given that we consider only deterministic transition functions, we restrict ourselves to deterministic problem descriptions, where we say that a set of dynamic rules DR is deterministic if for every state s ⊆ F and action a ∈ A, the program s ′ ∪ {a ←} ∪ DR has zero or one stable model, and a planning description D is deterministic if D(D) is deterministic. A deterministic set of dynamic rules DR defines the transition function Φ DR where, for every state s ⊆ F and action a ∈ A, Φ DR (s, a) = M ∩ F if SM (s ′ ∪ {a ←} ∪ DR) = {M }, and Φ DR (s, a) = ⊥ otherwise.
Note that, given that DR is deterministic, the second condition only holds when the program s ′ ∪ {a ←} ∪ DR has no stable models.
For the case where no action occurs, we require dynamic rules to make the previous state persist. This condition is not strictly needed, but it makes the formulation of the solutions to planning problems in Section 6 easier. Formally, we say that a set of dynamic rules DR is inertial if for every state s ⊆ F it holds that SM (s ′ ∪ DR) = {s ′ ∪ s}, and we say that a planning description D is inertial if D(D) is inertial. From now on, we restrict ourselves to deterministic and inertial planning descriptions. Coming back to the initial and goal rules of a planning description D, the first ones represent the initial states SM (I(D)), while the second ones represent the goal states SM ({{f } ← | f ∈ F } ∪ G(D)). In the latter case, the choice rules generate all possible states while the integrity constraints in G(D) eliminate those that are not goal states. Given that we consider only non-empty sets of initial and goal states, we require the programs I(D) and {{f } ← | f ∈ F } ∪ G(D) to have at least one stable model. Finally, putting it all together, we say that a deterministic and inertial planning description D represents the planning problem ⟨Φ D(D) , SM (I(D)), SM ({{f } ← | f ∈ F } ∪ G(D))⟩. Moreover, the planning description D together with a set of atoms As represents the planning problem with assumptions ⟨Φ D(D) , SM (I(D)), SM ({{f } ← | f ∈ F } ∪ G(D)), As⟩.
Example. One can check that the dynamic rules DR e are deterministic, inertial, and define the transition function Φ e of the example. Moreover, D 1 = ⟨DR e , IR 1 , GR e ⟩ represents PP 1 , D 2 = ⟨DR e , IR 2 , GR e ⟩ represents PP 2 , D 3 = ⟨DR e , IR 3 , GR e ⟩ represents PP 3 , and D 3 with the set of atoms {occupied(1), occupied(2)} represents PP 4 .

Quantified Answer Set Programming
Quantified Answer Set Programming (QASP) is an extension of ASP to quantified logic programs (QLPs), analogous to the extension of propositional formulas by QBFs. A quantified logic program over A has the form

Q 0 X 0 . . . Q n X n P (2)

where n ≥ 0, X 0 , . . . , X n are pairwise disjoint subsets of A, every Q i is either ∃ or ∀, and P is a logic program over A. Prefixes are the same as in QBFs, and we may refer to a QLP as in (2) by QP . For sets X and Y of atoms such that X ⊆ Y , we define fixcons(X, Y ) as the set of rules {⊥ ← ¬x | x ∈ X} ∪ {⊥ ← x | x ∈ Y \ X} that selects stable models containing the atoms in X and no other atom from Y . That is, if P is a logic program then the stable models of P ∪ fixcons(X, Y ) are {M | M is a stable model of P and M ∩ Y = X}.
Given that a logic program is satisfiable if it has a stable model, the satisfiability of a QLP is defined as follows: a QLP P with an empty prefix is satisfiable if P has some stable model; ∃X QP is satisfiable if Q(P ∪ fixcons(Y, X)) is satisfiable for some Y ⊆ X; and ∀X QP is satisfiable if Q(P ∪ fixcons(Y, X)) is satisfiable for every Y ⊆ X. As with QBFs, program P in QP generates a set of stable models, while its prefix Q can be seen as a kind of query on it. Consider P 1 = {{a} ← ; {b} ← ; c ← a; c ← b; ⊥ ← ¬c} and its stable models {a, c}, {a, b, c}, {b, c}. The prefixes of Q 1 P 1 , Q 2 P 1 and Q 3 P 1 pose the same queries over the stable models of P 1 as those posed in Q 1 φ 1 , Q 2 φ 1 and Q 3 φ 1 over the models of φ 1 . Given that the stable models of P 1 and the models of φ 1 coincide, the satisfiability of the Q i P 1 's is the same as that of the corresponding Q i φ 1 's. This result is generalized by the following theorem that relates QLPs and QBFs.
Theorem 5.1 Let P be a logic program over A and φ be a CNF formula over A ∪ B such that SM (P ) = {M ∩ A | M is a model of φ}. For every prefix Q whose sets belong to A, the QLP QP is satisfiable if and only if the QBF Qφ is satisfiable.
The proof is by induction on the number of quantifiers in Q. The condition SM (P ) = {M ∩ A | M is a model of φ} of Theorem 5.1 is satisfied by existing polynomial-time translations from logic programs P over A to CNF formulas φ over A ∪ B, and from CNF formulas φ over A to logic programs P over A (Janhunen 2004). Using these translations, Theorem 5.1, and the fact that deciding whether a QBF is satisfiable is PSPACE-complete, we can prove the following complexity result about QASP.
Theorem 5.2 The problem of deciding whether a given QLP QP is satisfiable is PSPACE-complete.
The implementation of our system qasp2qbf relies on the previous results. The input is a QLP QP that is specified by putting together the rules of P with facts over the predicates exists/2 and forall/2 describing the prefix Q, where exists(i, a) (forall(i, a), respectively) asserts that the atom a is existentially (universally, respectively) quantified at position i of Q. The system first translates P into a CNF formula φ that satisfies the condition of Theorem 5.1 using the tools lp2normal, lp2acyc, and lp2sat, and then uses a QBF solver to decide the satisfiability of Qφ. If Qφ is satisfiable and the outermost quantifier is existential, then the system returns an assignment to the atoms occurring in the scope of that quantifier.
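For illustration, producing the prefix facts for a given QLP is mechanical. A small helper (its name is ours, and we assume that position i simply numbers the quantifier blocks from the outside in):

```python
def prefix_facts(prefix):
    """Render a prefix, given as a list of ('E'|'A', set_of_atoms) pairs,
    as exists/2 and forall/2 facts in the input format described above."""
    kind = {"E": "exists", "A": "forall"}
    return [f"{kind[q]}({i},{a})."
            for i, (q, X) in enumerate(prefix, start=1)
            for a in sorted(X)]
```

For the prefix ∃{a}∀{b} this yields the facts exists(1,a). and forall(2,b).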

Solving Planning Problems in QASP
In this section, we describe how to solve planning problems represented by a planning description using QASP.
Classical planning. We start with a planning description D that represents a classical planning problem PP = ⟨Φ, {s 0 }, G⟩. Our task, given a positive integer n, is to find a plan p = [a 1 ; . . . ; a n ] of length n such that Φ({s 0 }, p) ⊆ G. This can be solved as usual in answer set planning (Lifschitz 2002), using choice rules to generate possible plans, initial rules to define the initial state of the problem, dynamic rules replicated n times to define the next n steps, and goal rules to check the goal conditions at the last step. To do that in this context, we first let Domain be the union of three sets of facts asserting the time steps of the problem (using atoms t(T ) for T ∈ {1, . . . , n}), the actions (using atoms action(a) for a ∈ A), and the fluents that are sensed by each sensing action. The last set is only needed for conditional planning, but we already add it here for simplicity. Given these facts, the following choice rule generates the possible plans of the problem:

{occ(A, T ) : action(A)} = 1 ← t(T ) (3)

Additionally, by D • I (D • G , respectively) we denote the set of rules that results from replacing in I(D) (G(D), respectively) the atoms f ∈ F by h(f, 0) (by h(f, n), respectively); by D • D we denote the set of rules that results from replacing in D(D) the atoms f ∈ F by h(f, T ), the atoms f ′ ∈ F ′ by h(f, T − 1), the atoms a ∈ A by occ(a, T ) and adding the atom t(T ) to the body of every rule; and by D • we denote the program D • I ∪ D • D ∪ D • G . Putting all together, the program Domain ∪ (3) ∪ D • represents the solutions to the planning problem PP. The choice rule (3) guesses plans [a 1 ; . . . ; a n ] using atoms of the form occ(a 1 , 1), . . . , occ(a n , n), the rules of D • I define the initial state s 0 using atoms of the form h(·, 0), the rules of D • D define the next states s i = Φ(s i−1 , a i ) for i ∈ {1, . . . , n} using atoms of the form h(·, i) while at the same time checking that the actions a i are executable in s i−1 , and the rules of D • G check that the last state s n belongs to G.
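The guess-and-check structure of this encoding can be mimicked semantically: enumerate all candidate assignments to the occ atoms and keep those whose execution is defined and reaches the goal. A brute-force Python sketch for the robot domain (the representation is ours):

```python
from itertools import product

def phi(s, a, r=2):
    """One-step transition of the robot domain without sensing (our sketch);
    None encodes bottom, i.e. an inexecutable action."""
    x = next(i for i in range(1, r + 1) if f"at({i})" in s)
    if a == "go":
        return s if x == r else (s - {f"at({x})"}) | {f"at({x + 1})"}
    return None if f"occupied({x})" in s else s | {f"clean({x})"}

def goal(s, r=2):
    """Every room is occupied or clean."""
    return all(f"occupied({i})" in s or f"clean({i})" in s
               for i in range(1, r + 1))

def classical_plans(s0, n):
    """Enumerate candidate values for occ(a, 1..n) and keep the action
    sequences whose execution from s0 reaches the goal."""
    for plan in product(["go", "sweep"], repeat=n):
        s = s0
        for a in plan:
            s = phi(s, a)
            if s is None:
                break
        if s is not None and goal(s):
            yield list(plan)

s1 = frozenset({"at(1)", "clean(1)"})
```

For n = 2 and the initial state s 1 the only surviving sequence is go followed by sweep, matching the solution p 1 of PP 1 .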
Of course, all this works only because D represents PP and therefore Φ, {s 0 } and G are defined by D(D), I(D) and G(D), respectively. Finally, letting Occ be the set of atoms {occ(a, t) | a ∈ A, t ∈ {1, . . . , n}}, we can represent the solutions to PP by the quantified logic program

∃Occ (Domain ∪ (3) ∪ D • ) (4)

where the atoms selected by the existential quantifier correspond to solutions to PP. Going back to our example, where D 1 represents the problem PP 1 , we have that for n = 2 the program (4) (adapted to D 1 ) is satisfiable selecting the atoms {occ(go, 1), occ(sweep, 2)} that represent the solution p 1 .
Conformant planning. When D represents a conformant planning problem PP = ⟨Φ, I, G⟩ our task is to find a plan p such that Φ(I, p) ⊆ G, or, alternatively, p must be such that for all s ∈ I it holds that Φ({s}, p) ⊆ G. This formulation of the problem suggests using a prefix ∃∀ where the existential quantifier guesses a plan p and the universal quantifier considers all initial states. Let us make this more concrete. From now on we assume that I(D) is in GDT form. If that is not the case then we can translate the program into GDT form using the method mentioned in the Background section, and we continue from there. Let Gen, Def and Test be the choice rules, normal rules and integrity constraints of D • I , respectively, and let Open be the set {h(f, 0) | ({h(f, 0)} ←) ∈ Gen} of possible guesses of the choice rules of D • I . Observe that for every possible set X ⊆ Open the program X ∪ Def has a unique stable model M , and if M is also a model of Test then M is a stable model of D • I . Moreover, note that all stable models of D • I can be constructed in this manner. Given this, we say that the sets X ⊆ Open that lead to a stable model M of D • I are relevant, because they can be used as representatives of the initial states, and the other sets in Open are irrelevant. Back to our quantified logic program, we are going to use the prefix ∃Occ∀Open.
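Semantically, the ∃Occ∀Open query asks for a plan that succeeds from every initial state. A brute-force sketch for the robot domain (representation ours) confirms that PP 2 has a conformant plan of length 3 while PP 3 has none:

```python
from itertools import product

def phi(s, a, r=2):
    """One-step transition of the robot domain without sensing (our sketch);
    None encodes bottom."""
    x = next(i for i in range(1, r + 1) if f"at({i})" in s)
    if a == "go":
        return s if x == r else (s - {f"at({x})"}) | {f"at({x + 1})"}
    return None if f"occupied({x})" in s else s | {f"clean({x})"}

def succeeds(s, plan, r=2):
    """Execute a sequence of normal actions and test the goal."""
    for a in plan:
        s = phi(s, a, r)
        if s is None:
            return False
    return all(f"occupied({i})" in s or f"clean({i})" in s
               for i in range(1, r + 1))

def conformant_plans(I, n):
    """The exists-forall query: keep the plans of length n that succeed
    from every initial state in I."""
    return [list(p) for p in product(["go", "sweep"], repeat=n)
            if all(succeeds(s, p) for s in I)]

cleans = [frozenset(), frozenset({"clean(1)"}), frozenset({"clean(2)"}),
          frozenset({"clean(1)", "clean(2)"})]
I2 = [frozenset({"at(1)"}) | c for c in cleans]
I3 = [s | frozenset({o}) for s in I2 for o in ("occupied(1)", "occupied(2)")]
```

In particular, sweeping, going, and sweeping again works from all states in I 2 , whereas no sequence of go and sweep works from all states in I 3 , since any sweep may happen in the occupied room.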
This works well with the logic program Domain ∪ (3) ∪ D • whenever all choices X ⊆ Open are relevant. But it fails as soon as there are irrelevant sets because, when we select them as subsets of Open in the universal quantifier, the resulting program becomes unsatisfiable. To fix this, we can modify our logic program so that for the irrelevant sets the resulting program becomes always satisfiable. We do that in two steps. First, we modify D • I so that the irrelevant sets lead to a unique stable model that contains the special atom α(0). This is done by the program D̄ • I that results from replacing in D • I the symbol ⊥ in the head of the integrity constraints by α(0). Additionally, we consider the rule

α(T ) ← t(T ), α(0) (5)

that derives α(1), . . . , α(n) for the irrelevant sets. Second, we modify D • D and D • G so that whenever those special atoms are derived, these programs are immediately satisfied. This is done by the programs D̄ • D and D̄ • G that result from adding the literal ¬α(T ) to the bodies of the rules in D • D and D • G , respectively. Whenever the special atoms α(0), . . . , α(n) are derived, they deactivate the rules in D̄ • D and D̄ • G and make the program satisfiable. We denote by D̄ • the program D̄ • I ∪ D̄ • D ∪ D̄ • G . Then the following theorem establishes the correctness and completeness of the approach.
Theorem 6.1 Let D be a planning description that represents a conformant planning problem PP, and n be a positive integer. If I(D) is in GDT form, then there is a plan of length n that solves PP if and only if the following quantified logic program is satisfiable:

∃Occ∀Open (Domain ∪ (3) ∪ (5) ∪ D̄ • ) (6)

In our example, where D 2 represents the problem PP 2 , for n = 3 the program (6) (adapted to D 2 ) is satisfiable selecting the atoms {occ(sweep, 1), occ(go, 2), occ(sweep, 3)} that represent the solution p 2 , while for D 3 , that represents PP 3 , the corresponding program is unsatisfiable for any integer n.
Assumption-based planning. Let D, together with a set of atoms As ⊆ F , represent a conformant planning problem with assumptions PP = ⟨Φ, I, G, As⟩. To solve this problem we have to find a plan p = [a 1 ; . . . ; a n ] and sets of assumptions T, F ⊆ As such that (1) the set J = {s | s ∈ I, T ⊆ s, s ∩ F = ∅} is not empty, and (2) p solves the conformant planning problem ⟨Φ, J, G⟩. The formulation of the problem suggests that first we can guess the sets of assumptions T and F , and then check (1) and (2) separately. The guess can be represented by the set of rules Guess that consists of the facts {assumable(f ) | f ∈ As} and the choice rule

{assume(F, true); assume(F, false)} ≤ 1 ← assumable(F )

that generates all possible sets of assumptions using the predicate assume/2. Moreover, we add the set of atoms Assume = {assume(f, v) | f ∈ As, v ∈ {true, false}} to Occ at the outermost existential quantifier of our program. Condition (1) can be checked by the set of rules C1 that can be divided into two parts. The first part is a copy of the initial rules, that consists of the rules that result from replacing in I(D) every atom f ∈ F by init(f ), and in this way generates all possible initial states in I using the predicate init/1.
The second part contains the integrity constraints

⊥ ← ¬init(F ), assume(F, true)      ⊥ ← init(F ), assume(F, false)

that guarantee that the guessed assumptions represented by assume/2 hold in some initial state represented by init/1. Condition (2) can be represented extending the program for conformant planning with the following additional rules C2 , stating that the initial states that do not agree with the guessed assumptions are irrelevant:

α(0) ← ¬h(F, 0), assume(F, true)      α(0) ← h(F, 0), assume(F, false)

With these rules, the plans only have to succeed starting from the initial states that agree with the assumptions, and condition (2) is satisfied.
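The two conditions can also be checked by brute force: guess T and F , discard guesses that no initial state satisfies (condition (1)), and keep plans that succeed from all agreeing states (condition (2)). A Python sketch for the robot domain (representation ours):

```python
from itertools import product

def phi(s, a, r=2):
    """One-step transition of the robot domain without sensing (our sketch);
    None encodes bottom."""
    x = next(i for i in range(1, r + 1) if f"at({i})" in s)
    if a == "go":
        return s if x == r else (s - {f"at({x})"}) | {f"at({x + 1})"}
    return None if f"occupied({x})" in s else s | {f"clean({x})"}

def succeeds(s, plan, r=2):
    for a in plan:
        s = phi(s, a, r)
        if s is None:
            return False
    return all(f"occupied({i})" in s or f"clean({i})" in s
               for i in range(1, r + 1))

def assumption_plans(I, As, n):
    """Guess disjoint T, F over As; condition (1): some initial state
    agrees with (T, F); condition (2): the plan succeeds from all of them."""
    sols = []
    for vals in product((True, False, None), repeat=len(As)):
        T = {f for f, v in zip(As, vals) if v is True}
        F = {f for f, v in zip(As, vals) if v is False}
        J = [s for s in I if T <= s and not (F & s)]
        if not J:
            continue                       # condition (1) fails
        for p in product(["go", "sweep"], repeat=n):
            if all(succeeds(s, p) for s in J):
                sols.append((list(p), T, F))
    return sols

cleans = [frozenset(), frozenset({"clean(1)"}), frozenset({"clean(2)"}),
          frozenset({"clean(1)", "clean(2)"})]
I3 = [frozenset({"at(1)"}) | c | frozenset({o})
      for c in cleans for o in ("occupied(1)", "occupied(2)")]
```

Among the solutions for n = 2 one finds the plan go, sweep under the assumption that room one is occupied, as well as sweeping under the assumption that room two is occupied, matching the two solutions of PP 4 given earlier.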
Theorem 6.2 Let D, PP, and As be as specified before, and n be a positive integer. If I(D) is in GDT form, then there is a plan with assumptions of length n that solves PP if and only if the following quantified logic program is satisfiable:

∃(Occ ∪ Assume)∀Open (Domain ∪ (3) ∪ (5) ∪ D̄ • ∪ Guess ∪ C1 ∪ C2 ) (7)

In our example, where D 3 together with the set of atoms {occupied(1), occupied(2)} represents PP 4 , we have that for n = 2 the program (7) (adapted to D 3 ) is satisfiable.
Conditional planning. When D represents a conditional planning problem PP, plans may branch on the result of the sensing actions. We represent the observation made at time point T by the atom obs(true, T ), let Occ T be {occ(a, T ) | a ∈ A} and Obs T be {obs(true, T )}, and quantify the observation atoms universally between the existential blocks of consecutive action atoms, so that the actions chosen at time point T + 1 may depend on the observation made at T . The branches where the observed value does not agree with the actual value of the sensed fluent are made irrelevant, using the cardinality constraint {h(F, T −1), obs(true, T −1)} = 1, that holds if the truth value of obs(true, T −1) and h(F, T −1) is not the same; these conditions are expressed by rules (8–10) deriving the atoms α(T ) in the corresponding cases. Another issue with the resulting QLP is that it allows normal actions at every time point T to be followed by different actions at T +1 for each value of obs(true, T ). This is not a problem for the correctness of the approach, but it is not a natural representation. To fix this, we can consider that, whenever a normal action occurs at time point T , the case where obs(true, T ) holds is irrelevant. Apart from this, note that in conditional planning the different subplans may have different lengths. For this reason, in the choice rule (3) we have to replace the symbol '=' by '≤'. We denote the new rule by (3) ≤ .

Theorem 6.3
Let D and PP be as specified before, and let n be a positive integer. If I(D) is in GDT form, then there is a plan of length at most n that solves PP if and only if the following quantified logic program is satisfiable:

∃Occ_1 ∀Obs_1 . . . ∃Occ_{n−1} ∀Obs_{n−1} ∃Occ_n ∀Open  Domain ∪ P• ∪ (3)≤ ∪ (5) ∪ (8–10)   (11)

For D_3, which represents the problem PP_3, and n = 3, the program (11) (adapted to D_3) is satisfiable by selecting first {occ(sense(occupied(1)))} at Occ_1, then at Occ_2 selecting {occ(clean, 2)} for the subset {} ⊆ Obs_1 and {occ(go, 2)} for the subset Obs_1 ⊆ Obs_1, and finally at Occ_3 selecting {} in all cases except for the subsets Obs_1 ⊆ Obs_1 and {} ⊆ Obs_2, where we select {occ(clean, 3)}. This assignment corresponds to the plan p_3.
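The execution semantics of such conditional plans can be sketched in Python (the tree encoding and all names are ours): a sensing node branches on the observed value, and the two branches may have different lengths, which is exactly why the relaxed rule (3)≤ allows shorter subplans.

```python
# A conditional plan is a list of actions, where an element may be a
# sensing node ("sense", fluent, subplan_if_true, subplan_if_false).

def execute(plan, state, step, goal, sense):
    for node in plan:
        if isinstance(node, tuple) and node[0] == "sense":
            _, fluent, if_true, if_false = node
            rest = if_true if sense(state, fluent) else if_false
            return execute(rest, state, step, goal, sense)
        state = step(state, node)
        if state is None:
            return False
    return goal(state)

# Toy two-room instance: only room 1's occupancy is unknown.
def step(s, a):
    if a == "go":
        return dict(s, pos=min(s["pos"] + 1, 2))
    if a == "sweep":
        if s["occupied"][s["pos"]]:
            return None
        clean = dict(s["clean"])
        clean[s["pos"]] = True
        return dict(s, clean=clean)
    return None

def goal(s):
    return all(s["clean"][r] or s["occupied"][r] for r in (1, 2))

plan = [("sense", "occ1",
         ["go", "sweep"],              # room 1 occupied: skip it
         ["sweep", "go", "sweep"])]    # room 1 free: clean both rooms

inits = [{"pos": 1, "clean": {1: False, 2: False},
          "occupied": {1: o, 2: False}} for o in (True, False)]
ok = all(execute(plan, s, step, goal,
                 lambda s, f: s["occupied"][1]) for s in inits)
```

Here `ok` is `True`: the conditional plan succeeds from both initial states, although its two branches have lengths 2 and 3, while the unconditional plan [sweep; go; sweep] fails when room 1 is occupied.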

Experiments
We evaluate the performance of qasp2qbf on conformant and conditional planning benchmarks. We consider the problem opt of computing a plan of optimal length. To solve it, we first run the solver for length 1, and successively increment the length by 1 until an optimal plan is found. The solving times of this procedure are usually dominated by the unsatisfiable runs. To complement this, we also consider the problem sat of computing a plan of a fixed given length for which we know that a solution exists. We evaluate qasp2qbf combined with different QBF preprocessors and solvers, for a total of 16 configurations. By caqe• we refer to the combination of qasp2qbf with caqe without a preprocessor, and by caqe_b to qasp2qbf with caqe and the preprocessor bloqqer. We proceed similarly for the other QBF solvers and preprocessors. We compare qasp2qbf with the incomplete planner CPasp (Tu et al. 2007), which translates a planning description into a normal logic program that is fed to an ASP solver. In the experiments we have used the ASP solver clingo (version 5.5) and evaluated its 6 basic configurations (crafty (c), frumpy (f), handy (h), jumpy (j), trendy (r), and tweety (t)). We refer to CPasp with clingo and configuration crafty by clingo_c, and similarly for the others.
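The opt procedure just described is plain iterative deepening over the plan length; a minimal sketch, where `solve` is a stand-in for one QASP solver call (not qasp2qbf's actual API):

```python
def optimal_plan(solve, max_horizon=100):
    """Query increasing plan lengths; the first satisfiable one is optimal."""
    for n in range(1, max_horizon + 1):
        plan = solve(n)          # returns a plan, or None if unsatisfiable
        if plan is not None:
            return n, plan
    return None                  # no plan up to max_horizon

# Mock solver for illustration: plans exist from length 3 onwards, so the
# procedure performs two unsatisfiable runs before succeeding.
length, plan = optimal_plan(lambda n: ["noop"] * n if n >= 3 else None)
```

The unsatisfiable calls for lengths 1 and 2 are exactly the runs that dominate the solving times reported above.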
We use the benchmark set from Tu et al. (2007), but we have increased the size of the instances of some domains if they were too easy. The conformant domains are six variants of Bomb in the Toilet (Bt, Bmt, Btc, Bmtc, Btuc, Bmtuc), Domino and Ring. We have also added a small variation of the Ring domain, called Ringu, where the room of the agent is unknown, and the planner CPasp cannot find any plan due to its incompleteness. The conditional domains are four variations of Bomb in the Toilet with Sensing Actions (Bts1, Bts2, Bts3 and Bts4), Domino, Medical Problem (Med), Ring and Sick. All domains have 5 instances of increasing difficulty, except Domino in conformant planning that has 6, and Ring in conditional planning that has 4. For the problem sat, the fixed plan length is always the minimal plan length for CPasp.
All experiments ran on an Intel Xeon 2.20GHz processor under Linux. Each run was limited to 30 minutes runtime and 16 GB of memory. We report the aggregated results per domain: average runtime in seconds and number of timeouts in parentheses for opt, next to the average runtime in seconds for sat, for which there were very few timeouts. To calculate the averages, we add 1800 seconds for every timeout. In Appendix A, we report these results for every solver and configuration, and provide further details. Here, we show and discuss the best configuration for each solver, separately for conformant planning in Table 1, and for conditional planning in Table 2.
In conformant planning, looking at the qasp2qbf configurations, for the variations of Bt, caqe_q and qesto_q perform better than depqbf_q and qute_h. Domino is solved very quickly by all solvers, while in Ring and Ringu caqe_q clearly outperforms the others. The planner clingo_j, in the variations of Bt and Domino, has in opt a performance similar to the best qasp2qbf solvers, while in sat it is much faster and solves the problems in less than a second. In Ring, however, its performance is worse than that of caqe_q. Finally, in Ringu for opt, given the incompleteness of the system, it never manages to find a plan and always times out. In conditional planning, caqe_b is the best for the variations of Bts, while clingo_h is better than the other solvers for sat but worse for opt. In Domino, for opt, the qasp2qbf solvers perform better than clingo_h, while for sat only qesto_b matches its performance. Finally, for Med, Ring, and Sick, all solvers yield similar times. Summing up, we can conclude that qasp2qbf with the right QBF solver and preprocessor compares well to CPasp, except for the sat problem in conformant planning, while on the other hand it can solve problems, like Ringu, that are out of reach for CPasp due to its incompleteness.

Related Work

Conformant and conditional planning have already been addressed with ASP (Eiter et al. 2003; Son et al. 2005; Tu et al. 2007; Tu et al. 2011). Eiter et al. (2003) introduce the system dlv^K for planning with incomplete information. It solves a conformant planning problem by first generating a potential plan and then verifying it; no sensing actions are considered. Son et al. (2005; 2007) propose an approximation semantics for reasoning about action and change in the presence of incomplete information and sensing actions. This is then used for developing ASP-based conformant and conditional planners, like CPasp, that are generally incomplete. Closely related is SAT-based conformant planning: C-Plan (Castellini et al. 2003) is similar to dlv^K in identifying a potential plan before validating it. compile-project-sat (Palacios and Geffner 2005) uses a single call to a SAT solver to compute a conformant plan. Its validity check is doable in linear time if the planning problem is encoded in deterministic decomposable negation normal form. Unlike this, QBFPlan (Rintanen 1999) maps the problem into QBF and uses a QBF solver as back-end.
A more recent use of ASP for computing conditional plans is proposed by Yalciner et al. (2017). The planner deals with sensing actions and incomplete information; it generates multiple sequential plans before combining them in a graph representing a conditional plan. Cardinality constraints, defaults, and choices are used to represent the execution of sensing actions, their effects, and branches in the final conditional plan. In addition, the system computes sequential plans in parallel and also avoids regenerating plans.
Assumption-based planning, as considered here, is due to Davis-Mendelow et al. (2013). In that work, the problem is solved by translating it into classical planning using an adaptation of the translation of Palacios and Geffner (2009). To the best of our knowledge, there exists no ASP-based planner for this type of problems.
There are a number of extensions of ASP to represent problems whose complexity lies beyond NP in the polynomial hierarchy. A comprehensive review was given by Amendola et al. (2019), who also present the approach closest to QASP, named ASP with Quantifiers (ASP(Q)). Like QASP, it introduces existential and universal quantifiers, but they range over stable models of logic programs and not over atoms. This quantification over stable models is very useful for knowledge representation. For example, it allows us to represent conformant planning problems without the need for additional α atoms, using an ASP(Q) program of the form (12), where ∃_st and ∀_st are existential and universal stable model quantifiers, respectively. The program is coherent (or satisfiable, in our terms) if there is some stable model M_1 of Domain ∪ (3) such that for all stable models M_2 of the subsequent universally quantified program, the check program is satisfiable. Stable models M_1 correspond to possible plans. They are extended in M_2 by atoms representing initial states, which are used in M_2 ∪ D•_D ∪ D•_G to check whether the plans achieve the goal starting from all those initial states. Assumption-based planning can be represented in a similar way, while for conditional planning we have not been able to come up with any formulation that does not use additional α atoms.
As part of this work, we have developed translations between QASP and ASP(Q). We leave their formal specification to Appendix B, and illustrate them here for conformant planning. From ASP(Q) to QASP, we assume that D•_I is in GDT form, and otherwise we translate it into this form. Then, the translation of an ASP(Q) program of the form (12) yields a QLP that is essentially the same as (6), except for some renaming of the additional α atoms and some irrelevant changes in the prefix. In the other direction, the QLP program (6) is translated to an ASP(Q) program where P_0 is {{p′} ← | p ∈ Occ}, P_1 is {{p′} ← | p ∈ Open}, and O contains the rules ⊥ ← p, ¬p′ and ⊥ ← ¬p, p′ for every p ∈ Occ ∪ Open. Programs P_0 and P_1 guess the values of the atoms of the prefix using additional atoms p′, and the constraints in O match those guesses to the corresponding original atoms.
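The guess-and-match part of this QLP-to-ASP(Q) direction can be sketched as plain string generation (our naming; we write the primed copy of an atom p as `p_prime`, since the concrete syntax is not fixed here):

```python
def prime(p):
    # stand-in for the primed atom p' used in the translation
    return p + "_prime"

def guess_rules(atoms):
    """Choice rules {p'} <- guessing the value of every prefix atom p."""
    return ["{" + prime(p) + "}." for p in sorted(atoms)]

def matching_constraints(atoms):
    """The program O: forbid p and p' from taking different values."""
    rules = []
    for p in sorted(atoms):
        rules.append(":- " + p + ", not " + prime(p) + ".")
        rules.append(":- not " + p + ", " + prime(p) + ".")
    return rules

# Example with two hypothetical occurrence atoms from Occ.
occ_rules = guess_rules({"occ_a_1", "occ_b_1"})
o_rules = matching_constraints({"occ_a_1"})
```

Here `occ_rules` is `["{occ_a_1_prime}.", "{occ_b_1_prime}."]` and `o_rules` contains the two constraints that force `occ_a_1` and its primed copy to agree, exactly the role of O above.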

Conclusion
We defined a general ASP language to represent a wide range of planning problems: classical, conformant, with assumptions, and conditional with sensing actions. We then defined a quantified extension of ASP, viz. QASP, to represent the solutions to those planning problems. Finally, we implemented and evaluated a QASP solver, available at potassco.org, to compute the solutions to those planning problems. Our focus lies on the generality of the language and the tackled problems; on the formal foundations of the approach, by relating it to simple transition functions; and on having a baseline implementation, whose performance we expect to improve further in the future.

Appendix A Results
In this section, we provide some additional information about the experiments, and present the results for every solver and configuration. We note that in conditional planning, CPasp receives as input an additional width parameter that determines the minimum number of branches of a conditional plan. qasp2qbf receives no information of that kind, and in this regard it solves a more difficult task than CPasp. On the other hand, that width parameter allows CPasp to return the complete conditional plan, while qasp2qbf only returns the assignment to the first action of the plan. Observe also that, given that CPasp is incomplete, in some instances qasp2qbf can find plans for opt whose length is smaller than the fixed length of sat. This explains why in a few cases qasp2qbf solves the opt problem faster than sat.
In the experiments, the times for grounding and for the translators lp2normal and lp2sat are negligible, while, in the sat problem, preprocessing takes on average 5 and 2 seconds in conformant and conditional planning, respectively. We expect these times to be similar for the opt problem. The reported times are dominated by clingo's solving time in CPasp, and by the QBF solvers in qasp2qbf. The usage of preprocessors is, however, fundamental for the performance of qasp2qbf. For example, in conformant planning, caqe alone results in 18 timeouts, while with qratpre+ they go down to only 6.

Appendix B Translations between QASP and ASP(Q)

An ASP(Q) program Π is an expression of the form

□_0 P_0 □_1 P_1 · · · □_n P_n : C   (B1)

where n ≥ 0, P_0, . . . , P_n are logic programs, every □_i is either ∃_st or ∀_st, and C is a normal stratified program. In (Amendola et al. 2019), the logic programs P_i can be disjunctive. Here, for simplicity, we restrict ourselves to logic programs as defined in the Background section. Symbols ∃_st and ∀_st are called existential and universal answer set quantifiers, respectively. We say that the program P_i is existentially (universally, respectively) quantified if □_i is ∃_st (∀_st, respectively). Let X and Y be sets of atoms such that X ⊆ Y. By fixfact(X, Y) we denote the set of rules {x ← | x ∈ X} ∪ {⊥ ← x | x ∈ Y \ X}, and let atoms(P) denote the set of atoms occurring in a logic program P. Given an ASP(Q) program Π, a logic program P, and a set of atoms X, we denote by Π_{P,X} the program of the form (B1) where P_0 is replaced by P_0 ∪ fixfact(X, atoms(P)), that is, Π_{P,X} = □_0 (P_0 ∪ fixfact(X, atoms(P))) □_1 P_1 · · · □_n P_n : C. The coherence of ASP(Q) programs can be defined recursively as follows:
• ∃_st P : C is coherent if there exists M ∈ SM(P) such that C ∪ fixfact(M, atoms(P)) is satisfiable;
• ∀_st P : C is coherent if for every M ∈ SM(P), C ∪ fixfact(M, atoms(P)) is satisfiable;
• ∃_st P Π is coherent if there exists M ∈ SM(P) such that Π_{P,M} is coherent;
• ∀_st P Π is coherent if for every M ∈ SM(P), Π_{P,M} is coherent.
Before moving to the translations between ASP(Q) and QASP, we introduce some special forms of ASP(Q) programs. We say that an ASP(Q) program of the form (B1) is in normal form if C is empty and the heads of the rules of every P_i do not contain atoms occurring in any P_j with j < i, and we say that an ASP(Q) program is in ∀-GDT form if all its universally quantified logic programs are in GDT form. ASP(Q) programs of the form (B1) can be translated to ASP(Q) programs in normal and ∀-GDT form (using auxiliary variables) following the next steps:
1. Replace the program C by ∅, and add ∃_st C after □_n P_n.
2. In every logic program, replace the normal rules p ← B such that p occurs in some previous program by constraints ⊥ ← ¬p, B, and erase the choice rules {p} ← B such that p occurs in some previous program.
3. Apply the method described in (Niemelä 2008; Fandinno et al. 2020) to translate the universally quantified programs to GDT form.
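Step 2 of this translation can be sketched as a small rewrite over an ad-hoc rule representation of ours (triples of head atom, body literals, and a choice flag):

```python
def step2(rules, earlier_atoms):
    """Rewrite rules whose head atom already occurs in a previous program:
    a normal rule p <- B becomes the constraint <- not p, B, while a
    choice rule {p} <- B on such an atom is erased."""
    out = []
    for head, body, is_choice in rules:
        if head in earlier_atoms:
            if is_choice:
                continue                                       # erase {p} <- B
            out.append((None, ["not " + head] + body, False))  # constraint
        else:
            out.append((head, body, is_choice))
    return out

program = [("p", ["q"], False),   # p occurs earlier: becomes a constraint
           ("p", [], True),       # choice rule on p: erased
           ("r", ["p"], False)]   # r is new: kept unchanged
rewritten = step2(program, {"p"})
```

Here `rewritten` is `[(None, ["not p", "q"], False), ("r", ["p"], False)]`, so after the rewrite no head atom is redefined in a later program, as the normal form requires.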
The translation can be computed in polynomial time on the size of the ASP(Q) program, and the result is an ASP(Q) program in normal and ∀-GDT form that is coherent if and only if the original ASP(Q) program is coherent.
Once an ASP(Q) program Π of the form (B1) is in normal and ∀-GDT form, we can translate it to a QLP as follows. Let qlp(Π) be the QLP of the corresponding form, where for every i ∈ {0, . . . , n}, if □_i is ∃_st then Q_i is ∃, X_i is the set of atoms occurring in P_i that do not occur in any previous program, and P•_i is P_0 if n = 0 and otherwise it is the program that results from adding the literal ¬α_i to the body of every rule and from replacing ⊥ in the head of every integrity constraint by α_i. The translation in the other direction is as follows. Given a QLP QP over A of the form (2), let aspq(QP) be the corresponding ASP(Q) program, where X = ∪_{i ∈ {0,...,n}} X_i, and we assume that the set {p′ | p ∈ X} is disjoint from A.

Theorem Appendix B.2 QP is satisfiable if and only if aspq(QP ) is coherent.
Appendix C Proof of Theorem 6.1

We first introduce some notation. Let D be a planning description. By C_D we denote the logic program Domain ∪ D• ∪ (3) ∪ (5). Then (6) can be rewritten as ∃Occ ∀Open C_D. Let Generate denote (3), and Alpha denote (5). Then C_D can be rewritten as Domain ∪ D• ∪ Generate ∪ Alpha. Since all occurrences of the variable T in C_D are always bound by an atom t(T), and the only ground atoms of that form that can be true are t(1), . . . , t(n), we can consider only the ground instantiations of C_D where T takes the values from 1 to n. We add some more notation to represent those ground instances. We denote by r(i) the rule obtained by replacing in r the variable T by i. If i ≤ j, by r(i, j) we denote the set of rules containing r(k) for all i ≤ k ≤ j. For a set of rules R, we define R(i, j) analogously. Note that C_D has the same stable models as the program for every j ∈ {1, . . . , n}. The following results make repeated use of the Splitting Theorem (Lifschitz and Turner 1994). Note that no atom occurring in C_D[0, j] occurs in the head of a rule in C_D[i] ∪ D•_G for any i > j. We also define Holds[i] = {h(f, i) | f ∈ F}.

Lemma 1
Let D be a planning description that represents a conformant planning problem PP, s, s′ ⊆ F be two states, a be an action, and i be a positive integer. Moreover, let X be a stable model of (C1); thus, P_2 is equal to (C1). Program P_1 has a unique stable model X_1 = X ∪ {occ(a, i)} and, by the Splitting Theorem, the stable models of P_2 are the same as the stable models of X_1 ∪ D•_D(i). Note that α(i−1) ∉ X implies that α(i) ∉ X_1, because the only rule with α(i) in the head has α(i−1) in the body. Furthermore, since D represents PP and α(i) ∉ X, it follows that either (Case 1) there is a unique stable model Y′ such that s′ = {f ∈ F | h(f, i) ∈ Y′}, or (Case 2) there is no stable model and Φ(s′, a) = ⊥. Recall that D(D) is deterministic. Hence, it cannot generate multiple stable models, and the same applies to D•_D(i).
Note also that α(i) ∉ X implies α(i) ∉ Y, because the only rules with α(i) in the head belong to P_1. In Case 1 the iff holds because Y = Y′ ∪ X is the unique stable model of X_1 ∪ D•_D(i), and it satisfies the conditions s′ = {f ∈ F | h(f, i) ∈ Y} and α(i) ∉ Y. To see this, observe that the atoms in X_1 that do not belong to (C2) also do not occur in the rules of D•_D(i). In Case 2 the iff holds because neither of its two sides holds. Finally, if Φ(s′, a) = ⊥, then (C2) is unsatisfiable; hence X_1 ∪ D•_D(i) is also unsatisfiable, and therefore (C1) is unsatisfiable.

Lemma 2
Let D be a planning description that represents a conformant planning problem PP, s, s′ ⊆ F be two states, p = [a_1; a_2; . . . ; a_n] be a plan, i be a positive integer, and X be a

Proof
We consider first the case where Φ(s′, a_1) is ⊥, and then the case where Φ(s′, a_1) is a state s′′. If Φ(s′, a_1) is ⊥, then by Lemma 1 the program (C4) is unsatisfiable. Hence, by the Splitting Theorem, the program (C3) is also unsatisfiable, and the iff holds because neither of its sides holds. We continue with the case where Φ(s′, a_1) is a state s′′. If n = 1, the lemma statement follows directly from Lemma 1. Otherwise, the proof follows by induction, assuming that the lemma statement holds for all plans of length n − 1. By definition, Φ({s′}, p) = Φ({s′′}, q) with q = [a_2; . . . ; a_n]. From Lemma 1, we get that Φ(s′, a_1) = s′′ iff there is a unique stable model Z of (C4) such that s′′ = {f ∈ F | h(f, i) ∈ Z} and α(i) ∉ Z. By the induction hypothesis, we get that Φ({s′′}, q) = s iff there is a unique stable model Y of the corresponding program. From the Splitting Theorem, Y is the unique stable model of (C3), and the iff condition holds.

Lemma 3
Let D be a planning description that represents a conformant planning problem PP. Let Y ⊆ Open be a set of atoms, s ⊆ F and s_0 ∈ SM(I(D)) be states such that Y = {h(f, 0) ∈ Open | f ∈ s_0}, and p = [a_1; a_2; . . . ; a_n] be a plan with X = {occ(a_1, 1), occ(a_2, 2), . . . , occ(a_n, n)}.
Proof
Note that C_D[0, 0] = Domain ∪ D•_I and that s_0 ∈ SM(I(D)) implies that this program has a unique stable model Z_0 such that s_0 = {f ∈ F | h(f, 0) ∈ Z_0} and α(0) ∉ Z_0. Then, from Lemma 2 with i = 1, we get that Φ({s_0}, p) = s iff there is a unique stable model Z of the corresponding program. Furthermore, α(n) ∉ Z. From the Splitting Theorem, this implies that Z is a stable model of C_D[0, n]. Furthermore, it is the unique stable model of C_D[0, n] satisfying these conditions. Note also that Y = {h(f, 0) ∈ Open | f ∈ s_0} = Z ∩ Open and, thus, Z satisfies fixcons(Y, Open) = fixcons(Z ∩ Open, Open).
No other stable model of C D [0, 0] satisfies these constraints. Therefore, Z is the unique stable model of (C5).

Lemma 4
Let D be a planning description that represents a conformant planning problem PP such that I(D) is in GDT form, and let n be a positive integer. Let p = [a_1; . . . ; a_n] be a plan, X = {occ(a_1, 1), . . . , occ(a_n, n)} be a set of atoms, and Y ⊆ Open be another set of atoms. Then, the following two statements are equivalent:
• there is some stable model Z of C_D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open) such that α(n) ∉ Z; and
• there is s_0 ∈ SM(I(D)) s.t. Y = {h(f, 0) ∈ Open | f ∈ s_0} and Φ_{D(D)}({s_0}, p) ⊆ G.

Proof
First, assume that there is no s_0 ∈ SM(I(D)) s.t. Y = {h(f, 0) ∈ Open | f ∈ s_0}. In this case, the second lemma statement does not hold, and we will see next that the first lemma statement does not hold either. Given our assumption and by construction, the program Domain ∪ D•_I ∪ Alpha ∪ fixcons(Y, Open), which is a subset of C_D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open), has a unique stable model, and this stable model contains the atom α(n). Then, by the Splitting Theorem, the stable models of C_D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open) contain the atom α(n), which contradicts the first lemma statement.
Otherwise, there is s_0 ∈ SM(I(D)) such that Y = {h(f, 0) ∈ Open | f ∈ s_0}. Then, from Lemma 3, it follows that Φ_{D(D)}({s_0}, p) = s iff there is a unique stable model Z of (C5) such that s = {f ∈ F | h(f, n) ∈ Z}. Furthermore, α(n) ∉ Z. Note that C_D = C_D[0, n]. Moreover, we can see that s ∈ G iff Z satisfies D•_G. Since D•_G ⊆ D•, it follows that the two lemma statements are equivalent.

Lemma 5
Let D be a planning description that represents a conformant planning problem PP such that I(D) is in GDT form. Then, the following statements hold:
• For every Y ⊆ Open such that D•_I ∪ fixcons(Y, Open) is satisfiable, there is a unique s_Y ∈ SM(I(D)) such that Y = {h(f, 0) ∈ Open | f ∈ s_Y}.

Proof
For the first statement, note that there is a choice rule of the form {f} ← in I(D) iff the atom h(f, 0) belongs to Open. The rest of the program is deterministic; therefore, there is at most one s_0 ∈ SM(I(D)) such that Y = {h(f, 0) ∈ Open | f ∈ s_0}. Furthermore, since D•_I ∪ fixcons(Y, Open) is satisfiable, such a stable model exists. For the same reason, for the second statement, if s_0 ∈ SM(I(D)), then there is a unique Y ⊆ Open such that Y = {h(f, 0) ∈ Open | f ∈ s_0}. Furthermore, it is easy to see that D•_I ∪ fixcons(Y, Open) is satisfiable.

Lemma 6
Let D be a planning description that represents a conformant planning problem PP such that I(D) is in GDT form, and let n be a positive integer. Let p = [a_1; . . . ; a_n] be a plan, and X = {occ(a_1, 1), . . . , occ(a_n, n)} be a set of atoms. Then, the following two statements are equivalent:
• for all Y ⊆ Open such that D•_I ∪ fixcons(Y, Open) is satisfiable, there is some stable model Z of C_D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open) such that α(n) ∉ Z; and
• every s_0 ∈ SM(I(D)) satisfies Φ_{D(D)}({s_0}, p) ⊆ G.
• the bodies of all rules in Alpha are satisfied by Z and all α(i) are thus supported, and • all rules in Generate are satisfied by Z and all elements in X are supported.
Thus Z is a stable model of C D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open) and the first lemma statement holds.

Lemma 8
Let D be a planning description that represents a conformant planning problem PP such that I(D) is in GDT form, and let n be a positive integer. Let p = [a_1; . . . ; a_n] be a plan and X = {occ(a_1, 1), . . . , occ(a_n, n)} be a set of atoms. Then, the following two statements are equivalent:
• for all Y ⊆ Open, C_D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open) is satisfiable; and
• every s_0 ∈ SM(I(D)) satisfies Φ_{D(D)}({s_0}, p) ⊆ G.

Proof
The first lemma statement holds iff, for all Y ⊆ Open such that D•_I ∪ fixcons(Y, Open) is satisfiable, there is some stable model Z of C_D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open) such that α(n) ∉ Z (Lemma 7), iff the second lemma statement holds (Lemma 6).

Lemma 9
Let D be a planning description that represents a conformant planning problem PP such that I(D) is in GDT form, and let n be a positive integer. Then, the following two statements are equivalent: • there is some X ⊆ Occ such that, for all Y ⊆ Open, there is some stable model of C D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open); and • there is some X ⊆ Occ of the form {occ(a 1 , 1), . . . , occ(a n , n)} such that, for all Y ⊆ Open, there is some stable model of C D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open).

Proof
The second statement trivially implies the first. In the other direction, assume that the first statement holds; we show that in this case the second statement also holds. Pick any Y ⊆ Open, and let M be some stable model of C_D ∪ fixcons(X, Occ) ∪ fixcons(Y, Open). Given that the rule Generate belongs to C_D, it follows that M ∩ Occ has the form {occ(a_1, 1), . . . , occ(a_n, n)}. On the other hand, since M satisfies fixcons(X, Occ), it follows that M ∩ Occ = X. Hence, X has the form {occ(a_1, 1), . . . , occ(a_n, n)}, and the second statement holds.
Proof of Theorem 6.1
Since D represents PP, it follows that PP = ⟨Φ_{D(D)}, SM(I(D)), G⟩, where G is SM(G(D) ∪ {{f} ← | f ∈ F}). Then, the program ∃Occ ∀Open C_D is satisfiable iff there is some X ⊆ Occ such that, for all Y ⊆ Open, there is some stable model of