Automated Planning and Acting
Cited by
    This book has been cited by the following publications. This list is generated based on data provided by CrossRef.

Wantia, Nils, Esen, Menno, Hengstebeck, Andre, Heinze, Frank, Rossmann, Juergen, Deuse, Jochen, and Kuhlenkoetter, Bernd (2016). Task planning for human robot interactive processes. p. 1.

Malik Ghallab, Centre National de la Recherche Scientifique (CNRS), Paris; Dana Nau, University of Maryland, College Park; Paolo Traverso, FBK ICT – IRST (Center for Scientific and Technological Research), Italy

Book description

Autonomous AI systems need complex computational techniques for planning and performing actions. Planning and acting require significant deliberation: an intelligent system must coordinate and integrate these activities in order to act effectively in the real world. This book presents a comprehensive paradigm of planning and acting based on the most recent and advanced automated-planning techniques. It explains the computational deliberation capabilities that allow an actor, whether physical or virtual, to reason about its actions, choose them, organize them purposefully, and act deliberately to achieve an objective. Useful for students, practitioners, and researchers, the book covers state-of-the-art planning techniques, acting techniques, and their integration, which will allow readers to design intelligent systems that act effectively in the real world.

Reviews

‘Automated Planning and Acting will be the text I require my students to read when they first start, and the go-to book on my shelf for my own reference. As a timely source of motivation for game-changing research on the integration of planning and acting, it will also help shape the field for the next decade.’

Sylvie Thiébaux - Australian National University, Canberra, from the Foreword

‘This book is currently the most comprehensive introduction [to] computational principles of deliberative action that I know of. Whoever thinks about bringing planning and reasoning to bear on robots or other agents embedded in the real world should study it carefully - and share it with their students too.’

Joachim Hertzberg - Osnabrück University

‘This book by Ghallab, Nau and Traverso is the best to date on automated artificial intelligence planning. It is very comprehensive, covering topics both in the core of AI planning and acting and other related AI topics such as robotic execution, automation and learning. Numerous features make it ideal for students to learn about AI planning, including historical notes and many illustrative examples. The book will serve as a trove of resources for researchers and practitioners in AI planning and other AI fields.’

Qiang Yang - Chair Professor and Head of the Computer Science and Engineering Department, Hong Kong University of Science and Technology

Contents

Bibliography
[1] Aarup, M., Arentoft, M. M., Parrod, Y., Stader, J., and Stokes, I. (1994). OPTIMUM-AIV: A knowledge-based planning and scheduling system for spacecraft AIV. In Intelligent Scheduling, pp. 451–469. Morgan Kaufmann.
[2] Abbeel, P., Coates, A., and Ng, A. (2010). Autonomous helicopter aerobatics through apprenticeship learning. Intl. J. Robotics Research, 29(13):1608–1639.
[3] Abbeel, P. and Ng, A.Y. (2010). Inverse reinforcement learning. In Sammut, C. and Webb, G. I., editors, Encyclopedia of Machine Learning, pp. 554–558. Springer.
[4] Abdul-Razaq, T. and Potts, C. (1988). Dynamic programming state-space relaxation for single-machine scheduling. J. Operational Research Soc., pp. 141–152.
[5] Adali, S., Console, L., Sapino, M. L., Schenone, M., and Terenziani, P. (2000).Representing and reasoning with temporal constraints in multimedia presentations. In Intl. Symp. on Temporal Representation and Reasoning (TIME), pp. 3–12.
[6] Agosta, J. M. (1995). Formulation and implementation of an equipment configuration problem with the SIPE-2 generative planner. In AAAI-95 Spring Symp.on Integrated Planning Applications, pp. 1–10.
[7] Albore, A. and Bertoli, P. (2004). Generating safe assumption-based plans for partially observable, nondeterministic domains. In Proc. AAAI, pp. 495–500.
[8] Alford, R., Kuter, U., and Nau, D.S. (2009). Translating HTNs to PDDL:A small amount of domain knowledge can go a long way. In Proc. IJCAI.
[9] Alford, R., Kuter, U., Nau, D. S., and Goldman, R. P. (2014a). Plan aggregation for strong cyclic planning in nondeterministic domains. Artificial Intelligence, 216:206–232.
[10] Alford, R.,Shivashankar, V.,Kuter, U.,and Nau, D.S. (2014b).On the feasibility of planning graph style heuristics for HTN planning. In Proc. ICAPS.
[11] Allen, J. (1984).Towards a general theory of action and time. Artificial Intelligence, 23:123–154.
[12] Allen, J. (1991a). Temporal reasoning and planning. In Allen, J., Kautz, H., Pelavin, R., and Tenenberg, J., editors, Reasoning about Plans, pp. 1–68. Morgan Kaufmann.
[13] Allen, J. F. (1983). Maintaining knowledge about temporal intervals. Communications ACM, 21(11):832–843.
[14] Allen, J. F. (1991b). Planning as temporal reasoning. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR).
[15] Allen, J. F., Hendler, J., and Tate, A., editors (1990). Readings in Planning. Morgan Kaufmann.
[16] Allen, J. F. and Koomen, J. A. (1983). Planning using a temporal world model. In Proc. IJCAI.
[17] Ambite, J. L., Knoblock, C. A., and Minton, S. (2000). Learning plan rewriting rules. In Proc. ICAPS.
[18] Ambros-Ingerson, J. A. and Steel, S. (1988). Integrating planning, execution and monitoring. In Proc. AAAI, pp. 21–26.
[19] Anderson, J. R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., and Qin, Y. (2004).An integrated theory of the mind. Psychological Review, 111(4):1036–1060.
[20] Andre, D., Friedman, N., and Parr, R. (1997).Generalized prioritized sweeping. In Adv. in Neural Information Processing Syst. (Proc. NIPS).
[21] Andre, D. and Russell, S. J. (2002). State abstraction for programmable reinforcement learning agents. In Proc. AAAI.
[22] Andrews, T., Curbera, F., Dolakia, H., Goland, J., Klein, J., Leymann, F., Liu, K., Roller, D., Smith, D., Thatte, S., Trickovic, I., and Weeravarana, S. (2003). Business Process Execution Language for Web Services. http://msdn.microsoft.com/en-us/library/ ee251594(v=bts.10).aspx.
[23] Araya-Lopez, M., Thomas, V., Buffet, O., and Charpillet, F. (2010). A closer look at MOMDPs. In IEEE Intl. Conf. on Tools with AI (ICTAI), pp. 197–204.
[24] Argall, B.D.,Chernova, S., veloso, M.M., and Browning, B. (2009).Asurvey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483.
[25] Ȧstrom, K. J. (1965).Optimal control of Markov decision processes with incomplete state estimation. J. Math. Analysis and Applications, 10:174–205.
[26] Awaad, I., Kraetzschmar, G. K., and Hertzberg, J. (2014). Finding ways to get the job done: An affordance-based approach. In Proc. ICAPS.
[27] Baader, F.,Calvanese, D.,McGuinness, D.,Nardi, D.,and Patel-Schneider, P., editors (2003). The Description Logic Handbook: Theory, Implementation and Applications. Cambridge Univ. Press.
[28] Bacchus, F. and Kabanza, F. (2000). Using temporal logics to express search control knowledge for planning. Artificial Intelligence, 116(1–2):123–191.
[29] Backstrom, C. (1991). Planning in polynomial time: The SAS-PUB class. Computational Intelligence, 7:181–197.
[30] Backstrom, C. and Nebel, B. (1993). Complexity results for SAS+ planning. In Proc. IJCAI.
[31] Backstrom, C. and Nebel, B. (1995).Complexity results for SAS+ planning. Computational Intelligence, 11(4):1–34.
[32] Baier, J. A., Mombourquette, B., and McIlraith, S. (2014). Diagnostic problem solving: a planning perspective. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 1–10.
[33] Balas, E. (1968).A note on the branch-and-bound principle. Operations Research, 16:442–444.
[34] Ball, M. and Holte, R.C. (2008). The compression power of symbolic pattern databases. In Proc. ICAPS, pp. 2–11.
[35] Baptiste, P., Laborie, P., Le Pape, C., and Nuijten, W. (2006). Constraint-based scheduling and planning. In Rossi, F., Van Beek, P., and Walsh, T., editors, Handbook of constraint programming, chapter 22, pp. 759–798. Elsevier.
[36] Barbier, M.,Gabard, J.-F., Llareus, J. H., and Tessier, C. (2006). Implementation and flight testing of an onboard architecture for mission supervision. In Intl. Unmanned Air Vehicle Syst. Conf.
[37] Barreiro, J., Boyce, M., Frank, J., Iatauro, M., Kichkaylo, T., Morris, P., Smith, T., and Do, M. (2012). EUROPA: A platform for AI planning, scheduling, constraint programming, and optimization. In Intl. Competition on Knowledge Engg. for Planning and Scheduling (ICKEPS).
[38] Barrett, A., Golden, K.,Penberthy, J. S., and Weld, D. S. (1993).UCPOP user's manual (version 2.0). Technical Report TR-93-09-06, Univ. ofWashington, Dept. of Computer Science and Engineering.
[39] Barrett, A. and Weld, D. S. (1993). Characterizing subgoal interactions for planning. In Proc. IJCAI, pp. 1388–1393.
[40] Barrett, A. and Weld, D. S. (1994). Partial order planning: Evaluating possible efficiency gains. Artificial Intelligence, 67(1):71–112.
[41] Barrett, C.,Stump, A.,and Tinelli, C. (2010).The SMT-LIB standard:Version 2.0. In Gupta, A. and Kroening, D., editors, 8th Intl. Wksp. on Satisfiability Modulo Theories.
[42] Barry, J., Kaelbling, L. P., and Lozano-Perez, T. (2010). Hierarchical solution of large Markov decision processes. In ICAPS Workshop, pp. 1–8.
[43] Barry, J., Kaelbling, L. P., and Lozano-Perez, T. (2011). DetH*: Approximate hierarchical solution of large Markov decision processes. In Proc. IJCAI, pp. 1–8.
[44] Bartak, R., Morris, R., and Venable, B. (2014). An Introduction to Constraint-Based Temporal Reasoning.Morgan & Claypool.
[45] Bartak, R., Salido, M. A., and Rossi, F. (2010). Constraint satisfaction techniques in planning and scheduling. J. Intelligent Manufacturing, 21(1):5–15.
[46] Barto, A.G., Bradke, S. J., and Singh, S. P. (1995). Learning to act using real-time dynamicprogramming. Artificial Intelligence, 72:81–138.
[47] Beetz, M.(1999).Structured reactive controllers:Controlling robots that perform everyday activity. In Proc. Annual Conf. on Autonomous Agents, pp. 228–235.ACM.
[48] Beetz, M. and McDermott, D. (1992). Declarative goals in reactive plans. In Proc. AIPS, p. 3.
[49] Beetz, M. and McDermott, D. (1994). Improving robot plans during their execution. In Proc. AIPS.
[50] Bellman, R. (1957).Dynamic Programming. Princeton Univ. Press.
[51] Ben Lamine, K. and Kabanza, F. (2002).Reasoning about robot actions: a model checking approach. In Beetz, M.,Hertzberg, J., Ghallab, M., and Pollack, M.E., editors, Advances in Plan-Based Control of Robotic Agents, pp. 123–139. Springer.
[52] Bercher, P., Keen, S., and Biundo, S. (2014). Hybrid planning heuristics based on task decomposition graphs. International Symposium on Combinatorial Search (SoCS), pp. 1–9.
[53] Bernard, D., Gamble, E., Rouquette, N., Smith, B., Tung, Y., Muscettola, N., Dorais, G., Kanefsky, B., Kurien, J. A., and Millar, W. (2000). Remote agent experiment DS1 technology validation report. Technical report, NASA.
[54] Bernardi, G., Cesta, A.,Orlandini, A., and Finzi, A. (2013).Aknowledge engineering environment for P&S with timelines. In Proc. ICAPS, pp. 16–23.
[55] Bernardini, S. and Smith, D. (2011). Finding mutual exclusion invariants in temporal planning domains. In Intl. Wksp. on Planning and Scheduling for Space (IWPSS).
[56] Bernardini, S. and Smith, D.E. (2007).Developing domain-independent search control for Europa2. In ICAPS Wksp. on Heuristics for Domain-Independent Planning.
[57] Bernardini, S. and Smith, D. E. (2008). Automatically generated heuristic guidance for Europa2. In Intl. Symp. on Artificial Intell., Robotics and Automation in Space (i-SAIRAS).
[58] Bernardini, S. and Smith, D. E. (2009). Towards search control via dependency graphs in Europa2. In ICAPS Wksp. on Heuristics for Domain-Independent Planning.
[59] Bertoli, P., Cimatti, A., Pistore, M., Roveri, M., and Traverso, P. (2001a). MBP: a model based planner.In IJCAIWksp.on Planning underUncertainty and Incomplete Information, pp. 93–97.
[60] Bertoli, P.,Cimatti, A., Pistore, M., and Traverso, P. (2003).A framework for planning with extended goals under partial observability. In Proc. ICAPS.
[61] Bertoli, P., Cimatti, A., Roveri, M., and Traverso, P. (2001b). Planning in nondeterministic domains under partial observability via symbolic model checking. In Proc. IJCAI, pp. 473–478.
[62] Bertoli, P., Cimatti, A., Roveri, M., and Traverso, P. (2006). Strong planning under partial observability. Artificial Intelligence, 170(4):337–384.
[63] Bertoli, P., Cimatti, A., and Traverso, P. (2004). Interleaving execution and planning for nondeterministic, partially observable domains. In Proc. ECAI, pp. 657–661.
[64] Bertoli, P., Pistore, M., and Traverso, P. (2010). Automated composition of Web services via planning in asynchronous domains. Artificial Intelligence, 174(3-4):316–361.
[65] Bertsekas, D. (2001).Dynamic Programming and Optimal Control.Athena Scientific.
[66] Bertsekas, D. and Tsitsiklis, J. (1996). Neuro-Dynamic Programming.Athena Scientific.
[67] Bertsekas, D.P.and Tsitsiklis, J.N. (1991).An analysis of stochastics shortest path problems. Mathematics of Operations Research, 16(3):580–595.
[68] Betz, C. and Helmert, M. (2009). Planning with h+ in theory and practice. In Proc.Annual German Conf. on AI (KI), volume 5803. Springer.
[69] Bhatia, A., Kavraki, L. E., and Vardi, M. Y. (2010). Sampling-based motion planning with temporal goals. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pp. 2689–2696. IEEE.
[70] Bird, C. D. and Emery, N. J. (2009a). Insightful problem solving and creative tool modification by captive nontool-using rooks. Proc. Natl. Acad. of Sci. (PNAS), 106(25):10370–10375.
[71] Bird, C. D. and Emery, N. J. (2009b). Rooks use stones to raise the water level to reach a floating worm. Current Biology, 19(16):1410–1414.
[72] Biundo, S. and Schattenberg, B. (2001). From abstract crisis to concrete relief – A preliminary report on combining state abstraction and HTN planning. In Proc. European Conf. on Planning (ECP), pp. 157–168.
[73] Blum, A. and Langford, J. (1999). Probabilistic planning in the graphplan framework. In Proc. European Conf. on Planning (ECP), pp. 319–322. Springer.
[74] Blum, A.L.and Furst, M.L. (1997).Fast planning through planning graph analysis. Artificial Intelligence, 90(1–2):281–300.
[75] Boddy, M. and Dean, T. (1989). Solving time-dependent planning problems. In Proc. IJCAI, pp. 979–984.
[76] Boese, F.and Piotrowski, J. (2009).Autonomously controlled storage management in vehicle logistics applications of RFID and mobile computing systems. Intl. J. RF Technologies: Research and Applications, 1(1):57–76.
[77] Bogomolov, S., Magazzeni, D., Minopoli, S., and Wehrle, M. (2015). PDDL+ planning with hybrid automata: Foundations of translating must behavior. In Proc. ICAPS, pp. 42–46.
[78] Bogomolov, S., Magazzeni, D., Podelski, A., and Wehrle, M. (2014). Planning as model checking in hybrid domains. In Proc. AAAI, pp. 2228–2234.
[79] Bohren, J., Rusu, R. B., Jones, E.G.,Marder-Eppstein, E., Pantofaru, C.,Wise, M.,Mosenlechner, L., Meeussen, W., and Holzer, S. (2011). Towards autonomous robotic butlers: Lessons learned with the PR2. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pp. 5568–5575.
[80] Bonet, B. (2007). On the speed of convergence of value iteration on stochastic shortestpath problems. Mathematics of Operations Research, 32(2):365–373.
[81] Bonet, B. and Geffner, H. (2000). Planning with incomplete information as heuristic search in belief space. In Proc. AIPS, pp. 52–61.
[82] Bonet, B. and Geffner, H. (2001). Planning as heuristic search. Artificial Intelligence, 129:5–33.
[83] Bonet, B. and Geffner, H. (2003a). Faster heuristic search algorithms for planning with uncertainty and full feedback. In Proc. IJCAI.
[84] Bonet, B.and Geffner, H. (2003b).LabeledRTDP:Improving the convergence of real-time dynamic programming. In Proc. ICAPS, pp. 12–21.
[85] Bonet, B.and Geffner, H. (2005).mGPT:Aprobabilistic planner based on heuristic search. J. Artificial Intelligence Research, 24:933–944.
[86] Bonet, B. and Geffner, H. (2006). Learning in depth-first search: A unified approach to heuristic search in deterministic, non-deterministic, probabilistic, and game tree settings. In Proc. ICAPS, pp. 142–151.
[87] Bonet, B. and Geffner, H. (2009). Solving POMDPs:RTDP-Bel vs. point-based algorithms. In Proc. IJCAI, pp. 1641–1646.
[88] Bonet, B. and Geffner, H. (2012).Action selection for MDPs: Anytime AO* versus UCT. In Proc. AAAI.
[89] Bonet, B. and Geffner, H. (2014). Belief tracking for planning with sensing:Width, complexity and approximations. J. Artificial Intelligence Research, 50:923–970.
[90] Bonet, B. and Helmert, M. (2010). Strengthening landmark heuristics via hitting sets. In Proc. ECAI, pp. 329–334.
[91] Bouguerra, A., Karlsson, L., and Saffiotti, A. (2007). Semantic knowledge-based execution monitoring for mobile robots. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pp. 3693–3698. IEEE.
[92] Boutilier, C., Brafman, R. I., and Geib, C. (1998). Structured reachability analysis for Markov decision processes. In Proc. Conf. on Uncertainty in AI (UAI), pp. 24–32.
[93] Boutilier, C., Dean, T., and Hanks, S. (1999). Decision-theoretic planning: Structural assumptions and computational leverage. J. Artificial Intelligence Research, 11:1–94.
[94] Boutilier, C., Dearden, R., and Goldszmidt, M. (2000). Stochastic dynamic programming with factored representations. Artificial Intelligence, 121:49–107.
[95] Boyan, J.A. and Littman, M. L. (2001). Exact solutions to time dependent MDPs. In Adv. in Neural Information Processing Syst. (Proc. NIPS), pp. 1026–1032.
[96] Brafman, R. and Hoffmann, J. (2004). Conformant planning via heuristic forward search: A new approach. In Proc. ICAPS.
[97] Brenner, M. and Nebel, B. (2009). Continual planning and acting in dynamic multiagent environments. J. Autonomous Agents and Multi-Agent Syst., 19(3):297–331.
[98] Brusoni, V., Console, L., Terenziani, P., and Dupre, D. T. (1998).A spectrum of definitions for temporal model-based diagnosis. Artificial Intelligence, 102(1):39–79.
[99] Brusoni, V., Console, L., Terenziani, P., and Pernici, B. (1999). Qualitative and quantitative temporal constraints and relational databases: Theory, architecture, and applications. IEEE trans. on KDE, 11(6):948–968.
[100] Bucchiarone, A., Marconi, A., Pistore, M., and Raik, H. (2012). Dynamic adaptation of fragment-based and context-aware business processes. In Intl. Conf. on Web Services, pp. 33–41.
[101] Bucchiarone, A., Marconi, A., Pistore, M., Traverso, P., Bertoli, P., and Kazhamiakin, R. (2013).Domain objects for continuous context-aware adaptation of service-based systems. In ICWS, pp. 571–578.
[102] Buffet, O. and Sigaud, O., editors (2010). Markov Decision Processes in Artificial Intelligence. Wiley.
[103] Busoniu, L.,Munos, R., De Schutter, B., and Babuska, R. (2011). Optimistic planning for sparsely stochastic systems. In IEEE Symp. on Adaptive Dynamic Progr. and Reinforcement Learning, pp. 48–55.
[104] Bylander, T. (1992). Complexity results for extended planning. In Proc. AAAI.
[105] Bylander, T. (1994). The computational complexity of propositional STRIPS planning. Artificial Intelligence, 69:165–204.
[106] Calvanese, D.,Giacomo, G.D., and Vardi, M.Y. (2002).Reasoning about actions and planning in ltl action theories. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 593–602. Morgan Kaufmann.
[107] Castellini, C.,Giunchiglia, E., and Tacchella, A. (2001). Improvements to SAT-based conformant planning. In Cesta, A. and Borrajo, D., editors, Proc. European Conf. on Planning (ECP).
[108] Castellini, C., Giunchiglia, E., and Tacchella, A. (2003). SAT-based planning in complex domains: Concurrency, constraints and nondeterminism. Artificial Intellegence, 147:85–118.
[109] Castillo, L., Fdez-Olivares, J., and Garcia-Perez, O. (2006a). Efficiently handling temporal knowledge in an HTN planner. In Proc. ICAPS, pp. 1–10.
[110] Castillo, L., Fdez-Olivares, J., Garcıa-Perez, O., and Palao, F. (2006b). Efficiently handling temporal knowledge in an HTN planner. In Proc. ICAPS, pp. 63–72.
[111] Cesta, A. and Oddi, A. (1996). Gaining efficiency and flexibility in the simple temporal problem. In Intl. Symp. on Temporal Representation and Reasoning (TIME).
[112] Cesta, A., Oddi, A., and Smith, S.F. (2002).Aconstraint-based method for project scheduling with time windows. J. Heuristics, 8(1):109–136.
[113] Champandard, A., Verweij, T., and Straatman, R. (2009). The AI for Killzone 2's multiplayer bots. In Game Developers Conf. (GDC).
[114] Chapman, D. (1987). Planning for conjunctive goals. Artificial Intelligence, 32:333–379.
[115] Chatilla, R., Alami, R., Degallaix, B., and Laruelle, H. (1992). Integrated planning and execution control of autonomous robot actions. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pp. 2689–2696.
[116] Chrpa, L. (2010). Combining learning techniques for classical planning:Macro-operators and entanglements. In IEEE Intl. Conf. on Tools with AI (ICTAI), volume 2, pp. 79–86. IEEE.
[117] Cimatti, A., Giunchiglia, F.,Pecchiari, P.,Pietra, B.,Profeta, J.,Romano, D.,Traverso, P., and Yu, B. (1997). A provably correct embedded verifier for the certification of safety critical software. In Intl. Conf. on Computer Aided Verification (CAV), pp. 202–213.
[118] Cimatti, A., Micheli, A., and Roveri, M. (2012a). Solving temporal problems using SMT: Strong controllability. In Proc. Int. Conf. Principles and Practice of Constraint Programming (CP).
[119] Cimatti, A., Micheli, A., and Roveri, M. (2012b). Solving temporal problems using SMT: Weak controllability. In Proc. AAAI.
[120] Cimatti, A.,Micheli, A., and Roveri, M. (2015). Strong temporal planning with uncontrollable durations:A state-space approach. In Proc. AAAI, pp. 1–7.
[121] Cimatti, A., Pistore, M.,Roveri, M., and Traverso, P. (2003).Weak, strong, and strong cyclic planning via symbolic model checking. Artificial Intelligence, 147(1–2):35–84.
[122] Cimatti, A., Roveri, M., and Traverso, P. (1998a). Automatic OBDD-based generation of universal plans in non-deterministic domains. In Proc. AAAI, pp. 875–881.
[123] Cimatti, A., Roveri, M., and Traverso, P. (1998b). Strong planning in non-deterministic domains via model checking. In Proc.AIPS, pp. 36–43.
[124] Clasen, J., Roger, G., Lakemeyer, G., and Nebel, B. (2012). Platas—integrating planning and the action language Golog. KI-Künstliche Intelligenz, 26(1):61–67.
[125] Coates, A., Abbeel, P., and Ng, A. (2009). Apprenticeship learning for helicopter control. Communications ACM, 52(7):97.
[126] Coles, A.and Smith, A. (2007). Marvin: a heuristic search planner with onlinemacro-action learning. J. Artificial Intelligence Research, 28:119–156.
[127] Coles, A. I., Fox, M., Long, D., and Smith, A. J. (2008). Planning with problems requiring temporal coordination. In Proc. AAAI.
[128] Coles, A. J., Coles, A., Fox, M., and Long, D. (2012). COLIN: planning with continuous linear numeric change. J. Artificial Intelligence Research.
[129] Conrad, P., Shah, J., and Williams, B. C. (2009). Flexible execution of plans with choice. In Proc. ICAPS.
[130] Coradeschi, S. and Saffiotti, A. (2002). Perceptual anchoring: a key concept for plan execution in embedded systems. In Beetz, M., Hertzberg, J., Ghallab, M., and Pollack, M. E., editors, Advances in Plan-Based Control of Robotic Agents, pp. 89–105. Springer-Verlag.
[131] Cormen, T., Leirson, C., Rivest, R., and Stein, C. (2001). Introduction to Algorithms. MIT Press.
[132] Culberson, J. C. and Schaeffer, J. (1998). Pattern databases. Computational Intelligence, 14(3):318–334.
[133] Currie, K. and Tate, A. (1991). O-Plan: The open planning architecture. Artificial Intelligence, 52(1):49–86.
[134] Dai, P. and Hansen, E. A. (2007). Prioritizing Bellman backups without a priority queue. In Proc. ICAPS, pp. 113–119.
[135] Dal Lago, U., Pistore, M., and Traverso, P. (2002). Planning with a language for extended goals. In Proc. AAAI, pp. 447–454.
[136] Daniele, M., Traverso, P., and Vardi, M. (1999). Strong cyclic planning revisited. In Proc. European Conf. on Planning (ECP), pp. 35–48.
[137] De Giacomo, G., Iocchi, L.,Nardi, D.,and Rosati, R. (1997).Description logic-based framework for planning with sensing actions. In Intl. Wksp. on Description Logics.
[138] De Giacomo, G., Iocchi, L., Nardi, D., and Rosati, R. (1999).A theory and implementation of cognitive mobile robots. J. Logic and Computation, 9(5):759–785.
[139] De Giacomo, G., Patrizi, F., and Sardina, S. (2013). Automatic behavior composition synthesis. Artificial Intelligence, 196.
[140] de la Rosa, T. and McIlraith, S. (2011). Learning domain control knowledge for TLPlan and beyond. In ICAPS Wksp. on Learning and Planning.
[141] Dean, T., Firby, R., and Miller, D. (1988).Hierarchical planning involving deadlines, travel time and resources. Computational Intelligence, 6(1):381–398.
[142] Dean, T., Givan, R., and Leach, S. (1997). Model reduction techniques for computing approximately optimal solutions for Markov decision processes. In Proc. Conf. on Uncertainty in AI (UAI), pp. 124–131.
[143] Dean, T. and Kanazawa, K. (1989).Amodel for reasoning about persistence and causation. Computational Intelligence, 5(3):142–150.
[144] Dean, T. and Lin, S.-H. (1995). Decomposition techniques for planning in stochastic domains. In Proc. IJCAI, pp. 1121–1127.
[145] Dean, T. and McDermott, D. (1987). Temporal data base management. Artificial Intelligence, 32(1):1–55.
[146] Dean, T. L. and Wellman, M. (1991). Planning and Control.Morgan Kaufmann.
[147] Dechter, R., Meiri, I., and Pearl, J. (1991). Temporal constraint networks. Artificial Intelligence, 49:61–95.
[148] Deisenroth, M.P.,Neumann, G.,and Peters, J. (2013).Asurvey on policy search for robotics. Foundations and Trends in Robotics, 2(1–2):1–142.
[149] Della Penna, G., Magazzeni, D., Mercorio, F., and Intrigila, B. (2009). UPMurphi: A tool for universal planning on PDDL+ problems. In Proc. ICAPS.
[150] Dennett, D. (1996). Kinds of Minds. Perseus.
[151] Derman, C. (1970). Finite State Markovian Decision Processes.Academic Press.
[152] Despouys, O. and Ingrand, F. (1999). Propice-Plan: Toward a unified framework for planning and execution. In Proc. European Conf. on Planning (ECP).
[153] Dietterich, T.G. (2000). Hierarchical reinforcement learning with the maxq value function decomposition. J. Artificial Intelligence Research, 13:227–303.
[154] Do, M.B. and Kambhampati, S. (2001). Sapa:Adomain independent heuristicmetric temporal planner. In Proc. European Conf. on Planning (ECP), pp. 109–121.
[155] Doherty, P. and Kvarnstrom, J. (1999). TALplanner: An empirical investigation of a temporal logic-based forward chaining planner. In Intl. Symp. on Temporal Representation and Reasoning (TIME), pp. 47–54.
[156] Doherty, P. and Kvarnstrom, J. (2001). TALplanner: A temporal logic based planner. AI Magazine, 22(3):95–102.
[157] Doherty, P., Kvarnstrom, J., and Heintz, F. (2009a). A temporal logic-based planning and execution monitoring framework for unmanned aircraft systems. J. Autonomous Agents and Multi-Agent Syst., 19(3):332–377.
[158] Doherty, P., Kvarnstrom, J., and Heintz, F. (2009b). A temporal logic-based planning and execution monitoring framework for unmanned aircraft systems. J. Autonomous Agents and Multi-Agent Syst., 19(3):332–377.
[159] Domshlak, C., Karpas, E., and Markovitch, S. (2012). Online speedup learning for optimal planning. J. Artificial Intelligence Research.
[160] Doran, J.E. and Michie, D. (1966).Experiments with the graph traverser program.Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 294(1437):235–259.
[161] Dorf, R. C. and Bishop, R.H. (2010). Modern Control Systems. Prentice Hall.
[162] Dousson, C.,Gaborit, P.,and Ghallab, M.(1993).Situation recognition:Representation and algorithms. In Proc. IJCAI, pp. 166–172.
[163] Dousson, C. and LeMaigat, P. (2007).Chronicle recognition improvement using temporal focusing and hierarchization. In Proc. IJCAI, pp. 324–329.
[164] Drakengren, T. and Jonsson, P. (1997). Eight maximal tractable subclasses of Allen's algebra with metric time. J. Artificial Intelligence Research, 7:25–45.
[165] Dvorak, F., Bit-Monnot, A., Ingrand, F., and Ghallab, M. (2014). A flexible ANML actor and planner in robotics. In Finzi, A. and Orlandini, A., editors, ICAPS Wksp. on Planning and Robotics, pp. 12–19.
[166] Eaton, J. H. and Zadeh, L.A. (1962). Optimal pursuit strategies in discrete state probabilistic systems. Trans. ASME, 84:23–29.
[167] Edelkamp, S. (2001).Planning with pattern databases. In Proc.European Conf.on Planning (ECP).
[168] Edelkamp, S. (2002). Symbolic pattern databases in heuristic search planning. In Proc. AIPS, pp. 274–283.
[169] Edelkamp, S. (2003). Taming numbers and durations in the model checking integrated planning system. J. Artificial Intelligence Research, 20:195–238.
[170] Edelkamp, S. and Helmert, M. (1999).Exhibiting knowledge in planning problems to minimize state encoding length. In Biundo, S. and Fox, M., editors, Proc. European Conf. on Planning (ECP), volume 1809 of LNAI, pp. 135–147. Springer.
[171] Edelkamp, S. and Helmert, M. (2000).On the implementation of MIPS. In AIPS Wksp. on Model-Theoretic Approaches to Planning, pp. 18–25.
[172] Edelkamp, S. and Kissmann, P. (2009). Optimal symbolic planning with action costs and preferences. In Proc. IJCAI, pp. 1690–1695.
[173] Edelkamp, S., Kissmann, P., and Rohte, M. (2014). Symbolic and explicit search hybrid through perfect hash functions – A case study in Connect Four. In Proc. ICAPS.
[174] Edelkamp, S., Kissmann, P., and Torralba, A. (2015). BDDs strike back (in AI planning). In Proc. AAAI, pp. 4320–4321.
[175] Effinger, R.,Williams, B., and Hofmann, A. (2010).Dynamic execution of temporally and spatially flexible reactive programs. In AAAI Wksp. on Bridging the Gap between Task and Motion Planning, pp. 1–8.
[176] El-Kholy, A. and Richard, B. (1996). Temporal and resource reasoning in planning: the ParcPlan approach. In Proc. ECAI, pp. 614–618.
[177] Elkawkagy, M., Bercher, P., Schattenberg, B., and Biundo, S. (2012). Improving hierarchical planning performance by the use of landmarks. Proc. AAAI.
[178] Emerson, E. A. (1990). Temporal and modal logic. In van Leeuwen, J., editor, Handbook of Theoretical Computer Sci., Volume B: Formal Models and Semantics, pp. 995–1072. Elsevier.
[179] Erol, K., Hendler, J., and Nau, D. S. (1994a). HTN planning: Complexity and expressivity. In Proc. AAAI.
[180] Erol, K., Hendler, J., and Nau, D. S. (1994b). Semantics for hierarchical task-network planning. Technical Report CS TR-3239, Univ. of Maryland.
[181] Erol, K., Hendler, J., and Nau, D. S. (1994c). UMCP:A sound and complete procedure for hierarchical task-network planning. In Proc. AIPS, pp. 249–254.
[182] Erol, K., Nau, D. S., and Subrahmanian, V. S. (1995). Complexity, decidability and undecidability results for domain-independent planning. Artificial Intelligence, 76(1–2):75–88.
[183] Estlin, T.A.,Chien, S.,and Wang, X. (1997). An argument for a hybrid HTN/operator-based approach to planning. In Proc. European Conf. on Planning (ECP), pp. 184–196.
[184] Etzioni, O., Hanks, S., Weld, D. S., Draper, D., Lesh, N., and Williamson, M. (1992). An approach to planning with incomplete information. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 115–125.
[185] Eyerich, P., Mattmuller, R., and Roger, G. (2009). Using the context-enhanced additive heuristic for temporal and numeric planning. In Proc. ICAPS.
[186] Eyerich, P., Mattmuller, R., and Roger, G. (2012). Using the context-enhanced additive heuristic for temporal and numeric planning. In Prassler, E., Zollner, M., Bischoff, R., and Burgard, W.,editors,Towards ServiceRobots for Everyday Environments:RecentAdvances in Designing Service Robots for Complex Tasks in Everyday Environments, pp. 49–64. Springer.
[187] Fargier, H., Jourdan, M., Layaa, N., and Vidal, T. (1998). Using temporal constraint networks to manage temporal scenario of multimedia documents. In ECAI Wksp. on Spatial and Temporal Reasoning.
[188] Feng, Z., Dearden, R., Meuleau, N., and Washington, R. (2004).Dynamic programming for structured continuous Markov decision problems. In Proc. AAAI, pp. 154–161.
[189] Feng, Z. and Hansen, E.A. (2002). Symbolic heuristic search for factoredMarkov decision processes. In Proc. AAAI, pp. 455–460.
[190] Feng, Z., Hansen, E. A., and Zilberstein, S. (2002). Symbolic generalization for on-line planning. In Proc. Conf. on Uncertainty in AI (UAI), pp. 209–216.
[191] Ferguson, D. I. and Stentz, A. (2004). Focussed propagation ofMDPs for path planning. In IEEE Intl. Conf. on Tools with AI (ICTAI), pp. 310–317.
[192] Fernandez, F. and Veloso, M.M.(2006).Probabilistic policy reuse in a reinforcement learning agent. In Proc. AAMAS, pp. 720–727.ACM.
[193] Ferraris, P. and Giunchiglia, E. (2000). Planning as satisfiability in nondeterministic domains. In Proc. AAAI.
[194] Ferrein, A. and Lakemeyer, G. (2008). Logic-based robot control in highly dynamic domains. Robotics and Autonomous Systems, 56(11):980–991.
[195] Fichtner, M., Grosmann, A., and Thielscher, M. (2003). Intelligent execution monitoring in dynamic environments. Fundamenta Informaticae, 57(2–4):371–392.
[196] Fikes, R. E. (1971). Monitored execution of robot plans produced by STRIPS. In <I>IFIP Congress.
[197] Fikes, R. E. and Nilsson, N. J. (1971). STRIPS:A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3):189–208.
[198] Finzi, A., Pirri, F., and Reiter, R. (2000). Open world planning in the situation calculus. In Proc. AAAI, pp. 754–760.
[199] Firby, R. J. (1987). An investigation into reactive planning in complex domains. In Proc. AAAI, pp. 202–206. AAAI Press.
[200] Fisher, M.,Gabbay, D. M., and Vila, L., editors (2005). Handbook of Temporal Reasoning in Artificial Intelligence. Elsevier.
[201] Foka, A. and Trahanias, P. (2007).Real-time hierarchical POMDPs for autonomous robot navigation. Robotics and Autonomous Systems, 55:561–571.
[202] Forestier, J.P.and Varaiya, P. (1978). Multilayer control of large Markov chains.IEEE Trans. Automation and Control, 23:298–304.
[203] Fox, M. and Long, D. (2000). Utilizing automatically inferred invariants in graph construction and search. In Proc. ICAPS, pp. 102–111.
[204] Fox, M. and Long, D. (2003). PDDL2.1: An extension to PDDL for expressing temporal planning domains. J. Artificial Intelligence Research, 20:61–124.
[205] Fox, M. and Long, D. (2006). Modelling mixed discrete-continuous domains for planning. J. Artificial Intelligence Research, 27:235–297.
[206] Frank, J. and Jonsson, A. K. (2003).Constraint-based attribute and interval planning. Constraints, 8(4).
[207] Fraser, G., Steinbauer, G., and Wotawa, F. (2004).Plan execution in dynamic environments. In Proc. Intl. Cognitive Robotics Workshop, pp. 208–217. Springer.
[208] Fratini, S.,Cesta, A., De Benedictis, R., Orlandini, A., and Rasconi, R. (2011).APSI-based deliberation in goal oriented autonomous controllers. In Symp. on Adv. in Space Technologies in Robotics and Automation (ASTRA).
[209] Fu, J., Ng, V.,Bastani, F.B., and Yen, I.-L. (2011). Simple and fast strong cyclic planning for fully-observable nondeterministic planning problems. In Proc. IJCAI, pp. 1949–1954.
[210] Fusier, F., Valentin, V., Bremond, F., Thonnat, M., Borg, M., Thirde, D., and Ferryman, J. (2007).Video understanding for complex activity recognition. Machine Vision and Applications, 18(3–4):167–188.
[211] Garcia, C. E., Prett, D. M., and Morari, M. (1989). Model predictive control: theory and practice – a survey. Automatica, 25(3):335–348.
[212] Garcia, F. and Laborie, P. (1995).Hierarchisation of the search space in temporal planning. In European Wksp. on Planning (EWSP), pp. 235–249.
[213] Garey, M.R. and Johnson, D.S. (1979). Computers and Intractability:AGuide to the Theory of NP-Completeness. W.H. Freeman.
[214] Garrido, A. (2002). A temporal plannnig system for level 3 durative actions of PDDL2.1. In AIPS Wksp. on Planning for Temporal Domains, pp. 56–66.
[215] Geffner, H. (2000). Functional Strips:A more flexible language for planning and problem solving. In Minker, J., editor, Logic-Based Artificial Intelligence, pp. 187–209. Kluwer.
[216] Geffner, H. (2003). PDDL 2.1: Representation vs. computation. J. Artificial Intelligence Research, 20:139–144.
[217] Geffner, H. and Bonet, B. (2013).A Concise Introduction toModels andMethods for Automated Planning. Morgan & Claypool.
[218] Geib, C. and Goldman, R. P. (2009). A probabilistic plan recognition algorithm based on plan tree grammars. Artificial Intelligence, 173:1101–1132.
[219] Gelly, S. and Silver, D. (2007). Combining online and offline knowledge in UCT. In Proc. Intl. Conf. on Machine Learning (ICML), pp. 273–280.
[220] Gerevini, A., Kuter, U., Nau, D. S., Saetti, A., and Waisbrot, N. (2008). Combining domain-independent planning and HTN planning: The Duet planner. In Proc. ECAI, pp. 573–577.
[221] Gerevini, A., Saetti, A., and Serina, I. (2003). Planning through stochastic local search and temporal action graphs in LPG. J. Artificial Intelligence Research, 20:239–290.
[222] Gerevini, A., Saetti, A., and Serina, I. (2005). Integrating planning and temporal reasoning for domains with durations and time windows. In Proc. IJCAI, volume 19, pp. 1226– 1232.
[223] Gerevini, A. and Schubert, L. (1996). Accelerating partial-order planners:Some techniques for effective search control and pruning. J. Artificial Intelligence Research, 5:95–137.
[224] Gerevini, A. and Schubert, L. (2000). Discovering state constraints inDISCOPLAN:Some new results. In Proc. AAAI.
[225] Gerevini, A. and Serina, I. (2002). LPG: A planner based on local search for planning graphs. In Proc. AIPS, pp. 968–973.
[226] Ghallab, M. (1996). On chronicles: Representation, on-line recognition and learning. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 597– 606.
[227] Ghallab, M., Alami, R., and Chatila, R. (1987). Dealing with time in planning and execution monitoring. In Bolles, R. and Roth, B., editors, Intl. Symp. on Robotics Research (ISRR), pp. 431–443. MIT Press.
[228] Ghallab, M. and Laruelle, H. (1994). Representation and control in IxTeT, a temporal planner. In Proc. AIPS, pp. 61–67.
[229] Ghallab, M. and Mounir-Alaoui, A. (1989). Managing efficiently temporal relations through indexed spanning trees. In Proc. IJCAI, pp. 1297–1303.
[230] Ghallab, M., Nau, D. S., and Traverso, P. (2004). Automated Planning: Theory and Practice. Morgann Kaufmann.
[231] Ghallab, M., Nau, D., and Traverso, P. (2014). The actor's view of automated planning and acting:A position paper. Artificial Intelligence, 208:1–17.
[232] Gil, Y. (summer 2005).Description logics and planning. AI Magazine.
[233] Gischer, J. L. (1988). The equational theory of pomsets. Theoretical Computer Science, 61(2):199–224.
[234] Giunchiglia, E. (2000). Planning as satisfiability with expressive action languages: Concurrency, constraints and nondeterminism. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 657–666.
[235] Giunchiglia, F. (1999). Using Abstrips abstractions – where do we stand? AI Review, 13(3):201–213.
[236] Giunchiglia, F. and Traverso, P. (1999). Planning as model checking. In Proc. European Conf. on Planning (ECP), pp. 1–20.
[237] Givan, R., Dean, T., and Greig, M. (2003). Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, 142:163–223.
[238] Golden, K., Etzioni, O., and Weld, D. (1994). Omnipotence without omniscience: Efficient sensor management for planning. In Proc. AAAI, pp. 1048–154.
[239] Goldman, R., Pelican, M., and Musliner, D. (1999). Hard real-time mode logic synthesis for hybrid control:A CIRCA-based approach. In Hybrid Systems and AI: Papers from the AAAI Spring Symp. AAAI Technical Report SS-99-05.
[240] Goldman, R.P., Musliner, D. J., Krebsbach, K.D., and Boddy, M.S. (1997). Dynamic abstraction planning. In Proc. AAAI, pp. 680–686. AAAI Press.
[241] Goldman, R. P., Musliner, D. J., and Pelican, M. J. (2000). Using model checking to plan hard real-time controllers. In AIPS Wksp. on Model-Theoretic Approaches to Planning.
[242] Golumbic, M. and Shamir, R. (1993). Complexity and algorithms for reasoning about time: a graph-theoretic approach. J. ACM, 40(5):1108–1133.
[243] Gonzalez-Ferrer, A., Fernandez-Olivares, J., Castillo, L., et al. (2009). JABBAH: a java application framework for the translation between business process models and HTN. In Proc. Intl.Competition onKnowledge Engineering for Planning and Scheduling (ICKEPS).
[244] Gopal, M. (1963). Control Systems: Principles and Design. McGraw-Hill.
[245] Gregory, P., Long, D., and Fox, M. (2007). A meta-CSP model for optimal planning. In Abstraction, Reformulation, and Approximation, pp. 200–214. Springer.
[246] Gregory, P., Long, D., Fox, M., and Beck, J.C. (2012). Planning modulo theories: Extending the planning paradigm. In Proc. ICAPS.
[247] Gruber, T. (2009). Ontology. In Encyclopedia ofDatabase Systems,pp. 1963–1965. Springer.
[248] Guestrin, C., Hauskrecht, M., and Kveton, B. (2004). Solving factored MDPs with continuous and discrete variables. In Proc. Conf. on Uncertainty in AI (UAI), pp. 235–242.
[249] Guestrin, C., Koller, D., Parr, R., and Venkataraman, S. (2003). Efficient solution algorithms for factored MDPs. J. Artificial Intelligence Research, 19:399–468.
[250] Guez, A. and Pineau, J. (2010). Multi-tasking SLAM. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pp. 377–384.
[251] Hahnel, D., Burgard, W., and Lakemeyer, G. (1998). GOLEX – bridging the gap between logic (GOLOG) and a real robot. In Proc.Annual German Conf. on AI (KI), pp. 165–176. Springer.
[252] Hanks, S. and Firby, R. J. (1990). Issues and architectures for planning and execution. In Proc. Wksp. on Innovative Approaches to Planning, Scheduling and Control, pp. 59–70. Morgan Kaufmann.
[253] Hansen, E.A. (2011). Suboptimality bounds for stochastic shortest path problems. In Proc. Conf. on Uncertainty in AI (UAI), pp. 301–310.
[254] Hansen, E. A. and Zilberstein, S. (2001). LAO*: A heuristic search algorithm that finds solutions with loops. Artificial Intelligence, 129(1):35–62.
[255] Hart, P. E., Nilsson, N. J., and Raphael, B. (1968). A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst., Man, and Cybernetics, pp. 1556– 1562.
[256] Hart, P.E., Nilsson, N. J., and Raphael, B. (1972). Correction to a formal basis for the heuristic determination of minimum cost paths.ACM SIGART Bulletin, 37:28–29.
[257] Hartanto, R. and Hertzberg, J. (2008). Fusing DL reasoning with HTN planning. In Proc. Annual German Conf. on AI (KI), pp. 62–69. Springer.
[258] Haslum, P. (2009). Admissible makespan estimates for PDDL2.1 temporal planning. In ICAPS Wksp. on Heuristics for Domain-Independent Planning.
[259] Haslum, P., Bonet, B., and Geffner, H. (2005). New admissible heuristics for domainindependent planning. In Proc. AAAI.
[260] Haslum, P., Botea, A., Helmert, M., Bonet, B., and Koenig, S. (2007). Domain-independent construction of pattern database heuristics for cost-optimal planning. In Proc. AAAI, volume 7, pp. 1007–1012.
[261] Haslum, P. and Geffner, H. (2000). Admissible heuristics for optimal planning. In Proc. AIPS, pp. 140–149.
[262] Haslum, P. and Geffner, H. (2001). Heuristic plannnig with time and resources. In Proc. European Conf. on Planning (ECP), pp. 121–132.
[263] Hauskrecht, M., Meuleau, N., Kaelbling, L. P., Dean, T., and Boutilier, C. (1998). Hierarchical solution of Markov decision processes using macro-actions. In Proc. Conf. on Uncertainty in AI (UAI), pp. 220–229.
[264] Hawes, N. (2011). A survey of motivation frameworks for intelligent systems. Artificial Intelligence, 175(5):1020–1036.
[265] Heintz, F., Kvarnstrom, J., and Doherty, P. (2010). Bridging the sense-reasoning gap: DyKnow – stream-based middleware for knowledge processing. Advanced Engineering Informatics, 24(1):14–26.
[266] Helmert, M. (2004). A planning heuristic based on causal graph analysis. In Proc. ICAPS.
[267] Helmert, M. (2006). The Fast Downward planning system. J. Artificial Intelligence Research, 26:191–246.
[268] Helmert, M. (2009). Concise finite-domain representations for PDDL planning tasks. Artificial Intelligence, 173(5):503–535.
[269] Helmert, M. and Domshlak, C. (2009). Landmarks, critical paths and abstractions:What's the difference anyway? In Proc. ICAPS, pp. 162–169.
[270] Helmert, M. and Geffner, H. (2008). Unifying the causal graph and additive heuristics. In Proc. ICAPS, pp. 140–147.
[271] Helmert, M., Haslum, P., and Hoffmann, J. (2007). Flexible abstraction heuristics for optimal sequential planning. In Proc. ICAPS, pp. 176–183.
[272] Helmert, M., Haslum, P., and Hoffmann, J. (2008). Explicit-state abstraction:Anew method for generating heuristic functions. In Proc. AAAI, pp. 1547–1550.
[273] Helmert, M., Haslum, P., Hoffmann, J., and Nissim, R. (2014). Merge-and-shrink abstraction: A method for generating lower bounds in factored state spaces. J. ACM, 61(3):16.
[274] Henzinger, T.A. (1996). The theory of hybrid automata. In IEEE Symp. on Logic in Computer Sci., pp. 278–292.
[275] Hoey, J., St-Aubin, R., Hu, A., and Boutilier, C. (1999). SPUDD: Stochastic planning using decision diagrams. In Proc. Conf. on Uncertainty in AI (UAI), pp. 279–288.
[276] Hoffmann, J. (2001). FF: The Fast-Forward planning system. AI Magazine, 22(3):57–62.
[277] Hoffmann, J. (2003). The metric-FF planning system: Translating “ignoring delete lists” to numeric state variables. J. Artificial Intelligence Research, 20:291–341.
[278] Hoffmann, J. (2005). Where “ignoring delete lists”works: local search topology in planning benchmarks. J. Artificial Intelligence Research, pp. 685–758.
[279] Hoffmann, J. and Brafman, R. (2005). Contingent planning via heuristic forward search with implicit belief states. In Proc. ICAPS.
[280] Hoffmann, J. and Nebel, B. (2001). The FF planning system: Fast plan generation through heuristic search. J. Artificial Intelligence Research, 14:253–302.
[281] Hoffmann, J., Porteous, J., and Sebastia, L. (2004). Ordered landmarks in planning. J. Artificial Intelligence Research, 22:215–278.
[282] Hofmann, A. G. and Williams, B. C. (2010). Exploiting spatial and temporal flexibility for exploiting spatial and temporal flexibility for plan execution of hybrid, under-actuated systems. In Cognitive Robotics.
[283] Hogg, C., Kuter, U., and Munoz-Avila, H. (2010). Learning methods to generate good plans: Integrating HTN learning and reinforcement learning. In Proc. AAAI.
[284] Hongeng, S., Nevatia, R., and Bremond, F. (2004). Video-based event recognition: activity representation and probabilistic recognition methods. Computer Vision and Image Understanding, 96(2):129–162.
[285] Hooker, J. N. (2006). Operations research methods in constraint programming. In Rossi, F., van Beek, P., and Walsh, T., editors, Handbook of Constraint Programming, pp. 527–570. Elsevier.
[286] Hopcroft, J. E., Motwani, R., and Ullman, J. D. (2006). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley.
[287] Horowitz, S. S. E. and Rajasakaran, S. (1996). Computer Algorithms. W.H. Freeman.
[288] Howard, R. A. (1971). Dynamic Probabilistic Systems. Wiley.
[289] Huang, R., Chen, Y., and Zhang, W. (2009). An optimal temporally expressive planner: Initial results and application to P2P network optimization. In Proc. ICAPS.
[290] Ibaraki, T. (1976). Theoretical comparision of search strategies in branch and bound. International Journal of Computer and Information Sciences, 5:315–344.
[291] ICAPS (2015). ICAPS competitions. http://icaps-conference.org/index.php/Main/ Competitions. [Accessed: 16 August 2015].
[292] Ingham, M.D., Ragno, R. J., and Williams, B. C. (2001).A reactive model-based programming language for robotic space explorers. In Intl. Symp. on Artificial Intell., Robotics and Automation in Space (i-SAIRAS).
[293] Ingrand, F., Chatilla, R., Alami, R., and Robert, F. (1996). PRS: A high level supervision and control language for autonomous mobile robots. In IEEE Intl. Conf. on Robotics and Automation (ICRA), pp. 43–49.
[294] Ingrand, F. and Ghallab, M. (2015). Deliberation for Autonomous Robots: A Survey. Artificial Intelligence (In Press).
[295] Ivankovic, F., Haslum, P., Thiebaux, S., Shivashankar, V., and Nau, D. (2014). Optimal planning with global numerical state constraints. In Proc. ICAPS.
[296] Iwen, M. and Mali, A.D. (2002). Distributed graphplan. In IEEE Intl. Conf. on Tools with AI (ICTAI), pp. 138–145. IEEE.
[297] Jensen, R. and Veloso, M. (2000). OBDD-based universal planning for synchronized agents in non-deterministic domains. J. Artificial Intelligence Research, 13:189–226.
[298] Jensen, R., Veloso, M., and Bryant, R. (2003). Guided symbolic universal planning. In Proc. ICAPS.
[299] Jensen, R. M., Veloso, M. M., and Bowling, M. H. (2001). OBDD-based optimistic and strong cyclic adversarial planning. In Proc. European Conf. on Planning (ECP).
[300] Jimenez, S., de La Rosa, T., Fernandez, S., Fernandez, F., and Borrajo, D. (2012). A review of machine learning for automated planning. The Knowledge Engg. Review, 27(4):433–467.
[301] Jonsson, A. K., Morris, P. H., Muscettola, N., Rajan, K., and Smith, B. D. (2000). Planning in interplanetary space: Theory and practice. In AIPS, pp. 177–186.
[302] Jonsson, P., Drakengren, T., and Backstrom, C. (1999). Computational complexity of relating time points and intervals. Artificial Intelligence, 109:273–295.
[303] Judah, K., Fern, A. P., and Dietterich, T.G. (2012). Active imitation learning via reduction to IID active learning. In Proc. Conf. on Uncertainty in AI (UAI), pp. 428–437.
[304] Kabanza, F., Barbeau, M., and St-Denis, R. (1997). Planning control rules for reactive agents. Artificial Intelligence, 95(1):67–113.
[305] Kaelbling, L. P., Littman, M. L., and Cassandra, A. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134.
[306] Kambhampati, S. (1993). On the utility of systematicity: Understanding the trade-offs between redundancy and commitment in partial-order planning. In Proc. IJCAI, pp. 1380–1385.
[307] Kambhampati, S. (1995). A comparative analysis of partial order planning and task reduction planning. SIGART Bulletin, 6(1).
[308] Kambhampati, S. (2003). Are we comparing Dana and Fahiem or SHOP and TLPlan? A critique of the knowledge-based planning track at ICP. http://rakaposhi.eas.asu.edu/ kbplan.pdf.
[309] Kambhampati, S. and Hendler, J. A. (1992). A validation-structure-based theory of plan modification and reuse. Artificial Intelligence, 55:193–258.
[310] Kambhampati, S., Knoblock, C. A., and Yang, Q. (1995). Planning as refinement search: A unified framework for evaluating design tradeoffs in partial-order planning. Artificial Intelligence, 76(1–2):167–238.
[311] Kambhampati, S. and Nau, D. S. (1996). On the nature and role of modal truth criteria in planning. Artificial Intelligence, 82(2).
[312] Kambhampati, S. and Srivastava, B. (1995). Universal classical planner: An algorithm for unifying state-space and plan-space planning. In Proc. European Conf. on Planning (ECP).
[313] Karabaev, E. and Skvortsova, O. (2005). Aheuristic search algorithm for solving first-order MDPs. In Proc. Conf. on Uncertainty in AI (UAI), pp. 292–299.
[314] Karaman, S. and Frazzoli, E. (2012). Sampling-based algorithms for optimal motion planning with deterministic μ-calculus specifications. In American Control Conference (ACC), pp. 735–742. IEEE.
[315] Karlsson, L., Bouguerra, A., Broxvall, M., Coradeschi, S., and Saffiotti, A. (2008). To secure an anchor –Arecovery planning approach to ambiguity in perceptual anchoring. AI Communincations, 21(1):1–14.
[316] Karpas, E. and Domshlak, C. (2009). Cost-optimal planning with landmarks. In Proc. IJCAI, pp. 1728–1733.
[317] Karpas, E., Wang, D., Williams, B. C., and Haslum, P. (2015). Temporal landmarks: What must happen, and when. In Proc. ICAPS.
[318] Katz, M. and Domshlak, C. (2008). Optimal additive composition of abstraction-based admissible heuristics. In Proc. ICAPS, pp. 174–181.
[319] Katz, M. and Domshlak, C. (2009). Structural-pattern databases. In Proc. ICAPS.
[320] Kautz, H. and Allen, J. (1986). Generalized plan recognition. In Proc. AAAI, pp. 32–37.
[321] Kautz, H., McAllester, D., and Selman, B. (1996). Encoding plans in propositional logic. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 374–384.
[322] Kautz, H. and Selman, B. (1992). Planning as satisfiability. In Proc. ECAI.
[323] Kautz, H. and Selman, B. (1996). Pushing the envelope: Planning, propositional logic, and stochastic search. In Proc. AAAI, pp. 1194–1201.
[324] Kautz, H.A., Thomas, W., and Vardi, M.Y., editors (2006). Synthesis and Planning, Dagstuhl Seminar Proceedings.
[325] Kearns, M., Mansour, Y., and Ng, A. (2002). A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49:193–208.
[326] Kelleher, G. and Cohn, A. G. (1992). Automatically synthesising domain constraints from operator descriptions. In Proc. ECAI, pp. 653–655.
[327] Keller, T. and Eyerich, P. (2012). PROST: Probabilistic planning based on UCT. Proc. ICAPS, pp. 119–127.
[328] Khatib, L., Morris, P., Morris, R., and Rossi, F. (2001). Temporal constraint reasoning with preferences. In Proc. IJCAI.
[329] Khatib, S. and Siciliano, B. (2007). Handbook of Robotics. Springer.
[330] Kiesel, S. and Ruml, W. (2014). Planning under temporal uncertainty using hindsight optimization. In ICAPS Wksp. on Planning and Robotics, pp. 1–11.
[331] Kissmann, P. and Edelkamp, S. (2009). Solving fully-observable non-deterministic planning problems via translation into a general game. In Proc. Annual German Conf. on AI (KI), pp. 1–8. Springer.
[332] Knight, R., Rabideau, G., Chien, S., Engelhardt, B., and Sherwood, R. (2001). Casper: space exploration through continuous planning. IEEE Intelligent Systems, 16(5):70–75.
[333] Knoblock, C. (1992). An analysis of ABSTRIPS. In Proc.AIPS.
[334] Knoblock, C.A. (1994). Automatically generating abstractions for planning. Artificial Intelligence, 68(2):243–302.
[335] Knoblock, C. A., Tenenberg, J. D., and Yang, Q. (1991). Characterizing abstraction hierarchies for planning. In Proc. AAAI, pp. 692–698.
[336] Knoblock, C. A. and Yang, Q. (1994). Evaluating the trade-offs in partial-order planning algorithms. In AAAI Wksp. on Comparative Analysis of AI Planning Systems.
[337] Knoblock, C. A. and Yang, Q. (1995). Relating the performance of partial-order planning algorithms to domain features. SIGART Bulletin, 6(1).
[338] Knuth, D. E. and Moore, R.W. (1975). An analysis of alpha-beta pruning. Artificial Intelligence, 6:293–326.
[339] Kocsis, L. and Szepesvari, C. (2006). Bandit based Monte-Carlo planning. In Proc. European Conf. on Machine Learning (ECML), volume 4212 of LNAI, pp. 1–12. Springer.
[340] Koehler, J. (1998). Planning under resource constraints. In Proc. ECAI, pp. 489–493.
[341] Koehler, J. (1999). Handling of conditional effects and negative goals in IPP. Technical Report 128, Albert-Ludwigs-Universitat Freiburg.
[342] Koenig, S. (2001). Minimax real-time heuristic search. Artificial Intelligence, 129(1–2):165–197.
[343] Koenig, S. and Simmons, R. (1998). Solving robot navigation problems with initial pose uncertainty using real-time heuristic search. In Proc. AIPS.
[344] Koenig, S. and Simmons, R. G. (1995). Real-time search in non-deterministic domains. In Proc. IJCAI, pp. 1660–1669.
[345] Koller, D. and Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.
[346] Kolobov, A., Mausam, and Weld, D. (2010). SixthSense: Fast and reliable recognition of dead ends in MDPs. In Proc. AAAI.
[347] Kolobov, A., Mausam, and Weld, D. (2012). Stochastic shortest path MDPs with dead ends. In Proc. ICAPS Workshop HSDIP.
[348] Kolobov, A., Mausam, Weld, D., and Geffner, H. (2011). Heuristic search for generalized stochastic shortest path MDPs. In Proc. ICAPS.
[349] Kolobov, A. and Weld, D. (2009). ReTrASE: integrating paradigms for approximate probabilistic planning. In Proc. IJCAI.
[350] Korf, R. (1990). Real-time heuristic search. Artificial Intelligence, 42(2–3):189–211.
[351] Korf, R.E. (1985). Depth-first iterative-deepening: an optimal admissible tree search. Artificial Intelligence, 27:97–109.
[352] Korf, R. E. (1987). Planning as search: A quantitative approach. Artificial Intelligence, 33:65–88.
[353] Korf, R. E. (1993). Linear-space best-first search. Artificial Intelligence, 62(1):41–78.
[354] Koubarakis, M. (1997). From local to global consistency in temporal constraint networks. Theoretical Computer Science, 173(1):89–112.
[355] Kruger, V., Kragic, D., Ude, A., and Geib, C. (2007). The meaning of action: a review on action recognition and mapping. Advanced Robotics, 21(13):1473–1501.
[356] Kumar, V. and Kanal, L. (1983a). The composite decision process: A unifying formulation for heuristic search, dynamic programming and branch and bound procedures. In Proc. AAAI, pp. 220–224.
[357] Kumar, V. and Kanal, L. (1983b). A general branch and bound formulation for understanding and synthesizing and/or tree search procedures. Artificial Intelligence, pp. 179–198.
[358] Kupferman, O., Madhusudan, P., Thiagarajan, P.S., and Vardi, M.Y. (2000). Open systems in reactive environments: Control and synthesis. In Proc. Intl. Conf. on Concurrency Theory (CONCUR), pp. 92–107.
[359] Kupferman, O. and Vardi, M. Y. (2001). Synthesizing distributed systems. In IEEE Symp. on Logic in Computer Sci., pp. 389–398.
[360] Kurzhanskiy, A. A. and Varaiya, P. (2007). Ellipsoidal techniques for reachability analysis of discrete-time linear systems. IEEE Trans. Automat. Contr., 52(1):26–38.
[361] Kuter, U. and Nau, D. (2005). Using domain-configurable search control for probabilistic planning. In Proc. AAAI, pp. 1169–1174.
[362] Kuter, U., Nau, D. S., Pistore, M., and Traverso, P. (2005). A hierarchical task-network planner based on symbolic model checking. In Proc. ICAPS, pp. 300–309.
[363] Kuter, U., Nau, D. S., Pistore, M., and Traverso, P. (2009). Task decomposition on abstract states, for planning under nondeterminism. Artificial Intelligence, 173:669–695.
[364] Kuter, U., Nau, D. S., Reisner, E., and Goldman, R. (2008). Using classical planners to solve nondeterministic planning problems. In Proc. ICAPS, pp. 190–197.
[365] Kuter, U., Sirin, E., Nau, D. S., Parsia, B., and Hendler, J. (2004). Information gathering during planning for web service composition. In McIlraith, S. A., Plexousakis, D., and van Harmelen, F., editors, Proc. Intl. Semantic Web Conf. (ISWC), volume 3298 of LNCS, pp. 335–349. Springer.
[366] Kvarnstrom, J. and Doherty, P. (2001). TALplanner: A temporal logic based forward chaining planner. Annals of Mathematics and Artificial Intelligence, 30:119–169.
[367] Kvarnstrom, J., Doherty, P., and Haslum, P. (2000). Extending TALplanner with concurrency and resources. In Proc. European Conf. on Planning (ECP).
[368] Kveton, B., Hauskrecht, M., and Guestrin, C. (2006). Solving factored MDPs with hybrid state and action variables. J. Artificial Intelligence Research, 27:153–201.
[369] Laborie, P. (2003). Algorithms for propagating resource constraints in ai planning and scheduling: Existing approaches and new results. Artificial Intelligence, 143(2):151–188.
[370] Laborie, P. and Ghallab, M. (1995). Planning with sharable resource constraints. In Proc. IJCAI, pp. 1643–1649.
[371] Laird, J., Rosenbloom, P., and Newell, A. (2012). Universal Subgoaling and Chunking: The Automatic Generation and Learning of Goal Hierarchies, volume 11. Springer Science & Business Media.
[372] Laporte, C. and Arbel, T. (2006). Efficient discriminant viewpoint selection for active Bayesian recognition. Intl. J. Robotics Research, 68(3):267–287.
[373] Lawler, E. L. and Wood, D. E. (1966). Branch-and-bound methods: A survey. Operations Research, 14(4):699–719.
[374] Le Guillou, X., Cordier, M.-O., Robin, S., Roze, L., et al. (2008). Chronicles for on-line diagnosis of distributed systems. In Proc. ECAI, volume 8, pp. 194–198.
[375] Lemai-Chenevier, S. and Ingrand, F. (2004). Interleaving temporal planning and execution in robotics domains. In Proc. AAAI.
[376] Lemaignan, S., Espinoza, R. R., Mosenlechner, L., Alami, R., and Beetz, M. (2010). ORO, a knowledge management platform for cognitive architectures in robotics. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Syst. (IROS).
[377] Lesperance, Y. H. J., Lin, L. F., Marcus, D., Reiter, R., and Scherl, R. (1994). A logical approach to high-level robot programming – a progress report. In AAAI Fall Symp. on Control of the Physical World by Intelligent Agents. AAAI Technical Report FS-94-03.
[378] Levesque, H., Reiter, R., Lesperance, Y., Lin, F., and Scherl, R. (1997a). GOLOG: A logic programming language for dynamic domains. J. Logic Programming, 31:59–84.
[379] Levesque, H. J., Reiter, R., Lesperance, Y., Lin, F., and Scherl, R. (1997b). GOLOG: a logic programming language for dynamic domains. J. Logic Progr., 31:59–83.
[380] Levine, S. J. and Williams, B. C. (2014). Concurrent plan recognition and execution for human-robot teams. In Proc. ICAPS.
[381] Li, H.X. and Williams, B.C. (2008). Generative planning for hybrid systems based on flow tubes. In Proc. ICAPS, pp. 206–213.
[382] Liaskos, S., McIlraith, S. A., Sohrabi, S., and Mylopoulos, J. (2010). Integrating preferences into goal models for requirements engineering. In Intl. Requirements Engg. Conf., pp. 135–144.
[383] Liatsos, V. and Richard, B. (1999). Scalability in planning. In Biundo, S. and Fox, M., editors, Proc. European Conf. on Planning (ECP), volume 1809 of LNAI, pp. 49–61. Springer.
[384] Lifschitz, V. (1987). On the semantics of STRIPS. In Georgeff, M. P. and Lansky, A. L., editors, Reasoning about Actions and Plans: Proc. 1986 Wksp., pp. 1–9. Morgan Kaufmann. Reprinted in [15], pp. 523–530.
[385] Ligozat, G. (1991). On generalized interval calculi. In Proc. AAAI, pp. 234–240.
[386] Likhachev, M., Gordon, G. J., and Thrun, S. (2004). Planning for Markov decision processes with sparse stochasticity. In Adv. in Neural Information Processing Syst. (Proc. NIPS), volume 17.
[387] Lin, S. (1965). Computer solutions of the traveling salesman problem. Bell System Technical Journal, 44(10):2245–2269.
[388] Little, I., Aberdeen, D., and Thiebaux, S. (2005). Prottle: A probabilistic temporal planner. In Proc. AAAI, pp. 1181–1186.
[389] Little, I. and Thiebaux, S. (2007). Probabilistic planning vs. replanning. In ICAPS Wksp. on the Intl. Planning Competition.
[390] Liu, Y. and Koenig, S. (2006). Functional value iteration for decision-theoretic planning with general utility functions. In Proc. AAAI.
[391] Lohr, J., Eyerich, P., Keller, T., and Nebel, B. (2012). A planning based framework for controlling hybrid systems. In Proc. ICAPS.
[392] Lohr, J., Eyerich, P., Winkler, S., and Nebel, B. (2013). Domain predictive control under uncertain numerical state information. In Proc. ICAPS.
[393] Long, D. and Fox, M. (1999). Efficient implementation of the plan graph in STAN. J. Artificial Intelligence Research, 10(1–2):87–115.
[394] Long, D. and Fox, M. (2003a). The 3rd international planning competition: Results and analysis. J. Artificial Intelligence Research, 20:1–59.
[395] Long, D. and Fox, M. (2003b). Exploiting a graphplan framework in temporal planning. In Proc. ICAPS, pp. 52–61.
[396] Lotem, A. and Nau, D. S. (2000). New advances in GraphHTN: Identifying independent subproblems in large HTN domains. In Proc. AIPS, pp. 206–215.
[397] Lotem, A., Nau, D. S., and Hendler, J. (1999). Using planning graphs for solving HTN problems. In Proc. AAAI, pp. 534–540.
[398] Magnenat, S., Chappelier, J.C., and Mondada, F. (2012). Integration of online learning into HTN planning for robotic tasks. In AAAI Spring Symposium.
[399] Maliah, S., Brafman, R., Karpas, E., and Shani, G. (2014). Partially observable online contingent planning using landmark heuristics. In Proc. ICAPS.
[400] Malik, J. and Binford, T. (1983). Reasoning in time and space. In Proc. IJCAI, pp. 343–345.
[401] Mansouri, M. and Pecora, F. (2016). Robot waiters: A case for hybrid reasoning with different types of knowledge. J. Experimental & Theoretical Artificial Intelligence.
[402] Marthi, B., Russell, S., and Wolfe, J. (2007). Angelic semantics for high-level actions. In Proc. ICAPS.
[403] Marthi, B., Russell, S., and Wolfe, J. (2008). Angelic hierarchical planning: Optimal and online algorithms. In Proc. ICAPS, pp. 222–231.
[404] Marthi, B.M., Russell, S. J., Latham, D., and Guestrin, C. (2005). Concurrent hierarchical reinforcement learning. In Proc. AAAI, p. 1652.
[405] Mattmuller, R., Ortlieb, M., Helmert, M., and Bercher, P. (2010). Pattern database heuristics for fully observable nondeterministic planning. In Proc. ICAPS, pp. 105–112.
[406] Mausam, Bertoli, P., and Weld, D. (2007). A hybridized planner for stochastic domains. In Proc. IJCAI, pp. 1972–1978.
[407] Mausam, and Kolobov, A. (2012). Planning with Markov Decision Processes: An AI Perspective. Morgan & Claypool.
[408] Mausam, and Weld, D. (2005). Concurrent probabilistic temporal planning. In Proc. ICAPS.
[409] Mausam, and Weld, D. (2006). Probabilistic temporal planning with uncertain durations. In Proc. AAAI, pp. 880–887.
[410] Mausam, and Weld, D. (2008). Planning with durative actions in stochastic domains. J. Artificial Intelligence Research, 31(1):33–82.
[411] McAllester, D. and Rosenblitt, D. (1991). Systematic nonlinear planning. In Proc. AAAI, pp. 634–639.
[412] McCarthy, J. (1990). Formalizing Common Sense: Papers by John McCarthy. Ablex Publishing.
[413] McCarthy, J. and Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B. and Michie, D., editors, Machine Intelligence 4, pp. 463–502. Edinburgh Univ. Press. Reprinted in [412].
[414] McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive Science, 6:101–155.
[415] McDermott, D. (1991). A reactive plan language. Technical Report YALEU/CSD/RR 864, Yale Univ.
[416] McDermott, D. M. (2000). The 1998 AI planning systems competition. AI Magazine, 21(2):35.
[417] McIlraith, S. A. and Son, T. C. (2002). Adapting GOLOG for composition of semantic web services. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 482–496.
[418] McMahan, H. B. and Gordon, G. J. (2005). Fast exact planning in Markov decision processes. In Proc. ICAPS, pp. 151–160.
[419] Meiri, I. (1990). Faster constraint satisfaction algorithms for temporal reasoning. Tech. report R-151, UC Los Angeles.
[420] Meuleau, N., Benazera, E., Brafman, R. I., and Hansen, E. A. (2009). A heuristic search approach to planning with continuous resources in stochastic domains. J. Artificial Intelligence Research, 34(1):27.
[421] Meuleau, N. and Brafman, R. I. (2007). Hierarchical heuristic forward search in stochastic domains. In Proc. IJCAI, pp. 2542–2549.
[422] Miguel, I., Jarvis, P., and Shen, Q. (2000). Flexible graphplan. In Proc. ECAI, pp. 506–510.
[423] Minton, S., Bresina, J., and Drummond, M. (1991). Commitment strategies in planning: A comparative analysis. In Proc. IJCAI, pp. 259–265.
[424] Minton, S., Drummond, M., Bresina, J., and Philips, A. (1992). Total order vs. partial order planning: Factors influencing performance. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 83–92.
[425] Mitten, L. G. (1970). Branch and bound methods: General formulations and properties. Operations Research, 18:23–34.
[426] Moeslund, T. B., Hilton, A., and Kruger, V. (2006). A survey of advances in vision-based human motion capture and analysis. Computer Vision and Image Understanding, 104(2–3):90–126.
[427] Moffitt, M. D. (2011). On the modelling and optimization of preferences in constraint-based temporal reasoning. Artificial Intelligence, 175(7):1390–1409.
[428] Moffitt, M. D. and Pollack, M. E. (2005). Partial constraint satisfaction of disjunctive temporal problems. In Proc. Intl. Florida AI Research Soc. Conf. (FLAIRS), pp. 715–720.
[429] Molineaux, M., Klenk, M., and Aha, D. (2010). Goal-driven autonomy in a Navy strategy simulation. In Proc. AAAI, pp. 1548–1554.
[430] Morisset, B. and Ghallab, M. (2002a). Learning how to combine sensory-motor modalities for a robust behavior. In Beetz, M., Hertzberg, J., Ghallab, M., and Pollack, M., editors, Advances in Plan-based Control of Robotics Agents, volume 2466 of LNAI, pp. 157–178. Springer.
[431] Morisset, B. and Ghallab, M. (2002b). Synthesis of supervision policies for robust sensory-motor behaviors. In Intl. Conf. on Intell. and Autonomous Syst. (IAS), pp. 236–243.
[432] Morris, P. (2014). Dynamic controllability and dispatchability relationships. In Integration of AI and OR, pp. 464–479.
[433] Morris, P., Muscettola, N., and Vidal, T. (2001). Dynamic control of plans with temporal uncertainty. In Proc. IJCAI, pp. 494–502.
[434] Morris, P.H. and Muscettola, N. (2005). Temporal dynamic controllability revisited. In Proc. AAAI, pp. 1193–1198.
[435] Muise, C., McIlraith, S. A., and Belle, V. (2014). Non-deterministic planning with conditional effects. In Proc. ICAPS.
[436] Muise, C. J., McIlraith, S.A., and Beck, J. C. (2012). Improved non-deterministic planning by exploiting state relevance. In Proc. ICAPS.
[437] Munos, R. and Moore, A.W. (2002). Variable resolution discretization in optimal control. Machine Learning, 49:291–323.
[438] Munoz-Avila, H., Aha, D.W., Nau, D. S., Weber, R., Breslow, L., and Yaman, F. (2001). SiN: Integrating case-based reasoning with task decomposition. In Proc. IJCAI.
[439] Muscettola, N., Dorais, G., Fry, C., Levinson, R., and Plaunt, C. (2002). IDEA: Planning at the core of autonomous reactive agents. In Intl. Wksp. on Planning and Scheduling for Space (IWPSS).
[440] Muscettola, N., Morris, P. H., and Tsamardinos, I. (1998a). Reformulating temporal plans for efficient execution. In Principles of Knowledge Representation and Reasoning, pp. 444–452.
[441] Muscettola, N., Nayak, P. P., Pell, B., and Williams, B. C. (1998b). Remote Agent: To boldly go where no AI system has gone before. Artificial Intelligence, 103:5–47.
[442] Myers, K. L. (1999). CPEF: A continuous planning and execution framework. AI Magazine, 20(4):63–69.
[443] Nareyek, A., Freuder, E. C., Fourer, R., Giunchiglia, E., Goldman, R. P., Kautz, H., Rintanen, J., and Tate, A. (2005). Constraints and AI planning. IEEE Intelligent Systems, 20(2):62–72.
[444] Nau, D. S., Au, T.-C., Ilghami, O., Kuter, U., Munoz-Avila, H., Murdock, J.W., Wu, D., and Yaman, F. (2005). Applications of SHOP and SHOP2. IEEE Intelligent Systems, 20(2):34–41.
[445] Nau, D. S., Au, T.-C., Ilghami, O., Kuter, U.,Murdock, J.W., Wu, D., and Yaman, F. (2003). SHOP2: An HTN planning system. J. Artificial Intelligence Research, 20:379–404.
[446] Nau, D. S., Cao, Y., Lotem, A., and Munoz-Avila, H. (1999). SHOP: Simple hierarchical ordered planner. In Proc. IJCAI, pp. 968–973.
[447] Nau, D. S., Kumar, V., and Kanal, L.N. (1984). General branch and bound, and its relation to A* and AO*. Artificial Intelligence, 23(1):29–58.
[448] Nau, D. S., Munoz-Avila, H., Cao, Y., Lotem, A., and Mitchell, S. (2001). Total-order planning with partially ordered subtasks. In Proc. IJCAI.
[449] Nebel, B. and Burckert, H. (1995). Reasoning about temporal relations: a maximal tractable subclass of Allen's interval algebra. J. ACM, 42(1):43–66.
[450] Newell, A. and Ernst, G. (1965). The search for generality. In Proc. IFIP Congress, volume 65, pp. 17–24.
[451] Newell, A. and Simon, H. A. (1963). GPS, a program that simulates human thought. In Feigenbaum, E. A. and Feldman, J. A., editors, Computers and Thought. McGraw-Hill.
[452] Newton, M. A. H., Levine, J., Fox, M., and Long, D. (2007). Learning macro-actions for arbitrary planners and domains. In Proc. ICAPS.
[453] Ng, A. and Jordan, M. (2000). PEGASUS: a policy search method for large MDPs and POMDPs. In Proc. Conf. on Uncertainty in AI (UAI), pp. 406–415.
[454] Nguyen, N. and Kambhampati, S. (2001). Reviving partial order planning. In Proc. IJCAI.
[455] Nicolescu, M. N. and Mataric, M. J. (2003). Natural methods for robot task learning: instructive demonstrations, generalization and practice. In Proc. AAMAS, pp. 241–248.
[456] Nieuwenhuis, R., Oliveras, A., and Tinelli, C. (2006). Solving SAT and SAT modulo theories: From an abstract Davis-Putnam-Logemann-Loveland procedure to DPLL (T). J. ACM, 53(6):937–977.
[457] Nikolova, E. and Karger, D. R. (2008). Route planning under uncertainty: The Canadian traveller problem. In Proc. AAAI, pp. 969–974.
[458] Nilsson, M., Kvarnstrom, J., and Doherty, P. (2014a). EfficientIDC: A faster incremental dynamic controllability algorithm. In Proc. ICAPS.
[459] Nilsson, M., Kvarnstrom, J., and Doherty, P. (2014b). Incremental dynamic controllability in cubic worst-case time. In Intl. Symp. on Temporal Representation and Reasoning.
[460] Nilsson, N. (1980). Principles of Artificial Intelligence. Morgan Kaufmann.
[461] Oaksford, M. and Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford Univ. Press.
[462] Oates, T. and Cohen, P. R. (1996). Learning planning operators with conditional and probabilistic effects. In Proc.AAAI Spring Symposium on Planning with Incomplete Information for Robot Problems, pp. 86–94.
[463] Ong, S. C. W., Png, S. W., Hsu, D., and Lee, W. S. (2010). Planning under uncertainty for robotic tasks with mixed observability. Intl. J. Robotics Research, 29(8):1053–1068.
[464] Papadimitriou, C. (1994). Computational Complexity. Addison-Wesley.
[465] Parr, R. and Russell, S. J. (1998). Reinforcement learning with hierarchies of machines. In Adv. in Neural Information Processing Syst. (Proc. NIPS), pp. 1043–1049.
[466] Pasula, H., Zettlemoyer, L. S., and Kaelbling, L. P. (2004). Learning probabilistic relational planning rules. In Proc. ICAPS, pp. 73–82.
[467] Pearl, J. (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley.
[468] Pecora, F., Cirillo, M., Dell'Osa, F., Ullberg, J., and Saffiotti, A. (2012). A constraint-based approach for proactive, context-aware human support. J. Ambient Intell. and Smart Environments, 4(4):347–367.
[469] Pednault, E. (1988). Synthesizing plans that contain actions with context-dependent effects. Computational Intelligence, 4:356–372.
[470] Pednault, E. P. (1989). ADL: Exploring the middle ground between STRIPS and the situation calculus. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR), pp. 324–332.
[471] Penberthy, J. and Weld, D. S. (1994). Temporal planning with continuous change. In Proc. AAAI, pp. 1010–1015.
[472] Penberthy, J. S. and Weld, D. (1992). UCPOP: A sound, complete, partial order planner for ADL. In Proc. Intl. Conf. on Principles of Knowledge Representation and Reasoning (KR).
[473] Penna, G. D., Intrigila, B., Magazzeni, D., and Mercorio, F. (2009). Upmurphi: a tool for universal planning on PDDL+ problems. In Proc. ICAPS, pp. 19–23.
[474] Peot, M. and Smith, D. (1992). Conditional nonlinear planning. In Proc. AIPS, pp. 189–197.
[475] Peters, J. and Ng, A. Y. (2009). Special issue on robot learning. Autonomous Robots, 27(1–2).
[476] Petrick, R. and Bacchus, F. (2004). Extending the knowledge-based approach to planning with incomplete information and sensing. In Proc. ICAPS, pp. 2–11.
[477] Pettersson, O. (2005). Execution monitoring in robotics: A survey. Robotics and Autonomous Systems, 53(2):73–88.
[478] Piaget, J. (1951). The Psychology of Intelligence. Routledge.
[479] Piaget, J. (2001). Studies in Reflecting Abstraction. Psychology Press.
[480] Pineau, J., Gordon, G. J., and Thrun, S. (2002). Policy-contingent abstraction for robust robot control. In Proc. Conf. on Uncertainty in AI (UAI), pp. 477–484.
[481] Pineau, J., Montemerlo, M., Pollack, M. E., Roy, N., and Thrun, S. (2003). Towards robotic assistants in nursing homes: Challenges and results. Robotics and Autonomous Systems, 42(3–4):271–281.
[482] Pistore, M., Bettin, R., and Traverso, P. (2001). Symbolic techniques for planning with extended goals in non-deterministic domains. In Proc. European Conf. on Planning (ECP), LNAI. Springer.
[483] Pistore, M., Spalazzi, L., and Traverso, P. (2006). A minimalist approach to semantic annotations for web processes compositions. In Euro. Semantic Web Conf. (ESWC), pp. 620–634.
[484] Pistore, M. and Traverso, P. (2001). Planning as model checking for extended goals in nondeterministic domains. In Proc. IJCAI, pp. 479–484. Morgan Kaufmann.
[485] Pistore, M. and Traverso, P. (2007). Assumption-based composition and monitoring of web services. In Test and Analysis of Web Services, pp. 307–335. Springer.
[486] Pistore, M., Traverso, P., and Bertoli, P. (2005). Automated composition of web services by planning in asynchronous domains. In Proc. ICAPS, pp. 2–11.
[487] Planken, L. R. (2008). Incrementally solving the STP by enforcing partial path consistency. In Proc. Wksp. of the UK Planning and Scheduling Special Interest Group (PlanSIG), pp. 87–94.
[488] Pnueli, A. and Rosner, R. (1989a). On the synthesis of a reactive module. In Proc. ACM Conf. on Principles of Programming Languages, pp. 179–190.
[489] Pnueli, A. and Rosner, R. (1989b). On the synthesis of an asynchronous reactive module. In Proc. Intl. Colloq. Automata, Langs. and Program. (ICALP), pp. 652–671.
[490] Pnueli, A. and Rosner, R. (1990). Distributed reactive systems are hard to synthesize. In 31st Annual Symposium on Foundations of Computer Science, pp. 746–757.
[491] Pohl, I. (1970). Heuristic search viewed as path finding in a graph. Artificial Intelligence, 1(3):193–204.
[492] Pollack, M.E. and Horty, J. F. (1999). There's more to life than making plans: Plan management in dynamic, multiagent environments. AI Magazine, 20(4):1–14.
[493] Porteous, J., Sebastia, L., and Hoffmann, J. (2001). On the extraction, ordering, and usage of landmarks in planning. In Proc. European Conf. on Planning (ECP).
[494] Powell, J., Molineaux, M., and Aha, D. (2011). Active and interactive discovery of goal selection knowledge. In FLAIRS.
[495] Prentice, S. and Roy, N. (2009). The belief roadmap: Efficient planning in belief space by factoring the covariance. IJRR, 28(11–12):1448–1465.
[496] Pryor, L. and Collins, G. (1996). Planning for contingency: A decision based approach. J. Artificial Intelligence Research, 4:81–120.
[497] Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley.
[498] Py, F., Rajan, K., and McGann, C. (2010). A systematic agent framework for situated autonomous systems. In Proc. AAMAS, pp. 583–590.
[499] Pynadath, D. V. and Wellman, M. P. (2000). Probabilistic state-dependent grammars for plan recognition. In Proc. Conf. on Uncertainty in AI (UAI), pp. 507–514.
[500] Quiniou, R., Cordier, M.-O., Carrault, G., and Wang, F. (2001). Application of ILP to cardiac arrhythmia characterization for chronicle recognition. In Inductive Logic Programming, pp. 220–227. Springer.
[501] Rabideau, G., Knight, R., Chien, S., Fukunaga, A., and Govindjee, A. (1999). Iterative repair planning for spacecraft operations in the ASPEN system. In Intl. Symp. on Artificial Intell., Robotics and Automation in Space (i-SAIRAS).
[502] Rabiner, L. and Juang, B. H. (1986). An introduction to hidden Markov models. IEEE ASSP Mag., 3(1):4–16.
[503] Rajan, K. and Py, F. (2012). T-REX: Partitioned inference for AUV mission control. In Roberts, G. N. and Sutton, R., editors, Further Advances in Unmanned Marine Vehicles, pp. 171–199. The Institution of Engg. and Technology.
[504] Rajan, K., Py, F., and Barreiro, J. (2012). Towards deliberative control in marine robotics. In Marine Robot Autonomy, pp. 91–175. Springer.
[505] Ramirez, M. and Geffner, H. (2010). Probabilistic plan recognition using off-the-shelf classical planners. In Proc. AAAI, pp. 1121–1126.
[506] Ramirez, M., Yadav, N., and Sardina, S. (2013). Behavior composition as fully observable non-deterministic planning. In Proc. ICAPS.
[507] Ramirez, M. and Sardina, S. (2014). Directed fixed-point regression-based planning for non-deterministic domains. In Proc. ICAPS.
[508] Reingold, E., Nievergelt, J., and Deo, N. (1977). Combinatorial Optimization. Prentice-Hall.
[509] Richter, S., Helmert, M., and Westphal, M. (2008). Landmarks revisited. In Proc. AAAI, volume 8, pp. 975–982.
[510] Richter, S. and Westphal, M. (2010). The LAMA planner: Guiding cost-based anytime planning with landmarks. J. Artificial Intelligence Research, 39(1):127–177.
[511] Rintanen, J. (1999). Constructing conditional plans by a theorem-prover. J. Artificial Intelligence Research, 10:323–352.
[512] Rintanen, J. (2000). An iterative algorithm for synthesizing invariants. In Proc. AAAI, pp. 1–6.
[513] Rintanen, J. (2002). Backward plan construction for planning as search in belief space. In Proc.AIPS.
[514] Rintanen, J. (2005). Conditional planning in the discrete belief space. In Proc. IJCAI.
[515] Rodriguez-Moreno, M. D., Oddi, A., Borrajo, D., and Cesta, A. (2006). IPSS: A hybrid approach to planning and scheduling integration. IEEE Trans. Knowledge and Data Engg. (TKDE), 18(12):1681–1695.
[516] Ross, S., Pineau, J., Paquet, S., and Chaib-Draa, B. (2008). Online planning algorithms for POMDPs. J. Artificial Intelligence Research, 32:663–704.
[517] Russell, S. and Norvig, P. (2009). Artificial Intelligence: A Modern Approach. Prentice-Hall.
[518] Rybski, P. E., Yoon, K., Stolarz, J., and Veloso, M. M. (2007). Interactive robot task training through dialog and demonstration. In Conference on Human-Robot Interaction, pp. 49–56.
[519] Sacerdoti, E. (1974). Planning in a hierarchy of abstraction spaces. Artificial Intelligence, 5:115–135.
[520] Sacerdoti, E. (1975). The nonlinear nature of plans. In Proc. IJCAI, pp. 206–214. Reprinted in [15], pp. 162–170.
[521] Samadi, M., Kollar, T., and Veloso, M. (2012). Using the Web to interactively learn to find objects. In Proc. AAAI, pp. 2074–2080.
[522] Samet, H. (2006). Foundations of multidimensional and metric data structures. Morgan Kaufmann.
[523] Sandewall, E. (1994). Features and Fluents: The Representation of Knowledge about Dynamical Systems. Oxford Univ. Press.
[524] Sandewall, E. and Ronnquist, R. (1986). A representation of action structures. In Proc. AAAI, pp. 89–97.
[525] Sanner, S. (2010). Relational dynamic influence diagram language (RDDL): Language description. Technical report, NICTA.
[526] Santana, P.H.R.Q.A. and Williams, B.C. (2014). Chance-constrained consistency for probabilistic temporal plan networks. In Proc. ICAPS.
[527] Scherrer, B. and Lesner, B. (2012). On the use of non-stationary policies for stationary infinite-horizon Markov decision processes. In Adv. in Neural Information Processing Syst. (Proc. NIPS), pp. 1826–1834.
[528] Schultz, D. G. and Melsa, J. L. (1967). State functions and linear control systems. McGraw-Hill.
[529] Shah, M., Chrpa, L., Jimoh, F., Kitchin, D., McCluskey, T., Parkinson, S., and Vallati, M. (2013). Knowledge engineering tools in planning: State-of-the-art and future challenges. In ICAPS Knowledge Engg. for Planning and Scheduling (KEPS), pp. 53–60.
[530] Shani, G., Pineau, J., and Kaplow, R. (2012). A survey of point-based POMDP solvers. J. Autonomous Agents and Multi-Agent Syst., pp. 1–51.
[531] Shaparau, D., Pistore, M., and Traverso, P. (2006). Contingent planning with goal preferences. In Proc. AAAI, pp. 927–935.
[532] Shaparau, D., Pistore, M., and Traverso, P. (2008). Fusing procedural and declarative planning goals for nondeterministic domains. In Proc. AAAI, pp. 983–990.
[533] Shivashankar, V., Alford, R., Kuter, U., and Nau, D. (2013). The GoDeL planning system: A more perfect union of domain-independent and hierarchical planning. In Proc. IJCAI, pp. 2380–2386.
[534] Shoham, Y. and McDermott, D. (1988). Problems in formal temporal reasoning. Artificial Intelligence, 36:49–61.
[535] Shoenfield, J. R. (1967). Mathematical Logic. Academic Press.
[536] Shoham, Y. (1987). Temporal logic in AI: semantical and ontological considerations. Artificial Intelligence, 33:89–104.
[537] Sigaud, O. and Peters, J. (2010). From Motor Learning to Interaction Learning in Robots, volume 264 of Studies in Computational Intelligence. Springer.
[538] Silver, D. and Veness, J. (2010). Monte-Carlo planning in large POMDPs. In Adv. in Neural Information Processing Syst. (Proc.NIPS).
[539] Simmons, R. (1992). Concurrent planning and execution for autonomous robots. IEEE Control Systems, 12(1):46–50.
[540] Simmons, R. (1994). Structured control for autonomous robots. IEEE Trans. Robotics and Automation, 10(1):34–43.
[541] Simmons, R. and Apfelbaum, D. (1998). A task description language for robot control. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Syst. (IROS), pp. 1931–1937.
[542] Simpkins, C., Bhat, S., Isbell, Jr., C., and Mateas, M. (2008). Towards adaptive programming: integrating reinforcement learning into a programming language. In Proc. ACM SIGPLAN Conf. on Object-Oriented Progr. Syst., Lang., and Applications (OOPSLA), pp. 603–614. ACM.
[543] Simpson, R. M., Kitchin, D.E., and McCluskey, T. (2007). Planning domain definition using GIPO. The Knowledge Engineering Review, 22(2):117–134.
[544] Sirin, E., Parsia, B., Wu, D., Hendler, J., and Nau, D. S. (2004). HTN planning for Web service composition using SHOP2. J. Web Semant. (JWS), 1(4):377–396.
[545] Smith, D. E., Frank, J., and Cushing, W. (2008). The ANML language. ICAPS Wksp. on Knowledge Engg. for Planning and Scheduling (KEPS).
[546] Smith, D. E., Frank, J., and Jonsson, A. K. (2000). Bridging the gap between planning and scheduling. The Knowledge Engineering Review, 15(1):47–83.
[547] Smith, D.E. and Weld, D. (1999a). Temporal planning with mutual exclusion reasoning. In Proc. IJCAI.
[548] Smith, D. E. and Weld, D. S. (1998). Conformant Graphplan. In Proc. AAAI, pp. 889–896.
[549] Smith, D. E. and Weld, D. S. (1999b). Temporal planning with mutual exclusion reasoning. In Proc. IJCAI, pp. 326–337.
[550] Smith, S. J. J., Hebbar, K., Nau, D. S., and Minis, I. (1997). Integrating electrical and mechanical design and process planning. In Mantyla, M., Finger, S., and Tomiyama, T., editors, Knowledge Intensive CAD, pp. 269–288. Chapman and Hall.
[551] Smith, S. J. J., Nau, D. S., and Throop, T. (1998). Computer bridge: A big win for AI planning. AI Magazine, 19(2):93–105.
[552] Smith, T. and Simmons, R. (2004). Heuristic search value iteration for POMDPs. In Proc. Conf. on Uncertainty in AI (UAI).
[553] Sohrabi, S., Baier, J. A., and McIlraith, S. A. (2009). HTN planning with preferences. In Proc. IJCAI, pp. 1790–1797.
[554] Sohrabi, S. and McIlraith, S. A. (2010). Preference-based web service composition: A middle ground between execution and search. In Proc. Intl. Semantic Web Conf. (ISWC), pp. 713–729. Springer.
[555] Sridharan, M., Wyatt, J. L., and Dearden, R. (2008). HiPPo: Hierarchical POMDPs for planning information processing and sensing actions on a robot. In Proc. ICAPS, pp. 346–354.
[556] Srivastava, B. (2000). RealPlan: Decoupling causal and resource reasoning in planning. In Proc. AAAI, pp. 812–818.
[557] Stedl, J. and Williams, B. (2005). A fast incremental dynamic controllability algorithm. In Proc. ICAPS Wksp. on Plan Execution.
[558] Stulp, F. and Beetz, M. (2008). Refining the execution of abstract actions with learned action models. J. Artificial Intelligence Research, 32(1):487–523.
[559] Taha, H. A. (1975). Integer Programming: Theory, Applications, and Computations. Academic Press.
[560] Tarjan, R. E. (1972). Depth-first search and linear graph algorithms. SIAM J. Computing, 1(2):146–160.
[561] Tate, A. (1977). Generating project networks. In Proc. IJCAI, pp. 888–893.
[562] Tate, A., Drabble, B., and Kirby, R. (1994). O-Plan2: An Architecture for Command, Planning and Control. Morgan-Kaufmann.
[563] Teglas, E., Vul, E., Girotto, V., Gonzalez, M., Tenenbaum, J. B., and Bonatti, L. L. (2011). Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332(6033):1054–1059.
[564] Teichteil-Konigsbuch, F. (2012a). Fast incremental policy compilation from plans in hybrid probabilistic domains. In Proc. ICAPS.
[565] Teichteil-Konigsbuch, F. (2012b). Stochastic safest and shortest path problems. In Proc. AAAI, pp. 1825–1831.
[566] Teichteil-Konigsbuch, F., Infantes, G., and Kuter, U. (2008). RFF: A robust, FF-based MDP planning algorithm for generating policies with low probability of failure. In Proc. ICAPS.
[567] Teichteil-Konigsbuch, F., Kuter, U., and Infantes, G. (2010). Incremental plan aggregation for generating policies in MDPs. In Proc. AAMAS, pp. 1231–1238.
[568] Teichteil-Konigsbuch, F., Vidal, V., and Infantes, G. (2011). Extending classical planning heuristics to probabilistic planning with dead-ends. In Proc. AAAI, pp. 1–6.
[569] Tenorth, M. and Beetz, M. (2013). KnowRob: A knowledge processing infrastructure for cognition-enabled robots. Intl. J. Robotics Research, 32(5):566–590.
[570] Traverso, P., Veloso, M., and Giunchiglia, F., editors (2000). AIPS Wksp. on Model-Theoretic Approaches to Planning.
[571] van den Briel, M., Vossen, T., and Kambhampati, S. (2005). Reviving integer programming approaches for AI planning: A branch-and-cut framework. In Proc. ICAPS, pp. 310–319.
[572] van den Briel, M., Vossen, T., and Kambhampati, S. (2008). Loosely coupled formulations for automated planning: An integer programming perspective. J. Artificial Intelligence Research, 31:217–257.
[573] Vaquero, T. S., Romero, V., Tonidandel, F., and Silva, J. R. (2007). itSIMPLE 2.0: An integrated tool for designing planning domains. In Proc. ICAPS, pp. 336–343.
[574] Vaquero, T. S., Silva, J. R., and Beck, J. C. (2011). A brief review of tools and methods for knowledge engineering for planning & scheduling. In ICAPS Knowledge Engg. for Planning and Scheduling (KEPS), pp. 7–15.
[575] Vardi, M. Y. (1995). An automata-theoretic approach to fair realizability and synthesis. In Proc. Intl. Conf. on Computer Aided Verification, pp. 267–278.
[576] Vardi, M.Y. (2008). From verification to synthesis. In Proc. Intl. Conf. on Verified Software: Theories, Tools, Experiments.
[577] Vattam, S., Klenk, M., Molineaux, M., and Aha, D. W. (2013). Breadth of approaches to goal reasoning: A research survey. In ACS Wksp. on Goal Reasoning.
[578] Velez, J., Hemann, G., Huang, A., Posner, I., and Roy, N. (2011). Planning to perceive: Exploiting mobility for robust object detection. In Proc. ICAPS.
[579] Veloso, M. and Stone, P. (1995). FLECS: planning with a flexible commitment strategy. J. Artificial Intelligence Research, 3:25–52.
[580] Veloso, M.M. and Rizzo, P. (1998). Mapping planning actions and partially-ordered plans into execution knowledge. In Wksp. on Integrating Planning, Scheduling and Execution in Dynamic and Uncertain Environments, pp. 94–97.
[581] Vere, S. (1983). Planning in time: Windows and duration for activities and goals. IEEE Trans. Pattern Analysis and Machine Intell., 5(3):246–264.
[582] Verfaillie, G., Pralet, C., and Michel, L. (2010). How to model planning and scheduling problems using timelines. The Knowledge Engineering Review, 25:319–336.
[583] Verma, V., Estlin, T., Jonsson, A. K., Pasareanu, C., Simmons, R., and Tso, K. (2005). Plan execution interchange language (PLEXIL) for executable plans and command sequences. In Intl. Symp. on Artificial Intell., Robotics and Automation in Space (i-SAIRAS).
[584] Vernhes, S., Infantes, G., and Vidal, V. (2013). Problem splitting using heuristic search in landmark orderings. In Proc. IJCAI, pp. 2401–2407. AAAI Press.
[585] Verweij, T. (2007). A hierarchically-layered multiplayer bot system for a first-person shooter. Master's thesis, Vrije Universiteit Amsterdam.
[586] Vidal, T. and Fargier, H. (1999). Handling contingency in temporal constraint networks: from consistency to controllabilities. J. Experimental & Theoretical Artificial Intelligence.
[587] Vidal, T. and Ghallab, M. (1996). Dealing with uncertain durations in temporal constraint networks dedicated to planning. In Proc. ECAI, pp. 48–52.
[588] Vilain, M. and Kautz, H. (1986). Constraint propagation algorithms for temporal reasoning. In Proc. AAAI, pp. 377–382.
[589] Vilain, M., Kautz, H., and van Beek, P. (1989). Constraint propagation algorithms for temporal reasoning: a revised report. In de Kleer, J. and Weld, D. S., editors, Readings in Qualitative Reasoning about Physical Systems. Morgan-Kaufmann.
[590] Vodrazka, J. and Chrpa, L. (2010). Visual design of planning domains. In Wksp. on Knowledge Engg. for Planning and Scheduling (KEPS), pp. 68–69.
[591] Vu, V.-T., Bremond, F., and Thonnat, M. (2003). Automatic video interpretation: A novel algorithm for temporal scenario recognition. In Proc. IJCAI, pp. 1295–1300.
[592] Waibel, M., Beetz, M., Civera, J., D'Andrea, R., Elfring, J., Galvez-Lopez, D., Haussermann, K., Janssen, R., Montiel, J. M. M., Perzylo, A., Schiessle, B., Tenorth, M., Zweigle, O., and van de Molengraft, R. (2011). RoboEarth. IEEE Robotics and Automation Magazine, 18(2):69–82.
[593] Waldinger, R. (1977). Achieving several goals simultaneously. In Machine Intelligence 8, pp. 94–138. Halstead and Wiley. Reprinted in [15], pp. 118–139.
[594] Walsh, T. J. and Littman, M.L. (2008). Efficient learning of action schemas and Web-service descriptions. In Proc. AAAI.
[595] Wang, F. Y., Kyriakopoulos, K. J., Tsolkas, A., and Saridis, G. N. (1991). A Petri-net coordination model for an intelligent mobile robot. IEEE Trans. Syst., Man, and Cybernetics, 21(4):777–789.
[596] Warren, D. H. D. (1976). Generating conditional plans and programs. In Proc. Summer Conf. on Artificial Intelligence and Simulation of Behaviour.
[597] Weir, A.A.S., Chappell, J., and Kacelnik, A. (2002). Shaping of hooks in New Caledonian crows. Science, 297(5583):981.
[598] Weld, D. (1999). Recent advances in AI planning. AI Magazine, 20(2):93–122.
[599] Weld, D. S. (1994). An introduction to least commitment planning. AI Magazine, 15(4):27–61.
[600] Weld, D. S., Anderson, C. R., and Smith, D. E. (1998). Extending Graphplan to handle uncertainty and sensing actions. In Proc. AAAI, pp. 897–904.
[601] Weld, D. S. and Etzioni, O. (1994). The first law of robotics (a call to arms). In Proc. AAAI, pp. 1042–1047.
[602] Wilkins, D. (2000). Using the SIPE-2 planning system: A manual for version 6.1. Technical report, SRI International.
[603] Wilkins, D. and desJardins, M. (2001). A call for knowledge-based planning. AI Magazine, 22(1):99–115.
[604] Wilkins, D. E. (1988). Practical Planning: Extending the Classical AI Planning Paradigm. Morgan Kaufmann.
[605] Wilkins, D. E. and Myers, K. L. (1995). A common knowledge representation for plan generation and reactive execution. J. Logic and Computation, 5(6):731–761.
[606] Williams, B. C. and Abramson, M. (2001). Executing reactive, model-based programs through graph-based temporal planning. In Proc. IJCAI.
[607] Williams, B. C. and Nayak, P. P. (1996). A model-based approach to reactive self-configuring systems. In Proc. AAAI, pp. 971–978.
[608] Wilson, A., Fern, A. P., and Tadepalli, P. (2012). A Bayesian approach for policy learning from trajectory preference queries. In Adv. in Neural Information Processing Syst. (Proc. NIPS), pp. 1142–1150.
[609] Wingate, D. and Seppi, K.D. (2005). Prioritization methods for accelerating MDP solvers. J. Machine Learning Research, 6:851–881.
[610] Wittgenstein, L. (1999). Philosophical Investigations. Prentice Hall.
[611] Wongpiromsarn, T., Topcu, U., Ozay, N., Xu, H., and Murray, R. M. (2011). TuLiP: a software toolbox for receding horizon temporal logic planning. In 14th Intl. Conf. on Hybrid Syst.: Computation and Control, pp. 313–314. ACM.
[612] Wu, Y. and Huang, T. S. (1999). Vision-based gesture recognition: A review. In Braffort, A., Gherbi, R., Gibet, S., Teil, D., and Richardson, J., editors, Gesture-Based Communication in Human-Computer Interaction, pp. 103–115. Springer.
[613] Xie, F., Muller, M., and Holte, R. (2015). Understanding and improving local exploration for GBFS. In Proc. ICAPS.
[614] Xu, Y., Fern, A., and Yoon, S.W. (2007). Discriminative learning of beam-search heuristics for planning. In Proc. IJCAI.
[615] Yang, Q. (1990). Formalizing planning knowledge for hierarchical planning. Computational Intelligence, 6(1):12–24.
[616] Yang, Q. (1997). Intelligent Planning: A Decomposition and Abstraction Based Approach. Springer.
[617] Yang, Q., Wu, K., and Jiang, Y. (2007). Learning action models from plan examples using weighted MAX-SAT. Artificial Intelligence, 171(2):107–143.
[618] Yoon, S., Fern, A., and Givan, R. (2006). Learning heuristic functions from relaxed plans. In Proc. ICAPS.
[619] Yoon, S., Fern, A. P., and Givan, R. (2007). FF-Replan: A baseline for probabilistic planning. In Proc. ICAPS, pp. 352–359.
[620] Yoon, S., Fern, A. P., Givan, R., and Kambhampati, S. (2008). Probabilistic planning via determinization in hindsight. In Proc. AAAI.
[621] Younes, H. and Littman, M. (2004). PPDDL: The probabilistic planning domain definition language. Technical report, CMU.
[622] Younes, H. and Simmons, R. (2002). On the role of ground actions in refinement planning. In Proc. AIPS, pp. 54–62.
[623] Younes, H. and Simmons, R. (2003). VHPOP: Versatile heuristic partial order planner. J. Artificial Intelligence Research.
[624] Younes, H. and Simmons, R. (2004). Solving generalized semi-Markov decision processes using continuous phase-type distributions. In Proc. AAAI, pp. 742–747.
[625] Zhang, W. (1999). State-space search: Algorithms, complexity, extensions, and applications. Springer Science & Business Media.
[626] Zhuo, H. H., Hu, D. H., Hogg, C., Yang, Q., and Munoz-Avila, H. (2009). Learning HTN method preconditions and action models from partial observations. In Proc. IJCAI, pp. 1804–1810.
[627] Zhuo, H.H., Yang, Q., Hu, D. H., and Li, L. (2010). Learning complex action models with quantifiers and logical implications. Artificial Intelligence, 174(18):1540–1569.
[628] Zimmerman, T. and Kambhampati, S. (2003). Learning-assisted automated planning: Looking back, taking stock, going forward. AI Magazine, 24(2):73.
[629] Ziparo, V. A., Iocchi, L., Lima, P. U., Nardi, D., and Palamara, P. F. (2011). Petri net plans. J. Autonomous Agents and Multi-Agent Syst., 23(3):344–383.