
Modular lazy search for Constraint Satisfaction Problems

Published online by Cambridge University Press:  29 August 2001

THOMAS NORDIN
Affiliation:
Pacific Software Research Center, Oregon Graduate Institute & Portland State University, Portland, OR, USA (e-mail: nordin@cse.ogi.edu)
ANDREW TOLMACH
Affiliation:
Pacific Software Research Center, Oregon Graduate Institute & Portland State University, Portland, OR, USA (e-mail: apt@cs.pdx.edu)

Abstract

We describe a unified, lazy, declarative framework for solving constraint satisfaction problems, an important subclass of combinatorial search problems. These problems are both practically significant and computationally hard. Finding solutions involves combining good general-purpose search algorithms with problem-specific heuristics. Conventional imperative algorithms are usually implemented and presented monolithically, which makes them hard to understand and reuse, even though new algorithms are often combinations of simpler ones. Lazy functional languages, such as Haskell, encourage modular structuring of search algorithms by separating the generation and testing of potential solutions into distinct functions communicating through an explicit, lazy intermediate data structure. But only relatively simple search algorithms have been treated this way in the past. Our framework uses a generic generation and pruning algorithm parameterized by a labeling function that annotates search trees with conflict sets. We show that many advanced imperative search algorithms, including conflict-directed backjumping, backmarking, minimal forward checking, and fail-first dynamic variable ordering, can be obtained by suitable instantiation of the labeling function. More importantly, arbitrary combinations of these algorithms can be built by simply composing their labeling functions. Our modular algorithms are as efficient as the monolithic imperative algorithms in the sense that they make the same number of consistency checks, and most of our algorithms are within a constant factor of their imperative counterparts in runtime and space usage. We believe our framework is especially well suited for experimenting to find good combinations of algorithms for specific problems.
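
To make the shape of the approach concrete, here is a minimal, hypothetical Haskell sketch. All names and types below (module LazyCSP, mkTree, solutions, and so on) are illustrative inventions, not the paper's actual API, and a plain Boolean pruning predicate stands in for the paper's conflict-set labeling functions to keep the example short.

-- Hypothetical sketch only: names and types are illustrative, not the
-- API from the paper.  The paper labels tree nodes with conflict sets;
-- here a Boolean "keep this node alive?" predicate stands in for that.
module LazyCSP where

type Var    = Int
type Value  = Int
type Assign = [(Var, Value)]   -- partial assignment, most recent binding first

-- A lazily generated search tree: each node holds a partial assignment,
-- and its children extend it with every candidate value for the next variable.
data Tree a = Node a [Tree a]

-- Generate the full tree of assignments for variables 1..n over a finite
-- domain.  Laziness means only the parts the consumer demands are built.
mkTree :: Int -> [Value] -> Tree Assign
mkTree n domain = go 1 []
  where
    go v asg
      | v > n     = Node asg []
      | otherwise = Node asg [ go (v + 1) ((v, d) : asg) | d <- domain ]

-- Generic generate-and-prune consumer, parameterized by a pruning
-- predicate.  Dead subtrees are discarded without being generated.
solutions :: Int -> (Assign -> Bool) -> Tree Assign -> [Assign]
solutions n alive (Node asg kids)
  | not (alive asg) = []
  | length asg == n = [asg]
  | otherwise       = concatMap (solutions n alive) kids

-- Pairwise consistency of an assignment against a binary constraint.
consistent :: (Var -> Var -> Value -> Value -> Bool) -> Assign -> Bool
consistent ok asg = and [ ok v w x y | (v, x) <- asg, (w, y) <- asg, v < w ]

-- Example binary constraint: n-queens (variable = row, value = column).
queensOK :: Var -> Var -> Value -> Value -> Bool
queensOK v w x y = x /= y && abs (x - y) /= abs (v - w)

-- Pruning predicates compose by conjunction, loosely mirroring how the
-- paper's labeling functions compose to combine search algorithms.
both :: (a -> Bool) -> (a -> Bool) -> (a -> Bool)
both p q x = p x && q x

main :: IO ()
main = print (length (solutions 6 (consistent queensOK) (mkTree 6 [1 .. 6])))
-- prints 4, the number of solutions to 6-queens

In the framework described in the paper, the Boolean predicate would be replaced by a labeling function that annotates nodes with conflict sets, and labelers corresponding to backjumping, backmarking, forward checking, and dynamic variable ordering would be combined by composition rather than by simple conjunction.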

Type
Research Article
Copyright
© 2001 Cambridge University Press

Footnotes

Work supported, in part, by the US Air Force Materiel Command under contract F19628-96-C-0161, and by NSF grant CA-9703218.