Education matters. The lack of education on functional programming languages and techniques is visible on a daily basis. Our students, co-workers, friends, and colleagues just don't know enough about these ideas and therefore often fail to implement the best possible solutions for their programming problems.
The purpose of Educational Pearls is to address the education problem of our community from many different angles. The column will include contributions on curricular issues, educational software support, and educational experiences. These contributions will help teachers, professors, researchers, and software developers to promote functional programming languages and techniques in their respective contexts.
The following three figures (figures 10, 11 and 12) were not shown in the original published version of the article. These figures constitute the entire static semantics of the STAL type system.
We show that a non-duplicating transformation into Continuation-Passing Style (CPS) has no effect on control-flow analysis, a positive effect on binding-time analysis for traditional partial evaluation, and no effect on binding-time analysis for continuation-based partial evaluation: a monovariant control-flow analysis yields equivalent results on a direct-style program and on its CPS counterpart, a monovariant binding-time analysis yields less precise results on a direct-style program than on its CPS counterpart, and an enhanced monovariant binding-time analysis yields equivalent results on a direct-style program and on its CPS counterpart. Our proof technique amounts to constructing the CPS counterpart of flow information and of binding times. Our results formalize and confirm a folklore theorem about traditional binding-time analysis, namely that CPS has a positive effect on binding times. What may be more surprising is that the benefit does not arise from a standard refinement of program analysis, as, for instance, duplicating continuations. The present study is symptomatic of an unsettling property of program analyses: their quality is unpredictably vulnerable to syntactic accidents in source programs, i.e., to the way these programs are written. More reliable program analyses require a better understanding of the effect of syntactic change.
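To fix intuitions, here is a minimal Python sketch of a Plotkin-style, call-by-value CPS transformation over a toy lambda-calculus AST; the constructors and the fresh-name helper are illustrative assumptions, not the paper's formal development.

```python
from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

_ids = count()
def fresh(prefix):
    return f"{prefix}{next(_ids)}"

def cps(e, k):
    """Plotkin-style call-by-value CPS; k is a syntactic continuation."""
    if isinstance(e, Var):
        return App(k, e)
    if isinstance(e, Lam):
        c = fresh("k")
        # Translated functions take an extra continuation parameter.
        return App(k, Lam(e.param, Lam(c, cps(e.body, Var(c)))))
    f, a = fresh("f"), fresh("a")
    return cps(e.fun, Lam(f, cps(e.arg,
               Lam(a, App(App(Var(f), Var(a)), k)))))

# Direct style: (lambda x. x) y
print(cps(App(Lam("x", Var("x")), Var("y")), Var("halt")))
```

Note that this naive transform manufactures administrative redexes (the applications of the λf and λa continuations it introduces); the later abstracts on linear $\beta$-reduction and flow information study exactly how reducing such redexes interacts with program analysis.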
When I was a student, Simula was one of the languages taught in introductory programming language courses and I vividly remember a sticker one of our instructors had attached to the door of his office, saying “Simula does it with class”. I guess the same holds for Haskell except that Haskell replaces classes by type classes.
This paper presents Biglook, a widget library for an extended version of the Scheme programming language. It uses classes of a CLOS-like object layer to represent widgets and Scheme closures to handle graphical events. Combining the functional and object-oriented programming styles yields an original application programming interface that advocates a strict separation between the implementation of the graphical interfaces and the user-associated commands, enabling compact source code. The Biglook implementation separates the Scheme programming interface from the native back-end, which permits Biglook to be ported to different toolkits: the current version uses GTK+ and Swing, while the previous release used Tk.
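Biglook's API is Scheme, but the style the abstract describes, widgets as objects with closures as event handlers, can be suggested by a rough Python/Tkinter analogue (the wiring below is illustrative, not Biglook's interface):

```python
import tkinter as tk

def make_counter_button(parent):
    # State is captured in a closure rather than stored in a widget subclass.
    count = 0
    button = tk.Button(parent, text="clicked 0 times")
    def on_click():
        nonlocal count
        count += 1
        button.config(text=f"clicked {count} times")
    button.config(command=on_click)   # the closure is the event handler
    return button

root = tk.Tk()
make_counter_button(root).pack()
root.mainloop()
```

The command closure carries its own state, so the construction of the interface stays separate from the behavior attached to it, which is the separation the abstract advocates.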
We characterize the impact of a linear $\beta$-reduction on the result of a control-flow analysis. (By ‘a linear $\beta$-reduction’ we mean the $\beta$-reduction of a linear $\lambda$-abstraction, i.e., of a $\lambda$-abstraction whose parameter occurs exactly once in its body.) As a corollary, we consider the administrative reductions of a Plotkin-style transformation into Continuation-Passing Style (CPS), and how they affect the result of a constraint-based control-flow analysis and, in particular, the least element in the space of solutions. We show that administrative reductions preserve the least solution. Preservation of least solutions solves a problem that was left open in Palsberg and Wand's article ‘CPS Transformation of Flow Information.’ Together, Palsberg and Wand's article and the present article show how to map in linear time the least solution of the flow constraints of a program into the least solution of the flow constraints of the CPS counterpart of this program, after administrative reductions. Furthermore, we show how to CPS transform control-flow information in one pass.
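Continuing the toy AST from the CPS sketch above, a linear $\beta$-reduction can be made concrete as follows (capture-avoiding substitution is elided for brevity):

```python
def occurrences(e, x):
    """Count free occurrences of variable x in e."""
    if isinstance(e, Var):
        return 1 if e.name == x else 0
    if isinstance(e, Lam):
        return 0 if e.param == x else occurrences(e.body, x)
    return occurrences(e.fun, x) + occurrences(e.arg, x)

def substitute(e, x, v):
    """Naive substitution of v for x (assumes no name capture)."""
    if isinstance(e, Var):
        return v if e.name == x else e
    if isinstance(e, Lam):
        return e if e.param == x else Lam(e.param, substitute(e.body, x, v))
    return App(substitute(e.fun, x, v), substitute(e.arg, x, v))

def linear_beta(e):
    """Reduce (lambda x. body) arg only when x occurs exactly once in body."""
    if isinstance(e, App) and isinstance(e.fun, Lam) \
            and occurrences(e.fun.body, e.fun.param) == 1:
        return substitute(e.fun.body, e.fun.param, e.arg)
    return e
```

In the CPS sketch above, each manufactured continuation parameter is used exactly once, so its administrative reductions are instances of linear_beta.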
We consider the question of how a Continuation-Passing-Style (CPS) transformation changes the flow analysis of a program. We present an algorithm that takes the least solution to the flow constraints of a program and constructs in linear time the least solution to the flow constraints for the CPS-transformed program. Previous studies of this question used CPS transformations that had the effect of duplicating code, or of introducing flow sensitivity into the analysis. Our algorithm has the property that for a program point in the original program and the corresponding program point in the CPS-transformed program, the flow information is the same. By carefully avoiding both duplicated code and flow-sensitive analysis, we find that the most accurate analysis of the CPS-transformed program is neither better nor worse than the most accurate analysis of the original. Thus a compiler that needed flow information after CPS transformation could use the flow information from the original program to annotate some program points, and it could use our algorithm to find the rest of the flow information quickly, rather than having to analyze the CPS-transformed program.
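For reference, a monovariant (0-CFA-style) least-solution computation over the same toy AST can be sketched as a naive fixpoint iteration; real analyses use worklists and constraint graphs, and the "labels" here are simply Python object identities:

```python
def cfa(program):
    """Naive least solution of monovariant flow constraints: maps each
    subterm (by id) and each variable to the set of Lam nodes that may
    flow there."""
    def subterms(e):
        yield e
        if isinstance(e, Lam):
            yield from subterms(e.body)
        elif isinstance(e, App):
            yield from subterms(e.fun)
            yield from subterms(e.arg)

    terms = list(subterms(program))
    cache = {id(t): set() for t in terms}   # flow at each subterm
    env = {}                                # flow into each variable

    changed = True
    def flow(src, dst):
        nonlocal changed
        if not src <= dst:
            dst |= src
            changed = True

    while changed:
        changed = False
        for t in terms:
            if isinstance(t, Lam):
                flow({t}, cache[id(t)])
            elif isinstance(t, Var):
                flow(env.setdefault(t.name, set()), cache[id(t)])
            else:  # App: callees' bodies flow to the call site
                for lam in list(cache[id(t.fun)]):
                    flow(cache[id(t.arg)], env.setdefault(lam.param, set()))
                    flow(cache[id(lam.body)], cache[id(t)])
    return env, cache
```

Running cfa on a program and on its CPS counterpart gives two solutions; the abstract's algorithm constructs the second from the first in linear time instead of re-running the fixpoint.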
We present functional implementations of Koda and Ruskey's algorithm for generating all ideals of a forest poset as a Gray code. Using a continuation-based approach, we give an extremely concise formulation of the algorithm's core. Then, in a number of steps, we derive a first-order version whose efficiency is comparable to that of a C implementation given by Knuth.
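For orientation, the underlying enumeration problem can be sketched in a few lines of Python; this recursion lists the ideals of a forest poset but, unlike Koda and Ruskey's algorithm, makes no attempt to order them as a Gray code (successive ideals differing in exactly one element), which is where the continuation-based formulation earns its keep.

```python
def ideals(forest):
    """Yield all ideals (as frozensets of labels) of a forest poset,
    where a node may belong to an ideal only if its parent does.
    A forest is a list of (label, child_forest) pairs."""
    if not forest:
        yield frozenset()
        return
    (label, children), rest = forest[0], forest[1:]
    for rest_ideal in ideals(rest):
        # Either the root is absent (so its whole subtree is absent) ...
        yield rest_ideal
        # ... or the root is present with any ideal of its children.
        for child_ideal in ideals(children):
            yield rest_ideal | child_ideal | {label}

# A single tree a with children b and c: five ideals.
for i in ideals([("a", [("b", []), ("c", [])])]):
    print(sorted(i))
```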
Textual question answering is a technique for extracting a sentence or text snippet from a document or document collection that responds directly to a query. Open-domain textual question answering presupposes that questions are natural and unrestricted with respect to topic. The question answering (Q/A) techniques, as embodied in today's systems, can be roughly divided into two types: (1) techniques for Information Seeking (IS), which localize the answer in vast document collections; and (2) techniques for Reading Comprehension (RC), which answer a series of questions related to a given document. Although these two types of techniques and systems are different, it is desirable to combine them to enable more advanced forms of Q/A. This paper discusses an approach that successfully enhanced an existing IS system with RC capabilities. This enhancement is important because advanced Q/A, as exemplified by the ARDA AQUAINT program, is moving towards systems that incorporate semantic and pragmatic knowledge in support of dialogue-based Q/A. Because today's RC systems handle a short series of questions in context, they represent a rudimentary form of interactive Q/A and thus a possible foundation for more advanced forms of dialogue-based Q/A.
This study aims to improve the performance of identifying grammatical functions between an adnoun clause and a noun phrase in Korean. The key task is to determine the relation between the two constituents in terms of functional categories such as subject, object, adverbial, and appositive. The problem is difficult mainly because the functional morphemes that are crucial for identifying the relation are omitted in the noun phrases. To tackle this problem, we propose to employ Support Vector Machines (SVMs) in determining the grammatical functions. Through an experiment with a tagged corpus for training the SVMs, we found the proposed model to be more useful than both the Maximum Entropy Model (MEM) and the backed-off model.
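A minimal scikit-learn sketch of the classification setup; the feature names, values, and labels below are invented placeholders rather than the paper's feature set:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical training instances: features of an adnoun-clause /
# noun-phrase pair, labelled with the grammatical function to recover.
train = [
    ({"head_noun": "사람", "verb": "읽다", "josa_present": False}, "subject"),
    ({"head_noun": "책",   "verb": "읽다", "josa_present": False}, "object"),
    ({"head_noun": "곳",   "verb": "읽다", "josa_present": False}, "adverbial"),
]
X, y = zip(*train)

model = make_pipeline(DictVectorizer(), SVC(kernel="linear"))
model.fit(X, y)
print(model.predict([{"head_noun": "책", "verb": "읽다",
                      "josa_present": False}]))
```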
This paper has two purposes. First, it suggests a formal approach for specifying and verifying lingware. This approach is based on a unified notation for the main existing formalisms for describing linguistic knowledge (e.g., formal grammars, unification grammars, HPSG) on the one hand, and on the integration of data and processing on the other. Accordingly, a lingware specification includes all related aspects in a unified framework, which facilitates development, since one follows a single development process instead of two separate ones. Second, it presents an environment for the formal specification of lingware, based on the suggested approach, that is restricted neither to a particular kind of application nor to a particular class of linguistic formalisms. This environment provides interfaces for specifying both the linguistic knowledge and the functional aspects of a lingware system. Linguistic knowledge is specified with the usual grammatical formalisms, whereas functional aspects are specified with a suitable formal notation. Both descriptions are integrated into the same framework to obtain a complete requirement specification that can be refined towards an executable program.
In this paper, we describe a system for coreference resolution and emphasize the role of evaluation for its design. The goal of the system is to group referring expressions (identified beforehand in narrative texts) into sets of coreferring expressions that correspond to discourse entities. Several knowledge sources are distinguished, such as referential compatibility between a referring expression and a discourse entity, activation factors for discourse entities, size of working memory, or meta-rules for the creation of discourse entities. For each of them, the theoretical analysis of its relevance is compared to scores obtained through evaluation. After looping through all knowledge sources, an optimal behavior is chosen, then evaluated on test data. The paper also discusses evaluation measures as well as data annotation, and compares the present approach to others in the field.
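As a very rough sketch of this kind of architecture (the compatibility predicate and the activation window below stand in for the paper's knowledge sources, which are richer):

```python
def resolve(mentions, compatible, activation_window=10):
    """Greedy sketch: link each referring expression to the most recently
    activated compatible discourse entity, or start a new one.
    `mentions` is a list of (position, features) pairs; `compatible` is a
    user-supplied predicate over two feature structures."""
    entities = []   # each entity: {"mentions": [...], "last_pos": int}
    for pos, feats in mentions:
        candidates = [e for e in entities
                      if pos - e["last_pos"] <= activation_window
                      and all(compatible(feats, m) for m in e["mentions"])]
        if candidates:
            entity = max(candidates, key=lambda e: e["last_pos"])
            entity["mentions"].append(feats)
            entity["last_pos"] = pos
        else:
            entities.append({"mentions": [feats], "last_pos": pos})
    return entities
```

The paper's point is precisely that each such design choice (window size, compatibility test, entity-creation rule) should be selected by evaluation rather than by intuition.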
Constraint-based reasoning is often used to represent and find solutions to configuration problems. In the field of constraint satisfaction, the major focus has been on finding solutions to difficult problems. However, many real-life configuration problems, although not extremely complicated, have a huge number of solutions, few of which are acceptable from a practical standpoint. In this paper we present a value ordering heuristic for constraint solving that attempts to guide search toward solutions that are acceptable. More specifically, by considering weights that are assigned to values and sets of values, the heuristic can guide search toward solutions for which the total weight is within an acceptable interval. Experiments with random constraint satisfaction problems demonstrate that, when a problem has numerous solutions, the heuristic makes search extremely efficient even when there are relatively few solutions that fall within the interval of acceptable weights. In these cases, an algorithm that is very effective for finding a feasible solution to a given constraint satisfaction problem (the “maintained arc consistency” algorithm or MAC) does not find a solution in the same weight interval within a reasonable time when it is run without the heuristic.
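A minimal backtracking sketch of the idea; the weight bookkeeping, pruning test, and value ordering below are illustrative rather than the paper's exact heuristic:

```python
def search(domains, weights, constraints, lo, hi, assignment=None, total=0):
    """Backtracking with a weight-based value ordering: try values whose
    projected total weight stays closest to the middle of [lo, hi].
    Each constraint is a predicate that returns True unless the partial
    assignment already violates it."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment if lo <= total <= hi else None
    var = next(v for v in domains if v not in assignment)
    rest = [v for v in domains if v not in assignment and v != var]
    # Bounds on the weight the remaining variables can still contribute.
    min_rest = sum(min(weights[v][d] for d in domains[v]) for v in rest)
    max_rest = sum(max(weights[v][d] for d in domains[v]) for v in rest)
    target = (lo + hi) / 2
    for val in sorted(domains[var],
                      key=lambda d: abs(total + weights[var][d]
                                        + (min_rest + max_rest) / 2 - target)):
        w = weights[var][val]
        if total + w + max_rest < lo or total + w + min_rest > hi:
            continue  # the acceptable interval is unreachable: prune
        assignment[var] = val
        if all(c(assignment) for c in constraints):
            result = search(domains, weights, constraints, lo, hi,
                            assignment, total + w)
            if result is not None:
                return result
        del assignment[var]
    return None
```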
Configuration problems often involve large product catalogs, and a given user request can be met by many different kinds of parts from the catalog. Hence, configuration problems are often weakly constrained and have many solutions. However, the user may discard many of those solutions as long as more interesting ones are possible: the user often prefers certain choices to others (e.g., a red car to a blue one) or prefers solutions that minimize or maximize criteria such as price and quality. In order to provide satisfactory solutions, a configurator needs to address user preferences and user wishes. Another important problem is to provide high-level features to control different reasoning tasks such as solution search, explanation, consistency checking, and reconfiguration. We address those problems by introducing a preference programming system that provides a new paradigm for expressing user preferences and user wishes, together with search strategies, in a declarative and unified way, such that they can be embedded in a constraint and rule language. The preference programming approach is completely open and dynamic: preferences can be assembled from different sources such as business rules, databases, annotations of the object model, or user input. An advanced topic is to elicit preferences from user interactions, especially from explanations of why a user rejects proposed choices. Our preference programming system has been used successfully in different configuration domains such as loan configuration, service configuration, and other problems.
In the automotive industry, the compilation and maintenance of correct product configuration data is a complex task. Our work shows how formal methods can be applied to the validation of such business-critical data. Our consistency support tool BIS works on an existing database of Boolean constraints expressing valid configurations and their transformation into manufacturable products. Using a specially modified satisfiability checker with an explanation component, BIS can detect inconsistencies in the constraint set and thus help increase the quality of the product data. BIS also supports manufacturing decisions by calculating the implications of product or production environment changes on the set of required parts. In this paper, we give a comprehensive account of BIS: the formalization of the business processes underlying its construction, the modifications of satisfiability-checking technology we found necessary in this context, and the software technology used to package the product as a client–server information system.
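The flavor of such a consistency check can be suggested with the python-sat library; the clause encoding below is an invented toy, not BIS's modified solver:

```python
from pysat.solvers import Minisat22

# Hypothetical encoding: 1 = diesel engine, 2 = petrol engine, 3 = towbar.
clauses = [
    [1, 2],      # every car has an engine: diesel OR petrol
    [-1, -2],    # ... but not both
    [-3, 1],     # a towbar requires the diesel engine
]

with Minisat22(bootstrap_with=clauses) as solver:
    # Is "petrol with towbar" a valid configuration? Assume 2 and 3.
    if solver.solve(assumptions=[2, 3]):
        print("consistent, e.g.", solver.get_model())
    else:
        # The failed assumptions act as a crude explanation.
        print("inconsistent, core:", solver.get_core())
```

Here the core returned on failure plays the role of an explanation component: it names the user choices that cannot be realized together.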
The paper introduces and discusses the notion of decomposition of a configuration problem within the framework of a structured logical approach. It describes under which conditions a given configuration problem can be decomposed into a set of noninteracting subproblems and how to exploit such a decomposition, both for improving the performance of the configurator and for supporting interactive configuration. Different kinds of decomposition are considered, but all of them exploit, as much as possible, the explicit representation of partonomic relations in the language, a KL-ONE-like representation formalism augmented with constraints for expressing complex interrole relations. The paper introduces a notion of boundness among constraints, which is used for formally specifying different types of decomposition. One decomposition strategy aims at singling out the components and subcomponents that are directly related to the constraints imposed by the user's requirements; the configurator exploits such a decomposition by first configuring that portion of the product and then configuring the parts that are not related to the user's requirements. Another decomposition strategy verifies whether the set of constraints for the product to be configured can be split into a set of noninteracting problems; in such a case the configurator solves the configuration problem by splitting the whole search space into a set of smaller search spaces. Different combinations of these two decomposition techniques are considered, and the impact of the decomposition strategies on the performance of the configurator is evaluated via a set of experiments using the configuration of computer systems as a test bed. The results show a significant reduction of the computational effort (both in the number of backtrackings and in CPU time) when decomposition strategies are used.
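In its simplest form, the second strategy, checking whether a constraint set splits into noninteracting subproblems, is a connected-components computation on the constraint graph. A minimal union-find sketch (the representation of constraints as (name, variable-set) pairs is an assumption):

```python
def decompose(constraints):
    """Split constraints into noninteracting groups: two constraints
    interact iff they share a variable. `constraints` is a list of
    (name, non_empty_set_of_variables) pairs."""
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for _, vars_ in constraints:
        vs = list(vars_)
        for v in vs[1:]:
            union(vs[0], v)

    groups = {}
    for name, vars_ in constraints:
        groups.setdefault(find(next(iter(vars_))), []).append(name)
    return list(groups.values())

# c1 and c2 share y, so they form one subproblem; c3 is independent.
print(decompose([("c1", {"x", "y"}), ("c2", {"y", "z"}), ("c3", {"w"})]))
```

Each resulting group can then be solved in its own, smaller search space, which is the effect the paper's experiments measure.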
Today's economy exhibits a growing trend toward highly specialized solution providers cooperatively offering configurable products and services to their customers. This paradigm shift requires the extension of current standalone configuration technology with capabilities of knowledge sharing and distributed problem solving. In this context a standardized configuration knowledge representation language with formal semantics is needed in order to support knowledge interchange between different configuration environments. Languages such as Ontology Inference Layer (OIL) and DARPA Agent Markup Language (DAML+OIL) are based on such formal semantics (description logic) and are very popular for knowledge representation in the Semantic Web. In this paper we analyze the applicability of those languages with respect to configuration knowledge representation and discuss additional demands on expressivity. For joint configuration problem solving it is necessary to agree on a common problem definition. Therefore, we give a description logic based definition of a configuration problem and show its equivalence with existing consistency-based definitions, thus joining the two major streams in knowledge-based configuration (description logics and predicate logic/constraint based configuration).
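The consistency-based definition alluded to can be paraphrased (not the paper's exact notation) as follows: a configuration problem is a pair $(DD, SRS)$ of a domain description and specific user requirements, both sets of logical sentences, and a set of sentences $CONF$ describing a concrete product is a consistent configuration exactly when $DD \cup SRS \cup CONF$ is satisfiable. The paper's contribution is a description logic counterpart of this definition and a proof that the two coincide.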