Refactoring is the process of changing the design of a program without changing what it does. Typical refactorings, such as function extraction and generalisation, are intended to make a program more amenable to extension, more comprehensible and so on. Refactorings differ from other sorts of program transformation in being applied to source code, rather than to a ‘core’ language within a compiler, and also in having an effect across a code base, rather than to a single function definition, say. Because of this, there is a need to give automated support to the process. This paper reflects on our experience of building tools to refactor functional programs written in Haskell (HaRe) and Erlang (Wrangler). We begin by discussing what refactoring means for functional programming languages, first in theory, and then in the context of a larger example. Next, we address system design and details of system implementation as well as contrasting the style of refactoring and tooling for Haskell and Erlang. Building both tools led to reflections about what particular refactorings mean, as well as requiring analyses of various kinds, and we discuss both of these. We also discuss various extensions to the core tools, including integrating the tools with test frameworks; facilities for detecting and eliminating code clones; and facilities to make the systems extensible by users. We then reflect on our work by drawing some general conclusions, some of which apply particularly to functional languages, while many others are of general value.
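To make the kind of transformation concrete, the sketch below shows a source-level function extraction in Haskell, the sort of refactoring HaRe automates; the example code and names are illustrative and not taken from HaRe itself.

```haskell
-- Before: the per-entry formatting logic is inlined in 'report'.
report :: [(String, Int)] -> String
report xs = unlines [ name ++ ": " ++ show n | (name, n) <- xs ]

-- After a function-extraction refactoring: the comprehension body is
-- lifted into a named, reusable definition and the caller is rewritten.
formatEntry :: (String, Int) -> String
formatEntry (name, n) = name ++ ": " ++ show n

report' :: [(String, Int)] -> String
report' xs = unlines (map formatEntry xs)
```

Unlike a compiler transformation, such a rewrite works on the source text itself and must leave every other use site in the code base consistent, which is why automated tool support matters.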
This paper presents a library for programming with polymorphic dynamic types in the dependently typed programming language Agda. The resulting library allows dynamically typed values with a polymorphic type to be instantiated to a less general (possibly monomorphic) type without compromising type soundness.
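For readers more familiar with Haskell than Agda, the closest standard facility is Data.Dynamic, sketched below; note that it is monomorphic (a value can only be recovered at the exact type at which it was injected), which is precisely the limitation the paper's polymorphic dynamic types remove.

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list of dynamically typed values.
values :: [Dynamic]
values = [toDyn (42 :: Int), toDyn "hello", toDyn not]

-- Recovery succeeds only at the exact stored type; a polymorphic value
-- such as 'id' cannot be stored once and later instantiated at both
-- Int and String, which is the gap the Agda library closes.
recovered :: Maybe Int
recovered = fromDynamic (head values)   -- Just 42

main :: IO ()
main = print recovered
```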
This paper presents a semantics of self-adjusting computation and proves that the semantics is correct and consistent. The semantics introduces memoizing change propagation, which enhances change propagation with the classic idea of memoization to enable reuse of computations even when memory is mutated via side effects. During evaluation, computation reuse via memoization triggers a change-propagation algorithm that adapts the reused computation to the memory mutations (side effects) that took place since the creation of the computation. Since the semantics includes both memoization and change propagation, it involves both non-determinism (due to memoization) and mutation (due to change propagation). Our consistency theorem states that the non-determinism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are compatible with purely functional programming. We formalize the semantics and its meta-theory in the LF logical framework and machine-check our proofs using Twelf.
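The toy below, written in Haskell rather than the paper's formal setting, only illustrates the interplay the semantics has to account for: a memoized result is reused while the input is unchanged and recomputed after a mutation. The real change-propagation semantics adapts the stored computation rather than rerunning it from scratch; all names here are invented for the example.

```haskell
import Data.IORef

-- A toy "modifiable" input paired with a cached result.
data Cell a b = Cell
  { input :: IORef a
  , cache :: IORef (Maybe (a, b))  -- (input it was computed from, result)
  , fun   :: a -> b
  }

newCell :: a -> (a -> b) -> IO (Cell a b)
newCell x f = Cell <$> newIORef x <*> newIORef Nothing <*> pure f

-- Mutate the input; the cached result becomes stale.
setInput :: Cell a b -> a -> IO ()
setInput c = writeIORef (input c)

-- Reuse the memoized run when the input is unchanged, recompute otherwise.
readOutput :: Eq a => Cell a b -> IO b
readOutput c = do
  x <- readIORef (input c)
  m <- readIORef (cache c)
  case m of
    Just (x', y) | x' == x -> pure y        -- reuse
    _ -> do                                  -- input changed: recompute
      let y = fun c x
      writeIORef (cache c) (Just (x, y))
      pure y

main :: IO ()
main = do
  c <- newCell [3, 1, 2] sum
  readOutput c >>= print      -- computed: 6
  readOutput c >>= print      -- reused:   6
  setInput c [3, 1, 2, 10]
  readOutput c >>= print      -- recomputed after mutation: 16
```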
Generative product design systems used in the context of mass customization are required to generate diverse solutions quickly and reliably without necessitating modification or tuning during use. When such systems are employed to allow for the mass customization of product form, they must be able to handle mass production and engineering constraints that can be time-consuming to evaluate and difficult to fulfill. These issues are related to how the constraints are handled in the generative design system. This article evaluates two promising sequential constraint-handling techniques and the often-used weighted sum technique with regard to convergence time, convergence rate, and diversity of the design solutions. The application used for this purpose was a design system aimed at generating a table with an advanced form: a structure based on a Voronoi diagram. The design problem was constrained in terms of production as well as stability, requiring a time-consuming finite element evaluation. Regarding convergence time and rate, one of the sequential constraint-handling techniques performed significantly better than the weighted sum technique. Nevertheless, the weighted sum technique presented respectable results and therefore remains a relevant technique. Regarding diversity, none of the techniques could generate diverse solutions in a single search run. In contrast, the solutions from different searches were always diverse. Solution diversity is thus gained at the cost of more runs, but no evaluation of the diversity of the solutions is needed. This result is important, because a diversity evaluation function would otherwise have to be developed for every new type of design. Efficient handling of complex constraints is an important step toward mass customization of nontrivial product forms.
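As a point of reference, the weighted-sum technique that the sequential approaches are compared against can be sketched in a few lines: constraint violations are folded into a single scalar objective via fixed weights. The fields, weights, and limits below are hypothetical stand-ins for the article's production and stability constraints, not its actual model.

```haskell
-- One candidate table design, reduced to the quantities needed here
-- (hypothetical fields, not the article's model).
data Design = Design
  { aestheticScore   :: Double  -- objective to maximise
  , maxStressRatio   :: Double  -- stability constraint: must be <= 1
  , minMemberWidthMM :: Double  -- production constraint: must be >= 8
  }

-- Weighted-sum handling: each constraint violation becomes a penalty
-- added to the negated objective with a fixed weight, so a
-- single-objective search can simply minimise this cost.
weightedSumCost :: Design -> Double
weightedSumCost d =
    negate (aestheticScore d)
      + wStress * violation (maxStressRatio d - 1)      -- stress above limit
      + wWidth  * violation (8 - minMemberWidthMM d)    -- width below limit
  where
    wStress = 10
    wWidth  = 5
    violation x = max 0 x   -- zero when the constraint is satisfied

main :: IO ()
main = print (weightedSumCost (Design 0.7 1.2 6))
  -- roughly 11.3 for this infeasible design: 10*0.2 + 5*2 - 0.7
```

The drawback the article probes is visible even in this sketch: the weights have to be tuned by hand, and a time-consuming evaluation (such as the finite element analysis) must still be run for every candidate.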
Machine learning techniques have been implemented to extract instances of semantic relations using diverse features based on linguistic knowledge, such as tokens, lemmas, PoS-tags, or dependency paths. However, little work has been aimed at determining which of these features work best for relation extraction, and even less for languages other than English. In this paper, various features representing different levels of linguistic knowledge are systematically evaluated for biographical relation extraction. The effectiveness of these features was measured by training several supervised classifiers that differ only in the type of linguistic knowledge used to define their features. The experiments performed in this paper show that basic linguistic knowledge (provided by lemmas and their combination into bigrams) performs better than more complex features, such as those based on syntactic analysis. Furthermore, some feature combinations using different levels of analysis are proposed in order (i) to avoid feature overlap and (ii) to evaluate the use of computationally inexpensive and widespread tools such as tokenization and lemmatization. This paper also describes two new freely available corpora for biographical relation extraction in Portuguese and Spanish, built by means of a distant-supervision strategy. Using these corpora, experiments were performed with five semantic relations and two languages.
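The shallow lexical features that turn out to work well can be illustrated with a short sketch; the lemmas and feature names below are made up for the example, and the supervised classifiers and corpora themselves are not reproduced.

```haskell
import qualified Data.Map.Strict as M

-- Turn a lemmatised sentence into sparse unigram and lemma-bigram
-- counts, the kind of inexpensive lexical feature the experiments
-- favour over deeper syntactic features. Lemmas are illustrative.
lemmaFeatures :: [String] -> M.Map String Int
lemmaFeatures lemmas =
    M.fromListWith (+) (unigrams ++ bigrams)
  where
    unigrams = [ ("lemma=" ++ w, 1) | w <- lemmas ]
    bigrams  = [ ("bigram=" ++ a ++ "_" ++ b, 1)
               | (a, b) <- zip lemmas (drop 1 lemmas) ]

main :: IO ()
main = mapM_ print (M.toList (lemmaFeatures ["gabriel", "nacer", "en", "lisboa"]))
```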
Given an edge colouring of a graph with a set of m colours, we say that the graph is exactly m-coloured if each of the colours is used. We consider edge colourings of the complete graph on $\mathbb{N}$ with infinitely many colours and show that either one can find an exactly m-coloured complete subgraph for every natural number m, or there exists an infinite subset $X \subset \mathbb{N}$ coloured in one of two canonical ways: either the colouring is injective on X, or there exists a distinguished vertex $v \in X$ such that $X \setminus \{v\}$ is 1-coloured and each edge between v and $X \setminus \{v\}$ has a distinct colour (all different to the colour used on $X \setminus \{v\}$). This answers a question posed by Stacey and Weidl in 1999. The techniques that we develop also enable us to resolve some further questions about finding exactly m-coloured complete subgraphs in colourings with finitely many colours.
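For reference, the central notion can be restated compactly; this merely reformulates the definition given above.

```latex
% Given a colouring c : E(K_X) \to \mathcal{C} of the complete graph on a
% vertex set X, the graph is exactly m-coloured when precisely m colours occur:
\[
  \bigl|\{\, c(e) : e \in E(K_X) \,\}\bigr| = m .
\]
```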
Many components of a dependently typed programming language are by now well understood, for example, the underlying type theory, type checking, unification and evaluation. How to combine these components into a realistic and usable high-level language is, however, folklore, discovered anew by successive language implementors. In this paper, I describe the implementation of Idris, a new dependently typed functional programming language. Idris is intended to be a general-purpose programming language and as such provides high-level concepts such as implicit syntax, type classes and do notation. I describe the high-level language and the underlying type theory, and present a tactic-based method for elaborating concrete high-level syntax with implicit arguments and type classes into a fully explicit type theory. Furthermore, I show how this method facilitates the implementation of new high-level language constructs.
Design rationale (DR) explains why an artifact is designed the way it is. An explicit representation of DR is helpful to designers, allowing them to understand, improve, and reuse previous designs. The argumentation-based representation is the mainstream approach to DR representation. It has a semiformal graphical format to depict the structure of arguments for solving a design problem. This paper argues that because the design is not just a problem-solving process but also a cognitive activity that is continuously iterative and evolving, the conventional argumentation-based representation of DR has some inherent limitations. An improved, intent-driven representation model is proposed to capture and formalize the DR and its evolving history to support DR reuse. The model's knowledge structure, consisting of DR elements and their relationships, is detailed. A preliminary knowledge representation of the model based on Web Ontology Language is introduced. Furthermore, the context of DR is defined to document the complete DR and support effective traceability of design thinking. A graphical DR modeling system is developed, and an example is demonstrated to verify the system's application and the effectiveness of the proposed representation model. The paper provides an effective method to retain and manage a designer's implicit design knowledge, which has the potential to significantly improve the integrated management of product development knowledge.
The representation of and reasoning with qualitative spatial relations is an important problem in artificial intelligence, with wide applications in geographic information systems, computer vision, autonomous robot navigation, natural language understanding, spatial databases and so on. The interest in using qualitative spatial relations stems from their cognitive comprehensibility, efficiency and computational facility. This paper summarizes progress in qualitative spatial representation by describing key calculi that represent different types of spatial relationships. The paper concludes with a discussion of current research and a glimpse of future work.
The maritime environment still represents unexploited potential for the modeling, management, and understanding of mobility data. The environment is diverse, open yet partly regulated, and covers a large spectrum of ships, from small sailboats to supertankers, which generally exhibit type-related behaviors. As in the terrestrial and aerial domains, several real-time positioning systems, such as the Automatic Identification System (AIS), have been developed for keeping track of vessel movements. However, the huge amounts of data provided by these reporting systems are rarely used for knowledge discovery. This chapter discusses different aspects of understanding maritime mobilities. It first helps readers understand the intrinsic behavior of maritime positioning systems, and then proposes a methodology illustrating the different steps leading to trajectory patterns, which in turn support the understanding of outlier detection.
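As a toy illustration of one elementary step in such a pipeline (not the chapter's methodology), the sketch below flags consecutive AIS position reports whose implied speed is implausible; the record layout and the 60 km/h threshold are invented for the example.

```haskell
-- One AIS position report, reduced to the fields needed here
-- (illustrative record, not the full AIS message format).
data Report = Report
  { tSec :: Double   -- timestamp in seconds
  , xKm  :: Double   -- position, already projected to kilometres
  , yKm  :: Double
  } deriving Show

-- Implied speed (km/h) between two consecutive reports.
impliedSpeed :: Report -> Report -> Double
impliedSpeed a b = dist / hours
  where
    dist  = sqrt ((xKm b - xKm a)^2 + (yKm b - yKm a)^2)
    hours = max 1e-6 ((tSec b - tSec a) / 3600)

-- Flag consecutive pairs whose implied speed exceeds a plausibility
-- threshold (a made-up cut-off for illustration).
speedOutliers :: Double -> [Report] -> [(Report, Report, Double)]
speedOutliers limit rs =
  [ (a, b, v) | (a, b) <- zip rs (drop 1 rs)
              , let v = impliedSpeed a b
              , v > limit ]

main :: IO ()
main = mapM_ print (speedOutliers 60
         [ Report 0 0 0, Report 600 2 1, Report 1200 30 25 ])
```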
Maritime Traffic
The maritime environment has a huge impact on the world economy and on our everyday lives. Beyond being a space where numerous marine species live, the sea is also a place where human activities (sailing, cruising, fishing, goods transportation, etc.) have evolved and increased drastically. For example, the volume of world maritime trade in goods has doubled since the 1970s, and maritime transport now accounts for about 90% of global trade by volume and 70% by value. This ever-increasing traffic leads to navigation difficulties and risks in coastal and crowded areas, where numerous ships pursue different, and sometimes conflicting, movement objectives (sailing, fishing, etc.).