This book develops the theory of typed feature structures, a data structure that generalizes both first-order terms and feature structures of unification-based grammars to include inheritance, typing, inequality, cycles and intensionality. The resulting synthesis serves as a logical foundation for grammars, logic programming and constraint-based reasoning systems. A logical perspective is adopted which employs an attribute-value description language along with complete equational axiomatizations of the various systems of feature structures. At the same time, efficiency concerns are kept in mind, and complexity and representability results are provided. The application of feature structures to phrase structure grammars is described and completeness results are shown for standard evaluation strategies. Definite clause logic programs are treated as a special case of phrase structure grammars. Constraint systems are introduced and an enumeration technique is developed for solving arbitrary attribute-value logic constraints. This book, with its innovative approach to data structures, will be essential reading for researchers in computational linguistics, logic programming and knowledge representation. Its self-contained presentation makes it flexible enough to serve as both a research tool and a textbook.
Concurrent Programming in ML focuses on the practical use of concurrency to implement naturally concurrent applications. In addition to a tutorial introduction to programming in Concurrent ML (CML), the book presents three extended examples using CML for practical systems programming: a parallel software build system, a simple concurrent window manager, and an implementation of distributed tuple spaces. CML, which is included as part of the SML of New Jersey (SML/NJ) distribution, combines the best features of concurrent programming and functional programming. This book also illustrates advanced SML programming techniques, and includes a chapter on the implementation of concurrency using features provided by the SML/NJ system. It will be of interest to programmers, students, and professional researchers working in computer language development.
This book discusses the connection between two areas of semantics, namely the semantics of databases and the semantics of natural language, and links them via a common view of the semantics of time. It is argued that a coherent theory of the semantics of time is an essential ingredient for the success of efforts to incorporate more 'real world' semantics into database models. This idea is a relatively recent concern of database research, but it is receiving growing interest. The book begins with a discussion of database querying which motivates the use of the paradigm of Montague Semantics, and discusses the details of the intensional logic ILs. This is followed by a description of the author's own model, the Historical Relational Data Model (HRDM), which extends the RDM to include a temporal dimension. Finally the database querying language QEHIII is defined and examples illustrate its use. A formal model for the interpretation of questions is presented in this work, which will form the basis for much further research.
In 1989, Michael Rabin proposed a fundamentally new approach to the problems of fault-tolerant routing and memory management in parallel computation, based on the idea of information dispersal. Yuh-Dauh Lyuu developed this idea in a number of new and exciting ways in his PhD thesis. Further work has led to extensions of these methods to other applications such as shared memory emulations. This volume presents an extended and updated printing of Lyuu's thesis. It gives a detailed treatment of the information dispersal approach to the problems of fault-tolerance and distributed representations of information which have resisted rigorous analysis by previous methods.
The sheer complexity of computer systems has meant that automated reasoning, i.e. the ability of computers to perform logical inference, has become a vital component of program construction and of programming language design. This book meets the demand for a self-contained and broad-based account of the concepts, the machinery and the use of automated reasoning. The mathematical logic foundations are described in conjunction with practical application, all with the minimum of prerequisites. The approach is constructive, concrete and algorithmic: a key feature is that methods are described with reference to actual implementations (for which code is supplied) that readers can use, modify and experiment with. This book is ideally suited for those seeking a one-stop source for the general area of automated reasoning. It can be used as a reference, or as a place to learn the fundamentals, either in conjunction with advanced courses or for self study.
By
Andrew Herbert, Microsoft Research, Cambridge, United Kingdom
Edited by
Yves Bertot, Gérard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, Jean-Jacques Lévy, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, Gordon Plotkin, University of Edinburgh
In 2005 Gilles Kahn discussed with Rick Rashid, Stephen Emmott and myself a proposal for Microsoft Research Cambridge and INRIA to establish a joint research laboratory in France, building on the long-term informal collaboration between the two institutions. The research focus of the joint laboratory was an important point of discussion. In addition to building on our mutual strengths in areas such as software specification, an important topic was a shared desire to create a programme of research in the area of computational science: using the concepts and methods of computer science to accelerate the pace of scientific development, and exploring the potential for new approaches to science that exploit computer science concepts and methods. This paper explores what computational science is and the contribution it can make to scientific progress. It is in large part abridged from a report, “Towards 2020 Science”, published by a group of experts assembled by Microsoft Research, who met over three intense days to debate and consider the role and future of science, looking towards 2020 and, in particular, the importance and impact of computing and computer science in that vision.
Introduction
Computers have played an increasingly important role in science for 50 years. At the end of the twentieth century there was a transition from computers supporting scientists to do conventional science to computer science itself becoming part of the fabric of science and how science is done.
The Milner-Damas typing algorithm W is one of the classic algorithms in computer science. In this paper we describe a formalized soundness and completeness proof for this algorithm. Our formalization is based on names for both term and type variables, and is carried out in Isabelle/HOL using the Nominal Datatype Package. It turns out that in our formalization we have to deal with a number of issues that are often overlooked in informal presentations of W.
“Alpha-conversion always bites you when you least expect it.”
A remark made by Xavier Leroy when discussing with us the informal proof about W in his PhD thesis.
Introduction
Milner's polymorphic type system for ML is probably the most influential programming language type system. The second author learned about it from a paper by Clément et al. He was immediately taken by their view that type inference can be viewed as Prolog execution, in particular because the Isabelle system, which he had started to work on, was based on a similar paradigm as the Typol language developed by Kahn and his coworkers. Milner himself had provided the explicit type inference algorithm W and proved its soundness. Completeness was later shown by Damas and Milner. Neither soundness nor completeness of W are trivial because of the presence of the Let-construct (which is not expanded during type inference).
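To make the shape of W concrete, here is a minimal illustrative sketch in Python of Hindley-Milner style inference with let-generalization. The term and type representations (nested tuples) and all names are ours, not those of the formalization discussed above, and the sketch omits the occurs check and the very naming subtleties the paper is about.

```python
from itertools import count

_fresh = count()

def fresh():
    """Make a new type variable, represented as ('var', n)."""
    return ('var', next(_fresh))

def apply_subst(s, t):
    """Apply substitution s (a dict from var ids to types) to type t."""
    if t[0] == 'var':
        u = s.get(t[1])
        return apply_subst(s, u) if u is not None else t
    if t[0] == 'fun':
        return ('fun', apply_subst(s, t[1]), apply_subst(s, t[2]))
    return t                      # base types such as ('int',)

def unify(t1, t2, s):
    """Extend s so that t1 and t2 become equal (occurs check omitted)."""
    t1, t2 = apply_subst(s, t1), apply_subst(s, t2)
    if t1 == t2:
        return s
    if t1[0] == 'var':
        return {**s, t1[1]: t2}
    if t2[0] == 'var':
        return {**s, t2[1]: t1}
    if t1[0] == t2[0] == 'fun':
        s = unify(t1[1], t2[1], s)
        return unify(t1[2], t2[2], s)
    raise TypeError('cannot unify')

def free_vars(t, s):
    t = apply_subst(s, t)
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'fun':
        return free_vars(t[1], s) | free_vars(t[2], s)
    return set()

def W(env, e, s):
    """Infer (subst, type) for term e; env maps names to (quantified, type)."""
    kind = e[0]
    if kind == 'varref':          # variable: instantiate its type scheme
        qs, t = env[e[1]]
        inst = {q: fresh() for q in qs}
        return s, apply_subst(inst, apply_subst(s, t))
    if kind == 'lam':             # \x. body
        a = fresh()
        s2, tb = W({**env, e[1]: (set(), a)}, e[2], s)
        return s2, ('fun', a, tb)
    if kind == 'app':             # f x
        s, tf = W(env, e[1], s)
        s, tx = W(env, e[2], s)
        a = fresh()
        return unify(tf, ('fun', tx, a), s), a
    if kind == 'let':             # let x = e1 in e2: generalize e1's type
        (x, e1), e2 = e[1], e[2]
        s, t1 = W(env, e1, s)
        env_fv = set()
        for qs0, t in env.values():
            env_fv |= free_vars(t, s) - qs0
        qs = free_vars(t1, s) - env_fv
        return W({**env, x: (qs, apply_subst(s, t1))}, e2, s)
    if kind == 'lit':             # integer literal
        return s, ('int',)
    raise ValueError(kind)

# let id = \x. x in id 5  :  the let-bound id is generalized, then instantiated
expr = ('let', ('id', ('lam', 'x', ('varref', 'x'))),
        ('app', ('varref', 'id'), ('lit', 5)))
s, t = W({}, expr, {})
print(apply_subst(s, t))          # ('int',)
```

Note how the Let case is the only place where generalization happens; this is exactly where the soundness and completeness arguments become non-trivial.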
By
Pierre Bernhard, I3S, University of Nice-Sophia Antipolis and CNRS, France,
Frédéric Hamelin, I3S, University of Nice-Sophia Antipolis and CNRS, France
Gilles Kahn and I were classmates at École Polytechnique where, in the academic year 1965–1966, he taught me programming (this was in MAGE 2, a French translation of Fortran 2 I believe, on a punched-tape computer, a SETI PALAS 250); then we met again and became good friends at Stanford University, where he was a computer science student while I was in aeronautics and astronautics. Our paths were to draw closer starting in the spring of 1980, when we began planning and, from 1983 on, heading INRIA Sophia-Antipolis together.
Gilles always believed that game theory was worth pursuing. He was adamant that our laboratory should take advantage of my being conversant with that topic. He was instrumental in keeping it alive in the lab.
He was to be later the president of INRIA who presided over the introduction of “biological systems” as a full-fledged scientific theme of INRIA. Although this was after I had left INRIA, this again met with my personal scientific taste. I had by then embraced behavioural ecology as my main domain of interest and of application of dynamic games, much thanks to Eric Wajnberg, from INRA, but also out of an old desire of looking into the ecological applications of these techniques.
That is why I think it fitting to write here a few words about games and behavioural ecology, and also about population dynamics and evolution, which are closely related topics.
The evolution of programming languages involves isolating and describing abstractions that allow us to solve problems more elegantly, efficiently, and reliably, and then providing appropriate linguistic support for these abstractions. Ideally, a new abstraction can be described precisely with a mathematical semantics, and the semantics leads to logical techniques for reasoning about programs that use the abstraction. Gilles Kahn's early work on stream processing networks is a beautiful example of this process at work.
Gilles began thinking about parallel graph programs at Stanford, and he developed his ideas in a series of papers starting in 1971. Gilles' original motivation was to provide a formal model for reasoning about aspects of operating systems programming, based on early data flow models of computation. But the model he developed turned out to be of much more general interest, both in terms of program architecture and in terms of semantics. During his Edinburgh visit in 1975–76, Gilles and I collaborated on a prototype implementation of the model that allowed further development and experimentation. By 1976 it was clear that his model, while inspired by early data flow research, was also closely connected to several other developments, including coroutines, Landin's notion of streams, and the then-emerging lazy functional languages.
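The flavour of the model can be evoked with a small illustrative sketch: processes are stream transformers connected by FIFO channels, approximated here with Python generators. All names are ours; this is an informal analogy, not Kahn's notation.

```python
# A toy Kahn-style process network: each process reads its input
# streams in order (a blocking read) and produces an output stream.

def integers():                 # source process: 0, 1, 2, ...
    n = 0
    while True:
        yield n
        n += 1

def scale(k, stream):           # process: multiply each token by k
    for x in stream:
        yield k * x

def merge_add(s1, s2):          # process: pointwise sum of two streams
    for a, b in zip(s1, s2):
        yield a + b

# Wire the network: out(n) = n + 2n = 3n
net = merge_add(scale(1, integers()), scale(2, integers()))
print([next(net) for _ in range(5)])    # [0, 3, 6, 9, 12]
```

Because each process is a deterministic function of its input streams, the whole network denotes a deterministic stream function, which is the key property of the model.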
Gilles Kahn was a serious scientist, but part of his style and effectiveness was in the great sense of curiosity and fun that he injected into the most technical topics. Some of his later projects involved connecting computing and the traditional sciences. I offer a perspective on the culture shock between biology and computing, in the style in which I would have explained it to him.
The nature of nature
In a now classic peer-reviewed commentary, “Can a Biologist Fix a Radio?”, Yuri Lazebnik describes the serious difficulties that scientists have in understanding biological systems. As an analogy, he describes the approach biologists would take if they had to study radios, instead of biological organisms, without having prior knowledge of electronics.
We would eventually find how to open the radios and will find objects of various shape, color, and size […]. We would describe and classify them into families according to their appearance. We would describe a family of square metal objects, a family of round brightly colored objects with two legs, round-shaped objects with three legs and so on. Because the objects would vary in color, we will investigate whether changing the colors affects the radio's performance. Although changing the colors would have only attenuating effects (the music is still playing but a trained ear of some people can discern some distortion), this approach will produce many publications and result in a lively debate.
By
Erik Sandewall, Linköping University and Royal Institute of Technology, Stockholm, Sweden
The purpose of the research reported here was to explore an alternative way of organizing the general software structure in computers, eliminating the traditional distinctions between operating system, programming language, database system, and several other kinds of software. We observed that there is a lot of costly duplication of concepts and of facilities in the conventional architecture, and we believe that most of that duplication can be eliminated if the software is organized differently. This article describes Leordo, an experimental software system that has been built in order to explore an alternative design and to try to verify the hypothesis that a much more compact design is possible and that concept duplication can be eliminated or at least greatly reduced. Definite conclusions in those respects cannot yet be made, but the indications are positive.
Introduction
Project goal and design goals
Leordo is a software project and an experimental software system that integrates capabilities that are usually found in several different software systems:
in the operating system
in the programming language and programming environment
in an intelligent agent system
in a text formatting system
and more. I believe that it should be possible to make a much more concise, efficient, and user-friendly design of the total software system in the conventional (PC-type) computer by integrating capabilities and organizing them in a new way.
Dataflow models of computation have intrigued computer scientists since the 1970s. They were first introduced by Jack Dennis as a basis for parallel programming languages and architectures, and by Gilles Kahn as a model of concurrency. Interest in these models of computation has been recently rekindled by the resurrection of parallel computing, due to the emergence of multicore architectures. However, Dennis and Kahn approached dataflow very differently. Dennis' approach was based on an operational notion of atomic firings driven by certain firing rules. Kahn's approach was based on a denotational notion of processes as continuous functions on infinite streams. This paper bridges the gap between these two points of view, showing that sequences of firings define a continuous Kahn process as the least fixed point of an appropriately constructed functional. The Dennis firing rules are sets of finite prefixes satisfying certain conditions that ensure determinacy. These conditions result in firing rules that are strictly more general than the blocking reads of the Kahn–MacQueen implementation of Kahn process networks, and solve some compositionality problems in the dataflow model. This work was supported in part by the Center for Hybrid and Embedded Software Systems (CHESS) at UC Berkeley, which receives support from the National Science Foundation (NSF awards #0720882 (CSR-EHS: PRET), #0647591 (CSR-SGER), and #0720841 (CSR-CPS)), the US Army Research Office (ARO #W911NF-07-2-0019), the US Air Force Office of Scientific Research (MURI #FA9550-06-0312 and AF-TRUST #FA9550-06-1-0244), the Air Force Research Lab (AFRL), the State of California Micro Program, and the following companies: Agilent, Bosch, DGIST, National Instruments, and Toyota.
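The contrast between the two views can be illustrated with a toy example: a Dennis-style actor defined by a firing rule over finite input prefixes, whose repeated firings compute a Kahn-style stream function. This Python sketch is ours, not the paper's construction.

```python
from collections import deque

def fire_add(q1, q2, out):
    """One firing of a two-input adder actor. The firing rule: the
    actor is enabled when each input queue holds at least one token;
    a firing consumes one token from each input and emits their sum."""
    if q1 and q2:
        out.append(q1.popleft() + q2.popleft())
        return True
    return False

# Run the actor to quiescence on finite input prefixes.
q1, q2, out = deque([1, 2, 3]), deque([10, 20]), []
while fire_add(q1, q2, out):    # fire while the rule is enabled
    pass
print(out, list(q1))            # [11, 22] [3] -- one token unconsumed
```

The operational view is the sequence of atomic firings; the denotational view is the prefix-monotone function from input streams to output streams that those firings compute, and the paper's result is that the two coincide.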
This paper is an overview of 15 years of collaboration between R&D teams at Dassault Aviation and several research projects at INRIA. This collaboration was related to Gilles Kahn's work on generic programming environments, program transformation, and user interfaces for proof assistants.
It is also an evocation of personal memories about Gilles, my perception of the impact of the research he carried out and supervised, and his dedication to INRIA.
Introduction
Since 1990, Dassault Aviation has been working on some formal methods and programming tools developed at INRIA by Gilles' research group (CROAP) or by other groups led by scientists close to him such as Gérard Berry and Gérard Huet.
Formal methods, more specifically the synchronous languages Esterel and Lustre or the proof assistant Coq, have been evaluated and introduced in our engineering processes to enhance our development tools for safety critical software, especially software embedded in flight control systems.
As for the programming tools developed by CROAP with the generative environment Centaur, it happened that in 1995 some of its language-specific instantiations were targeting scientific computation. More precisely, they were designed to assist some classical transformations of large Fortran codes (ports, parallelization, differentiation). Since Dassault Aviation has always developed its computational fluid dynamics (CFD) codes in-house, there was some motivation in our company to experiment with tools that claimed to partially automate some time-consuming tasks done manually at that time.
By
Bruno Courcelle, Institut Universitaire de France Université Bordeaux 1, Laboratoire Bordelais de Recherche en Informatique
The communication by Gilles Kahn, Jean Vuillemin and myself at the second International Colloquium on Automata, Languages and Programming, held in Saarbrücken in 1974, is in French in the proceedings and has not been published as a journal article. However, Todd Veldhuizen wrote in 2002 an English translation that is reproduced in the next chapter.
About Chapter 8
It was quite a surprise for me to receive a message from Todd Veldhuizen saying that he had translated from French a 30-year-old conference paper presented at the second International Colloquium on Automata, Languages and Programming, held in Saarbrücken in 1974, of which I am a coauthor with G. Kahn and J. Vuillemin. He did that work because he felt the paper was “seminal”. First of all I would like to thank him for this work. The publication of his translation in a volume dedicated to the memory of Gilles Kahn is a testimony of the gratitude of Jean Vuillemin and myself, and a recognition of one of Gilles' many important scientific contributions.
In this overview, I indicate a few research directions that can be traced back to that communication. I give only a few related references; this overview is not a thorough bibliographical review of related articles.
By
Bengt Nordström, Chalmers University of Technology and the University of Göteborg
The structure of documents of various degrees of formality, from scientific papers with layout information and programs with their documentation to completely formal proofs, can be expressed by assigning a type to the abstract syntax tree of the document. By using dependent types, an idea from type theory, it is possible to express very strong syntactic criteria for the well-formedness of documents. This structure can be used to automatically generate parsers, type checkers and structure-oriented editors.
Introduction
We are interested in finding a general framework for describing the structure of many kinds of documents, such as
books and articles
“live” documents (like a web document with parts to be filled in)
programs
formal proofs.
Are there any good reasons why we use different programs to edit and print articles, programs and formal proofs? A unified view of these kinds of documents would make it possible to use a single structure-oriented editor to build all of them, and it would make it easier to combine documents of different kinds, for instance scientific papers, programs with their documentation, informal and formal proofs, and simple web forms.
Such a view requires that we have a good framework to express syntactic well-formedness (from things like the absence of a title in a footnote to the correctness of a formal proof) and to express how the document should be edited and presented.
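As a very simple illustration of the idea, with plain rather than dependent types, well-formedness can be phrased as a check that every child node has a kind admitted by its parent. The node kinds in this Python sketch are invented for illustration.

```python
# Document well-formedness as typing over a small AST. A node is a
# pair (kind, children); ALLOWED says which child kinds each node
# kind admits, e.g. no titles inside footnotes.

ALLOWED = {
    'article':  {'title', 'section'},
    'section':  {'text', 'footnote'},
    'footnote': {'text'},          # a title here would be ill-formed
    'title':    set(),
    'text':     set(),
}

def well_formed(node):
    kind, children = node
    return all(child[0] in ALLOWED[kind] and well_formed(child)
               for child in children)

good = ('article', [('title', []),
                    ('section', [('footnote', [('text', [])])])])
bad  = ('article', [('section', [('footnote', [('title', [])])])])
print(well_formed(good), well_formed(bad))    # True False
```

Dependent types go much further than such a fixed table, since the admissible shape of one part of a document can depend on the content of another, but the checking-by-typing discipline is the same.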
Semantics of programming languages and interactive environments for the development of proofs and programs are two important aspects of Gilles Kahn's scientific contributions. In his paper “The semantics of a simple language for parallel programming”, he proposed an interpretation of (deterministic) parallel programs (now called Kahn networks) as stream transformers based on the theory of complete partial orders (cpos). A restriction of this language to synchronous programs is the basis of the data-flow Lustre language which is used for the development of critical embedded systems.
We present a formalization of this seminal paper in the Coq proof assistant. For that purpose, we developed a general library for cpos. Our cpos are defined with an explicit function computing the least upper bound (lub) of an increasing sequence of elements. This differs from what Kahn developed for the standard Coq library, where only the existence of lubs (for arbitrary directed sets) is required, giving no way to explicitly compute a fixpoint. We define a cpo structure for the type of possibly infinite streams. It is then possible to define formally what a Kahn network is and what its semantics is, achieving the goal of having the concept closed under composition and recursion. The library is illustrated with an example taken from the original paper as well as the Sieve of Eratosthenes, an example of a dynamic network.
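The Sieve of Eratosthenes is a classic dynamic network: each newly produced prime spawns a fresh filter process. A Python-generator approximation (ours, not the Coq development) looks like this:

```python
# The sieve as a dynamic Kahn-style network: sieve() emits the head
# of its input stream as a prime, then extends the network with a
# new filter process before recursing.

def nats_from(n):               # source process: n, n+1, n+2, ...
    while True:
        yield n
        n += 1

def filter_out(p, stream):      # process: drop the multiples of p
    for x in stream:
        if x % p != 0:
            yield x

def sieve(stream):
    p = next(stream)            # the head of the stream is prime
    yield p
    yield from sieve(filter_out(p, stream))

primes = sieve(nats_from(2))
print([next(primes) for _ in range(8)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```

The network grows as it runs, which is exactly why such examples require the semantics to be closed under composition and recursion.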
By
Jean-Jacques Lévy, INRIA and Microsoft Research–INRIA Joint Centre
Asclepios is the name of a research project team officially launched on November 1st, 2005 at INRIA Sophia-Antipolis to study the Analysis and Simulation of Biological and Medical Images. This project team follows a previous one, called Epidaure, initially dedicated to research in medical imaging and robotics. Both project teams were strongly supported by Gilles Kahn, who had regular scientific interactions with their members. More generally, Gilles Kahn had a unique vision of the growing importance of the interaction of the information technologies and sciences with the biological and medical world. He was one of the originators of the creation of a specific BIO theme among the main INRIA research directions, which now brings together 16 different research teams, including Asclepios, whose research objectives are described and illustrated in this article.
Introduction
The revolution of biomedical images and quantitative medicine
There is an irreversible evolution of medical practice toward more quantitative and personalized decision processes for prevention, diagnosis and therapy. This evolution is supported by a continually increasing number of biomedical devices providing in vivo measurements of structures and processes inside the human body, at scales varying from the organ to the cellular and even molecular level. Among all these measurements, biomedical images of various forms increasingly play a central role.
Facing the need for more quantitative and personalized medicine based on larger and more complex sets of measurements, there is a crucial need for developing: (1) advanced image analysis tools capable of extracting the pertinent information from biomedical images and signals; (2) advanced models of the human body to correctly interpret this information; (3) large distributed databases to calibrate and validate these models.
For me, Gilles Kahn was first of all a paper, in French if you please, the text that was the starting point of my research:
G. Kahn and G. Plotkin, Domaines concrets, TR IRIA-Laboria 336 (1978), which appeared in an English version in 1993 (a sign of its lasting influence) in the volume of tributes to Corrado Böhm.
One could not have imagined a better lure for the young man I was, who had arrived at computer science as the outcome of a hesitation between mathematics (intimidating) and languages (the real ones). Another colleague who left us too soon, Maurice Gross, had helped me choose a third path and had guided me toward the DEA in Theoretical Computer Science at Paris 7. The courses of Luc Boasson and Dominique Perrin had already hooked me, but the encounter with concrete domains definitively “caught” me: both because they were structures resembling the lattices I had met fairly early in my schooling, thanks to the Leçons d'Algèbre Moderne by Paul Dubreil and Marie-Louise Dubreil-Jacotin that my mathematics teacher had recommended, and because Gérard Berry, who had put this work into my hands, had a rich idea for building on that stone.
The guiding idea of this paper was to give a general definition of data structure, encompassing lists, trees, records, records with variants, and so on, together with, as one does in all good mathematics, a good notion of morphism between these structures. This definition was given in two equivalent facets, related by a representation theorem: one concrete, in terms of cells (tree nodes, record fields, …) and values; the other abstract, in terms of partial orders.