By Aarne Ranta, Chalmers University of Technology and University of Gothenburg
Edited by
Yves Bertot, Gérard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, Jean-Jacques Lévy, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, and Gordon Plotkin, University of Edinburgh
Grammars of natural languages are needed in programs such as natural language interfaces and dialogue systems, but also, more generally, in software localization. Writing grammar implementations is a highly specialized task, and for various reasons no libraries have been available to ease it. This paper shows how grammar libraries can be written in GF (Grammatical Framework), focusing on the software engineering aspects rather than the linguistic aspects. As an implementation of the approach, the GF Resource Grammar Library currently comprises ten languages. As an application, a translation system from formalized mathematics to text in three languages is outlined.
Introduction
How can we generate natural language text from a formal specification of meaning, such as a formal proof? Coscoy, Kahn and Théry studied the problem and built a program that worked for all proofs constructed in the Coq proof assistant. Their program translates structural text components, such as ‘we conclude that’, but leaves propositions expressed in formal language:
We conclude that Even(n) → Odd(Succ(n)).
A similar decision is made in Isar, whereas Mizar permits English-like expressions for some predicates. One reason for stopping at this level is certainly that typical users of proof systems are comfortable with reading logical formulas, so that only the proof-level formalization needs translation.
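The abstract/concrete split that GF builds on can be illustrated with a toy sketch (this is our own Python illustration, not GF syntax: the `lin` function and the `prop` tree are invented for the example). One abstract syntax tree represents the proposition; a separate linearization rule set per language renders it as text:

```python
# Toy sketch of GF's idea: one abstract syntax tree, one set of
# linearization rules per language. Not GF code; names are hypothetical.

def lin(tree, lang):
    """Linearize an abstract proposition tree into a natural-language string."""
    op, *args = tree
    if op == "Implies":
        pat = {"eng": "if {0} then {1}", "fre": "si {0} alors {1}"}[lang]
        return pat.format(lin(args[0], lang), lin(args[1], lang))
    if op == "Even":
        pat = {"eng": "{0} is even", "fre": "{0} est pair"}[lang]
        return pat.format(lin(args[0], lang))
    if op == "Odd":
        pat = {"eng": "{0} is odd", "fre": "{0} est impair"}[lang]
        return pat.format(lin(args[0], lang))
    if op == "Succ":
        pat = {"eng": "the successor of {0}", "fre": "le successeur de {0}"}[lang]
        return pat.format(lin(args[0], lang))
    if op == "Var":
        return args[0]
    raise ValueError(op)

# Abstract tree for Even(n) -> Odd(Succ(n)); the same tree linearizes
# to English or French without changing the abstract syntax.
prop = ("Implies", ("Even", ("Var", "n")), ("Odd", ("Succ", ("Var", "n"))))
print(lin(prop, "eng"))  # if n is even then the successor of n is odd
print(lin(prop, "fre"))  # si n est pair alors le successeur de n est impair
```

The point of the split is exactly what a resource grammar library packages up: the concrete rules encapsulate language-specific detail behind a shared abstract interface.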
This essay is dedicated in admiration to the memory of Gilles Kahn, a friend and guide for 35 years. I have been struck by the confidence and warmth expressed towards him by the many French colleagues whom he guided. As a non-Frenchman I can also testify that colleagues in other countries have felt the same.
I begin by recalling two events separated by 30 years; one private to him and me, one public in the UK. I met Gilles at Stanford University in 1972, when he was studying for the PhD degree – which, I came to believe, he found unnecessary to acquire. His study was, I think, thwarted by the misunderstanding of others. I was working on two different things: on computer-assisted reasoning in a logic of Dana Scott based upon domain theory, which inspired me, and on models of interaction – which I believed would grow steadily in importance (as indeed they have). There was hope of uniting the two. Yet it was hard to relate domain theory to the non-determinism inherent in interactive processes. I remember, but not in detail, a discussion of this connection with Gilles. The main thing I remember is that he ignited. He had got the idea of the domain of streams which, developed jointly with David MacQueen, became one of the most famous papers in informatics: a model of deterministic processes linked by streams of data.
Adjoint algorithms are a powerful way to obtain the gradients that are needed in scientific computing. Automatic differentiation can build adjoint algorithms automatically, by source transformation of the direct algorithm. The specific structure of adjoint algorithms strongly relies on reversal of the sequence of computations made by the direct algorithm. This reversal problem is at the same time difficult and interesting. This paper surveys the reversal strategies employed in recent tools and describes some of the more abstract formalizations used to justify these strategies.
Why build adjoint algorithms?
Gradients are a powerful tool for mathematical optimization. The Newton method, for example, uses the gradient to find a zero of a function iteratively, with quadratic convergence: the number of correct digits roughly doubles at each iteration. In the context of optimization, the optimum is a zero of the gradient itself, and therefore the Newton method needs second derivatives in addition to the gradient. In scientific computing, the most popular optimization methods, such as BFGS, also perform best when provided with gradients.
In real-life engineering, the systems that must be simulated are complex: even when they are modeled by classical mathematical equations, an analytic solution is totally out of reach. Thus, the equations must be discretized on the simulation domain and then solved, for example iteratively, by a computer algorithm.
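The Newton iteration on a gradient can be sketched in a few lines. This is our own one-dimensional toy (the function f(x) = x⁴/4 − x is an illustrative choice, not from the paper): since the optimum is a zero of f′, each step solves f″(x)·dx = −f′(x).

```python
# Minimal sketch: Newton's method for 1-D optimization.
# The optimum is a zero of the gradient f', so each step divides
# the gradient by the second derivative (the 1-D Hessian).

def newton_minimize(fprime, fsecond, x, tol=1e-14, max_iter=50):
    for _ in range(max_iter):
        g = fprime(x)
        if abs(g) < tol:
            break
        x -= g / fsecond(x)   # Newton step on the gradient
    return x

# f(x) = x**4/4 - x has gradient x**3 - 1 and second derivative 3*x**2,
# so the minimizer is x = 1.
x_star = newton_minimize(lambda x: x**3 - 1, lambda x: 3 * x**2, x=2.0)
print(x_star)  # converges to 1.0; correct digits roughly double per step
```

In higher dimensions the division becomes a linear solve with the Hessian, which is exactly where adjoint-computed gradients (and second derivatives) become valuable.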
The split of a multihop, point-to-point TCP connection consists in replacing a plain, end-to-end TCP connection with a cascade of TCP connections, in which connection n feeds connection n + 1 through some proxy node n. This technique is used in a variety of contexts. In overlay networks, proxies are often peers of the underlying peer-to-peer network. Split TCP has also been proposed, and widely adopted, in wireless networks at the wired/wireless interface, to separate links with vastly different characteristics. To avoid losses in the proxies, a backpressure mechanism is often used in this context.
In this paper we develop a model for such a split TCP connection aimed at the analysis of throughput dynamics on both links as well as of buffer occupancy in the proxy. The two main variants of split TCP are considered: that with backpressure and that without. The study consists of two parts: the first part is purely experimental and is based on ns2 simulations. It allows us to identify complex interaction phenomena between TCP flow rates and proxy buffer occupancy, which seem to have been ignored by previous work on split TCP. The second part of the paper is of a mathematical nature. We establish the basic equations that govern the evolution of such a cascade and prove some of the experimental observations made in the first part.
I have spent 29 years of my life with INRIA. I saw the beginning of the institute in 1967. I was appointed president of the institute in 1984 and I left it in 1996, after 12 years as president.
Gilles Kahn joined IRIA in the late 1960s and made his entire career in the institute. He passed away while he was president, after a courageous fight against a dreadful illness which, unfortunately, did not leave him any chance. I knew him for more than 35 years, and I have a clear view of the role he played in the development of the institute. More broadly, I have seen his influence in the computer science community in France and in Europe. This article gives me the opportunity to reflect on what lay behind such leadership and to recall some of the challenges that INRIA has faced over more than three decades. What we can learn from the past and from the actions of former great leaders is extremely helpful for the future.
This is probably the best way to be faithful to Gilles and to remain close to his thoughts.
Historical IRIA
Why IRIA?
INRIA was born as an evolution of IRIA.
IRIA was created in 1967 as a part of a set of decisions taken under the leadership of General de Gaulle. It was a time of bold decisions towards research at large, based on clear political goals.
Gilles Kahn was born in Paris on April 17th, 1946 and died in Garches, near Paris, on February 9th, 2006. He received an engineering diploma from Ecole Polytechnique (class of 1964), studied for a few years at Stanford, and then joined the computer science branch of the French Atomic Energy Commission (CEA), which was to become the CISI company. He joined the French research institute in computer science and control theory (IRIA, later renamed INRIA) in 1976. He stayed with this institute until his death, at which time he was its chief executive officer. He was a member of Academia Europaea from 1995 and of the French Academy of Sciences from 1997.
Gilles Kahn's scientific achievements
Gilles Kahn's scientific interests evolved from the study of programming language semantics to the design and implementation of programming tools and the study of the interaction between programming activities and proof verification activities. In plain words, these themes addressed three questions. How do programmers tell a computer to perform a specific task? What tools can we provide to programmers to help them in their job? In particular, how can programmers provide guarantees that computers will perform the task that was requested?
Programming language semantics
In the early 1970s, Gilles Kahn proposed that programs should be described as collections of processes communicating through a network of channels, a description style that is now known as Kahn networks.
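The flavour of a Kahn network can be conveyed with Python generators (our own illustration of the idea, not Kahn's formalism): each process consumes input streams and produces a deterministic output stream, and the channels are the generators themselves.

```python
# A Kahn-network-style sketch using Python generators: processes
# communicate only through streams, and the output is deterministic.

def nats():                      # producer process: 0, 1, 2, ...
    n = 0
    while True:
        yield n
        n += 1

def scale(k, stream):            # transformer: one input channel, one output
    for x in stream:
        yield k * x

def add(s1, s2):                 # combine two channels pointwise
    for a, b in zip(s1, s2):
        yield a + b

def take(n, stream):
    return [next(stream) for _ in range(n)]

# Wire up the network: add(scale(2, nats), scale(3, nats)) yields 5*n.
out = add(scale(2, nats()), scale(3, nats()))
print(take(5, out))  # [0, 5, 10, 15, 20]
```

Determinism is the key property: however the processes are scheduled, each channel carries the same stream, which is what made the model tractable in domain theory.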
By Yves Bertot, INRIA Sophia-Antipolis Méditerranée
We describe how the formal description of a programming language can be encoded in the Coq theorem prover. Four aspects are covered: natural semantics (as advocated by Gilles Kahn), axiomatic semantics, denotational semantics, and abstract interpretation. We show that most of these aspects have an executable counterpart and describe how this can be used to support proofs about programs.
Introduction
Nipkow demonstrated that theorem provers could be used to formalize many aspects of programming language semantics. In this paper, we want to push the experiment further, to show that this formalization effort also has a practical outcome: it makes it possible to integrate programming tools inside theorem provers in a uniform way. We revisit the study of operational semantics, denotational semantics, axiomatic semantics, and the weakest pre-condition calculus, as already studied by Nipkow, and we add a small example of a static analysis tool based on abstract interpretation.
To integrate the programming tools inside the theorem prover, we rely on the ability to execute the algorithms after they have been formally described and proved correct, a technique known as reflection. We also implemented a parser, so that the theorem prover can be used as a playground for experimenting on sample programs. We performed this experiment using the Coq system. The tools that are formally described can also be “extracted” outside the proof environment, so that they become stand-alone programs.
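The executable-semantics idea can be sketched outside Coq too. Below is a big-step (natural semantics) evaluator for a tiny IMP-like language, written in Python; the language and its encoding are our own illustration, not the paper's Coq development, but each branch of `exec_stmt` mirrors one inference rule of a big-step semantics.

```python
# Natural (big-step) semantics made executable: a toy IMP-like language.
# States are dicts from variable names to integers.

def aeval(e, s):
    """Evaluate an arithmetic expression e in state s."""
    op = e[0]
    if op == "num": return e[1]
    if op == "var": return s[e[1]]
    if op == "sub": return aeval(e[1], s) - aeval(e[2], s)
    if op == "mul": return aeval(e[1], s) * aeval(e[2], s)
    raise ValueError(op)

def exec_stmt(c, s):
    """Execute statement c in state s; one branch per big-step rule."""
    op = c[0]
    if op == "skip":   return s
    if op == "assign": return {**s, c[1]: aeval(c[2], s)}
    if op == "seq":    return exec_stmt(c[2], exec_stmt(c[1], s))
    if op == "while":
        while aeval(c[1], s) != 0:      # loop while the guard is non-zero
            s = exec_stmt(c[2], s)
        return s
    raise ValueError(op)

# factorial of 5:  r := 1; while n do (r := r * n; n := n - 1)
prog = ("seq", ("assign", "r", ("num", 1)),
        ("while", ("var", "n"),
         ("seq", ("assign", "r", ("mul", ("var", "r"), ("var", "n"))),
                 ("assign", "n", ("sub", ("var", "n"), ("num", 1))))))
print(exec_stmt(prog, {"n": 5})["r"])  # 120
```

In the Coq setting the same evaluator would be a proved-correct function, so running it inside the prover (reflection) yields a trusted program-execution tool.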
By Thierry Coquand, Chalmers University of Technology and Göteborg University; Yoshiki Kinoshita, National Institute of Advanced Industrial Science and Technology (AIST), Japan; Bengt Nordström, Chalmers University of Technology and Göteborg University; and Makoto Takeyama, National Institute of Advanced Industrial Science and Technology (AIST), Japan
This paper presents a formal description of a small functional language with dependent types. The language contains data types, mutually recursive/inductive definitions, and a universe of small types. The syntax, semantics, and type system are specified in such a way that the implementation of a parser, interpreter, and type checker is straightforward. The main difficulty is to design the conversion algorithm in such a way that it works for open expressions. The paper ends with a complete implementation in Haskell (around 400 lines of code).
Introduction
We are going to describe a small language with dependent types, its syntax, operational semantics, and type system. This is in the spirit of the paper “A simple applicative language: Mini-ML” by Clément, Despeyroux, and Kahn, where they explain a small functional language. From them we have borrowed the idea of using patterns instead of variables in abstractions and let-bindings, which gives an elegant way to express mutually recursive definitions. We also share with them the view that a programming language should not only be formally specified, but that it should also be possible to reason about the correctness of its implementation: there should be a small step from the formal operational semantics to an interpreter, and likewise from the specification of the type system to a type checker.
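The "small step from type system to type checker" can be seen even in a much simpler setting than the paper's. Below is our own sketch of a checker for the simply typed lambda calculus (not the dependently typed language of the paper, whose conversion algorithm for open expressions is the hard part): each branch of `infer` transcribes one typing rule.

```python
# Sketch: typing rules transcribed directly into a type checker for the
# simply typed lambda calculus. Types are 'int' or ('->', domain, codomain).

def infer(term, ctx):
    """Infer the type of term in context ctx (a dict from names to types)."""
    tag = term[0]
    if tag == "lit":                       # integer literal rule
        return "int"
    if tag == "var":                       # variable rule: look up the context
        return ctx[term[1]]
    if tag == "lam":                       # abstraction: ("lam", x, t, body)
        _, x, t, body = term
        return ("->", t, infer(body, {**ctx, x: t}))
    if tag == "app":                       # application: domain must match argument
        tf = infer(term[1], ctx)
        ta = infer(term[2], ctx)
        if tf[0] != "->" or tf[1] != ta:
            raise TypeError("application mismatch")
        return tf[2]
    raise ValueError(tag)

ident = ("lam", "x", "int", ("var", "x"))
print(infer(ident, {}))                          # ('->', 'int', 'int')
print(infer(("app", ident, ("lit", 3)), {}))     # 'int'
```

With dependent types the equality test `tf[1] != ta` becomes a conversion check between open expressions, which is precisely the difficulty the paper addresses.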
By considering the size of the logical network needed to perform a given computational task, the intrinsic difficulty of that task can be examined. Boolean function complexity, the combinatorial study of such networks, is a subject that started back in the 1950s and has today become one of the most challenging and vigorous areas of theoretical computer science. The papers in this book stem from the London Mathematical Society Symposium on Boolean Function Complexity held at Durham University in July 1990. The range of topics covered will be of interest to the newcomer to the field as well as the expert, and overall the papers are representative of the research presented at the Symposium. Anyone with an interest in Boolean function complexity will find that this book is a necessary purchase.
Belief revision is a topic of much interest in theoretical computer science and logic, and it forms a central problem in research into artificial intelligence. In simple terms: how do you update a database of knowledge in the light of new information? What if the new information is in conflict with something that was previously held to be true? An intelligent system should be able to accommodate all such cases. This book contains a collection of research articles on belief revision that are completely up to date and an introductory chapter that presents a survey of current research in the area and the fundamentals of the theory. Thus this volume will be useful as a textbook on belief revision.
This book is concerned with techniques for formal theorem-proving, with particular reference to Cambridge LCF (Logic for Computable Functions). Cambridge LCF is a computer program for reasoning about computation. It combines the methods of mathematical logic with domain theory, the basis of the denotational approach to specifying the meaning of program statements. Cambridge LCF is based on an earlier theorem-proving system, Edinburgh LCF, which introduced a design that gives the user flexibility to use and extend the system. A goal of this book is to explain the design, which has been adopted in several other systems. The book consists of two parts. Part I outlines the mathematical preliminaries, elementary logic and domain theory, and explains them at an intuitive level, giving reference to more advanced reading; Part II provides sufficient detail to serve as a reference manual for Cambridge LCF. It will also be a useful guide for implementors of other programs based on the LCF approach.
Most books on data structures assume an imperative language like C or C++. However, data structures for these languages do not always translate well to functional languages such as Standard ML, Haskell, or Scheme. This book describes data structures from the point of view of functional languages, with examples, and presents design techniques so that programmers can develop their own functional data structures. It includes both classical data structures, such as red-black trees and binomial queues, and a host of new data structures developed exclusively for functional languages. All source code is given in Standard ML and Haskell, and most of the programs can easily be adapted to other functional languages. This handy reference for professional programmers working with functional languages can also be used as a tutorial or for self-study.
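The point about imperative structures not translating directly can be made concrete with a classic purely functional queue. The sketch below (our own, in the style of Okasaki's "batched queue", rendered in Python with immutable tuples) never mutates: every operation returns a new queue, so old versions remain valid.

```python
# A purely functional (persistent) FIFO queue: a pair of immutable tuples,
# (front, back), where back holds recently enqueued items in reverse.

def empty():
    return ((), ())

def enqueue(q, x):
    front, back = q
    return _check((front, (x,) + back))

def dequeue(q):
    (head, *rest), back = q[0], q[1]       # raises if the queue is empty
    return head, _check((tuple(rest), back))

def _check(q):
    """Invariant: the front is empty only if the whole queue is empty."""
    front, back = q
    if not front:
        return (tuple(reversed(back)), ())  # reverse the back into the front
    return q

q = empty()
for i in (1, 2, 3):
    q = enqueue(q, i)
x, q2 = dequeue(q)
print(x)              # 1
print(dequeue(q)[0])  # still 1: q itself is unchanged (persistence)
```

The occasional O(n) reversal amortizes to O(1) per operation, which is the kind of analysis (and its subtleties under persistence) that the book develops properly.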
We study infinite words u over an alphabet $\mathcal{A}$ satisfying the property $\mathcal{P}$: $\mathcal{P}(n)+\mathcal{P}(n+1) = 1+ \#\mathcal{A}$ for any $n \in \mathbb{N}$, where $\mathcal{P}(n)$ denotes the number of palindromic factors of length n occurring in the language of u. We also study infinite words satisfying a stronger property $\mathcal{PE}$: every palindrome of u has exactly one palindromic extension in u. For binary words, the properties $\mathcal{P}$ and $\mathcal{PE}$ coincide, and these properties characterize Sturmian words, i.e., words with complexity C(n) = n + 1 for any $n \in \mathbb{N}$. In this paper, we focus on ternary infinite words whose language is closed under reversal. For such words u, we prove that if C(n) = 2n + 1 for any $n \in \mathbb{N}$, then u satisfies the property $\mathcal{P}$ and, moreover, u is rich in palindromes. A sufficient condition for the property $\mathcal{PE}$ is also given. We construct a word demonstrating that $\mathcal{P}$ on a ternary alphabet does not imply $\mathcal{PE}$.
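Property $\mathcal{P}$ is easy to check empirically on a long prefix. As an illustration we add (not from the paper), the code below counts palindromic factors of the Fibonacci word, the standard Sturmian example, for which a binary alphabet gives $\mathcal{P}(n)+\mathcal{P}(n+1) = 3$:

```python
# Empirical check of property P on the Fibonacci word (a Sturmian word):
# for a binary alphabet, P(n) + P(n+1) should equal 1 + 2 = 3.

def fibonacci_word(length):
    """Prefix of the Fibonacci word: S1 = '0', S2 = '01', Sn = S(n-1) S(n-2)."""
    a, b = "0", "01"
    while len(b) < length:
        a, b = b, b + a
    return b[:length]

def P(word, n):
    """Number of distinct palindromic factors of length n in word."""
    factors = {word[i:i + n] for i in range(len(word) - n + 1)}
    return sum(1 for f in factors if f == f[::-1])

w = fibonacci_word(2000)
for n in range(1, 12):
    assert P(w, n) + P(w, n + 1) == 3   # property P on a binary alphabet
print([P(w, n) for n in range(1, 7)])   # [2, 1, 2, 1, 2, 1]
```

A long prefix suffices here because the Fibonacci word is uniformly recurrent, so all short factors appear early; a rigorous check would of course argue over the whole infinite language.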
We show that the complete list of regular excluded minors for the class of signed-graphic matroids is M*(G1), ..., M*(G29), R15, R16. Here G1, ..., G29 are the vertically 2-connected excluded minors for the class of projective-planar graphs, and R15 and R16 are two regular matroids that we define in the article.
We consider the t-improper chromatic number of the Erdős–Rényi random graph $G_{n,p}$. The t-improper chromatic number $\chi_t(G)$ is the smallest number of colours needed in a colouring of the vertices in which each colour class induces a subgraph of maximum degree at most t. If t = 0, then this is the usual notion of proper colouring. When the edge probability p is constant, we provide a detailed description of the asymptotic behaviour of $\chi_t(G_{n,p})$ over the range of choices for the growth of t = t(n).
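The definition can be checked directly by brute force on a tiny graph. The example below is our own (not from the paper): on the 5-cycle, the usual chromatic number is 3, but relaxing each colour class to induce maximum degree at most 1 drops it to 2, and allowing degree 2 drops it to 1.

```python
# Brute-force t-improper chromatic number: least k such that some
# k-colouring makes every colour class induce maximum degree <= t.
from itertools import product

def chi_t(vertices, edges, t):
    for k in range(1, len(vertices) + 1):
        for colouring in product(range(k), repeat=len(vertices)):
            colour = dict(zip(vertices, colouring))
            # degree of each vertex inside its own colour class must be <= t
            ok = all(
                sum(1 for u, v in edges
                    if colour[u] == colour[v] == colour[w] and w in (u, v)) <= t
                for w in vertices
            )
            if ok:
                return k
    return len(vertices)

# The 5-cycle C5: chi_0 = 3 (proper colouring), chi_1 = 2, chi_2 = 1.
C5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(chi_t(*C5, t=0), chi_t(*C5, t=1), chi_t(*C5, t=2))  # 3 2 1
```

For t = 2, a single class works because C5 itself has maximum degree 2; the paper's interest is how this threshold behaviour scales on random graphs as t grows with n.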
Aimed at an audience of researchers and graduate students in computational geometry and algorithm design, this book uses the Geometric Spanner Network Problem to showcase a number of useful algorithmic techniques, data structure strategies, and geometric analysis techniques with many applications, practical and theoretical. The authors present rigorous descriptions of the main algorithms and their analyses for different variations of the Geometric Spanner Network Problem. Though the basic ideas behind most of these algorithms are intuitive, very few are easy to describe and analyze. For most of the algorithms, nontrivial data structures need to be designed, and nontrivial techniques need to be developed in order for analysis to take place. Still, there are several basic principles and results that are used throughout the book. One of the most important is the powerful well-separated pair decomposition. This decomposition is used as a starting point for several of the spanner constructions.
This paper presents two extensions of the second-order polymorphic lambda calculus, system F, with monotone (co)inductive types supporting (co)iteration, primitive (co)recursion and inversion principles as primitives. One extension is inspired by the usual categorical approach to programming by means of initial algebras and final coalgebras, whereas the other models dialgebras and can be seen as an extension of Hagino's categorical lambda calculus within the framework of parametric polymorphism. The systems are presented in Curry style and are proven to be terminating and type-preserving. Moreover, their expressiveness is shown by means of several programming examples, ranging from usual data types to lazy codata types such as streams or infinite trees.
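Coiteration on a codata type like streams can be sketched concretely. The example below is our own Python rendering of the idea (generators standing in for lazy codata): an unfold builds an infinite stream from a seed and a step function, the categorical dual of iteration over an inductive type.

```python
# Coiteration sketched with generators: unfold (anamorphism) produces the
# stream whose elements come from repeatedly stepping a seed.

def unfold(step, seed):
    """step(seed) returns (next element, next seed); yields elements forever."""
    while True:
        out, seed = step(seed)
        yield out

def take(n, stream):
    return [next(stream) for _ in range(n)]

# The Fibonacci numbers as a coinductively defined stream:
# seed (a, b) emits a and steps to (b, a + b).
fibs = unfold(lambda s: (s[0], (s[1], s[0] + s[1])), (0, 1))
print(take(8, fibs))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

In the typed systems of the paper, the same unfold is a primitive with a typing rule guaranteeing productivity, rather than an unchecked loop.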
Let m > 2 be an integer, let $C_{2m}$ denote the cycle of length 2m on the set of vertices $[-m, m) = \{-m, -m+1, \ldots, m-2, m-1\}$, and let G = G(m, d) denote the graph on the set of vertices $[-m, m)^d$, in which two vertices are adjacent if and only if they are adjacent in $C_{2m}$ in one coordinate and equal in all others. This graph can be viewed as the graph of the d-dimensional torus. We prove that one can delete a fraction of at most of the vertices of G so that no topologically non-trivial cycles remain. This is tight up to the $\log d$ factor and improves earlier estimates by various researchers.