Introduction. We describe a constructive theory of computable functionals, based on the partial continuous functionals as their intended domain. Such a task was begun long ago by Dana Scott [30], under the well-known abbreviation LCF. However, the prime example of such a theory, Per Martin-Löf's type theory [23], in its present form deals with total (structural recursive) functionals only. An early attempt by Martin-Löf [24] to give a domain-theoretic interpretation of his type theory was never published, probably because it was felt that a more general approach, such as formal topology [13], would be more appropriate.
Here we try to make a fresh start, and do full justice to the fundamental notion of computability in finite types, with the partial continuous functionals as underlying domains. The total ones then appear as a dense subset [20, 15, 7, 31, 27, 21], and seem to be best treated in this way.
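For orientation, and in our own notation rather than the authors': the density referred to is Kreisel's density theorem, stating that for every finite type ρ the total continuous functionals lie dense in the domain C_ρ of partial continuous functionals; equivalently,

\[ \forall x_0 \in C_\rho \ \text{compact} \ \ \exists x \ \text{total} \ (x_0 \sqsubseteq x), \]

that is, every compact (finite) approximation extends to a total functional, and density in the Scott topology follows because the basic open sets are generated by the compact elements.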
Abstract. Threads, as contained in a thread algebra, emerge from the behavioural abstraction of programs in an appropriate program algebra. Threads may make use of services such as stacks, and a thread using a single stack is called a pushdown thread. Equivalence of pushdown threads is decidable. Using this decidability result, an alternative to Cohen's impossibility result on virus detection is discussed and some results on risk assessment services are proved.
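As an entirely informal illustration of the notion of a pushdown thread (our toy example in Python, not the paper's thread-algebra notation), the following sketch models a thread whose only service is a single stack, here used to check bracket balance.

class StackService:
    """A single stack offered as a service; replies True/False or the popped item."""
    def __init__(self):
        self._items = []

    def apply(self, request):
        op, *args = request
        if op == "push":
            self._items.append(args[0])
            return True
        if op == "pop":
            return self._items.pop() if self._items else False   # False when empty
        if op == "empty":
            return not self._items

def balanced(text):
    """A 'pushdown thread': its only interaction with its environment is the stack service."""
    stack = StackService()
    for ch in text:
        if ch == "(":
            stack.apply(("push", ch))
        elif ch == ")":
            if stack.apply(("pop",)) is False:    # pop from an empty stack: reject
                return False
    return stack.apply(("empty",))

# balanced("(()())") -> True; balanced("(()") -> False; balanced("())") -> False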
Problems in the area of text and document processing can often be described as text rewriting tasks: given an input text, produce a new text by applying some fixed set of rewriting rules. In its simplest form, a rewriting rule is given by a pair of strings, representing a source string (the “original”) and its substitute. By a rewriting dictionary, we mean a finite list of such pairs; dictionary-based text rewriting means replacing, in an input text, occurrences of originals by their substitutes. We present an efficient method for constructing, given a rewriting dictionary D, a subsequential transducer that accepts any text t as input and outputs t', the intended rewriting result under so-called “leftmost-longest match” replacement with skips. The time needed to compute the transducer is linear in the size of the input dictionary. Given the transducer, any text t of length |t| is rewritten in a deterministic manner in time O(|t|+|t'|), where t' denotes the resulting output text. Hence the resulting rewriting mechanism is very efficient. As a second advantage, using standard tools, the transducer can be directly composed with other transducers to efficiently solve more complex rewriting tasks in a single processing step.
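To make the replacement semantics concrete, here is a naive Python sketch of leftmost-longest-match rewriting with skips; it is meant to reproduce the intended output t' on small inputs, but not, of course, the paper's linear-time transducer construction.

# Naive leftmost-longest-match dictionary rewriting (quadratic scan, for
# illustration only; the paper builds an equivalent subsequential transducer).
def rewrite(text, dictionary):
    """dictionary: list of (original, substitute) pairs."""
    out = []
    i = 0
    while i < len(text):
        # Among all originals starting at position i, pick the longest match.
        best = None
        for original, substitute in dictionary:
            if text.startswith(original, i):
                if best is None or len(original) > len(best[0]):
                    best = (original, substitute)
        if best is not None:
            out.append(best[1])
            i += len(best[0])       # skip past the replaced original
        else:
            out.append(text[i])     # no match here: copy the character (skip)
            i += 1
    return "".join(out)

# rewrite("a fast cat", [("fast", "quick"), ("cat", "feline")]) -> "a quick feline"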
The paper describes the methodology used to develop a construction domain ontology, taking into account the wealth of existing semantic resources in the sector ranging from dictionaries to thesauri. Given the characteristics and settings of the construction industry, a modular, architecture-centric approach was adopted to structure and develop the ontology. The paper argues that taxonomies provide an ideal backbone for any ontology project. Therefore, a construction industry standard taxonomy was used to provide the seeds of the ontology, enriched and expanded with additional concepts extracted from large discipline-oriented document bases using information retrieval (IR) techniques.
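As a rough illustration of the IR step (our sketch, not the paper's actual pipeline or term-weighting scheme), candidate concepts can be proposed by comparing term frequencies in the discipline-oriented document base against a general background corpus; the proposed terms would then be reviewed and attached under taxonomy nodes.

# Illustrative only: propose candidate ontology concepts as terms markedly more
# frequent in the domain corpus than in a background corpus. Tokenisation,
# weighting and filtering here are simplistic assumptions.
from collections import Counter

def candidate_concepts(domain_docs, background_docs, top_k=20):
    domain = Counter(w for doc in domain_docs for w in doc.lower().split())
    background = Counter(w for doc in background_docs for w in doc.lower().split())
    d_total = sum(domain.values()) or 1
    b_total = sum(background.values()) or 1
    def ratio(term):
        # Relative frequency in the domain corpus vs. (smoothed) background corpus.
        return (domain[term] / d_total) / ((background[term] + 1) / b_total)
    return sorted(domain, key=ratio, reverse=True)[:top_k]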
We present a simple and intuitive unsound corpus-driven approximation method for turning unification-based grammars, such as HPSG, CLE, or PATR-II, into context-free grammars (CFGs). Our research is motivated by the idea that we can exploit (large-scale) hand-written unification grammars not only for the purpose of describing natural language and obtaining a syntactic structure (and perhaps a semantic form), but also to address several other very practical topics. Firstly, to speed up deep parsing by having a cheap recognition pre-filter (the approximated CFG). Secondly, to obtain an indirect stochastic parsing model for the unification grammar through a trained PCFG, obtained from the approximated CFG. This gives us an efficient disambiguation model for the unification-based grammar. Thirdly, to generate domain-specific subgrammars for application areas such as information extraction or question answering. And finally, to compile context-free language models which assist the acoustic model of a speech recognizer. The approximation method is unsound in that it does not generate a CFG whose language is a true superset of the language accepted by the original unification-based grammar. It is a corpus-driven method in that it relies on a corpus of parsed sentences and generates broader CFGs when given more input samples. Our open approach can be fine-tuned in different directions, allowing us to approach the original parse trees monotonically by shifting more information into the context-free symbols. The approach has been fully implemented in JAVA.
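The corpus-driven flavour of the method can be pictured with the following hypothetical sketch: context-free rules are read off from a corpus of parse trees, and their counts later support a PCFG. The tree encoding and labels are our own illustrative assumptions, not the feature-structure grammar or symbol refinement actually used in the paper.

# Read off CFG rules (node label -> sequence of child labels) from a toy treebank.
from collections import Counter

def extract_rules(tree, rules):
    """tree: (label, [subtrees]) for internal nodes, a plain string for leaves."""
    if isinstance(tree, str):
        return
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    rules[(label, rhs)] += 1          # rule counts support later PCFG estimation
    for child in children:
        extract_rules(child, rules)

# Example treebank of one parsed sentence.
treebank = [("S", [("NP", ["she"]), ("VP", [("V", ["sleeps"])])])]
rules = Counter()
for t in treebank:
    extract_rules(t, rules)
# rules now maps e.g. ('S', ('NP', 'VP')) -> 1; relative frequencies give a PCFG.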
The goal of semantic search is to improve on traditional search methods by exploiting semantic metadata. In this paper, we argue that supporting iterative and exploratory search modes is important to the usability of all search systems. We also identify the types of semantic queries users need to make, the issues concerning the search environment, and the problems that are intrinsic to semantic search in particular. We then review the four modes of user interaction in existing semantic search systems, namely keyword-based, form-based, view-based and natural language-based systems. Future development should focus on multimodal search systems, which exploit the advantages of more than one mode of interaction, and on developing search systems that can search heterogeneous semantic metadata on the open semantic Web.
In this paper we discuss the task of dialogue act recognition as part of interpreting user utterances in context. To deal with the uncertainty that is inherent in natural language processing in general and dialogue act recognition in particular, we use machine learning techniques to train classifiers from corpus data. These classifiers make use of both lexical features of the (Dutch) keyboard-typed utterances in the corpus used, and context features in the form of dialogue acts of previous utterances. In particular, we propose probabilistic models in the form of Bayesian networks as a more general framework for dealing with uncertainty in the dialogue modelling process.
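A minimal sketch of the kind of classifier involved, assuming a toy feature set (utterance words plus the previous dialogue act); this is a plain naive Bayes model, i.e. the simplest Bayesian-network structure, and not the corpus, features or networks actually used in the paper.

# Toy naive Bayes dialogue-act classifier over lexical features and the previous act.
from collections import Counter, defaultdict
import math

def train(examples):
    """examples: iterable of (words, previous_act, act) triples."""
    act_counts, feat_counts, vocab = Counter(), defaultdict(Counter), set()
    for words, prev_act, act in examples:
        act_counts[act] += 1
        for f in list(words) + [("PREV", prev_act)]:
            feat_counts[act][f] += 1
            vocab.add(f)
    return act_counts, feat_counts, vocab

def classify(words, prev_act, model):
    act_counts, feat_counts, vocab = model
    feats = list(words) + [("PREV", prev_act)]
    total = sum(act_counts.values())
    best, best_score = None, float("-inf")
    for act, n in act_counts.items():
        score = math.log(n / total)                                # prior P(act)
        denom = sum(feat_counts[act].values()) + len(vocab)
        for f in feats:
            score += math.log((feat_counts[act][f] + 1) / denom)   # add-one smoothing
        if score > best_score:
            best, best_score = act, score
    return best

# model = train([(["hello"], "NONE", "greeting"), (["which", "platform"], "greeting", "question")])
# classify(["which", "train"], "greeting", model) -> "question"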
The development of quantum walks in the context of quantum computation, as generalisations of random walk techniques, has led rapidly to several new quantum algorithms. These all follow a unitary quantum evolution, apart from the final measurement. Since logical qubits in a quantum computer must be protected from decoherence by error correction, there is no need to consider decoherence at the level of algorithms. Nonetheless, enlarging the range of quantum dynamics to include non-unitary evolution provides a wider range of possibilities for tuning the properties of quantum walks. For example, small amounts of decoherence in a quantum walk on the line can produce more uniform spreading (a top-hat distribution), without losing the quantum speed up. This paper reviews the work on decoherence, and more generally on non-unitary evolution, in quantum walks and suggests what future questions might prove interesting to pursue in this area.
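For intuition, here is an illustrative numpy sketch (ours, not taken from the paper) of a discrete-time Hadamard walk on the line in which decoherence is modelled as an occasional measurement of the coin, averaged over Monte Carlo trajectories; with no decoherence the position distribution shows the familiar two-peaked quantum profile, while a small per-step measurement probability flattens it towards the more uniform 'top-hat' shape mentioned above.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

def walk(steps, p_decohere=0.0, trajectories=200, rng=np.random.default_rng(0)):
    n = 2 * steps + 1                           # positions -steps..steps
    dist = np.zeros(n)
    for _ in range(trajectories):
        psi = np.zeros((n, 2), dtype=complex)
        psi[steps, 0] = 1.0                     # start at the origin, coin |0>
        for _ in range(steps):
            psi = psi @ H.T                     # coin toss
            shifted = np.zeros_like(psi)
            shifted[:-1, 0] = psi[1:, 0]        # coin 0 moves left
            shifted[1:, 1] = psi[:-1, 1]        # coin 1 moves right
            psi = shifted
            if rng.random() < p_decohere:       # decoherence: measure the coin
                probs = np.abs(psi) ** 2
                coin_is_one = rng.random() < probs[:, 1].sum() / probs.sum()
                psi[:, 0 if coin_is_one else 1] = 0
                psi /= np.linalg.norm(psi)
        dist += (np.abs(psi) ** 2).sum(axis=1)
    return dist / trajectories

# walk(100, 0.0) gives the two-peaked quantum profile; a small p_decohere
# (e.g. 0.05) flattens it towards a near-uniform spread between the peaks.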
Quantum physics, together with the experimental (and slightly controversial) quantum computing, induces a twist in our vision of computation, and hence, since computing and logic are intimately linked, in our approach to logic and foundations. In this paper, we discuss the most mistreated notion of logic, truth.
This special issue of Mathematical Structures in Computer Science contains several contributions related to the modern field of Quantum Information and Quantum Computing.
The first two papers deal with entanglement. The paper by R. Mosseri and P. Ribeiro presents a detailed description of two- and three-qubit geometry in Hilbert space, dealing with the geometry of fibrations and with discrete geometry. The paper by J.-G. Luque et al. is more algebraic and considers invariants of pure k-qubit states and their application to entanglement measurement.
In the longstanding debate in political economy about the feasibility of socialism, the Austrian School of Economists have argued that markets are an indispensable means of evaluating goods, hence a prerequisite for productive efficiency. Socialist models for non-market economic calculation have been strongly influenced by the equilibrium model of neoclassical economics. The Austrians contend that these models overlook the essence of the calculation problem by assuming the availability of knowledge that can be acquired only through the market process itself. But the debate in political economy has not yet considered the recent emergence of agent-based systems and their applications to resource allocation problems. Agent-based simulations of market exchange offer a promising approach to fulfilling the dynamic functions of knowledge encapsulation and discovery that the Austrians show to be performed by markets. Further research is needed in order to develop an agent-based approach to the calculation problem, as it is formulated by the Austrians. Given that the macro-level objectives of agent-based systems can be easily engineered, they could even become a desirable alternative to the real markets that the Austrians favour.
Pervasive computing is by its nature open and extensible, and must integrate information from a diverse range of sources. This leads to a problem of information exchange, so sub-systems must agree on shared representations. Ontologies potentially provide a well-founded mechanism for the representation and exchange of such structured information. A number of ontologies have been developed specifically for use in pervasive computing, none of which appears to cover adequately the space of concerns applicable to application designers. We compare and contrast the most popular ontologies, evaluating them against the system challenges generally recognized within the pervasive computing community. We identify a number of deficiencies that must be addressed in order to apply ontological techniques successfully to next-generation pervasive systems.
We give an algorithm allowing the construction of bases of local unitary invariants of pure k-qubit states from a knowledge of the polynomial covariants of the group of invertible local filtering operations. The simplest invariants obtained in this way are made explicit and compared with various known entanglement measures. Complete sets of generators are obtained for up to four qubits, and the structure of the invariant algebras is discussed in detail.
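By way of a concrete example (ours, using standard conventions that may differ from the paper's normalisation): for a two-qubit pure state \(|\psi\rangle = \sum_{i,j} a_{ij}\,|ij\rangle\), the simplest such polynomial invariant is the determinant of the coefficient matrix,

\[ \det a = a_{00}a_{11} - a_{01}a_{10}, \]

whose modulus is unchanged by local unitaries; it yields the concurrence \(C(\psi) = 2\,|\det a|\), which vanishes exactly on product states and is maximal on Bell states.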
This paper reviews recent work related to the interplay between quantum information and computation on the one hand and classical and quantum chaos on the other.
First, we present several models of quantum chaos that can be simulated efficiently on a quantum computer. A discussion of information extraction then shows that such models give rise to complete algorithms, including measurements, that achieve a speed-up compared with classical computation. It is also shown that models of classical chaos can be simulated efficiently on a quantum computer, and again information can be extracted efficiently from the final wave function. The total gain can be exponential or polynomial, depending on the model chosen and the observable measured. The simulation of such systems is also economical in the number of qubits, allowing implementation on present-day quantum computers; some of these algorithms have already been implemented experimentally.
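As a point of reference (our illustration, not one of the paper's algorithms), the following numpy sketch evolves the quantum kicked rotator, a standard quantum-chaos model of the kind discussed, by the split-operator (FFT) method; the point made in the paper is that maps of this type can be simulated on a quantum computer with exponentially fewer resources than such a classical computation requires.

# Classical split-operator simulation of the quantum kicked rotator (illustrative;
# parameter names and values are assumptions). For suitable kick strengths the
# returned momentum distribution exhibits dynamical localisation.
import numpy as np

def kicked_rotator(n_qubits=8, k=5.0, T=2.0, periods=200):
    N = 2 ** n_qubits                               # grid size, as on n_qubits qubits
    theta = 2 * np.pi * np.arange(N) / N            # angle grid
    m = np.fft.fftfreq(N, d=1.0 / N)                # integer momenta
    psi = np.ones(N, dtype=complex) / np.sqrt(N)    # zero-momentum initial state
    for _ in range(periods):
        psi = psi * np.exp(-1j * k * np.cos(theta))          # kick (angle basis)
        phi = np.fft.fft(psi) * np.exp(-1j * T * m**2 / 2)   # free rotation (momentum basis)
        psi = np.fft.ifft(phi)
    return np.abs(np.fft.fft(psi))**2 / N           # final momentum distribution (sums to 1)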
The second topic considered concerns the analysis of errors on quantum computers. It is shown that quantum chaos algorithms can be used to explore the effect of errors, such as random unitary errors or dissipative errors, on quantum algorithms. Furthermore, the tools of quantum chaos allow a direct analysis of the effects of static errors on quantum computers. Finally, we consider the different resources used by quantum information, and show that quantum chaos has some precise consequences for entanglement generation, which becomes close to maximal. For another resource, interference, a proposal for quantifying it is presented, enabling a discussion of entanglement and interference generation in quantum algorithms.
This paper reviews recent attempts to describe the two- and three-qubit Hilbert space geometries. In the first part, this is done with the help of Hopf fibrations of hyperspheres. It is shown that the associated Hopf map is strongly sensitive to the entanglement content of states. In the two-qubit case, a generalisation of the celebrated one-qubit Bloch sphere representation is described. In the second part, we present discrete versions of the Hilbert space, which are comparable to polyhedral approximations of spheres in standard geometry.
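For readers unfamiliar with the construction, the fibrations in question are, in our shorthand,

\[ S^1 \hookrightarrow S^3 \to S^2, \qquad S^3 \hookrightarrow S^7 \to S^4, \qquad S^7 \hookrightarrow S^{15} \to S^8, \]

for one, two and three qubits respectively: the normalised one-qubit states form \(S^3\), the Hopf map quotients out the global phase, and the base \(S^2\) is the Bloch sphere; in the two- and three-qubit cases the normalised states form \(S^7\) and \(S^{15}\), and the corresponding (quaternionic and, for three qubits, octonionic) Hopf maps provide the entanglement-sensitive generalisations referred to above.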
The technological changes of the twentieth century obliged research institutes to rethink their role in society. A place for invention and reflection, and a centre for the preservation of our musical heritage, the Groupe de Recherches Musicales must from now on open its doors and increase its collaboration with other bodies in an enlarged form: an extended ‘group’, reaching beyond its own walls, which shares its tools and thinking with others.
Sixty years ago, musique concrète was born of the single-handed efforts of one man, Pierre Schaeffer. How did the first experiments become a School and produce so many rich works? As this issue of Organised Sound addresses various aspects of the GRM's activities throughout sixty years of musical adventure, this article discusses the musical thinking behind the advent and development of the music created and theorised at the Paris School formed by the Schaefferian endeavours. Particular attention is given to early twentieth-century conceptions of musical sound and to how poets, artists and musicians expressed their quest for, as Apollinaire put it, ‘new sounds new sounds new sounds’. The questions of naming, gesture, sound capture, processing and diffusion are among the concepts thoroughly revisited by the GRMC, then by the GRM from 1958, up to what is now known as acousmatic music. Other contributions, such as Teruggi's, give readers insight into the technical environments and innovations that took place at the GRM. The present article focuses on the remarkable unity of the GRM, a unity that has persisted through sixty years of activity, of dialogue with researchers in other fields, and of constant attention to the contemporary scientific, technological and philosophical ideas that have shaped the development of the GRM over the course of its history.
It is customary to invoke Aristotle when the arts deal with ‘Nature’, which the artist (and not only the musician) is, he says, ‘inclined’ to imitate. Indeed, the history of music clearly attests to this temptation and to the ‘pleasure’ (as Aristotle also says) found in mimesis. Very late in that history, musique concrète gave a new perspective to this question, as to others, and changed the terms of the game: the sound objects of the world, of the whole world, the ‘noises’ that once had to be imitated, can now be captured easily through technology, almost photographically, and then gathered, kept and, finally, composed. Happily, Pierre Schaeffer, its brilliant inventor, held, on the question of nature within the new music, a position that tore him apart, at once paradoxical, uncomfortable and fundamental: the nature so easily captured he did not want to exhibit; he wanted only to examine it, in order to understand its hidden musical lessons. This almost heroic model has been only partially followed by the composers (of concrète, electroacoustic, acousmatic and anecdotal music) who have worked in this impassioned domain over the last sixty years. At the end of this article, the sketch of a typology, based on musical examples, tries to clarify the way nature is dealt with when it appears in musique concrète.
In the history and evolution of the GRM, one of its outstanding features has been the continuous energy it has dedicated to developing machines, systems and, in recent years, software that better serve composers' views and intentions. Unique discoveries were made that have become fundamental concepts of sound manipulation and have influenced researchers and developers in the conception of new composition tools that always remain somehow faithful to the originals. Many steps pave this road: some are known and recognised, others were necessary failures that allowed inventors to refocus and realise their ideas.