Abstract. We survey and evaluate recent discussions about axiomatic theories of truth, with special attention to deflationary approaches. Then we propose a new account of the use of truth theories, called a transactional analysis. In this analysis, information is communicated between intelligent agents, which are modeled as individual axiomatic theories. We note the need in the course of communication to distinguish whether or not new information is considered trustworthy.
To say that what is is not, or that what is not is, is false; but to say that what is is, and what is not is not, is true; and therefore also he who says that a thing is or is not will say either what is true or what is false.
– Aristotle, Metaphysics, 1011b
This paper consists of three parts. First is a brief introduction; probably most of it will be very familiar material. Then I will describe and discuss some recent work on axiomatic theories of truth. Finally, I will suggest an alternative way of thinking about axiomatic theories of truth, which I call a transactional approach. The famous quotation from Aristotle (shown above, and chosen in honor of the conference at which this paper was presented) is not really the starting point, but includes one little feature which deserves attention for later reference: the use of the word “say”.
Abstract. Computations in spaces like the real numbers are not done on the points of the space itself but on some representation. If one considers only computable points, i.e., points that can be approximated in a computable way, finite objects such as the natural numbers can be used for this. In the case of the real numbers, such an indexing can be obtained, for example, by taking the Gödel numbers of those total computable functions that enumerate a fast Cauchy sequence of rational numbers. Obviously, the numbering is only a partial map. It will be seen that this is not a consequence of a bad choice, but holds by necessity. The paper will discuss some consequences. All is done in a rather general topological framework.
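The fast-Cauchy representation mentioned in the abstract can be sketched concretely. The following is a minimal illustration, not the paper's formalism; the helper names (`const`, `add`) are our own. A computable real is modelled as a total function n ↦ qₙ with |x − qₙ| ≤ 2⁻ⁿ:

```python
from fractions import Fraction

def const(r):
    """A rational is computable via the constant fast Cauchy sequence."""
    q = Fraction(r)
    return lambda n: q

def add(x, y):
    """Addition queries each argument one step deeper, so the error is
    at most 2**-(n+1) + 2**-(n+1) = 2**-n."""
    return lambda n: x(n + 1) + y(n + 1)

third = const(Fraction(1, 3))
two_thirds = add(third, third)
# The n-th approximant is within 2**-n of the represented real.
assert abs(two_thirds(10) - Fraction(2, 3)) <= Fraction(1, 2) ** 10
```

Operations on such representations never see the real itself, only finitely many approximants, which is the point of indexing reals by (Gödel numbers of) such functions.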
Abstract. In this paper I introduce a sequent system for the propositional modal logic S5. Derivations of valid sequents in the system are shown to correspond to proofs in a novel natural deduction system of circuit proofs (reminiscent of proofnets in linear logic [9, 15], or multiple-conclusion calculi for classical logic [22, 23, 24]).
The sequent derivations and proofnets are both simple extensions of sequents and proofnets for classical propositional logic, in which the new machinery—to take account of the modal vocabulary—is directly motivated in terms of the simple, universal Kripke semantics for S5. The sequent system is cut-free (the proof of cut-elimination is a simple generalisation of the systematic cut-elimination proof in Belnap's Display Logic [5, 21, 26]) and the circuit proofs are normalising.
The 2005 European Summer Meeting of the Association for Symbolic Logic was held in Athens, Greece, July 28–August 3, 2005. The meeting was called Logic Colloquium 2005; its sessions took place in the building of the Department of Mathematics of the University of Athens, except for the opening session, which was held in the Main Building. It was attended by 198 participants (and 25 accompanying persons) from 29 different countries. The organizing body was the Inter-Departmental Graduate Program in Logic and Algorithms (MPLA) of the University of Athens, the National Technical University of Athens and the University of Patras. Financial support was provided by the Association for Symbolic Logic, the Athens Chamber of Commerce and Industry, the Bank of Greece, the Graduate Program in Logic and Algorithms, IVI Loutraki Water Co., the Hellenic Parliament, Katoptro Publications, Kleos S. A., the Ministry of National Education and Religious Affairs, Mythos Beer Co., the National and Kapodistrian University of Athens, the National Bank of Greece and Sigalas Wine Co.
The Program Committee consisted of Chi Tat Chong (Singapore), Costas Dimitracopoulos (Athens), Hartry Field (New York), Gerhard Jäger (Bern), George Metakides (Patras), Ludomir Newelski (Wroclaw), Dag Normann (Oslo), Rohit Parikh (New York), John Steel (Berkeley), Stevo Todorčević (Paris), John Tucker (Swansea), Frank Wagner (Lyon) and Stan Wainer (Leeds, Chair).
Introduction. We describe a constructive theory of computable functionals, based on the partial continuous functionals as their intended domain. Such a task was begun long ago by Dana Scott [30], under the well-known abbreviation LCF. However, the prime example of such a theory, Per Martin-Löf's type theory [23], in its present form deals with total (structural recursive) functionals only. An early attempt of Martin-Löf [24] to give a domain-theoretic interpretation of his type theory was never published, probably because it was felt that a more general approach — such as formal topology [13] — would be more appropriate.
Here we try to make a fresh start, and do full justice to the fundamental notion of computability in finite types, with the partial continuous functionals as underlying domains. The total ones then appear as a dense subset [20, 15, 7, 31, 27, 21], and seem to be best treated in this way.
Abstract. Threads as contained in a thread algebra emerge from the behavioral abstraction from programs in an appropriate program algebra. Threads may make use of services such as stacks, and a thread using a single stack is called a pushdown thread. Equivalence of pushdown threads is decidable. Using this decidability result, an alternative to Cohen's impossibility result on virus detection is discussed and some results on risk assessment services are proved.
Problems in the area of text and document processing can often be described as text rewriting tasks: given an input text, produce a new text by applying some fixed set of rewriting rules. In its simplest form, a rewriting rule is given by a pair of strings, representing a source string (the “original”) and its substitute. By a rewriting dictionary, we mean a finite list of such pairs; dictionary-based text rewriting means to replace in an input text occurrences of originals by their substitutes. We present an efficient method for constructing, given a rewriting dictionary D, a subsequential transducer that accepts any text t as input and outputs the intended rewriting result under the so-called “leftmost-longest match” replacement with skips, t'. The time needed to compute the transducer is linear in the size of the input dictionary. Given the transducer, any text t of length |t| is rewritten in a deterministic manner in time O(|t|+|t'|), where t' denotes the resulting output text. Hence the resulting rewriting mechanism is very efficient. As a second advantage, using standard tools, the transducer can be directly composed with other transducers to efficiently solve more complex rewriting tasks in a single processing step.
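The intended replacement semantics can be illustrated with a deliberately naive sketch. This is not the paper's transducer construction (whose point is the O(|t| + |t'|) guarantee); it only fixes the leftmost-longest-match-with-skips behaviour that the transducer is built to realise, and the function name is ours:

```python
def rewrite(text, dictionary):
    """Leftmost-longest dictionary rewriting: scan left to right; at each
    position apply the longest matching original, otherwise skip (copy)
    one character.  Quadratic in the worst case -- for semantics only."""
    # Try longer originals first so the longest match at a position wins.
    rules = sorted(dictionary.items(), key=lambda kv: -len(kv[0]))
    out, i = [], 0
    while i < len(text):
        for orig, subst in rules:
            if text.startswith(orig, i):
                out.append(subst)
                i += len(orig)
                break
        else:              # no original matches here: skip this character
            out.append(text[i])
            i += 1
    return "".join(out)

assert rewrite("abc", {"ab": "X", "abc": "Y"}) == "Y"    # longest wins
assert rewrite("ababc", {"ab": "X", "abc": "Y"}) == "XY"  # leftmost first
assert rewrite("xaz", {"a": "B"}) == "xBz"                # skips copied
```

The subsequential transducer of the paper produces the same output text deterministically in a single left-to-right pass, without re-trying rules at each position.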
The paper describes the methodology used to develop a construction domain ontology, taking into account the wealth of existing semantic resources in the sector ranging from dictionaries to thesauri. Given the characteristics and settings of the construction industry, a modular, architecture-centric approach was adopted to structure and develop the ontology. The paper argues that taxonomies provide an ideal backbone for any ontology project. Therefore, a construction industry standard taxonomy was used to provide the seeds of the ontology, enriched and expanded with additional concepts extracted from large discipline-oriented document bases using information retrieval (IR) techniques.
We present a simple and intuitive unsound corpus-driven approximation method for turning unification-based grammars, such as HPSG, CLE, or PATR-II, into context-free grammars (CFGs). Our research is motivated by the idea that we can exploit (large-scale), hand-written unification grammars not only for the purpose of describing natural language and obtaining a syntactic structure (and perhaps a semantic form), but also to address several other very practical topics. Firstly, to speed up deep parsing by having a cheap recognition pre-filter (the approximated CFG). Secondly, to obtain an indirect stochastic parsing model for the unification grammar through a trained PCFG, obtained from the approximated CFG. This gives us an efficient disambiguation model for the unification-based grammar. Thirdly, to generate domain-specific subgrammars for application areas such as information extraction or question answering. And finally, to compile context-free language models which assist the acoustic model of a speech recognizer. The approximation method is unsound in that it does not generate a CFG whose language is a true superset of the language accepted by the original unification-based grammar. It is a corpus-driven method in that it relies on a corpus of parsed sentences and generates broader CFGs when given more input samples. Our open approach can be fine-tuned in different directions, allowing us to monotonically come close to the original parse trees by shifting more information into the context-free symbols. The approach has been fully implemented in Java.
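The corpus-driven core of such an approximation can be sketched in a few lines: read off context-free rules from a treebank of parse trees delivered by the unification grammar's parser, so that a larger corpus yields a broader CFG. The tree encoding (nested tuples) and function names are our assumptions, and the sketch omits the paper's refinement of shifting feature information into the context-free symbols:

```python
def rules_of(tree, grammar):
    """Record the CF rule at this node, then recurse into the subtrees."""
    if isinstance(tree, str):          # leaf: a terminal word
        return
    label, children = tree[0], tree[1:]
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    grammar.setdefault(label, set()).add(rhs)
    for c in children:
        rules_of(c, grammar)

def approximate(treebank):
    """Collect the union of all rules seen in the parsed corpus."""
    grammar = {}
    for tree in treebank:
        rules_of(tree, grammar)
    return grammar

corpus = [("S", ("NP", "she"), ("VP", ("V", "sleeps")))]
cfg = approximate(corpus)
assert cfg["S"] == {("NP", "VP")}
assert cfg["VP"] == {("V",)}
```

Since every extracted rule is witnessed by some parse but no global constraints survive, the resulting CFG neither under- nor over-approximates the unification grammar's language in general, matching the "unsound" qualification above.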
The goal of semantic search is to improve on traditional search methods by exploiting the semantic metadata. In this paper, we argue that supporting iterative and exploratory search modes is important to the usability of all search systems. We also identify the types of semantic queries the users need to make, the issues concerning the search environment and the problems that are intrinsic to semantic search in particular. We then review the four modes of user interaction in existing semantic search systems, namely keyword-based, form-based, view-based and natural language-based systems. Future development should focus on multimodal search systems, which exploit the advantages of more than one mode of interaction, and on developing the search systems that can search heterogeneous semantic metadata on the open semantic Web.
In this paper we discuss the task of dialogue act recognition as a part of interpreting user utterances in context. To deal with the uncertainty that is inherent in natural language processing in general, and dialogue act recognition in particular, we use machine learning techniques to train classifiers from corpus data. These classifiers make use of both lexical features of the (Dutch) keyboard-typed utterances in the corpus used, and context features in the form of dialogue acts of previous utterances. In particular, we propose probabilistic models in the form of Bayesian networks as a more general framework for dealing with uncertainty in the dialogue modelling process.
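The combination of lexical and context features can be illustrated with a toy classifier of the same flavour: a naive Bayes model over utterance words, conditioned additionally on the previous dialogue act (a simple special case of the Bayesian-network view). The miniature corpus, act labels, and smoothing constants are invented for illustration:

```python
from collections import Counter, defaultdict
import math

corpus = [  # (previous act, words of utterance, act of utterance)
    ("none",     ["hello"],                   "greeting"),
    ("greeting", ["can", "i", "book"],        "request"),
    ("request",  ["yes"],                     "confirm"),
    ("greeting", ["i", "want", "to", "book"], "request"),
]

prior = defaultdict(Counter)   # counts for P(act | previous act)
lex = defaultdict(Counter)     # counts for P(word | act)
for prev, words, act in corpus:
    prior[prev][act] += 1
    lex[act].update(words)

def classify(prev, words):
    """Pick the act maximising log P(act | prev) + sum log P(w | act),
    with add-one smoothing over an assumed 1000-word vocabulary."""
    acts = {a for c in prior.values() for a in c}
    def score(act):
        s = math.log((prior[prev][act] + 1) /
                     (sum(prior[prev].values()) + len(acts)))
        n = sum(lex[act].values())
        for w in words:
            s += math.log((lex[act][w] + 1) / (n + 1000))
        return s
    return max(acts, key=score)

assert classify("greeting", ["i", "book"]) == "request"
```

Both evidence sources matter here: the context feature favours "request" after a greeting, and the lexical features reinforce it; a full Bayesian network would let such dependencies be structured explicitly rather than multiplied naively.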
The development of quantum walks in the context of quantum computation, as generalisations of random walk techniques, has led rapidly to several new quantum algorithms. These all follow a unitary quantum evolution, apart from the final measurement. Since logical qubits in a quantum computer must be protected from decoherence by error correction, there is no need to consider decoherence at the level of algorithms. Nonetheless, enlarging the range of quantum dynamics to include non-unitary evolution provides a wider range of possibilities for tuning the properties of quantum walks. For example, small amounts of decoherence in a quantum walk on the line can produce more uniform spreading (a top-hat distribution), without losing the quantum speed up. This paper reviews the work on decoherence, and more generally on non-unitary evolution, in quantum walks and suggests what future questions might prove interesting to pursue in this area.
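The effect described above can be explored with a toy Monte Carlo sketch (our own illustration, not the paper's formalism): a discrete-time Hadamard walk on the line in which, with probability `p_dec` per step, the coin is measured, giving a simple model of coin decoherence:

```python
import math, random

def hadamard_walk(steps, p_dec=0.0, runs=1, seed=0):
    """Position distribution of a Hadamard walk on the line, averaged over
    Monte Carlo runs, with optional per-step coin measurement."""
    rng = random.Random(seed)
    size = 2 * steps + 1              # positions -steps..steps at index x+steps
    h = 1 / math.sqrt(2)
    total = [0.0] * size
    for _ in range(runs):
        up, down = [0j] * size, [0j] * size
        up[steps] = 1 + 0j            # start at the origin with coin "up"
        for _ in range(steps):
            # Hadamard coin flip, then conditional shift (up right, down left)
            nu = [h * (up[i] + down[i]) for i in range(size)]
            nd = [h * (up[i] - down[i]) for i in range(size)]
            up, down = [0j] + nu[:-1], nd[1:] + [0j]
            if p_dec and rng.random() < p_dec:
                pu = min(1.0, sum(abs(a) ** 2 for a in up))
                if rng.random() < pu:     # collapse coin to "up"
                    n = math.sqrt(pu)
                    up, down = [a / n for a in up], [0j] * size
                else:                     # collapse coin to "down"
                    n = math.sqrt(1.0 - pu)
                    up, down = [0j] * size, [a / n for a in down]
        for i in range(size):
            total[i] += (abs(up[i]) ** 2 + abs(down[i]) ** 2) / runs
    return total

probs = hadamard_walk(30)             # the purely unitary walk
assert abs(sum(probs) - 1) < 1e-9
```

Comparing `hadamard_walk(30)` with, say, `hadamard_walk(30, p_dec=0.2, runs=500)` shows the characteristic change of shape: the unitary walk is sharply peaked away from the origin, while modest decoherence flattens the distribution without immediately collapsing it to the classical Gaussian.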
Quantum physics, together with the experimental (and slightly controversial) quantum computing, induces a twist in our vision of computation, and hence, since computing and logic are intimately linked, in our approach to logic and foundations. In this paper, we discuss the most mistreated notion of logic, truth.
This special issue of Mathematical Structures in Computer Science contains several contributions related to the modern field of Quantum Information and Quantum Computing.
The first two papers deal with entanglement. The paper by R. Mosseri and P. Ribeiro presents a detailed description of the two- and three-qubit geometry in Hilbert space, dealing with the geometry of fibrations and discrete geometry. The paper by J.-G. Luque et al. is more algebraic and considers invariants of pure k-qubit states and their application to entanglement measurement.
In the longstanding debate in political economy about the feasibility of socialism, the Austrian School of Economists has argued that markets are an indispensable means of evaluating goods, and hence a prerequisite for productive efficiency. Socialist models for non-market economic calculation have been strongly influenced by the equilibrium model of neoclassical economics. The Austrians contend that these models overlook the essence of the calculation problem by assuming the availability of knowledge that can be acquired only through the market process itself. But the debate in political economy has not yet considered the recent emergence of agent-based systems and their applications to resource allocation problems. Agent-based simulations of market exchange offer a promising approach to fulfilling the dynamic functions of knowledge encapsulation and discovery that the Austrians show to be performed by markets. Further research is needed in order to develop an agent-based approach to the calculation problem, as it is formulated by the Austrians. Given that the macro-level objectives of agent-based systems can be easily engineered, they could even become a desirable alternative to the real markets that the Austrians favour.
Pervasive computing is by its nature open and extensible, and must integrate information from a diverse range of sources. This leads to a problem of information exchange, so sub-systems must agree on shared representations. Ontologies potentially provide a well-founded mechanism for the representation and exchange of such structured information. A number of ontologies have been developed specifically for use in pervasive computing, none of which appears to cover adequately the space of concerns applicable to application designers. We compare and contrast the most popular ontologies, evaluating them against the system challenges generally recognized within the pervasive computing community. We identify a number of deficiencies that must be addressed in order to apply ontological techniques successfully to next-generation pervasive systems.