Abstract. We survey and evaluate recent discussions about axiomatic theories of truth, with special attention to deflationary approaches. Then we propose a new account of the use of truth theories, called a transactional analysis. In this analysis, information is communicated between intelligent agents, which are modeled as individual axiomatic theories. We note the need in the course of communication to distinguish whether or not new information is considered trustworthy.
To say that what is is not, or that what is not is, is false; but to say that what is is, and what is not is not, is true; and therefore also he who says that a thing is or is not will say either what is true or what is false.
– Aristotle, Metaphysics, 1011b
This paper consists of three parts. First is a brief introduction; probably most of it will be very familiar material. Then I will describe and discuss some recent work on axiomatic theories of truth. Finally, I will suggest an alternative way of thinking about axiomatic theories of truth, which I call a transactional approach. The famous quotation from Aristotle (shown above, and chosen in honor of the conference at which this paper was presented) is not really the starting point, but includes one little feature which deserves attention for later reference: the use of the word “say”.
Abstract. Computations in spaces like the real numbers are not done on the points of the space itself but on some representation. If one considers only computable points, i.e., points that can be approximated in a computable way, finite objects such as the natural numbers can be used for this. In the case of the real numbers such an indexing can, for example, be obtained by taking the Gödel numbers of those total computable functions that enumerate a fast Cauchy sequence of rational numbers. Obviously, the numbering is only a partial map. It will be seen that this is not a consequence of a bad choice, but is so by necessity. The paper discusses some of the consequences. Everything is done in a rather general topological framework.
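The representation mentioned in the abstract can be made concrete with a small sketch. Assuming the convention that a fast Cauchy sequence (q_n) of rationals satisfies |x - q_n| ≤ 2^-n, a computable real is given by any total computable function producing such approximants. The Newton-based approximant below is an illustrative example of my own, not a construction from the paper:

```python
from fractions import Fraction

def sqrt2_fast_cauchy(n):
    """Return a rational within 2**-n of sqrt(2), computed exactly.

    Newton's method for x**2 = 2 converges quadratically, so a handful
    of iterations already exceeds the required precision.
    """
    x = Fraction(3, 2)
    # Each Newton step roughly doubles the number of correct bits.
    for _ in range(n.bit_length() + 3):
        x = (x + 2 / x) / 2
    return x

# The terms approximate sqrt(2): here q**2 is within 2**-n of 2.
for n in (0, 5, 10):
    q = sqrt2_fast_cauchy(n)
    assert abs(q * q - 2) < Fraction(1, 2**n)
```

A Gödel number of such a function then serves as an index for the point sqrt(2); as the abstract notes, not every natural number indexes a computable real, so the numbering is necessarily partial.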
Abstract. In this paper I introduce a sequent system for the propositional modal logic S5. Derivations of valid sequents in the system are shown to correspond to proofs in a novel natural deduction system of circuit proofs (reminiscent of proofnets in linear logic [9, 15], or multiple-conclusion calculi for classical logic [22, 23, 24]).
The sequent derivations and proofnets are both simple extensions of sequents and proofnets for classical propositional logic, in which the new machinery—to take account of the modal vocabulary—is directly motivated in terms of the simple, universal Kripke semantics for S5. The sequent system is cut-free (the proof of cut-elimination is a simple generalisation of the systematic cut-elimination proof in Belnap's Display Logic [5, 21, 26]) and the circuit proofs are normalising.
The 2005 European Summer Meeting of the Association for Symbolic Logic was held in Athens, Greece, July 28–August 3, 2005. The meeting was called Logic Colloquium 2005 and its sessions, except the opening one, which took place in the Main Building, took place in the building of the Department of Mathematics of the University of Athens. It was attended by 198 participants (and 25 accompanying persons) from 29 different countries. The organizing body was the Inter-Departmental Graduate Program in Logic and Algorithms (MPLA) of the University of Athens, the National Technical University of Athens and the University of Patras. Financial support was provided by the Association for Symbolic Logic, the Athens Chamber of Commerce and Industry, the Bank of Greece, the Graduate Program in Logic and Algorithms, IVI Loutraki Water Co., the Hellenic Parliament, Katoptro Publications, Kleos S. A., the Ministry of National Education and Religious Affairs, Mythos Beer Co., the National and Kapodistrian University of Athens, the National Bank of Greece and Sigalas Wine Co.
The Program Committee consisted of Chi Tat Chong (Singapore), Costas Dimitracopoulos (Athens), Hartry Field (New York), Gerhard Jäger (Bern), George Metakides (Patras), Ludomir Newelski (Wroclaw), Dag Normann (Oslo), Rohit Parikh (New York), John Steel (Berkeley), Stevo Todorčević (Paris), John Tucker (Swansea), Frank Wagner (Lyon) and Stan Wainer (Leeds, Chair).
Introduction. We describe a constructive theory of computable functionals, based on the partial continuous functionals as their intended domain. Such a task was started long ago by Dana Scott [30], under the well-known abbreviation LCF. However, the prime example of such a theory, Per Martin-Löf's type theory [23], in its present form deals with total (structural recursive) functionals only. An early attempt by Martin-Löf [24] to give a domain-theoretic interpretation of his type theory has never been published, probably because it was felt that a more general approach — such as formal topology [13] — would be more appropriate.
Here we try to make a fresh start, and do full justice to the fundamental notion of computability in finite types, with the partial continuous functionals as underlying domains. The total ones then appear as a dense subset [20, 15, 7, 31, 27, 21], and seem to be best treated in this way.
Abstract. Threads, as contained in a thread algebra, emerge by behavioral abstraction from programs in an appropriate program algebra. Threads may make use of services such as stacks, and a thread using a single stack is called a pushdown thread. Equivalence of pushdown threads is decidable. Using this decidability result, an alternative to Cohen's impossibility result on virus detection is discussed, and some results on risk assessment services are proved.
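The idea of a pushdown thread can be illustrated informally. The sketch below is not the paper's formalism: it models a thread as a function emitting an action trace while consulting a single stack (or, equivalently here, a counter), and compares two such threads on a finite set of inputs. Full equivalence of pushdown threads is decidable, as the abstract states, but this naive bounded comparison is only an approximation of that check:

```python
def bracket_thread(word):
    """Toy pushdown thread: checks bracket balance using one stack,
    emitting a trace of 'push'/'pop' actions and a final verdict."""
    stack, trace = [], []
    for c in word:
        if c == '(':
            stack.append(c)
            trace.append('push')
        elif c == ')':
            if not stack:
                trace.append('fail')
                return trace
            stack.pop()
            trace.append('pop')
    trace.append('ok' if not stack else 'fail')
    return trace

def counter_thread(word):
    """Behaviorally equivalent thread using a counter instead of a stack."""
    depth, trace = 0, []
    for c in word:
        if c == '(':
            depth += 1
            trace.append('push')
        elif c == ')':
            if depth == 0:
                trace.append('fail')
                return trace
            depth -= 1
            trace.append('pop')
    trace.append('ok' if depth == 0 else 'fail')
    return trace

def traces_agree(t1, t2, words):
    """Naive bounded check: the two threads behave alike on these inputs."""
    return all(t1(w) == t2(w) for w in words)

# The two threads produce identical traces on these samples.
assert traces_agree(bracket_thread, counter_thread,
                    ["()", "(())()", ")(", "(()"])
```

Deciding genuine equivalence requires reasoning about all inputs and stack contents at once, which is what makes the decidability result in the abstract nontrivial.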
Chapter 1 portrayed appeal to witness testimony as a distinctive argumentation scheme with a matching set of critical questions. This approach implies that any given instance of an appeal to witness testimony in a trial needs to be evaluated in the context of a dialogue, in line with the goal appropriate for that type of dialogue. Chapter 4 outlined several different abstract models of dialogue that have been identified in the literature on argumentation theory and computing. Chapter 5 outlined the characteristics of one particular type of dialogue called peirastic examination dialogue that is new on the scene and has been very little investigated in the literature. The most visible and best established instance of this type of dialogue is found in the examination procedure used in our legal system to question witnesses and other participants in a trial. In Sections 6 and 7 of Chapter 5, the abstract model of peirastic examination dialogue was illustrated by features of examination in a trial setting. Now a large question is raised: How can we apply these abstract dialectical models to the existing institution called the trial in law?
What sort of dialogue provides the right framework for making witness testimony a form of evidence in a trial? In this chapter we will concentrate on the adversarial theory, embodied in Anglo-American law, where the opposed advocacy arguments of both sides offer the trier a basis for judging which side has the stronger argument, or a strong enough argument to meet the requirements of proof.
There is a long tradition in philosophy, going back to Plato, of contempt for arguments based on witness testimony as being unreliable, subjective, misleading, and impossible to evaluate as evidence by objective standards. Any argument as fallible as one based on witness testimony is easily seen as subjective in nature, and simply beyond the range of any exact, objective treatment. Certainly the recent findings of social scientists (Loftus, 1979) have given us plenty of grounds for distrust of this fallible form of evidence. In this chapter, some notorious cases of lying witnesses and wrongful convictions based on false or inaccurate witness testimony dramatically illustrate the point. On the other hand, even in an age where video evidence seems to be usurping the place of eyewitness testimony, we could scarcely do without witness testimony as an important kind of evidence in trials and investigations. Thus it is a kind of evidence that is on a razor's edge. We need it, but it can go badly wrong. Thus it is important to study how it should be evaluated as a kind of evidence that can be strong in some cases and weak, or even erroneous and misleading, in others. Chapter 1 began this process by identifying the premises that witness testimony is based on as a type of argumentation, the conclusions that it leads to, the nature of the inferential link that joins them, and how it can be supported or rebutted.
To get closer to a useful method of analyzing and evaluating witness testimony as evidence, we need to look more closely at what actually happens in trials. What typically happens in a trial is that when a witness is examined, the examiner will ask a series of connected questions all designed to probe into the particulars of some situation. The answers given by the respondent will tend to hang together in a coherent unity, sometimes called a ‘story’. The use of this term implies a certain skepticism, suggesting that the story may not really be true, and that it may be fabricated, like a fictional story. So when the examiner probes into the story, she may test out its coherence, as well as trying to just elicit further details. At any rate, it seems to be the story itself that guides how the testimony is evaluated as evidence (Bench-Capon and Prakken, 2005). The so-called story is really just the collected set of assertions forming an account of some supposed event reported by the witness. But since the witness is (presumably) in a position to know about the subject he is being questioned about, as shown in Chapter 1, this collected set of assertions can be filtered through argumentation schemes to provide evidence. Because appeal to witness testimony is evidence, presumably based on a rational form of argument, conclusions can be drawn from what the witness says.
A plausibilistic argument is one whose conclusion seems to be true on the basis of the evidence at some point in a proceeding, but may be subject to retraction if new information comes into the case at a later point. The conclusion is drawn tentatively, and is retracted if, as a story continues to unfold, new evidence comes in showing that it is not (likely) true. Plausibility has often been mistrusted, to some extent justifiably, because it is not only subject to defeat in some cases, but in other cases, it can be misleading, and even be the basis of fallacies, of the kind long studied in logic (Walton, 1995). And yet it is becoming more and more evident through recent work in AI that the majority of arguments we are familiar with, both in legal argumentation and in everyday conversational argumentation, are based on plausible reasoning of a kind that is weaker than deductive or inductive reasoning. It is often thought to be based on abductive inference, or inference to the best explanation. MacCrimmon (2001, p. 1455) cited the evidentiary rule that a person found in possession of a recently stolen item is the thief. On an abductive model, the inference is reasonable if the person's having stolen the item is the best explanation of how he came to possess it.
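The tentative, retractable character of plausibilistic reasoning can be made concrete with a small computational sketch, of the kind common in AI work on defeasible argumentation. The rule and fact names below are hypothetical illustrations of MacCrimmon's stolen-item example, not part of any cited formalism:

```python
class PlausibleCase:
    """Toy model of plausibilistic reasoning: a conclusion is held
    tentatively while its premises are in evidence and no defeater is,
    and is retracted when defeating evidence arrives."""

    def __init__(self):
        self.evidence = set()
        # Each rule: (conclusion, required premises, defeaters).
        # Names here are hypothetical, for illustration only.
        self.rules = [
            ("suspect_is_thief",
             {"possesses_stolen_item", "item_recently_stolen"},
             {"has_receipt", "bought_from_third_party"}),
        ]

    def add_evidence(self, fact):
        self.evidence.add(fact)

    def conclusions(self):
        """Conclusions currently drawn, recomputed as evidence changes."""
        return {c for c, premises, defeaters in self.rules
                if premises <= self.evidence
                and not (defeaters & self.evidence)}

case = PlausibleCase()
case.add_evidence("possesses_stolen_item")
case.add_evidence("item_recently_stolen")
assert "suspect_is_thief" in case.conclusions()   # drawn tentatively
case.add_evidence("bought_from_third_party")      # defeating evidence arrives
assert "suspect_is_thief" not in case.conclusions()  # conclusion retracted
```

The key contrast with deductive inference is visible in the last two lines: adding information can remove a conclusion, which is exactly the nonmonotonic behavior that the text attributes to plausible reasoning.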
Trial lawyers tend to see a trial from an adversarial viewpoint and tend to be highly skeptical of the notion that the examination of a witness in court could be seen as a species of information-seeking dialogue. But if you look at the trial from a wider viewpoint, say that of a judge, part of the purpose of it should be to bring the true facts of a case to light. This aim can best be achieved through the testing of the arguments of both sides in an adversarial clash, we hold. But it should not be a pure quarrel. The trier is more likely to get a better idea of what the truth of the matter really is through the information that witnesses can provide. On this basis of what the trial should really be about, it is argued that ideally, in a trial, witness examination should be assumed to have the function of bringing out information. But in practice, especially given the adversarial system of Anglo-American common law, the purpose that the examining counsel has is that of advocacy. In Chapter 5 it is argued that the best way to normatively model the argumentation in such a trial is as persuasion dialogue based on a special type of examination dialogue that is a species of information-seeking dialogue.
This chapter will show that it is a special kind of information-seeking dialogue that is involved in legal examination. Information-seeking dialogue seems to be very common and, on the surface, unproblematic.
Evaluating argumentation in a dialogue model, in which two parties reason with each other, is an old idea that goes back to Aristotle's earlier writings, and even to the sophists. But after the Greeks, the idea lost favor, although it persisted for a time in the scholastic disputations of the Middle Ages. Aristotle's syllogistic dominated the field of logic for many centuries, until it was superseded by other forms of deductive calculi – propositional and quantifier logics. It was not until the advent of the Erlangen School in Germany that anyone tried to revive the dialogue model and to carry out a systematic program for constructing a system of calculation based on it. And it was not until Hamblin's construction of mathematical models of dialogue (1970, 1971) that a general structure of logical dialogue systems was put forward that was well enough developed to show promise of providing methods for evaluating arguments and fallacies that would hold practical interest for logicians. Alexy (1989) showed how such dialogue systems can be applied to legal argumentation, a program that is now being carried forward by a group of researchers in AI and law including Bench-Capon (1995), Prakken and Sartor (1996, 1998), Verheij (1996, 2000), and Lodder (1998, 1999). This line of research is now often called computational dialectics. It would appear that Gordon invented the term.