The natural semantics for standard ND rules for the quantifiers may come as something of a surprise, for it differs from both the objectual and the substitution interpretations (Section 14.3). The semantics that is expressed by the rules for the universal quantifier ∀, dubbed the sentential interpretation, is distinctly intensional and requires a side condition ensuring that the variables all denote in the same virtual domain (Section 14.4). Because of this, the sentential interpretation is not functional, and so it will be important to establish an isomorphism result to establish that it qualifies as a semantics (Section 14.5). One might hope that sequent systems that force a classical treatment of negation might be strong enough to eliminate the difference between the sentential interpretation and familiar readings of the quantifier, but Section 14.6 reveals that the difference persists. Section 14.7 shows how that difference may be exploited to establish that natural semantics (even for sequent systems with multiple conclusions) fails to be referential, that is, there is no guarantee that models of those systems can be treated as if their variables referred to objects in a virtual domain. Section 14.8 turns to the natural semantics for ∃, the existential quantifier. The surprise here is that ∃ lacks existential import. Section 14.9 points out that adoption of the omega rule is sufficient to force the substitution interpretation of the quantifiers. In Section 14.10, results of the chapter are marshaled for a criticism of Hacking’s program to limn the logical.
In the introduction of this book, it was claimed that model-theoretic inferentialism and proof-theoretic inferentialism need not be enemies, and that results of this book can actually be of service to the proof-theoretic tradition. This chapter provides details to support that claim. The fundamental problem to be solved in the proof-theoretic paradigm is to find an answer to the problem of tonk. The rules for tonk show that not every collection of rules can define a connective, and so proof-theoretic conditions must be found that are independently motivated, and distinguish those rules that successfully define connective meaning from those that do not. Sections 13.1 through 13.3 discuss conservativity, one of the most widely discussed constraints of this kind. Here it is shown that natural semantics may be used to help motivate that condition. Section 13.4 discusses uniqueness. In Section 13.5, notions of harmony based on the inversion principle and normalization of proofs are briefly reviewed. In the following section, a model-theoretic notion called unity is introduced, and compared with harmony notions in the proof-theoretic tradition (Section 13.7). In the final section (13.8), it is shown that natural semantics can be revised in a near-trivial way to provide proof-theoretic accounts of logical consequence and harmony. Brief comparisons are drawn between proof-theoretic ideas in the literature and these new proposals.
Conservation and connective definition
Our investigation into the prospects for model-theoretic inferentialism has been inspired by the idea that natural deduction rules provide a syntactic method for defining the meanings of the connectives. However, not every set of ND rules will do. Prior (1960) showed that there is a pair of introduction and elimination rules for a connective c that determines no coherent interpretation of c. (See also Wagner (1981) and Hart (1982).) For example, if we try to define the connective tonk by the rules A ⊢ A tonk B and A tonk B ⊢ B, we have the unwelcome result that A ⊢ B holds for any choice of A and B. The tonk rules don’t fix a meaning for tonk; they disastrously alter the nature of deduction in the system.
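The trivializing derivation takes only two applications of the tonk rules plus the transitivity of ⊢:

```latex
% From any A we may conclude any B whatsoever.
\begin{align*}
  &A \vdash A \mathbin{\mathrm{tonk}} B  && \text{(tonk introduction)}\\
  &A \mathbin{\mathrm{tonk}} B \vdash B  && \text{(tonk elimination)}\\
  &A \vdash B                            && \text{(transitivity of } \vdash \text{)}
\end{align*}
```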
(This chapter draws heavily from Garson (2013) “Open Futures in the Foundations of Propositional Logic,” in Nuel Belnap on Indeterminism and Free Action, T. Mueller (ed.), Springer, New York.)
The natural semantics ‖PL‖ for classical logic has a number of interesting applications. This chapter will discuss its merits as a semantics that takes seriously the notion that the future is open (Section 10.1). Here future possibilities are represented as a branching structure, with choice points at each node. The ‖PL‖ models developed in Chapter 8 are well suited to this idea, since the frame <V, ≤> sets up that kind of structure. Furthermore, as Sections 10.3 and 10.4 show, the side condition ‖LF‖ and the quasi-truth reading ‖q∨‖ of disjunction match well with our concerns to locate the propositions that express events over which one might have control.
‖LF‖ If v(A)=f, then for some v′, v≤v′ and v′(A)=F.
‖q∨‖ v(A∨B)=t iff for all v′∈V, if v≤v′, then v′(A)=qT or v′(B)=qT.
In Section 10.5, it is shown that ‖PL‖ can be used to refute arguments that purport to show the inescapability of fatalism. The frame <V, ≤> is not entirely apt for modeling an open future because it need not satisfy the condition that there be no branching of possibilities in the past. Section 10.6 shows how this problem can be repaired.
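The side condition ‖LF‖ can be checked mechanically on a small branching frame. A minimal sketch follows; since this excerpt does not define the value ‘F’, the code assumes a settled-falsity reading (A is F at v when A is f at every refinement of v), and the frame, valuation names, and wff strings are illustrative, not from the text.

```python
# Toy check of the side condition ||LF||: whenever v(A) = f, some
# refinement v' of v settles A false.
# Assumption: 'F' is read as settled falsity (f at every refinement).

def settled_false(vals, order, v, a):
    """A is settled false at v: f at v and at every refinement of v."""
    return all(vals[w][a] == 'f' for w in order[v])

def satisfies_LF(vals, order, wffs):
    """order[v] = set of valuations v' with v <= v' (including v itself)."""
    return all(any(settled_false(vals, order, w, a) for w in order[v])
               for v in vals for a in wffs if vals[v][a] == 'f')

# v0 branches into v1 (where A becomes true) and v2 (where A stays false).
vals = {'v0': {'A': 'f'}, 'v1': {'A': 't'}, 'v2': {'A': 'f'}}
order = {'v0': {'v0', 'v1', 'v2'}, 'v1': {'v1'}, 'v2': {'v2'}}
print(satisfies_LF(vals, order, ['A']))  # True: v2 settles A false
```

If the branch to v2 is removed, so that A is false at v0 but true at its only refinement, the condition fails: falsity at v0 is never settled.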
The purpose of this book is to rigorously explore the inferentialist thesis that the rules of logic determine the meanings of the connectives. The answer to what a logic expresses about the connectives it regulates depends on how expressive power is defined. The purpose of this chapter is to explore a criterion for expressive power that has been common in the literature, and which was dubbed ‘deductive expression’ in Section 1.8. According to this benchmark, the news is not good for the inferentialist. In the overwhelming majority of cases, systems of logic underdetermine the meanings of the connectives they regulate (Section 2.2). Many different interpretations are compatible with what the rules deductively express. This chapter illustrates the problem in the case of propositional logic, reviewing negative results that are well known. Then we will consider an idea that would eliminate the underdetermination. It is that if any one connective is given the classical interpretation, then classical interpretations are fixed for all the others (Section 2.3). Although that condition seems fairly modest, it does not resolve the fundamental problem. The moral of the story will be that what is wrong is not the inferentialist’s thesis, but the benchmark we are using to adjudicate it. This will set the stage for better criteria that just happen to provide more optimistic answers.
Deductive expression defined
The main idea behind deductive expression is to ask what semantics is determined by the requirement that all the provable arguments of a system are valid. For example, suppose we have a set V of valuations, which, remember, are arbitrary (but consistent) functions from the wffs for propositional logic to the truth-values t and f. Assume that the arguments provable in a standard system for propositional logic are all V-valid. What does this tell us about the members of V? In particular, we want to know whether the valuations in V obey the familiar truth tables, or if not, whether they satisfy some other truth conditions for the connectives.
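V-validity as just defined can be sketched directly: an argument is V-valid when every valuation in V that satisfies the premises also satisfies the conclusion. The valuation set and the wff strings below are illustrative assumptions, not drawn from the book.

```python
# V-validity: every valuation in V making all premises true
# makes the conclusion true.  Valuations are dicts from wff
# strings to the truth values 't' and 'f'.

def v_valid(V, premises, conclusion):
    return all(v[conclusion] == 't'
               for v in V
               if all(v[p] == 't' for p in premises))

# A two-valuation set over the wffs A, B, and A&B.
V = [{'A': 't', 'B': 't', 'A&B': 't'},
     {'A': 't', 'B': 'f', 'A&B': 'f'}]

print(v_valid(V, ['A', 'B'], 'A&B'))  # True: &-introduction is V-valid
print(v_valid(V, ['A&B'], 'A'))       # True: &-elimination is V-valid
```

Deductive expression then asks what such checks, applied to every provable argument at once, force the members of V to look like.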
Where does meaning come from? There is no more compelling question in the philosophy of language. Referentialists seek an answer in a correspondence between word and object, statement and reality. Inferentialists look to an expression’s deductive role, its contribution to the web of relations that determine what follows from what. Logic is the perfect test bed for assessing the merits of inferentialism. The deductive role of the connectives for a given system is defined precisely by its rules. Whether the meanings of the connectives are determined by those roles is now a question with a rigorous answer. This book proves what some of those answers are, revealing both strengths and weaknesses in an inferentialist program for logic. The results reported here are only the tip of an iceberg, but they illustrate the important contribution that metalogic can play in resolving central puzzles in the philosophy of language.
To make headway on this project, we need to explore the options in syntax, in semantics, and in ways to plausibly bridge the two. On the syntactic side, we are faced with a rich variety in the systems of logic. This book examines only intuitionistic and classical rules for propositional logic, and then briefly, rules for quantified and modal systems. So this is just a start. A second important source of syntactic variation is rule format. The details about the way the rules of a logical system are formulated affect whether that system allows unintended interpretations of its connectives. In the same way that moving from first-order to second-order languages strengthens the expressive power of the logic, so does the move from axiomatic formulations, to natural deduction systems, and to sequent calculi with multiple conclusions. Answers to questions about what logics mean depend crucially on which format is chosen. The moral is that inferentialists who claim that inferential roles fix meaning are duty bound to specify what kind of rules undergird those roles.
(This chapter draws heavily from Brown and Garson (in preparation).)
Timothy Williamson (1994, p. 142 and elsewhere) presents a number of powerful arguments against supervaluationist treatments of vagueness. In light of the failure of those and other accounts that do not preserve classical logic, Williamson concludes that the only tenable theory of vagueness is epistemic. His view adopts the counterintuitive thesis that there are facts of the matter as to whether a borderline case counts as (for example) being bald, and that our inability to decide such a case is simply the reflection of limitations in our knowledge of that reality.
This chapter shows that ‖QL‖, the natural semantics for classical predicate logic, provides a semantics for vagueness that preserves the core intuition behind supervaluationism, namely that some sentences are undecided in truth value (Section 15.2). However, ‖QL‖ faces none of the problems put forward by Williamson (Section 15.3). Once this alternative to supervaluations is in place, pressures for accepting an epistemological account of vagueness can be resisted (Section 15.4).
We have already discussed intuitionistic logics that define ~ by ~A =df A→⊥. However, let us now consider the possibility that ~ is a primitive symbol of the language. In light of the ~ Definition Theorem of Section 6.4, it will come as no surprise that the natural deduction (ND) rules for intuitionistic negation express the intuitionistic truth condition ‖¬‖ (Section 8.1). More interesting results surface when we ask what is expressed by S~, the classical ND rules for negation (Section 8.2). It turns out that the condition ‖S~‖ that S~ expresses is intuitionistic, but it also includes a side condition ‖LL‖ corresponding to the requirement that (Double Negation) preserves validity. Sections 8.3–8.4 will explore the content of ‖LL‖. In Section 8.5, ‖LL‖ is recast in a form ‖LL′‖ that mentions no connective. Questions are raised about whether ‖S~‖ is a legitimate semantics for ~. The idea that ‖S~‖ is not acceptable is supported in Section 8.6, where it is shown that ‖S~‖ is not functional. That negative opinion of ‖S~‖ is tempered somewhat in Section 8.7, where it is shown that ‖LL′‖ makes a positive contribution to resolving the serious problems (discussed in Section 7.3) that bedevil the condition ‖∨‖ expressed by the disjunction rules S∨. Section 8.8 shows that despite the non-functionality of ‖S~‖, the condition ‖PL‖ expressed by the classical rules PL for negation and the other connectives is isomorphic to a perfectly respectable semantics, one that has already appeared in the literature. In light of that result, the situation is reassessed (Section 8.9). ‖PL‖ is defended by arguing that ‖LL‖ has nothing to do with the truth conditions for negation, and that therefore the meaning for negation expressed by the classical rules is intuitionistic.
Chapters to come will underscore the virtues of ‖PL‖. Chapter 10 deploys ‖PL‖ as a logic for an open future, and a foundation for systems that can handle human agency. Chapter 15 shows that ‖PL‖ is especially well suited for solving serious problems faced by supervaluation accounts of vagueness. Alien anthropologists who discover that we employ classical natural deduction rules for our reasoning will conclude that ‖PL‖ tells us what we mean by the connectives (if anything does). The upshot is that whether we adopt intuitionistic or classical ND rules, what we mean by ~ is intuitionistic.
The last chapter shows that the rules of propositional logic underdetermine a meaning for the connectives, at least when deductive expression is the benchmark to be used for determining what the system says about how to interpret them. One diagnosis of the problem is that deductive expression is entirely insensitive to how the rules of a logic are formulated. All that matters to whether a set of valuations is a deductive model of a system is what arguments are distinguished as provable by its rules. However, rule format can have an effect on expressive power when the criterion for what a system expresses takes details concerning rule formulation into account. The most direct way of doing this is to define expression via the notion of a local model of a rule. A set of valuations V is a local model of a rule when every member of V that satisfies the input(s) of the rule also satisfies its output. The relevant definitions leading to the notion of local expression are listed here. Since we will discuss (multiple conclusion) sequent systems in this chapter, we will speak more generally of satisfaction of sequents rather than satisfaction of arguments.
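The notion of a local model can be sketched in a few lines. The encoding of sequents as (premises, conclusion) pairs and the sample valuation set are illustrative assumptions; the multiple-conclusion case would generalize the conclusion to a list.

```python
# A valuation satisfies a sequent when it makes the conclusion true
# if it makes all the premises true.
def satisfies(v, sequent):
    premises, conclusion = sequent
    return v[conclusion] == 't' or any(v[p] == 'f' for p in premises)

# V is a local model of a rule when every valuation in V satisfying
# all the rule's input sequents also satisfies its output sequent.
def local_model(V, inputs, output):
    return all(satisfies(v, output)
               for v in V
               if all(satisfies(v, s) for s in inputs))

# &-elimination as a sequent-to-sequent rule: from |- A&B infer |- A.
V = [{'A': 't', 'B': 't', 'A&B': 't'},
     {'A': 'f', 'B': 't', 'A&B': 'f'}]
print(local_model(V, [([], 'A&B')], ([], 'A')))  # True
```

Note the contrast with V-validity: here the rule itself, not just the arguments it helps prove, must be respected valuation by valuation.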
This chapter is an introduction to the natural semantics for modal logics. A pleasing result is that the basic modal logic K expresses the standard truth condition for □, where the accessibility relation R is defined as it is in canonical models for modal logic (Section 16.1). Extensions of K, such as the logics M (= T), S4, and S5, are treated in Section 16.2. Here we learn that some modal axioms involving □ (such as (M) and (4)) express their corresponding frame conditions, but others involving ◊, such as (B) and (5), do not. A more detailed treatment of the natural semantics for ◊ rules follows (Section 16.3). It shows that the interpretation of ◊ is novel and doubly intensional. Section 16.4 reveals how complications that arise for the completeness of quantified modal logic may be explained by the fact that the natural semantics for the quantifiers differs from the substitutional and objectual readings. The chapter closes (Section 16.5) with the description of an interesting but failed project: to modify the definition of validity to more faithfully capture what is expressed by natural deduction rules that involve the use of modal (or boxed) subproofs. Though clean results on the natural semantics of those systems are not available, we hope the reader will find the discussion an inspiration for further research using variations on the definition of validity.
In the last chapter, we established that classical propositional logic PL allows a non-classical interpretation of the connectives. In fact, the natural semantics ‖PL‖ for PL is a variant of an intuitionistic semantics, where (Refinability) is added as a side condition. The existence of non-classical interpretations for classical propositional logic is nothing new, for supervaluation semantics (van Fraassen, 1969) qualifies as another example. The purpose of this chapter is to compare ‖PL‖ with supervaluation semantics. There are interesting similarities. Section 9.3 shows that the partial truth tables for ‖PL‖ and supervaluations are identical. Furthermore, there is a way to map supervaluations into the canonical model of PL (Section 9.2). However, two differences will emerge that are decisive. Supervaluation semantics does not preserve the validity of the most basic rules of PL (Section 9.4). Furthermore, it will be argued in Section 9.5 that supervaluations do not really qualify as a semantics, for they do not provide truth conditions for the connectives of PL. The upshot of this discussion will be to recommend ‖PL‖ as better suited for philosophical applications where supervaluations have been popular. In coming chapters, we will demonstrate the point in detail by showing the value of ‖PL‖ for handling the open future (Chapter 10) and vagueness (Chapter 15).
Supervaluation semantics
Supervaluation semantics has many applications, but a seminal motivation was to provide for a Strawsonian theory of singular terms where some sentences lack truth-values because of the failure of the presupposition that their terms denote. Strawson (1967) argued that ‘the present King of France is bald’ is not false, as Russell would have thought. Instead it lacks a value because (presuming there is no present King of France) the sentence simply fails to make an evaluable statement.
In this chapter, we will lay the groundwork for showing that the natural semantics for natural deduction formulations of propositional logic is intuitionistic. In Section 5.1, we present Kripke’s semantics for intuitionistic logic, which is a variant of the semantics for the modal logic S4 (van Dalen, 1986, pp. 243ff.). In Section 5.2, intuitionistic models are introduced. Intuitionistic models are sets of valuations V that satisfy conditions for each of the connectives that resemble the corresponding truth conditions for Kripke models. Intuitionistic models, being sets of valuations of a certain kind, lack the structure found in Kripke models, notably the accessibility relation ⊆. In intuitionistic models, the analog ≤ of ⊆ has to be defined by the way valuations in V assign values to the wffs. Section 5.3 discusses the objection that the conditions that mention ≤ are therefore circular or fail to meet other standards for a successful account of recursive truth conditions. The proof that there is an isomorphism between structures generated by intuitionistic models and Kripke models (Section 5.4) helps respond to those objections. A further constraint for successful truth conditions is defined (Section 5.5), and it is shown that intuitionistic models meet it. This supports the contention that intuitionistic models count as a legitimate account of connective meaning, and prepares the way for the results on natural semantics that are proven in later chapters.
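The point that ≤ must be defined from how the valuations in V assign values can be illustrated directly. The definition below (v ≤ v′ iff every wff true at v is true at v′) is the standard choice in this tradition, assumed here as a sketch; the valuations and wff names are illustrative.

```python
# In an intuitionistic model the ordering <= between valuations is not
# primitive structure: it is defined from the assignments themselves.
# Assumed definition: v <= v2 iff every wff true at v is true at v2.

def leq(v, v2):
    return all(v2[a] == 't' for a in v if v[a] == 't')

# Three valuations over the wffs A and B, forming a chain v0 <= v1 <= v2.
v0 = {'A': 'f', 'B': 'f'}
v1 = {'A': 't', 'B': 'f'}
v2 = {'A': 't', 'B': 't'}

print(leq(v0, v1), leq(v1, v2), leq(v2, v1))  # True True False
```

Because ≤ is extracted from the assignments rather than given as extra structure, the circularity worry discussed in Section 5.3 arises; the isomorphism result of Section 5.4 is what answers it.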
Kripke semantics for intuitionistic logic
Kripke’s semantics (1963) is a simplification of the topological semantics developed in the 1930s by Heyting and refined by Kreisel. The main idea is to define truth relative to the history of discovery of an idealized mathematician (or community of mathematicians). At each point in that history, a body of mathematical results has been developed. Since perfect memory of past results is presumed, that body of knowledge grows as time proceeds. At each stage in the history, the mathematician has choices concerning which topics should be investigated next. The collection of choices for future research can be modeled as a branching structure with forks representing the choice points.
The purpose of this book is to explore what rules of logic express about the meanings of the logical symbols they govern. Suppose that the only thing you know about the symbol ‘*’ is that the following rules govern its behavior. Given an English sentence of the form A*B, it follows that A, and it also follows that B. Given A and B together, it follows that A*B. Can you tell what the symbol ‘*’ means? Did you think that ‘*’ must mean what we mean by ‘and’ in English, and that the truth behavior of sentences involving ‘*’ must conform to the standard truth table for conjunction? If you did, can you be certain that the deductive role for ‘*’ specified by these rules does not also allow some alternative (or unintended) interpretation of ‘*’?
The broader picture
Questions like this are special cases of a general concern in the philosophy of language. To what extent can the meanings of expressions of a language be defined by the roles they play in our reasoning? Does knowing the meaning of a sentence simply amount to knowledge of which sentences entail it and which ones it entails? Would it be possible at least in principle for alien anthropologists from a planet circling the star Alpha Centauri (who know initially nothing about our language) to learn what our sentences mean by simply investigating the way we reason from one to another in different circumstances?
It is time to draw this book to a close, and to reflect on what has been accomplished. I hope its main contribution will be to spark future research on what logics mean. So far, only a few excursions have been taken in a vast landscape of questions about natural semantics and its philosophical applications. A handful of answers have been given to the main question that is posed by this book, namely how or whether inferential rules governing an expression fix its truth conditions. However, we have just scratched the surface. For example, more needs to be done to fully understand classical quantification. Results for free logics and systems for generalized quantifiers have not even been attempted. Exploring rules in the logic programming tradition and the concept of negation as failure could be very fruitful. Modal logics and their quantified extensions are also promising territory for new results, to say nothing of tense logics, multi-modal logics, dynamic logic, and inquisitive logic. Despite the fact that natural semantics seems wedded to standard structural rules, it is still possible to obtain some results for relevance logic (Garson, 1990, Section 5). Modifications to the definition of validity mentioned in Section 16.5 promise to allow application of the ideas of natural semantics to a wider range of systems including substructural logics. Function symbols, descriptions, the lambda calculus, set theory, and arithmetic all remain to be explored. Furthermore, there is no reason why natural semantics has to be limited to the domain of logic. The inferential roles set up within a natural language provide a much richer field of investigation. There promises to be a wide range of applications to philosophical problems as well, at the very least to areas such as truth paradoxes and presupposition where supervaluations have been deployed in the past.
Given results obtained so far, what conclusions should be drawn concerning model-theoretic (MT) inferentialism? Those of an intuitionist persuasion may take heart at the fact that the rules for conjunction, the conditional, and intuitionistic negation express exactly their intuitionistic readings in Kripke semantics. Furthermore, if the position taken in Section 8.9 is adopted, one will conclude that the reading of the rules for classical negation is intuitionist as well.
This chapter is about how to analyze kinds of arguments traditionally called enthymemes, arguments that require for proper analysis and evaluation the identification of a missing premise, or in some instances a missing conclusion. A small section on enthymemes has traditionally been included in logic textbooks from the time of Aristotle. In this chapter it is shown how methods of argumentation study, including software tools recently developed in computing, have enabled new ways of analyzing such arguments. It is shown how the employment of these methods to four key examples reveals that the traditional doctrine of the enthymeme needs to be radically reconfigured in order to provide a more useful approach to the analysis of incomplete arguments.
There is an extensive literature on incomplete arguments, and this chapter begins with a survey of enough of this literature to make it possible to understand the investigation that follows and to show why it is needed. The second section of the chapter gives a brief historical outline of the literature on enthymemes, beginning with Aristotle’s account of it, including coverage of a significant historical controversy about what Aristotle meant by this term. One of the problems with the task of analyzing incomplete arguments is to get some general grasp of what it is one is trying to do, because the solution to this task can be applied not only to logic but to many other fields that contain argumentation, such as science and law. Therefore, it is important to formulate at the beginning what the purpose of the investigation is supposed to be. A brief account of this is contained in Section 3. The next four sections contain extensive analyses of four examples of incomplete arguments. The first example is meant to be very simple, but the next three examples show some highly significant factors found in carrying out the task of argument analysis needed to identify the missing parts of an argument. These findings are taken into account in Section 9 of the chapter, where both the nature of the task and the proper terminology needed to assist it are reformulated and clarified.