1 Introduction: a full formalization of logic
A recurring theme in Carnap’s work on the foundations of logic and mathematics is a concern with notions of formal completeness and questions pertaining to the determinacy and uniqueness of formal, i.e., logical and mathematical, concepts. The most prominent example of this can be found in his ultimately abandoned Untersuchungen zur Allgemeinen Axiomatik, containing the ill-fated Gabelbarkeitssatz, in which he aimed to unify several extant conceptions of the completeness of an axiom system. After Carnap’s ‘semantic turn’, following the rise and widespread acceptance of Tarskian model theory, questions concerning the determinacy of formal notions gained a further layer of complexity.Footnote 1 It was in this context that Carnap worried that the usual characterizations of logical notions, in spite of the soundness and completeness of the systems they were part of, i.e., in spite of a perfect match at the level of consequence, left essential properties underspecified.
Carnap took up the question of a full formalization of logic in a “very little known” [Reference Raatikainen82, p. 283] work [Reference Carnap20]. There, he demonstrated that, surprisingly, the standard rules for almost all of the usual logical constants of FOL severely underdetermine their standard model-theoretic semantics. This state of affairs, Carnap thought, was highly undesirable: a unique determination of the standard semantic values of the constants by their rules of inference was to be a desideratum for a logical system on par with its soundness and completeness. Although tradition has not followed him in this assessment of importance, Carnap’s discovery had impactful consequences for a wide variety of philosophical views and projects, even outside the foundations of logic. Due to this, Carnap’s (Categoricity) Problem has, in recent years, been the subject of intense attention, after only sporadic discussions and rediscoveries in the more than 60 years directly following the publication of [Reference Carnap20].Footnote 2
Despite the renewed interest there exists, to the best of my knowledge, no fully systematic study of the solution strategies put forward for resolving Carnap’s Problem.Footnote 3 This is the gap the present article aims to fill. In doing so, it is bound to remain incomplete as the literature is, by now, vast and scattered. While containing several novel observations and results, its main goal remains expository; it tries to bring under common philosophical perspectives different solution strategies that have been proposed and elaborated in the literature.
Carnap’s Problem has consequences for debates in the philosophy of language, mind, mathematics, ontology, epistemology, and logic and constitutes a deep and fruitful starting point for evaluating seemingly unrelated philosophical positions and proposals. Even though Carnap’s original interest in and treatment of the issue was much narrower, this article deviates in this respect from his original focus and starting point. It is, moreover, less concerned with a faithful reconstruction of the historical Carnap and instead attempts a systematic survey and discussion of Carnap’s Problem in a contemporary context. It is hoped that this, by bringing together various avenues of investigation, will shed light on just how deep, basic, and widespread the issues raised by Carnap’s Problem are.
The structure of the paper is as follows: in Section 2 I will, based on existing treatments of the issue, introduce the basic structure of Carnap’s Problem and provide a partial overview of debates it impacts. Section 3 surveys and discusses the first of the major families of strategies addressing the problem, strategies that have in common the idea that the notion of inference needs to be refined or strengthened in order to rule out the underdetermination of the logical constants. Sections 4 and 5 take up the less widespread semantic strategy for remedying Carnapian underdetermination and investigate its scope and prospects. Finally, Section 6 concludes with a brief outlook. A short appendix contains definitions and results clarifying several points from the main parts of the article.
2 Carnap’s categoricity problem
Let $\mathscr {L}$ be a propositional language consisting of a countably infinite stock of propositional variables $p, q, r, \ldots $ and the usual connectives $\neg , \wedge , \vee , \rightarrow ,$ and $\leftrightarrow $. The set of sentences of $\mathscr {L}$ is designated by $\mathrm {Sent}_{\mathscr {L}}$. A (single-conclusion) consequence relation $\vdash _{\mathscr {L}}$ over $\mathscr {L}$ is a relation of the form:Footnote 4

$$ \begin{align*} \vdash_{\mathscr{L}} \: \subseteq \: \mathcal{P}(\mathrm{Sent}_{\mathscr{L}}) \times \mathrm{Sent}_{\mathscr{L}}. \end{align*} $$

As usual, we write $\Gamma \vdash _{\mathscr {L}} \varphi $ for $\langle \Gamma , \varphi \rangle \in \: \vdash _{\mathscr {L}}$.
A (two-valued) valuation $v$ is a total function $v: \mathrm {Sent}_{\mathscr {L}} \rightarrow \{0, 1\}$. A sentence $\varphi $ is true under a valuation $v$ if $v(\varphi ) = 1$. We designate the set of all (two-valued) valuations by $V_{\mathscr {L}}$. Let $\mathcal {V} \subseteq V_{\mathscr {L}}$. A sentence $\varphi $ is a $\mathcal {V}$-consequence of a set of sentences $\Gamma $, $\Gamma \models _{\mathcal {V}} \varphi $, if, for all $v \in \mathcal {V}$, whenever $v(\gamma ) = 1,$ for all $\gamma \in \Gamma ,$ then also $v(\varphi ) = 1$ (we write $v(\Gamma ) = n$ for $v(\gamma ) = n,$ for all $\gamma \in \Gamma $).

A valuation $v$ is consistent with a consequence relation $\vdash _{\mathscr {L}}$ if, whenever $\Gamma \vdash _{\mathscr {L}} \varphi $, it is not the case that $v(\Gamma ) = 1$, but $v(\varphi ) = 0$. A set of valuations $\mathcal {V}$ is consistent with a consequence relation $\vdash _{\mathscr {L}}$ if $\vdash _{\mathscr {L}} \: \subseteq \: \models _{\mathcal {V}}$. $\mathbb {V}(\vdash _{\mathscr {L}})$ yields the set of valuations consistent with $\vdash _{\mathscr {L}}$. The semantic value of a connective $c \in \{\neg , \wedge , \vee , \rightarrow , \leftrightarrow \}$, $\lVert c \rVert $, is a set of valuations, i.e., $\lVert c \rVert \subseteq V_{\mathscr {L}}$. $\lVert c \rVert $ is consistent with a consequence relation $\vdash _{\mathscr {L}}$ if $\lVert c \rVert \subseteq \mathbb {V}(\vdash _{\mathscr {L}})$. $\lVert c \rVert $ is (uniquely) determined if $\mathbb {V}(\vdash _{\mathscr {L}}) \subseteq \lVert c \rVert $ (so that, given consistency, $\mathbb {V}(\vdash _{\mathscr {L}}) = \lVert c \rVert $).Footnote 5
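Over a finite toy sentence space these definitions can be executed directly. The following sketch (all names hypothetical; a three-sentence language treated as unstructured) implements $\mathcal {V}$-consequence and consistency-with-$\vdash $ and illustrates the containment $\vdash _{\mathscr {L}} \: \subseteq \: \models _{\mathbb {V}(\vdash _{\mathscr {L}})}$:

```python
from itertools import product

# Toy three-sentence language, treated as unstructured (names hypothetical).
sentences = ['p', 'q', 'r']
valuations = [dict(zip(sentences, bits)) for bits in product([0, 1], repeat=3)]

def v_consequence(V, gamma, phi):
    """Γ ⊨_V φ: every v in V with v(Γ) = 1 also has v(φ) = 1."""
    return all(v[phi] == 1 for v in V if all(v[g] == 1 for g in gamma))

def consistent(v, sequents):
    """v is consistent with ⊢: it refutes no sequent ⟨Γ, φ⟩ in ⊢."""
    return all(not (all(v[g] == 1 for g in gamma) and v[phi] == 0)
               for gamma, phi in sequents)

seqs = [(['p'], 'q')]  # a one-sequent 'consequence relation'
V_cons = [v for v in valuations if consistent(v, seqs)]  # plays the role of 𝕍(⊢)

print(len(valuations), len(V_cons))       # 8 6: two valuations are ruled out
print(v_consequence(V_cons, ['p'], 'q'))  # True: ⊢ ⊆ ⊨ relative to 𝕍(⊢)
```

The two valuations excluded are exactly those making the premise true and the conclusion false, which is all that consistency with a consequence relation demands.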
Carnap’s Problem concerns the question of what semantic value we are able to recover on the basis of inferential information and demonstrates, in particular, the failure of standard inferential characterizations of the logical notions to determine their standard semantics.Footnote 6
2.1 Classical propositional logic
To appreciate the scope and import of Carnap’s Problem, it is best to start with a case of successful meaning-determination. Consider, to this end, conjunction as inferentially characterized by the usual clauses – ($\wedge $I) $\varphi , \psi \vdash _{\mathscr {L}} \varphi \wedge \psi $; ($\wedge $E$_{1}$) $\varphi \wedge \psi \vdash _{\mathscr {L}} \varphi $; and ($\wedge $E$_{2}$) $\varphi \wedge \psi \vdash _{\mathscr {L}} \psi $ – and endowed with its usual semantic value (as provided by the standard boolean satisfaction-clause):

$$ \begin{align*} \lVert \wedge \rVert = \{v \in V_{\mathscr{L}} \: | \: v(\varphi \wedge \psi) = 1 \text{ iff } v(\varphi) = 1 \text{ and } v(\psi) = 1, \text{ for all } \varphi, \psi \in \mathrm{Sent}_{\mathscr{L}}\}. \end{align*} $$

From the fact that $\mathbb {V}(\cdot )$ and $\models _{\_}$ form an antitone Galois-connection it immediately follows that $\lVert \wedge \rVert \subseteq \mathbb {V}(\models _{\lVert \wedge \rVert })$.Footnote 7
Suppose, then, that there exists a valuation $v^{*}$, s.t. $v^{*} \in \mathbb {V}(\models _{\lVert \wedge \rVert })$ but $v^{*} \notin \lVert \wedge \rVert $. This means that, for some pair of sentences $\varphi $ and $\psi $, $v^{*}$ is not boolean, i.e., (a) $v^{*}(\varphi \wedge \psi ) = 1$ but $v^{*}(\varphi ) = 0$ or $v^{*}(\psi ) = 0$; or (b) $v^{*}(\varphi \wedge \psi ) = 0$ yet $v^{*}(\varphi ) = 1$ and $v^{*}(\psi ) = 1$.
Note that, in case (a), $v^{*}$ would be inconsistent with $\wedge $E$_{1}$ or $\wedge $E$_{2}$, whereas, in case (b), $v^{*}$ would be inconsistent with $\wedge $I. Clearly, however, every valuation in $\lVert \wedge \rVert $ respects $\wedge $I, $\wedge $E$_{1}$, and $\wedge $E$_{2}$, and $v^{*}$ is therefore inconsistent with $\models _{\lVert \wedge \rVert }$. It follows that $\mathbb {V}(\models _{\lVert \wedge \rVert }) \subseteq \lVert \wedge \rVert $ and therefore $\mathbb {V}(\models _{\lVert \wedge \rVert }) = \lVert \wedge \rVert $. In other words, $\wedge $I, $\wedge $E$_{1}$, and $\wedge $E$_{2}$ uniquely determine $\lVert \wedge \rVert $.
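The determination argument for conjunction can be verified by brute force on a minimal fragment. The sketch below (a toy setup with hypothetical names) enumerates all valuations over {p, q, p∧q} and checks that those consistent with the three conjunction rules are exactly the valuations boolean for ∧:

```python
from itertools import product

# Minimal fragment {p, q, p∧q}; conjunction treated as an unanalyzed
# third sentence (names hypothetical).
sentences = ['p', 'q', 'p∧q']
valuations = [dict(zip(sentences, bits)) for bits in product([0, 1], repeat=3)]

def respects(v, gamma, phi):
    """v respects the sequent ⟨Γ, φ⟩."""
    return not (all(v[g] == 1 for g in gamma) and v[phi] == 0)

rules = [(['p', 'q'], 'p∧q'),  # ∧I
         (['p∧q'], 'p'),       # ∧E₁
         (['p∧q'], 'q')]       # ∧E₂

consistent_with_rules = {tuple(v.values()) for v in valuations
                         if all(respects(v, g, f) for g, f in rules)}
boolean_for_and = {tuple(v.values()) for v in valuations
                   if v['p∧q'] == min(v['p'], v['q'])}

print(consistent_with_rules == boolean_for_and)  # True: the rules pin down ∥∧∥
```

Of the eight candidate valuations, exactly the four boolean ones survive the rules, mirroring cases (a) and (b) of the argument above.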
Now let $\vdash _{CPL}$ be the (single-conclusion) consequence relation of classical propositional logic and consider the valuations $v_{\top }$ and $v_{\vdash }$.Footnote 8

- (i) $v_{\top }(\varphi ) = 1$ for all $\varphi \in \mathrm {Sent}_{\mathscr {L}}$.
- (ii) $v_{\vdash }(\varphi ) = 1$ iff $\vdash _{CPL} \varphi $.

$v_{\top }$ does not rule out any sentence and thus does not constitute a counterexample to any claim of consequence – it is consistent with every consequence relation and, in particular, with $\vdash _{CPL}$. Moreover, since anything derivable from a set of theorems of CPL is itself a theorem of CPL, $v_{\vdash }$ is consistent with $\vdash _{CPL}$ as well. However, note that, for any $\varphi \in \mathrm {Sent}_{\mathscr {L}}$, $v_{\top }(\varphi ) = v_{\top }(\neg \varphi ) = 1$ and, for an arbitrary propositional atom p, $v_{\vdash }(p) = v_{\vdash }(\neg p) = 0$ yet $v_{\vdash }(p \vee \neg p) = 1$.
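The non-normal valuation $v_{\vdash }$ is easy to realize concretely: truth under $v_{\vdash }$ is just classical theoremhood, which is decidable by truth tables. The sketch below (hypothetical names; formulas as nested tuples over ¬ and ∨) builds $v_{\vdash }$ and exhibits its failure of booleanness at $p \vee \neg p$:

```python
from itertools import product

# Formulas as nested tuples: ('atom', 'p'), ('not', f), ('or', f, g).
def eval_f(f, assignment):
    if f[0] == 'atom':
        return assignment[f[1]]
    if f[0] == 'not':
        return 1 - eval_f(f[1], assignment)
    if f[0] == 'or':
        return max(eval_f(f[1], assignment), eval_f(f[2], assignment))

def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*(atoms(g) for g in f[1:]))

def is_tautology(f):
    """CPL theoremhood via truth tables."""
    ats = sorted(atoms(f))
    return all(eval_f(f, dict(zip(ats, bits))) == 1
               for bits in product([0, 1], repeat=len(ats)))

def v_vdash(f):
    """Carnap's non-normal valuation: true on exactly the CPL theorems."""
    return 1 if is_tautology(f) else 0

p = ('atom', 'p')
lem = ('or', p, ('not', p))  # p ∨ ¬p, a theorem of CPL
print(v_vdash(p), v_vdash(('not', p)), v_vdash(lem))  # 0 0 1
```

Both disjuncts receive value 0 while the disjunction receives value 1: no boolean valuation behaves this way.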
Since both $v_{\top }$ and $v_{\vdash }$ are consistent with $\vdash _{CPL}$ and $\vdash _{CPL}$ is sound and complete with respect to the class of boolean valuations $\mathcal {B}$, we have that $v_{\top }, v_{\vdash } \in \mathbb {V}(\models _{\mathcal {B}})$. Furthermore, since $\mathcal {B} \subseteq \lVert \neg \rVert $ and $\mathcal {B} \subseteq \lVert \vee \rVert $ for the usual semantic values of $\neg $ and $\vee $ (provided by the standard boolean satisfaction clauses), it follows from the fact that $\mathbb {V}(\cdot )$ and $\models _{\_}$ form a Galois-connection that $\mathbb {V}(\models _{\mathcal {B}}) \subseteq \mathbb {V}(\models _{\lVert \neg \rVert })$ and $\mathbb {V}(\models _{\mathcal {B}}) \subseteq \mathbb {V}(\models _{\lVert \vee \rVert })$ (see [Reference Humberstone44, Chapter 1.12]). Hence, $v_{\top } \in \mathbb {V}(\models _{\lVert \neg \rVert })$ and $v_{\vdash } \in \mathbb {V}(\models _{\lVert \vee \rVert })$. Therefore, since $v_{\top } \notin \lVert \neg \rVert $ and $v_{\vdash } \notin \lVert \vee \rVert $, $\mathbb {V}(\models _{\lVert \neg \rVert }) \not \subseteq \lVert \neg \rVert $ and $\mathbb {V}(\models _{\lVert \vee \rVert }) \not \subseteq \lVert \vee \rVert $. In other words, the usual boolean semantic values of ‘$\neg $’ and ‘$\vee $’ are not determined by a (complete) description of their inferential behaviour, as captured by $\vdash _{CPL}$.
This is somewhat surprising given that the class of boolean valuations $\mathcal {B}$ is sound and complete with respect to classical propositional consequence $\vdash _{CPL}$, i.e., both $\vdash _{CPL} \: \subseteq \: \models _{\mathcal {B}}$ and $\models _{\mathcal {B}} \: \subseteq \: \vdash _{CPL}$ hold, hence $\models _{\mathcal {B}} \: = \: \vdash _{CPL}$. It means that although the consequence relation of CPL fully and adequately captures the relation of logical consequence for the language of classical propositional logic, and the boolean valuations provide sufficiently many counterexamples to any claim of consequence not licensed by CPL, something is left out that cannot be secured at the level of consequence. The usual axiomatizations of CPL fail to provide, in Carnap’s words, a full formalization of CPL and thus require amendment. Thus, despite the completeness of $\vdash _{CPL}$ with respect to the intended interpretations of the logical constants, not every aspect of their meaning is adequately captured by the consequence relation.Footnote 9
From the perspective of
$\vdash _{CPL}$
there is therefore no reason to think that the boolean clauses are the right way of semantically describing the meanings of the connectives.
This Carnapian underdetermination demonstrates two things: on the one hand, we are unable to recover the standard, intended meanings of the connectives from their inferential behaviour. On the other, the usual boolean meanings of these connectives are unstable in the sense that they cannot be recovered from the consequence relation they generate – they fail to be uniquely determined on the basis of the inferential patterns they give rise to.
How ‘bad’ or extensive is Carnap’s Problem? Already in [Reference Carnap20], Carnap provided an exhaustive classification of what can go wrong with valuations at the level of classical propositional consequence. He there showed that what he termed non-normal valuations, unintended valuations consistent with $\vdash _{CPL}$, are of two kinds: (i) the single valuation $v_{\top }$ that violates the law of non-contradiction by making everything true; and (ii) valuations that make at least one sentence false and fail to be boolean for at least one connective by violating bivalence, making a sentence and its negation both false (of which $v_{\vdash }$ is one of many instances).Footnote 10
What is lost through these ‘non-normal’ valuations is the truth-functionality of the semantic values of the connectives. Given a set of valuations $\mathcal {V}$ and a connective c of adicity n, c is truth-functional with respect to $\mathcal {V}$ if there exists a function $f_{c}: \{0, 1\}^{n} \rightarrow \{0, 1\}$, s.t. for all $\varphi _{1}, \ldots , \varphi _{n} \in \mathrm {Sent}_{\mathscr {L}}$ and $v \in \mathcal {V}$:Footnote 11

$$ \begin{align*} v(c(\varphi_{1}, \ldots, \varphi_{n})) = f_{c}(v(\varphi_{1}), \ldots, v(\varphi_{n})). \end{align*} $$

$f_{c}$ is called a truth-function for c. It follows from the truth-functionality of a connective with truth-function f that, whenever $v(\varphi _{i}) = v(\psi _{i})$ for all $i \leq n$, then $f(v(\varphi _{1}), \ldots , v(\varphi _{n})) = f(v(\psi _{1}), \ldots , v(\psi _{n}))$. Thus, if c is truth-functional, $v(c(\varphi _{1}, \ldots , \varphi _{n})) = v(c(\psi _{1}, \ldots , \psi _{n}))$.
Consider, once more, the valuation $v_{\vdash }$ and let $f_{\vee }$ be the usual binary truth-function of $\vee $. If $f_{\vee }$ were a truth-function for $\vee $ over $\mathcal {B} \cup \{v_{\vdash }\}$, we would have that $v_{\vdash }(p \vee \neg p) = f_{\vee }(v_{\vdash }(p), v_{\vdash }(\neg p))$. However, $f_{\vee }(0, 0) = 0$ while $v_{\vdash }(p \vee \neg p) = 1$. Hence, $f_{\vee }$ cannot be a truth-function for $\vee $ over $\mathcal {B} \cup \{v_{\vdash }\}$. Similar arguments establish that no binary truth-function whatsoever can serve as a truth-function for $\vee $ over $\mathcal {B} \cup \{v_{\vdash }\}$.Footnote 12
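This stronger claim can be checked exhaustively: over $\mathcal {B} \cup \{v_{\vdash }\}$ the observed input–output behaviour of ∨ is already contradictory, so none of the 16 binary truth-functions fits. A minimal sketch (observation triples chosen for illustration; a boolean valuation with $v(p) = 0$ supplies the triple $(0,0,0)$ at $\varphi = \psi = p$, while $v_{\vdash }$ supplies $(0,0,1)$ at $\varphi = p$, $\psi = \neg p$):

```python
from itertools import product

# Observed (v(φ), v(ψ), v(φ∨ψ)) triples over 𝓑 ∪ {v_⊢}; the first two
# already conflict on the input (0, 0).
observations = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)]

# Try all 16 binary truth-functions f: {0,1}² → {0,1}.
candidates = []
for table in product([0, 1], repeat=4):
    f = dict(zip(product([0, 1], repeat=2), table))
    if all(f[(a, b)] == c for a, b, c in observations):
        candidates.append(f)

print(len(candidates))  # 0: no truth-function for ∨ fits the observations
```

Since the pair $(0,0)$ is mapped to both 0 and 1 among the observations, the search necessarily comes up empty.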
As severe as Carnap’s Problem is for an adequate meaning-theory of the connectives, it is equally easy to address: force the truth-functionality of any one of the connectives $\neg , \vee , \rightarrow , \leftrightarrow $ and the standard, intended, truth-functional interpretations will thereby be determined for all of them.Footnote 13 Thus, what is needed is a way to stabilize the standard boolean meaning of any one of the connectives other than conjunction in order to obtain a solution to Carnap’s Problem for CPL.Footnote 14
2.2 First-Order Logic (FOL)
Carnap was aware that the underdetermination he uncovered extended to the quantifiers as well. In [Reference Carnap20] he outlined and discussed the non-normal interpretations affecting the standard first-order universal and existential quantifiers. Arguably, however, Carnap was not yet able to fully appreciate the dimensions of the underdetermination as it occurs at the level of quantification, since the mature concept of a quantifier only emerged later through the work of [Reference Lindström59, Reference Montague and Thomason66, Reference Mostowski67]. This observation is supported by the (limited) solution he put forward to resolve the underdetermination of the quantifiers by, essentially, reducing universal quantification to infinitary conjunction and existential quantification to infinitary disjunction. Since the concern of the present article lies less with a reconstruction of the historical Carnap, we will here treat the issue from the perspective of generalized quantifier theory to emphasize the generality of the issue of Carnapian underdetermination.Footnote 15
Let $\mathscr {L}(Q_{1}, \ldots , Q_{n})$ be a relational first-order languageFootnote 16 with a countably infinite set of individual variables $x_{1}, x_{2}, \ldots $, a countably infinite set of relation-symbols $R_{1}^{n}, R_{2}^{n}, \ldots $ for every adicity n, a full complement of propositional connectives $\neg , \wedge , \vee , \rightarrow , \leftrightarrow $, and quantifier-symbols $Q_{1}, \ldots , Q_{n}$. Models for $\mathscr {L}(Q_{1}, \ldots , Q_{n})$ are standard relational first-order structures $\mathcal {M} = \langle M, \mathcal {R}_{1}, \mathcal {R}_{2}, \ldots \rangle $. The definitions of sentence, consequence relation (written: $\vdash _{\mathscr {L}(Q_{1}, \ldots , Q_{n})}$), and truth-in-a-model are as expected and analogous to the propositional case.Footnote 17
From the perspective of generalized quantifier theory,Footnote 18 a quantifier $\mathcal {Q}$ is a second-order predicate ‘checking’ whether a (sequence of) first-order predicate(s) has the property expressed by the quantifier. A quantifier-expression is of type $\langle k_{1}, \ldots , k_{n} \rangle $ if it combines with n formulas to form a well-formed expression, binding $k_{i}$ variables in the i-th formula. The usual universal and existential quantifiers are of type $\langle 1 \rangle $. For simplicity of presentation we consider, unless noted otherwise, only quantifiers of type $\langle 1 \rangle $ in what follows.

Let $\mathfrak {M}$ be the class of all first-order models. The semantic value of a (type $\langle 1 \rangle $) quantifier $\mathcal {Q}$ is a class of (first-order) models [Reference Lindström59]:Footnote 19

$$ \begin{align*} \lVert \mathcal{Q} \rVert = \{\langle M, A \rangle \: | \: A \subseteq M \text{ and } \Psi(M, A)\}, \end{align*} $$

where $\Psi $ is some (set-theoretic) property. The satisfaction clause for generalized quantifiers is as follows (where $\mathcal {M}$ is a first-order model with domain M and $\varphi ^{\mathcal {M}} = \{a \in M \: | \: \mathcal {M} \models \varphi (a)\}$):

$$ \begin{align*} \mathcal{M} \models \mathcal{Q}x \: \varphi(x) \quad \text{iff} \quad \langle M, \varphi^{\mathcal{M}} \rangle \in \lVert \mathcal{Q} \rVert. \end{align*} $$
Examples of generalized quantifiers include the usual first-order quantifiers $\forall $, $\exists $, elementarily definable quantifiers such as ‘at least $3$’ ($\exists _{\geq 3}$), but also quantifiers such as ‘infinitely many’ ($Q_{0}$) and ‘uncountably many’ ($Q_{1}$):

- (i) $\lVert \forall \rVert = \{\langle M, A \rangle \: | \: A = M\}$
- (ii) $\lVert \exists \rVert = \{\langle M, A \rangle \: | \: A \neq \emptyset \}$
- (iii) $\lVert \exists _{\geq 3} \rVert = \{\langle M, A \rangle \: | \: |A| \geq 3\}$
- (iv) $\lVert Q_{0} \rVert = \{\langle M, A \rangle \: | \: A \text { is infinite}\}$
- (v) $\lVert Q_{1} \rVert = \{\langle M, A \rangle \: | \: |A|> \aleph _{0}\}$
Let $\vdash _{FOL}$ be the usual (single-conclusion) consequence relation of classical first-order logic. In a recent paper, Bonnay & Westerståhl [Reference Bonnay and Westerståhl12] precisely characterized the shape of possible interpretations of $\forall $ consistent with its inferential behaviour in the context of $\vdash _{FOL}$.Footnote 20 Let $\mathcal {M}$ be a model with domain M. The local quantifier $\lVert \mathcal {Q} \rVert ^{\mathcal {M}}$ over a particular model $\mathcal {M}$ can be obtained from the corresponding global quantifier $\lVert \mathcal {Q} \rVert $ by restricting attention to the domain M of $\mathcal {M}$ in the following way (for a type $\langle 1 \rangle $ quantifier): $\lVert \mathcal {Q} \rVert ^{\mathcal {M}} = \{A \subseteq M \: | \: \langle M, A \rangle \in \lVert \mathcal {Q} \rVert \}$. In keeping with the global characterization of meaning, we say that a quantifier-interpretation $\lVert \mathcal {Q} \rVert $ is consistent with a consequence relation $\vdash $ iff $\vdash $ is sound with respect to the model-theoretic consequence relation generated by $\lVert \mathcal {Q} \rVert $ (see Appendix for details).Footnote 21 Then,
Theorem [Reference Bonnay and Westerståhl12]. An interpretation $\lVert \forall \rVert $ of the universal quantifier is consistent with $\vdash _{FOL}$ as long as, for all $\mathcal {M}$, $\lVert \forall \rVert ^{\mathcal {M}}$ is a principal filter over M.
A principal filter $\mathcal {F}_{X}$ over a set M has the form $\mathcal {F}_{X} = \{A \subseteq M \: | \: X \subseteq A\}$ for some $X \subseteq M$.Footnote 22 The intended interpretation $\lVert \forall \rVert ^{\mathcal {M}} = \{M\}$ is of course a principal filter over M (set $X = M$) – it is the maximal principal filter over M – but it is, in general (as long as $|M|> 1$), far from the only one. As a result, $\vdash _{FOL}$ underdetermines the semantic value of $\forall $ and, derivatively, of $\exists $ as well.
How best to understand the scale of underdetermination here? The meaning of $\forall $, according to $\vdash _{FOL}$, essentially boils down to ‘for all X’ and the meaning of $\exists $ to ‘for some X’, where X is an arbitrary subset of the domain [Reference Bonnay and Westerståhl12]. As an example, let X be the base of a principal filter $\mathcal {F}_{X}$ interpreting $\forall $ over a model $\mathcal {M}$. Due to the duality of $\forall $ and $\exists $, we have that $\lVert \exists \rVert ^{\mathcal {M}} = \{A \subseteq M \: | \: A \cap X \neq \emptyset \}$. Thus, $\mathcal {M} \models \exists x \varphi (x)$ iff $\varphi ^{\mathcal {M}} \in \lVert \exists \rVert ^{\mathcal {M}}$ iff $\varphi ^{\mathcal {M}} \cap X \neq \emptyset $ iff some member of X belongs to $\varphi ^{\mathcal {M}}$, i.e., iff there is some X that is $\varphi $. Models of first-order logic thus come to resemble the inner domains of models of free logic, distinguishing between ‘existing’ objects (those in the base of the principal filter) and ‘nonexisting’ objects (those not present in the base of the principal filter) [Reference Bonnay and Westerståhl12]. Allowing constant-symbols in the language, the only interpretation guaranteed to be recoverable from first-order inference patterns is the substitutional interpretation of the quantifiers.Footnote 23
This is, of course, deeply troubling, for it “undermines the prospects of philosophical ontology construed as the quintessentially armchair project of extracting ontological commitments from the semantic analysis of quantified statements” [Reference Antonelli and Torza2, p. 171]. For philosophical projects relying on an objectual interpretation of the usual first-order quantifiers, combined with the idea that this meaning is, ultimately, to be ‘read off’ their inferential behaviour, Carnap’s Problem is devastating.
2.3 The philosophical significance of Carnap’s problem
Carnap’s Problem is more than a mere formal curiosity in the mathematical foundations of propositional and first-order logic. Carnap deemed a categorical, i.e., unique, determination of the semantic values of the logical expressions of these systems to be a desideratum on par with soundness and completeness results for the relevant calculi. The underdetermination of semantics by ‘syntax’ that he uncovered constitutes a stumbling block for many philosophical projects that rely on the formalisms of the respective logical systems. In this brief section, I want to provide a few examples of places where Carnap’s Problem might be taken to cause issues.
Informally, Carnap’s Problem problematizes the idea that understanding how an expression functions in inference suffices for grasping its truth-conditional content. It therefore constitutes an immediate and sizeable issue for moderate inferentialism, a group of positions that attribute an important meta-semantic function to inferential roles in a theory of meaning. According to positions of this type, meanings are (best modelled by) model-theoretic objects (semantic maximalism) that can be known or ‘gotten to’ on the basis of epistemically minimal, and naturalistically acceptable, resources – inferential roles of the relevant expressions. Inherent to moderate inferentialism is the adoption of a meaning-determination thesis according to which it is inferential roles that determine, but are not identified with, the meanings of the logical expressions. As such, their success is threatened by Carnap’s Problem.Footnote 24
Relatedly, Carnap’s Problem undermines the idea that “soundness and completeness serve to legitimate talk of reference, denotation, semantic value, and the like; these model-theoretic terms derive their sense from the connection between models and valid inference” [Reference Ripley84, p. 142]. For it demonstrates that there is a gap between completeness and categoricity (in Carnap’s sense): soundness and completeness of a logical system are not sufficient for the categoricity of its logical expressions (and, as will be shown in Section 5.2, neither are they necessary). More, then, is needed to accommodate a grasp of or reference to noninferential meanings on the basis of inferentially mediated access.
Carnap’s Problem severely distorts the idea that the understanding of a logical notion is constituted by an appreciation of its (characteristic) inferential patterns, an idea popular in certain debates in the philosophy of logic and language (see, e.g., [Reference Boghossian10]). Where unique determination is made part of the conditions for the logicality of a notion (see [Reference Bonnay, Speitel, Sagi and Woods11, Reference Feferman and Torza28]), the need to address Carnapian underdetermination is obvious.
Logic, as a tool for scientific theory-building, is meant to provide a framework that not only ensures safe and reliable inference, but also limits the inevitable underdetermination arising at the level of scientific data (underdetermination of theory by evidence) and at the level of possible models of a theory (due to Löwenheim-Skolem and similar phenomena). Instead, Carnap’s Problem introduces a further dimension of underdetermination into the logical apparatus used for formalizing scientific theories, thereby increasing indeterminacy at a particularly basic level: “[o]ur student has heard of the difficulties of excluding non-standard interpretations in the upper stories of mathematics; now he finds the same thing in the basement [of logical theorizing]” [Reference Shoesmith and Smiley92, p. 3].
A further, philosophically important aspect of Carnap’s Problem was already briefly mentioned in the previous section. According to Quine’s (in)famous criterion of ontological commitment [Reference Quine and Quine81] the ontological commitments of a theory are determined by its quantified-over variables. Combining this with the view that the meaning of those quantifiers is to be determined in a naturalistically acceptable way by inferential patterns renders this assessment of ontological commitment inadequate: “the possibility of non-standard interpretations reveals that being the value of a variable is at best a sufficient, but not necessary condition for ontological commitment” [Reference Antonelli1, p. 657].
Although Carnap’s discussion of the eponymously named underdetermination phenomenon in [Reference Carnap20] took place in the context of classical languages and truth-conditional semantics, nothing about Carnap’s Problem restricts it to this setting. Carnap’s question can be asked for any language and logical system that possesses a sufficiently formalized syntax and semantics. Carnap’s Problem arises just as forcefully for other logical systems. What changes when moving to a different language or logic are not just the inferential descriptions of the relevant notions but, more importantly, the semantic space, i.e., the space of values of the logical expressions under consideration. In an intensional setting, for example, it will no longer do to treat propositional meanings as given by functions from sentences to truth-values, and we might have to assess categoricity with respect to a semantic framework identifying meanings with, say, sets of worlds. We will consider applications of Carnap’s Problem to nonclassical and richer settings in Sections 4.2 and 4.3 below. At this point, it is worth emphasizing the extent and reach of Carnap’s Problem and the concomitant wealth of philosophical and formal questions revealed by it for debates in logic, philosophical logic, and the philosophies of logic and language.
3 Inferential strategies for solving Carnap’s problem
Common to inferential strategies addressing Carnap’s Problem is the belief that the shortcoming revealed by it is to be located in the restricted way inference can be represented in a purely assertion-based, single-conclusion framework. What Carnap’s Problem shows, according to accounts of this type, is that we should adopt a richer notion of inference to tighten control over the semantic values determined, fixed, or ‘pinned down’ by rules and inferential patterns. The basic method underlying inferential strategies consists therefore in a strengthening of the proof-theory or language so as to enable it to express further or stronger conditions on the semantic framework with which it is to cohere.
This section briefly surveys four inferential strategies that have been put forward to resolve Carnapian underdetermination and points out some of their (philosophical) weaknesses in the context of addressing Carnap’s Problem. The first strategy (Section 3.1) takes issue with the fact that inference is considered single-conclusion, the second with the fact that inference is taken to be (solely) assertion-based (Section 3.2). Nonetheless, both still support the idea that determination of semantic value should be effected by inferences, i.e., sequents consisting of premisses and conclusion(s). In contrast, the solution strategies of Sections 3.3 and 3.4 shift perspective from sequents to rules, and thus from inferences to inferring, maintaining that the dynamics of drawing inferences have a role to play in the determination of semantic value.
3.1 Enriching consequence: Multiple conclusions
Introduced in Gentzen’s seminal study of the proof-theory of classical and related systems [Reference Gentzen35], a multiple conclusion consequence relation (mcr) $\vdash _{\mathscr {L}}^{m}$ over a language $\mathscr {L}$ is a relation of the form:Footnote 25

$$ \begin{align*} \vdash_{\mathscr{L}}^{m} \: \subseteq \: \mathcal{P}(\mathrm{Sent}_{\mathscr{L}}) \times \mathcal{P}(\mathrm{Sent}_{\mathscr{L}}). \end{align*} $$

The basic notion of an argument according to the multi-conclusion perspective is thus one with multiple premises and multiple conclusions. A valuation $v$ is consistent with an mcr $\vdash _{\mathscr {L}}^{m}$ iff, whenever $\Gamma \vdash _{\mathscr {L}}^{m} \Delta $ and $v(\Gamma ) = 1$, then $v(\delta ) = 1$ for some $\delta \in \Delta $. The remaining notions are defined analogously to the single-conclusion case.
This ‘simple’ modification has significant repercussions, for it strengthens the resulting notion of consequence to such a degree that the multiple-conclusion consequence relation for classical logic $\vdash _{CPL}^{m}$ forces consistent valuations to be boolean.Footnote 26 In the context of mcrs the usual semantic values of the classical connectives are therefore uniquely determined – the expressive resources of the mcr-framework are sufficiently strong to ensure that the standard meanings of the logical constants possess inferential roles that determine them.

Whence the additional control over semantic values? By way of example, consider the valuations $v_{\top }$ and $v_{\vdash }$ from above. $v_{\top }$ is ruled impermissible by the fact that mcrs permit empty succedents. Thus, it holds in particular that $\varphi , \neg \varphi \vdash _{CPL}^{m} \emptyset $ [Reference Humberstone44, p. 78]. $v_{\vdash }$, on the other hand, fails to be consistent with $\vdash _{CPL}^{m} p, \neg p$. The enriched multiple conclusion framework is in fact so expressive that every truth-value assignment to formulas of the language possesses a corresponding statement of deducibility – as a result, any class of valuations can be uniquely described by a set of multi-conclusion inferences.Footnote 27 The mcr-framework is thereby able to ensure the truth-functionality, and thus booleanness, of the inferentially characterized classical operators.Footnote 28 The “valuational semantics implicit in [a] consequence relation” [Reference Humberstone44, p. 389] is so tightly constrained and regulated in the setting of mcrs that non-standard interpretations are rendered impossible: the additional means of expression made available by moving to a multi-conclusion framework allow for the enforcement of constraints on consistent valuations sufficient to rule out Carnap’s non-normal valuations.Footnote 29
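The two refutations just described can be replayed mechanically. A minimal sketch (hypothetical names; sentences represented as strings) implements mcr-consistency for a single sequent and shows that $v_{\top }$ falsifies the empty-succedent sequent $\varphi , \neg \varphi \vdash ^{m} \emptyset $ while $v_{\vdash }$ falsifies $\vdash ^{m} p, \neg p$:

```python
# mcr-consistency of a valuation with a single sequent Γ ⊢ Δ (names
# hypothetical; sentences are plain strings).
def mcr_consistent(v, premises, conclusions):
    """v is consistent with Γ ⊢ Δ iff: if v makes every member of Γ
    true, it makes at least one member of Δ true."""
    if all(v[g] == 1 for g in premises):
        return any(v[d] == 1 for d in conclusions)
    return True

v_top = {'p': 1, '¬p': 1}    # v_⊤ restricted to the relevant sentences
v_vdash = {'p': 0, '¬p': 0}  # v_⊢ on the non-theorems p, ¬p

print(mcr_consistent(v_top, ['p', '¬p'], []))    # False: refuted by φ,¬φ ⊢ ∅
print(mcr_consistent(v_vdash, [], ['p', '¬p']))  # False: refuted by ⊢ p,¬p
```

An empty succedent can never be satisfied, and an empty antecedent is vacuously satisfied, which is exactly the extra expressive leverage the mcr-framework provides.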
For the propositional case, making consequence multi-conclusion thus successfully solves Carnap’s Problem and fully formalizes CPL in the sense of Carnap. Things look less promising for the quantifiers, however. Accounts extending the multiple conclusion strategy to expressions from this grammatical category usually do so by reducing universal and existential quantification to infinite conjunctions and disjunctions, respectively, and stipulating that domains remain countable with every object in them possessing a name.Footnote 30 However, “[t]his procrustean strategy shows at best that if quantifiers are reduced to connectives, what works for connectives works for quantifiers as well” [Reference Bonnay and Westerståhl12, p. 723]. In particular, given the infinitary rules that must be adopted for the quantifiers on this conception, the result is a rather unattractive formalism coupled with an apparent misconstrual of the grammatical type of quantificational expressions.
The multiple conclusion strategy was, in effect, already applied by Carnap himself [Reference Carnap20]. Adopting an mcr-framework to resolve Carnap’s Problem, however, encounters several philosophical obstacles. Even independently of the problematic treatment of quantifiers, mcrs have been regarded as sufficiently unnatural to lack philosophical motivation, especially in relation to the inferentialist program in the philosophy of logic.Footnote 31
The complaint is that “multiple-conclusion systems represent so marked a departure from our actual practice that they can hardly be said to track that practice even in an idealised sense” [Reference Steinberger98, p. 335].Footnote 32 Given that reliance on inferences was to satisfy the naturalistic demand for non-mysterious determination of logical meanings, such artificiality poses a challenge to the proponent of a multi-conclusion strategy. For the inferences codified in a consequence relation were to capture the usual practice according to which the meaning of the relevant expressions was determined. The move to a distinct and unrelated practice does very little to bridge that gap.
In his review of [Reference Carnap20], Church was less worried about the artificiality of the multi-conclusion framework than he was about the fact that “Carnap’s use of them [mcrs] is a concealed use of semantics” [Reference Church21, p. 496]. The worry stems from the way multiple conclusion sequents are interpreted. According to the disjunctive interpretation, a (multi-conclusion) sequent
$\Gamma \vdash _{\mathscr {L}}^{m} \Delta $
is equivalent in meaning to the (single-conclusion) sequent
$$ \begin{align*} \underset{\gamma \in \Gamma}{\bigwedge} \gamma \vdash_{\mathscr{L}} \underset{\delta \in \Delta}{\bigvee} \delta. \end{align*} $$
Understanding a multi-conclusion sequent via the associated single-conclusion sequent, however, seems to violate a fundamental inferentialist tenet in that the “very format of the proof system requires us to have a prior grasp of the meanings of some logical constants” [Reference Steinberger98, p. 346], namely of (possibly infinite) conjunctions and, more problematically, disjunctions.Footnote 33 Similarly, Dummett agrees that “[s]equents with two or more sentences in the succedent […] have no straightforwardly intelligible meaning, explicable without recourse to any logical constant” [Reference Dummett27, p. 187]. One thus already needs to possess an understanding of disjunction before being able to use the multiple conclusion framework whose purpose was precisely to determine such a meaning.
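The circularity worry can be made vivid with a toy check (the encoding is ours): under the disjunctive reading, consistency with Γ ⊢ Δ coincides with consistency with ∧Γ ⊢ ∨Δ on every Boolean valuation. Note, though, that the very check already computes a disjunction over Δ, mirroring Church’s point in miniature:

```python
from itertools import product

def ok_mc(v, gamma, delta):
    """Disjunctive reading of Γ ⊢ Δ: if all of Γ is true, some of Δ is true."""
    return (not all(v[g] for g in gamma)) or any(v[d] for d in delta)

def ok_sc(v, gamma, delta):
    """The single-conclusion translation ∧Γ ⊢ ∨Δ, computed explicitly."""
    value_conj = min([v[g] for g in gamma], default=1)   # value of ∧Γ
    value_disj = max([v[d] for d in delta], default=0)   # value of ∨Δ
    return (not value_conj) or bool(value_disj)

# The two readings agree on every Boolean valuation of the atoms involved:
atoms = ['p', 'q', 'r']
for bits in product([0, 1], repeat=len(atoms)):
    v = dict(zip(atoms, bits))
    assert ok_mc(v, ['p'], ['q', 'r']) == ok_sc(v, ['p'], ['q', 'r'])
```

The equivalence holds, but both functions lean on `any`/`max`, i.e., on a prior grasp of disjunction.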
Without presupposing a prior understanding of disjunction, however, “non-normal interpretations of this ‘full formalization’ [the mcr formalization] become possible” [Reference Church21, p. 496],Footnote 34 for without fixing the meaning of disjunction a revenge Carnap’s Problem would affect “$\bigvee $”. The disjunctive interpretation of multi-conclusion sequents, built into what it means for a valuation to be consistent with an mcr, therefore appears unable to ground a non-circular explanation of how the semantic values of the propositional connectives are determined in virtue of the inferences they feature in.Footnote 35
3.2 Expanding language: bilateralism
The fundamental assumption of bilateralism is that there are two types of primitives logical theory has to account for:Footnote 36 the speech-acts of assertion and denial, both falling into the purview of logic, are conceived as “distinct activities on all fours with one another” [Reference Smiley93, p. 1]. Consequently, appropriate axiomatizations of logical systems will have to include two types of rules – those governing the interaction of a constant with the speech-act of assertion, and those governing the interaction of a constant with the speech-act of denial. Notwithstanding the equivalence between the denial of $\varphi $ and the assertion of $\neg \varphi $, the bilateral enrichment is not inert: it enables a harmonious formulation of classical logic in a single-conclusion setting [Reference Rumfitt86] and a resolution of Carnap’s Problem.
To formally express the added dimension of meaning, bilateralists introduce force-markers $+$, for assertion, and $-$, for denial, into the language. Just as speech-acts apply to contents, force-markers attach to sentences to yield signed sentences $+ \varphi $ and $- \varphi $; we will speak of the signed sentences of $\mathscr {L}$ for the set of all such sentences. Force-markers do not “contribute to propositional content, but indicate[…] the force with which that content is promulgated” [Reference Rumfitt86, p. 803]. They are thus, unlike logical constants, non-embeddable and cannot be iterated. Their interaction is governed by coordination-principles which constitute structural rules of the relevant logical calculi.Footnote 37
What is true or false is of course contents and not the speech-acts themselves, but every valuation $v$ over the sentences of $\mathscr {L}$ induces a correctness-valuation $v_{c}$ over the signed sentences, s.t. $v_{c}(+ \varphi ) = $ c(orrect) if $v(\varphi ) = 1$, $v_{c}(+ \varphi ) = $ i(ncorrect) if $v(\varphi ) = 0$, $v_{c}(- \varphi ) = $ c if $v(\varphi ) = 0$ and $v_{c}(- \varphi ) = $ i if $v(\varphi ) = 1$.Footnote 38 Thus, an assertion is correct iff the asserted content is true and a denial is correct iff the denied content is false.
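The induced correctness-valuation can be written down directly. In the following minimal sketch (our own encoding: sentences are strings, valuations are dictionaries, and a signed sentence is a force/sentence pair), `v_c` implements exactly the four clauses above:

```python
def induce_correctness(v):
    """Map a valuation v (sentence -> 0/1) to the induced correctness-valuation
    on signed sentences: +φ is correct iff φ is true, -φ iff φ is false."""
    def v_c(force, phi):
        if force == '+':
            return 'c' if v[phi] == 1 else 'i'
        return 'c' if v[phi] == 0 else 'i'   # force == '-'
    return v_c

v = {'p': 1, 'q': 0}
v_c = induce_correctness(v)
assert v_c('+', 'p') == 'c'   # asserting a truth is correct
assert v_c('-', 'p') == 'i'   # denying a truth is incorrect
assert v_c('-', 'q') == 'c'   # denying a falsehood is correct
```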
Bilateral consequence relations $\vdash _{\mathscr {L}}^{b}$ are single-conclusion consequence relations of the form
$$ \begin{align*} \Gamma \vdash_{\mathscr{L}}^{b} \varphi, \end{align*} $$
with $\Gamma $ a set of signed sentences and $\varphi $ a signed sentence, tracking the preservation of correctness, rather than of truth. This change of focus effects a redefinition of the notion of consistency. A correctness valuation $v_{c}$ is now consistent with $\vdash _{\mathscr {L}}^{b}$ iff, whenever $\Gamma \vdash _{\mathscr {L}}^{b} \varphi $ and $v_{c}(\Gamma ) = $ c, then also $v_{c}(\varphi ) = $ c.Footnote 39 Despite this shift we may continue to speak of the consistency of a valuation $v$ with a consequence relation $\vdash _{\mathscr {L}}^{b}$ directly, due to the correspondence between valuations and correctness-valuations: $v$ is consistent with $\vdash _{\mathscr {L}}^{b}$ iff the induced $v_{c}$ is consistent with $\vdash _{\mathscr {L}}^{b}$. Similarly, for a given set of correctness valuations $\mathcal {V}_{c}$ we say that a (signed) sentence $\varphi $ is a $\mathcal {V}_{c}$-consequence of a set of (signed) sentences $\Gamma $, $\Gamma \models _{\mathscr {L}}^{b} \varphi $, if, for all $v_{c} \in \mathcal {V}_{c}$, whenever $v_{c}(\Gamma ) = $ c, then also $v_{c}(\varphi ) = $ c. Definitions of related notions can be given analogously to the above.Footnote 40
Smiley [Reference Smiley93] then shows that the bilateralist’s framework is sufficiently expressive to uniquely determine the semantic values of the connectives and resolve Carnap’s Problem: for any set of valuations $\mathcal {V}$ and bilateral consequence relation $\models _{\mathcal {V}_{c}}$ induced by $\mathcal {V}$, it holds that $\mathcal {V} = \mathbb {V}(\models _{\mathcal {V}_{c}})$ [Reference Smiley93]. By way of example, the valuation $v_{\top }$ is ruled inadmissible due to the fact that $+ \varphi \vdash _{CPL}^{b} - \neg \varphi $.Footnote 41 Here, $v_{\top }$ fails to induce a correctness-preserving correctness valuation, for the correctness valuation induced by $v_{\top }$ will be such that $v_{\top }^{c}(+ \varphi ) = $ c, whereas $v_{\top }^{c}(- \neg \varphi ) = $ i [Reference Hjortland41, p. 454]. Similarly, the valuation $v_{\vdash }$ is inconsistent with $\vdash _{CPL}^{b}$ due to the fact that $- p, - \neg p \vdash _{CPL}^{b} - (p \vee \neg p)$, in which case $v_{\vdash }^{c}$ takes us from correct to incorrect [Reference Rumfitt86, p. 807].
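Both failures can be checked mechanically. The following self-contained sketch (the string encoding of sentences and the pairing of force-markers with sentences are ours) verifies that $v_{\top }$ breaks $+ \varphi \vdash ^{b} - \neg \varphi $ and that $v_{\vdash }$ breaks $- p, - \neg p \vdash ^{b} - (p \vee \neg p)$:

```python
def v_c(v, force, phi):
    """Correctness of a signed sentence under the valuation v:
    +φ is correct iff v(φ) = 1, -φ is correct iff v(φ) = 0."""
    return 'c' if (v[phi] == 1) == (force == '+') else 'i'

# v_⊤ makes every sentence true, so  +p ⊢ -¬p  fails to preserve correctness:
v_top = {'p': 1, '~p': 1}
assert v_c(v_top, '+', 'p') == 'c'        # premise correct
assert v_c(v_top, '-', '~p') == 'i'       # conclusion incorrect

# v_⊢ (true on exactly the theorems) falls to  -p, -¬p ⊢ -(p ∨ ¬p):
v_t = {'p': 0, '~p': 0, 'p v ~p': 1}
assert v_c(v_t, '-', 'p') == 'c'          # both premises correct...
assert v_c(v_t, '-', '~p') == 'c'
assert v_c(v_t, '-', 'p v ~p') == 'i'     # ...conclusion incorrect
```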
The situation in the bilateral case is analogous to the case of mcrs in that the bilateral formalism is expressive enough to associate every possible truth-value assignment with a statement of deducibility, thereby constraining consistent valuations tightly enough that only boolean valuations are permitted as semantic values of the classical connectives. This is not surprising, for one can show that, in the classical case, mcrs and bilateral consequence relations are, in fact, intertranslatable.Footnote 42 This intertranslatability, however, “should not blind us to what is […] a crucial philosophical difference” [Reference Rumfitt86, p. 810] between mcrs and bilateralist consequence. For bilateral systems not only allow for the harmonious formulation of an axiomatization of CPL in a single-conclusion setting, but also resolve Carnap’s Problem while avoiding the objections brought forward against mcrs.Footnote 43
The resolution of Carnapian underdetermination in the bilateral framework is achieved through the assumption of the force-markers ‘$+$’ and ‘$-$’. It is therefore not surprising that the status of their meaning is critical to a well-motivated response to Carnap’s Problem. In many ways, the force-marker of denial behaves just like a negation-operator,Footnote 44 raising suspicion that the bilateralist might have made a hidden, and potentially illegitimate, semantic assumption in her usage of ‘$-$’ (see [Reference Murzi and Hjortland70, p. 486]). Given the (formal) similarity between unilateral negation and bilateral denial, then, it seems unclear how “bilateralism could possibly hope to offer any resistance against the semantic underdetermination argument” [Reference Button and Walsh19, p. 308].
But even if this line of reasoning could be resisted, and a distinction between negation and denial be upheld, a revenge Carnap’s Problem might arise at the level of correctness valuations. For it might be asked with what justification the bilateralist excludes non-normal correctness valuations like $v_{c}^{\top }$, where $v_{c}^{\top }(\varphi ) = $ c for all signed sentences $\varphi $ [Reference Murzi and Hjortland70, p. 486]. Bilateral consequence will be consistent with $v_{c}^{\top }$, yet $v_{c}^{\top }$ will block the reconstruction of the boolean semantic value for negation from $\vdash _{CPL}^{b}$. Carnap’s Problem has thus been shifted upwards to the level of correctness valuations, but is by no means resolved.
A proponent of this type of objection will see themselves accused of having failed to appreciate some of the bilateralist’s basic assumptions. For the correctness norms governing assertion and denial are part and parcel of the bilateralist framework and not up for reinterpretation (see, e.g., [Reference Incurvati and Smith49, p. 10]). Taking them to possess the same openness and underdetermination as the logical constants misunderstands the bilateral approach: the question is not what the meaning of assertion and denial is, but “whether the content of the negation sign is fixed by the bilateral inferential practice when added to a given background of the use of force-markers to construct sentences whose default use is for assertion and rejection” [Reference Incurvati and Smith49, p. 10]. The bilateral practice, however, is assumed given from the outset – the formal similarity between ‘$\neg $’ and ‘$-$’ belies this crucial difference between the status of the meaning of these symbols.Footnote 45
Still, it might be objected that the bilateralist approach to Carnap’s Problem ultimately succeeds only because it tacitly assumes semantic principles going beyond those that can be established on an inferential basis alone. For some of the philosophical positions sketched in Section 2.3 this will constitute an issue.
Lastly, how does the bilateralist fare with respect to the quantifiers? Results by Button and Walsh suggest that they fare better than the unilateralist, but that preservation of correctness is still insufficient to determine the standard meanings of the first-order universal and existential quantifiers fully (see [Reference Button and Walsh19, Chapter 13]). However, it must be pointed out that a canonical (set-theoretic) semantics for bilateral quantification is still in its infancy, so that a conclusive judgement on this issue cannot be reached at this point.Footnote 46
3.3 Re-interpreting inference: Open-endedness
Open-ended inferentialism is characterized by two general tenets: (i) rules of inference determine the meaning of the logical constants; and (ii) such meaning-determining rules “hold always and without exception” [Reference Button and Walsh19, p. 313].Footnote 47 The former indicates a shift from consequence relations to rules presenting consequence relations (this makes it possible to talk about rules continuing to hold, in full generality, in expansions of a language).Footnote 48 The latter is the principle of open-endedness – the idea that “rules of inference are truth preserving within any mathematically possible extension of language” [Reference McGee, Sher and Tieszen63, p. 70] – grounded in the prelinguistic and language-transcendent nature of inference (see, e.g., [Reference Murzi and Topey72]).Footnote 49 The open-ended character of rules of inference is codified through the demand that these rules continue to hold, no matter how the language is expanded.
Button [Reference Button18] and Button and Walsh [Reference Button and Walsh19] show that the requirement of open-endedness suffices to pin down the intended semantic space, a two-valued Boolean algebra, among all possible Boolean algebras. Under the assumption that the semantic space providing possible interpretations of the connectives must be a Boolean algebra, the connectives are thereby uniquely determined.Footnote 50 Despite these strong results, Carnap’s Problem remains, at least partially, unaddressed, for the determination succeeds under the assumption that the relevant semantic space forms a Boolean algebra, which precludes certain non-normal but inferentially admissible valuations from the outset. Yet, the restriction to Boolean algebras itself remains inferentially unmotivated.Footnote 51
McGee pursues a different route to unique determination. According to his interpretation of the open-endedness requirement, the rules for a connective c must continue to hold when the language is extended with a duplicate $c'$ of that connective, governed by identical rules of inference. A connective c is uniquely determined by its rules if sentences including it are interderivable with sentences that are identical, except that occurrences of c have been replaced with its duplicate $c'$, and vice versa [Reference McGee, Sher and Tieszen63, Reference McGee, Caret and Hjortland65]. Such interderivability ensures that the inferential role of a connective c has been so tightly constrained by its rules that there is but a unique candidate filling that role.Footnote 52 The usual connectives of propositional logic all possess rules satisfying this requirement [Reference Harris38].
Assuming a further soundness condition for the uniquely characterizing rules with respect to a given semantic space ensures that a constant and its duplicate will possess identical semantic values [Reference McGee, Sher and Tieszen63]. They are, therefore, not only inferentially, but also semantically uniquely determined. The rules for the usual connectives and quantifiers of FOL, understood as open-ended rules, “create a uniquely defined semantic role for each of the connectives and quantifiers” [Reference McGee, Sher and Tieszen63, p. 68]. Note, however, that this type of uniqueness still falls short of solving Carnap’s Problem. For while inferential uniqueness is sufficient to ensure sameness of semantic values for constants governed by identical sets of rules, it still cannot guarantee that there is only one possible interpretation of the constants with respect to which they are sound [Reference McGee, Rayo and Uzquiano64, p. 193]. It achieves, in other words, identical but not categorical interpretations – the constants are unambiguous, but not unique.Footnote 53
How does open-ended inferentialism fare in the case of the quantifiers? Motivated by Quinean and Putnamian concerns regarding indeterminate quantifier meanings and restricted quantification, McGee shows that the possibility of deviant meanings of the quantifiers is undermined by the open-endedness of the quantifier rules [Reference McGee, Rayo and Uzquiano64, p. 191].Footnote 54 Carnapian underdetermination of the quantifiers showed that the standard rules of universal quantification are sound for an interpretation according to which the variables range over a proper subset of the domain, so long as this subset constitutes the base X of a principal filter over the domain M. The meaning of $\forall $ therefore amounts to something like ‘for all X’ rather than ‘for all elements of the domain M’. Under the assumption that the rules for the universal quantifier are open-ended, and thus need to remain valid no matter how the language is extended, they remain applicable when the language is extended by a predicate P, s.t. the extension of P is X, and a constant c, s.t. c denotes an object a $\notin X$. In this extended language, $\forall x Px$ will be true, yet $Pc$ will be false in $\mathcal {M}$. But this conflicts with the rule of universal instantiation according to which $\forall x Px \vdash _{\mathscr {L}} Pc$ [Reference McGee, Sher and Tieszen63, p. 68]. Since this argument can be reiterated as long as the base of the principal filter providing the interpretation of $\forall $ is not maximal, demanding that the rules for the universal quantifier be open-ended appears to force it to take on its standard interpretation: “So, the default value of ‘$\forall $’ […] is quantification over everything” [Reference McGee, Sher and Tieszen63, p. 69].
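McGee’s argument can be rehearsed in a toy model. In the following sketch the domain, the filter base, and the extensions are our own illustrative choices; the deviant ‘for all X’ reading of the universal quantifier is implemented as truth of $\forall x Px$ whenever X is included in the extension of P:

```python
# Toy model: domain M, deviant 'for all X' reading of ∀, where X is the
# base of a principal filter over M (illustrative choices, not from the text).
M = {1, 2, 3}
X = {1, 2}
assert X < M                    # X is a proper subset of the domain

def forall_X(ext_P):
    """Deviant reading: '∀x Px' is true iff X ⊆ ext(P)."""
    return X <= ext_P

ext_P = {1, 2}                  # P true of exactly the members of X
assert forall_X(ext_P)          # '∀x Px' comes out true

# Open-endedness: extend the language with a constant c naming some a ∉ X.
c_denotes = 3
Pc = c_denotes in ext_P
assert not Pc                   # 'Pc' is false
# Universal instantiation (∀x Px ⊢ Pc) therefore fails, so the deviant
# interpretation is ruled out once every object can be named.
```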
The possibility of excluding non-normal interpretations of the quantifiers stems from the unconstrained nature of naming: “singular terms are unconstrained in their taking denotations […], thereby giving access to the ‘dark corners’ of the first-order domain where the light of the quantifiers does not shine” [Reference Antonelli1, pp. 638–639]. While the rules for the quantifiers by themselves “do not determine the range of quantification”, they ensure that “the domain of quantification in a given context includes everything that can be named within that context” [Reference McGee, Sher and Tieszen63, p. 69]. Combining this with the idea that the rules must remain valid under any extension of the language, and modulo any artificial restrictions on naming and designation, forces the standard interpretation of the quantifiers. The fact that the reach of singular terms might outstrip the range of quantification was the reason that Bonnay and Westerståhl [Reference Bonnay and Westerståhl12] had to close the admissible interpretations of the quantifiers under the interpretation of terms. Insistence on open-endedness makes the possibility of naming global and overcomes the remaining local underdetermination, therefore constituting a promising approach to resolving Carnap’s Problem at the level of quantification.Footnote 55
3.4 Meta-inferential determinacy: Local models of rules
In devising a stable foundation for a moderate inferentialist position that escapes Carnapian underdetermination J.W. Garson, in a series of works [Reference Garson, Dunn and Gupta31–Reference Garson34], slightly shifts the parameters of the way Carnap’s Problem was construed above, bringing it more in line with traditional inferentialist approaches to meaning.Footnote 56 Thus, instead of taking entire consequence relations as basic, Garson considers sets of rules. Moreover, the rules he considers possess a meta-inferential format – they no longer govern transitions between sentences but describe permissible transitions between entire arguments. That is, the rules have the general format of
$$ \begin{align*} \frac{\Gamma_{1} \vdash_{\mathscr{L}} \varphi_{1} \quad \cdots \quad \Gamma_{n} \vdash_{\mathscr{L}} \varphi_{n}}{\Delta \vdash_{\mathscr{L}} \psi}. \end{align*} $$
The particular rules for disjunction adopted by Garson, for example, are:Footnote 57
$$ \begin{align*} (\vee \mathrm{I})\ \frac{\Gamma \vdash_{\mathscr{L}} \varphi}{\Gamma \vdash_{\mathscr{L}} \varphi \vee \psi} \qquad \frac{\Gamma \vdash_{\mathscr{L}} \psi}{\Gamma \vdash_{\mathscr{L}} \varphi \vee \psi} \qquad (\vee \mathrm{E})\ \frac{\Gamma \vdash_{\mathscr{L}} \varphi \vee \psi \quad \Gamma, \varphi \vdash_{\mathscr{L}} \chi \quad \Gamma, \psi \vdash_{\mathscr{L}} \chi}{\Gamma \vdash_{\mathscr{L}} \chi} \end{align*} $$
The shift to a rule-based presentation of transitions between arguments allows for the formulation and implementation of further requirements constraining the relationship between inference and model-theoretic meaning. Carnapian underdetermination, Garson says, is the result of ignoring the way inferential patterns of a constant are given: “The deductive benchmark we are using for what counts as a model of a system is completely insensitive to the rules that are used to formulate it. It has been assumed that all that matters for specifying the inferential relations set up by a logic are the arguments that qualify as provable in the system. However, this view is shortsighted” [Reference Garson34, p. 15]. Admissible models, then, need not just be consistent with the arguments deemed acceptable by the consequence relation, but with the rules themselves: “[a] model of the arguments […] is insensitive to principles that regulate how one deduces new arguments from old ones, and this information matters to the interpretation of the connectives” [Reference Garson33, p. 166].
For our purposes, the interesting case of consistency with a rule is Garson’s criterion of local consistency:Footnote 58 a valuation $v$ is (locally) consistent with a rule R if, whenever $v$ is consistent with the rule-premises, it must also be consistent with the rule-conclusion (where the notions of a valuation and of consistency with a rule-premise/-conclusion are identical to the notions defined at the beginning of the previous section). A valuation $v$ is thus consistent with a meta-inferential rule R in case $v$ preserves (standard) consistency from the rule-premises to the rule-conclusion. Excluding the valuation $v_{\top }$ on the basis of a nontriviality constraint (see Section 4.1 below), Garson then shows that local consistency with the rules adopted for the classical connectives suffices to establish their boolean meanings [Reference Garson33, Reference Garson34]. Hence, adopting a meta-inferential, rule-based specification of the inferential behaviour of the classical connectives suffices to solve Carnap’s Problem.
By way of example, consider, once more, the deviant valuation $v_{\vdash }$ from above. What, in the current framework, renders $v_{\vdash }$ inadmissible from the inferential perspective? Recall that, for some propositional letter p, $v_{\vdash }(p) = v_{\vdash }(\neg p) = 0$, and assume $v_{\vdash }$ were consistent with the rules for disjunction. Observe that $v_{\vdash }$ is consistent with (i) $\vdash _{\mathscr {L}} p \vee \neg p$, (ii) $p \vdash _{\mathscr {L}} p$ and (iii) ${\neg p \vdash _{\mathscr {L}} p}$. Hence, by the assumed consistency of $v_{\vdash }$ with $\vee $E, it follows that $v_{\vdash }(p) = 1$ – contradiction. Hence, $v_{\vdash }$ is inconsistent with the rules for disjunction. The meta-inferential formulation of the rules coupled with the adoption of a local consistency constraint allows us to make effective use of false sentences in antecedents of rule-premises, thereby obtaining access to those rows of a truth-table that were out of reach of simple consequence relations [Reference Garson34, p. 39].Footnote 59
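The reasoning of this example can be replayed as a sketch (the encoding of arguments as premise/conclusion pairs is ours): a valuation is consistent with an argument Γ ⊢ φ iff it does not make all of Γ true while making φ false, and $v_{\vdash }$ passes all three rule-premises of this ∨E instance while failing its conclusion:

```python
def consistent(v, seq):
    """v is consistent with the argument Γ ⊢ φ iff it is not the case
    that every premise is true and the conclusion false."""
    prems, concl = seq
    return not (all(v[p] for p in prems) and not v[concl])

# Carnap's valuation v_⊢ on the relevant sentences:
v = {'p': 0, '~p': 0, 'p v ~p': 1}

# Rule-premises of the ∨E instance from the text:
assert consistent(v, ([], 'p v ~p'))     # (i)   ⊢ p ∨ ¬p
assert consistent(v, (['p'], 'p'))       # (ii)  p ⊢ p
assert consistent(v, (['~p'], 'p'))      # (iii) ¬p ⊢ p
# ...but not with the rule-conclusion ⊢ p:
assert not consistent(v, ([], 'p'))
```

Local consistency with ∨E would demand the last assertion succeed, so $v_{\vdash }$ is excluded.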
Despite its success in resolving Carnapian underdetermination Garson ultimately abandons the standard of local consistency. This might be justified on the grounds that local consistency constitutes the wrong standard of consistency for rules that govern transitions not between sentences, but between arguments: while individual inferences must be truth-preserving, meta-inferences – transitions between inferences – ought to be validity-preserving.Footnote 60 Validity, however, is a global phenomenon and should, accordingly, not be assessed ‘one valuation at a time’, but ‘wholesale’ with respect to a class of (appropriate) valuations. Thus, whenever a (meta-inferential) rule premise is valid, so should be its conclusion. This gives rise to the standard of global consistency according to which a set of valuations $\mathcal {V}$ is (globally) consistent with a (meta-inferential) rule R just in case whenever all the rule-premises are consistent with every $v \in \mathcal {V}$, the rule conclusion must also be consistent with every $v \in \mathcal {V}$ – if, in other words, the rule preserves ($\mathcal {V}$-)validity [Reference Garson34, 15ff.]. Global consistency succeeds in weakening the meanings of the logical constants determined by this standard of consistency so as to remove the asymmetry between what can be established on the basis of the rules and on the basis of the model-theoretic meanings thus determined.Footnote 61 However, it fails to remove Carnapian underdetermination and thus to solve Carnap’s Problem in the way it was construed in the context of this article.Footnote 62
Murzi and Topey [Reference Murzi and Topey72] develop a variation of Garson’s approach that (a) succeeds in resolving Carnapian underdetermination, (b) discharges the unmotivated assumption of the exclusion of $v_{\top }$ on non-inferentialist-friendly grounds, and (c) allows an extension of the approach to quantifier-rules. (a) is, essentially, achieved through the adoption of a (suitably adjusted) standard of local consistency. However, to accommodate the quantifiers and overcome the incompleteness phenomena that led Garson to ultimately abandon the local standard of consistency, they generalize his framework significantly. Without discussing their proposal in full detail, central aspects include the adoption of a calculus of meta-inferential rules featuring single-conclusion arguments that may consist of open formulas in addition to sentences. (The inclusion of open formulas makes it necessary to adapt the standard of local consistency to now involve, in addition to models, also variable assignments. The result is what Picollo [Reference Picollo78] terms a hybrid account, with models obeying a local, and variable assignments a global, standard of consistency.)
Moreover, their calculus allows for the possibility of higher-order rules, i.e., the assuming and discharging of rules, which makes it possible to interpret falsum as a punctuation mark – signaling a dead-end in an argument – instead of a logical constant, and renders the rule of reductio ad absurdum a structural rule (thereby achieving (b) above).Footnote 63 They modify the quantifier rules to allow for open formulas to feature in the respective premise- and conclusion-sequents and, further, adopt an open-endedness constraint (see Section 3.3). This allows them to resolve Carnap’s Problem uniformly for the propositional connectives and the quantifiers (see Section 5.3 for generalizing this strategy to higher-order quantifiers).Footnote 64
Moving from consequence relations to their rule-based presentations, together with the adoption of a specific rule-format, succeeds in avoiding Carnapian underdetermination. Both parameters play an essential role in resolving Carnap’s Problem. They thus require philosophical justification if they are to feature in a successful defense of the moderate inferentialist position. Even independently of the question whether open formulas are legitimate constituents of (a natural model of) arguments, and whether argumentative practice is best captured in meta-inferential terms,Footnote 65 the shift towards rules together with the requirement of a specific rule-format introduces a rather significant, and potentially unwelcome, presentation-dependency: “[t]he upshot of this is that the model of rules criterion is sensitive to details concerning how a system is formulated” [Reference Garson33, p. 166]. Whether or not Carnapian underdetermination is resolved thus seems to depend on particular, and potentially unstable, dynamics of reasoning.
4 Semantic strategies for solving Carnap’s problem I (the propositional case)
The basic idea of semantic strategies for solving Carnap’s Problem consists in restricting the space of possible valuations or models that can serve as interpretations of the logical symbols from the outset. Underlying this is the idea that Carnap’s Problem “is made artificially difficult by considering all possible interpretations [of the language], no matter how bizarre” [Reference Bonnay and Westerståhl12, p. 733]. Thus, not all consistent models are equally legitimate as some might be excluded on the basis of considerations having to do with general linguistic competence, constraints of logicality, or other factors.
4.1 Valuations vs interpretations
Do all consistent valuations constitute legitimate interpretations of the logical expressions of a language? Is consistency with inferential behaviour, in other words, sufficient for being considered a potential meaning? Bonnay and Westerståhl [Reference Bonnay and Westerståhl12] argue that it is not.Footnote 66 General principles underlying linguistic competence, they claim, narrow down the space of candidate interpretations from the outset, thereby making Carnap’s Problem more tractable.
One such constraint put forward in [Reference Bonnay and Westerståhl12] is a principle of non-triviality:
-
(Non-Triv) Every language contains at least one false sentence.
Such a principle, they say, is “a very weak requirement, hardly in need of motivation” [Reference Bonnay and Westerståhl12, p. 725]; after all, drawing some kind of distinction between what is true and what is false seems fundamental to the functioning of language. Nonetheless, the adoption of such an obvious constraint on potential interpretations is not inert: it rules out $v_{\top }$ as inadmissible.
(Non-Triv) is a very natural constraint on a semantic space. Note, though, that if the motivation for adopting it was that a language must be able to draw some kind of distinction to be considered language at all, it might best be interpreted as a demand of non-uniformity: at least two sentences of the language must take on different truth-values. In this formulation, the constraint of non-uniformity would also rule out the valuation according to which every sentence of the language is false, a valuation violating the classical truth-tables and inconsistent with classical consequence due to the fact that classical logic possesses tautologies.Footnote 67
As soon as we move to a multi-valued setting (see Section 4.2), however, the assumption of non-uniformity becomes problematic. This is the case since the valuation that assigns all atomic sentences the third truth-value in three-valued logics governed by strong Kleene truth-tables extends to a valuation that assigns all sentences of the language the third value, thereby violating the constraint of non-uniformity. This valuation, however, plays an important role in the meta-theory of such logics (demonstrating, for example, that $K_{3}$ has no theorems). So, as simple and natural as non-triviality appears at first sight, more might have to be said about the particular shape it takes in different logics.
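The problematic valuation is easy to exhibit. In the sketch below (our own encoding: the third value u is represented numerically as 0.5, a standard trick that makes the strong Kleene tables computable via min/max), the all-u assignment is closed under every connective:

```python
# Strong Kleene truth-tables over {1, 0, u}, with u encoded as 0.5.
U = 0.5
def neg(a): return 1 - a
def disj(a, b): return max(a, b)
def conj(a, b): return min(a, b)
def impl(a, b): return max(1 - a, b)   # material conditional, Kleene-style

# If every atom receives u, every compound sentence receives u as well:
assert neg(U) == U
for f in (disj, conj, impl):
    assert f(U, U) == U
# Hence the all-u valuation never designates any sentence: K3 has no
# theorems -- yet this very valuation violates non-uniformity.
```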
A more significant requirement defended in [Reference Bonnay and Westerståhl12] and [Reference Westerståhl, Brendel, Carrara, Ferrari, Hjortland, Sagi, Sher and Steinberger108] is a constraint of compositionality. Compositionality has sometimes been put forward as a semantic universal, a principle universally instantiated across languages explaining the otherwise mysterious phenomenon of linguistic creativity, the ability of speakers to understand and produce a potential infinitude of meaningful expressions based on finite amounts of data. The principle of compositionality states that:Footnote 68
-
(Comp) The meaning of a complex expression is a function of the meanings of its constituent expressions and their mode of composition.
To be admissible, then, a valuation needs to respect (Comp) – otherwise it won’t even be recognized as an acceptable candidate based on rudimentary linguistic competence. In the context of the language of propositional logic in which meanings are truth-values, (Comp) forces the interpretation of a logical constant to be a truth-function. Which truth-function a particular constant then denotes is determined by the constant’s inferential behaviour.
More precisely, compositionality requires that the meaning of a constant c is given by a function $f_{c}$, s.t. the meaning of a compound expression $c(\varphi _{1}, \ldots , \varphi _n)$ under a valuation $v$, $v(c(\varphi _{1}, \ldots , \varphi _n))$, is a function of the meaning of c, $f_{c}$, applied to the values of $\varphi _{1}, \ldots , \varphi _{n}$ under $v$: $v(c(\varphi _{1}, \ldots , \varphi _n)) = f_{c}(v(\varphi _{1}), \ldots , v(\varphi _{n}))$. Since the possible semantic values of expressions under $v$ are the truth-values $0$ and $1$, $f_{c}$ must be a truth-function. A valuation $v$ is said to be c-compositional for a connective c iff there exists a truth-function $f_{c}$ for c, s.t. $v(c(\varphi _{1}, \ldots , \varphi _n)) = f_{c}(v(\varphi _{1}), \ldots , v(\varphi _{n}))$ for all $\varphi _{1}, \ldots , \varphi _{n}$.
Consider, once again, $v_{\vdash }$ to see how the requirement of compositionality suffices to rule out Carnap’s non-normal valuations. By (Comp) we know that there must be a truth-function $f_{\vee }$, s.t. $v(\varphi \vee \psi ) = f_{\vee }(v(\varphi ), v(\psi ))$ for all $\varphi $ and $\psi $. In particular, then, $v_{\vdash }(p \vee \neg p) = f_{\vee }(v_{\vdash }(p), v_{\vdash }(\neg p))$. But note that $v_{\vdash }(p) = v_{\vdash }(\neg p)$ and thus $1 = v_{\vdash }(p \vee \neg p) = f_{\vee }(v_{\vdash }(p), v_{\vdash }(\neg p)) = f_{\vee }(v_{\vdash }(p), v_{\vdash }(p)) = v_{\vdash }(p \vee p) = 0$ – contradiction [Reference Bonnay and Westerståhl12, p. 728].Footnote 69 Hence, $v_{\vdash }$ is not an admissible valuation; it is not a legitimate candidate for meaning.
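The impossibility just derived can be checked mechanically: no binary truth-function matches the four values of $v_{\vdash }$ that the argument appeals to. A minimal sketch in Python (the string keys are an ad hoc encoding of the four sentences; the assigned values follow the text):

```python
from itertools import product

# The four values of the non-normal valuation used in the argument:
# v(p) = v(~p) = 0, while v(p v ~p) = 1 (a tautology) and v(p v p) = 0.
v = {"p": 0, "~p": 0, "p|~p": 1, "p|p": 0}

# Search all 16 binary truth-functions for one that could interpret
# disjunction compositionally on these four sentences.
witnesses = []
for table in product([0, 1], repeat=4):
    f = dict(zip(product([0, 1], repeat=2), table))
    if f[v["p"], v["~p"]] == v["p|~p"] and f[v["p"], v["p"]] == v["p|p"]:
        witnesses.append(f)

print(len(witnesses))  # 0: no truth-function fits, so v is not compositional for disjunction
```

The search fails because any candidate would have to map the pair $(0, 0)$ both to $1$ and to $0$.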
Bonnay and Westerståhl [Reference Bonnay and Westerståhl12] show that (Non-Triv) and (Comp) suffice to rule out Carnap’s non-normal valuations and to secure the intended values of the propositional connectives. According to [Reference Westerståhl, Brendel, Carrara, Ferrari, Hjortland, Sagi, Sher and Steinberger108] this is a favourable and in some sense natural result, for non-compositional valuations shouldn’t even have been considered legitimate candidates for interpretations of a language in the first place, as they violate principles characteristic of basic linguistic competence: “absent compositionality, the idea of meanings makes little sense” [Reference Westerståhl, Brendel, Carrara, Ferrari, Hjortland, Sagi, Sher and Steinberger108, p. 1].Footnote 70
4.2 Carnap’s categoricity problem in three values
Westerståhl [Reference Westerståhl, Brendel, Carrara, Ferrari, Hjortland, Sagi, Sher and Steinberger108] demonstrates the effect the assumption of compositionality has on the determination of semantic values of logical constants for a variety of logics and semantics. Here, we are concerned with possibly the most straightforward extension of the semantics of classical propositional logic to a three-valued context, and the question whether compositionality still suffices in this slightly richer setting to uniquely determine the intended semantic values of the logical constants.Footnote 71
A three-valued logic shares the language of classical propositional logic. A valuation $v$ is now a total function from the set of sentences to the truth-values $\{0, u, 1\}$, where u is a third truth-value. We denote the class of all three-valued valuations by $\mathcal {V}_{3}$. The designated values of the family of logics we are interested in in the following are the members of the set $\mathcal {D} = \{1, u\}$. For $\mathcal {V} \subseteq \mathcal {V}_{3}$, a sentence $\varphi $ is a $\mathcal {V}$-consequence of a set of sentences $\Gamma $, $\Gamma \models ^{\mathcal {D}}_{\mathcal {V}} \varphi $, if it preserves designation: if, in other words, for all $v \in \mathcal {V}$, whenever $v(\Gamma ) \subseteq \mathcal {D}$, then $v(\varphi ) \in \mathcal {D}$ as well. All other notions are analogous to those defined above.Footnote 72
The specific three-valued logics we are concerned with in this section are given by the two classes of valuations $\mathcal {V}_{K}$ and $\mathcal {V}_{G_{3}}$, where $v \in \mathcal {V}_{K}$ iff $v$ obeys the strong Kleene Schema and $v \in \mathcal {V}_{G_{3}}$ iff $v$ obeys the Gödel Schema. With the truth-values ordered $0 < u < 1$, both schemas set $v(\varphi \wedge \psi ) = \min (v(\varphi ), v(\psi ))$ and $v(\varphi \vee \psi ) = \max (v(\varphi ), v(\psi ))$. The strong Kleene Schema sets $v(\neg \varphi ) = 1, u, 0$ according as $v(\varphi ) = 0, u, 1$, and $v(\varphi \rightarrow \psi ) = \max (v(\neg \varphi ), v(\psi ))$; the Gödel Schema sets $v(\neg \varphi ) = 1$ if $v(\varphi ) = 0$ and $v(\neg \varphi ) = 0$ otherwise, and $v(\varphi \rightarrow \psi ) = 1$ if $v(\varphi ) \leq v(\psi )$ and $v(\varphi \rightarrow \psi ) = v(\psi )$ otherwise.
Note that the strong Kleene and the Gödel Schema agree on the truth-tables for conjunction and disjunction, but disagree on the tables for negation and the conditional.
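The two schemas can be written down and compared directly. A sketch (encoding u as 0.5, so that min and max compute the lattice operations, is an implementation choice, not part of the semantics):

```python
U = 0.5          # the third truth-value u, encoded numerically
VALS = [0, U, 1]

def kleene(op, x, y=None):
    # strong Kleene schema
    if op == "neg": return 1 - x
    if op == "and": return min(x, y)
    if op == "or":  return max(x, y)
    if op == "imp": return max(1 - x, y)

def goedel(op, x, y=None):
    # Gödel schema
    if op == "neg": return 1 if x == 0 else 0
    if op == "and": return min(x, y)
    if op == "or":  return max(x, y)
    if op == "imp": return 1 if x <= y else y

# The two schemas agree on conjunction and disjunction ...
assert all(kleene(op, x, y) == goedel(op, x, y)
           for op in ("and", "or") for x in VALS for y in VALS)
# ... but disagree on negation and the conditional.
assert any(kleene("neg", x) != goedel("neg", x) for x in VALS)
assert any(kleene("imp", x, y) != goedel("imp", x, y) for x in VALS for y in VALS)
print("schemas agree on and/or, differ on neg/imp")
```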
It can now be shown that $\Gamma \models ^{\mathcal {D}}_{\mathcal {V}_{G_{3}}} \varphi $ iff $\Gamma \vdash _{CPL} \varphi $.Footnote 73 In other words, the logic generated by the class of valuations $\mathcal {V}_{G_{3}}$ with designated values $\mathcal {D} = \{1, u\}$ is a three-valued presentation of classical propositional logic. The logic generated by the class of valuations $\mathcal {V}_{K}$ and designated values $\mathcal {D} = \{1, u\}$, on the other hand, is the logic LP. Consider now a valuation $v^{+} \in \mathcal {V}_{G_{3}}$, s.t. $v^{+}(p) = v^{+}(q) = u$ for some atomic sentences p and q. Then, it is easy to observe the following:
- (i) $v^{+} \notin \mathcal {V}_{K}$ since, for example, $v^{+}(p \rightarrow q) = 1$.
- (ii) $v^{+}$ is compositional as witnessed by the Gödel Schema.
- (iii) $v^{+}$ is consistent with $\models ^{\mathcal {D}}_{\mathcal {V}_{K}}$: let $\Gamma \models ^{\mathcal {D}}_{\mathcal {V}_{K}} \varphi $ and suppose that $v^{+}(\gamma ) \in \mathcal {D} = \{1, u\}$ for all $\gamma \in \Gamma $, but $v^{+}(\varphi ) = 0$. That means that $\Gamma \not \models ^{\mathcal {D}}_{\mathcal {V}_{G_{3}}} \varphi $ and, therefore, $\Gamma \not \vdash _{CPL} \varphi $. However, since LP is a sublogic of CPL – i.e., if $\Gamma \models ^{\mathcal {D}}_{\mathcal {V}_{K}} \varphi $ then $\Gamma \vdash _{CPL} \varphi $ – it follows that $\Gamma \not \models ^{\mathcal {D}}_{\mathcal {V}_{K}} \varphi $, in contradiction to the assumption. Hence, $v^{+}$ is consistent with $\models ^{\mathcal {D}}_{\mathcal {V}_{K}}$.
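Observation (i) can be reproduced directly: on the input pair $(u, u)$ the Gödel conditional returns 1 while the strong Kleene conditional returns u, so $v^{+}$ obeys no Kleene table. A sketch (again encoding u as 0.5):

```python
U = 0.5       # the third value u, encoded numerically
D = {1, U}    # designated values

def goedel_imp(x, y): return 1 if x <= y else y
def kleene_imp(x, y): return max(1 - x, y)

# v_plus: a Gödel valuation with v(p) = v(q) = u
vp, vq = U, U

# (i) v_plus is not a Kleene valuation: the schemas part ways on p -> q
print(goedel_imp(vp, vq))  # 1 (since u <= u)
print(kleene_imp(vp, vq))  # 0.5, i.e. u

# Both outputs are designated, so this difference is invisible
# at the level of designation-preserving consequence.
assert goedel_imp(vp, vq) in D and kleene_imp(vp, vq) in D
```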
The above demonstrates that compositionality by itself is insufficient to rule out non-normal valuations for logics moderately richer than classical propositional logic; that there are, in other words, unintended, yet compositional, valuations consistent with the consequence relation of, for example, LP.Footnote 74 Tabakçı [Reference Tabakçı99] investigates in detail just how deep the failure of compositionality in fixing intended interpretations runs for the three-valued logics LP and K3, and discusses possible solution strategies.Footnote 75
Much more would need to be said about the case of three- and multi-valued logics in general to reach a conclusive verdict. However, what the simple example above seems to demonstrate is that the constraint of compositionality would have to be refined further to keep ruling out inadmissible valuations in the case of logics with more than two values. In particular, the above seems to point in the direction of compositionality not being a local property of a valuation, but rather a global property of a class thereof, for notice that there will not be a single truth-function f interpreting, say, $\rightarrow $, s.t. all $v \in \mathcal {V}_{K} \cup \{v^{+}\}$ will be compositional w.r.t. f.Footnote 76 We will not pursue the issue further here, but see [Reference Tabakçı99] for a more detailed investigation of Carnap’s Problem in the three-valued setting.Footnote 77
Foreshadowing some of the concluding remarks of this article, Tabakçı [Reference Tabakçı99] investigates a further, elegant solution for resolving Carnap’s Problem for the logics K3 and LP: demanding that the valuational space obey the constraint that a complex formula take on the third value u when all its immediate subformulas take on the value u establishes categoricity in a multi-conclusion setting. That such a constraint should be adopted, however, appears to be intimately connected with the role one takes u to play in the semantics of the logic. If a sentence receives the value u because it is ungrammatical or otherwise meaningless, as, for example, in Bochvar’s three-valued logic B$_{3}$, it is natural that this property is inherited by more complex sentences involving a constituent with value u, informing the resulting truth-tables of the language. If u is taken to indicate a contingency or indeterminacy, as in Łukasiewicz’s three-valued logic Ł$_{3}$, for example, it becomes less clear why that indeterminacy should be inherited in all cases by more complex sentences and thus whether the semantic constraint is appropriate to impose.Footnote 78 Whether it is therefore reasonable to accept such a constraint depends on the philosophical foundations of the logic formalized by means of the relevant matrices.Footnote 79
4.3 Beyond classical settings: Intuitionistic connectives and modal operators
Carnap’s question can be asked for any logic and logical constant therein. It appears naturally in multi-valued settings and beyond. In [Reference Bonnay and Westerståhl12, Section 6] it was established that the classical connectives retain their intended interpretations under the assumption of (Non-Triv) and (Comp) when moving to a possible-worlds semantics, where sentence-values no longer simply consist of truth-values, but of sets of points or possible worlds.Footnote 80 Here, the intended interpretation of ‘$\neg $’ is the complementation operation on the powerset of the set of worlds, ‘$\wedge $’ the intersection-operation on the same set, and so on. These values are uniquely determined by the classical consequence relation over the enriched semantic space as long as constraints of non-triviality and compositionality are adhered to.Footnote 81
Of course, once the switch to a more expressive semantics has been made, questions about determinacy don’t just arise for the usual logical operators, but also and especially for the novel operators characteristic of the richer settings. In the possible-worlds framework this includes, in particular, the modal operator $\Box $.Footnote 82
Here, Carnap’s Problem takes on additional complexities: on the one hand, modal logics admit a plethora of different types of semantics – from Kripke, over topological, to neighbourhood semantics – all with a reasonable claim to being an appropriate semantics for the modal language under consideration. On the other hand, the meaning of $\Box $ in modal logic is a different type of meaning than that of the other propositional connectives. For while the pointwise truth of a formula without $\Box $ in a model depends only on the values of the subformulas of the formula at that point, the pointwise truth of a boxed formula in a model also depends on the values of its subformulas at other points of the model (though, usually, not all others). Semantically, this special status of the meaning of $\Box $ is captured by including an additional parameter in the model (accessibility relations in Kripke frames, the interior function in topological frames, neighbourhoods in neighbourhood frames) with respect to which truth of a boxed formula at a point in a model is determined.
To fruitfully ask Carnap’s question in this enriched framework, then, some preliminary issues have to be settled. Relating to the first point raised above, a choice of semantics has to be made: given the different types of semantics, with respect to which are we asking whether the inferential behaviour of $\Box $ determines its intended value? Second, what is characteristic of the intended meaning of the $\Box $ operator within that semantics, and how can this feature best be captured? Ideally, this latter consideration can be translated into semantic constraints restricting the values deemed legitimate for $\Box $ within the chosen semantics.
Bonnay and Westerståhl [Reference Bonnay and Westerståhl13] take possible-worlds semantics to constitute an appropriate compositional semantics for modal languages. The most general type of possible-worlds semantics is neighbourhood semantics. This settles the first question in a well-motivated manner. What is distinctive about the intended interpretation of $\Box $ is that, unlike $\neg $ or $\vee $, it does not receive a single fixed interpretation, possibly indexed by world domains; rather, its intended interpretation is what is codified in Kripke semantics: the meaning of $\Box $ is such as to allow one to recover an accessibility relation between worlds, such that the truth of a boxed formula at a world depends on the truth of its subformulas at accessible worlds. This captures the idea that an intended meaning of $\Box $ is such that the truth-values at a point of formulas involving it may depend on the truth-values of its subformulas at different points of the model. This settles the second question concerning $\Box $’s meaning-type. A first version of Carnap’s question then asks under what conditions a modal consequence relation forces consistent neighbourhood interpretations to be, in the informal way described above, Kripkean [Reference Bonnay and Westerståhl13, p. 585].Footnote 83
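The notion of a neighbourhood frame being Kripkean can be illustrated concretely: a neighbourhood family is Kripkean at a world w when it consists exactly of all supersets of a recoverable set $R(w)$ of accessible worlds. A small sketch on toy two-world frames (the frames are invented for illustration, not taken from [Reference Bonnay and Westerståhl13]):

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_kripkean(W, N):
    """N maps each world to its family of neighbourhoods (sets of worlds).
    Kripkean: N(w) is exactly the set of all supersets of R(w) = ∩N(w),
    so an accessibility relation R can be recovered from N."""
    for w in W:
        fam = N[w]
        if not fam:
            return False  # no neighbourhoods: no candidate R(w)
        Rw = frozenset.intersection(*fam)
        if fam != {X for X in powerset(W) if Rw <= X}:
            return False
    return True

W = frozenset({0, 1})
# Kripkean frame: every world's neighbourhoods are all supersets of {1}.
N1 = {w: {X for X in powerset(W) if frozenset({1}) <= X} for w in W}
# Non-Kripkean frame: world 0 has {0} and {1} but not their intersection.
N2 = {0: {frozenset({0}), frozenset({1}), W}, 1: {W}}

print(is_kripkean(W, N1))  # True
print(is_kripkean(W, N2))  # False: ∩N(0) is empty, yet ∅ ∉ N(0)
```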
Unlike propositional or first-order logic, the framework of modal logic does not constitute a single theory, but rather gives rise to a family of theories in which the behaviour of $\Box $ can be characterized by different axioms and rules. Since the modal logic K – containing the characteristic axiom $\Box (p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q)$ and closed under the (meta-)rule of necessitation “from $\vdash \varphi $ infer $\vdash \Box \varphi $” (modal logics satisfying these conditions are normal modal logics) – is sound and complete with respect to the class of all Kripke frames, it can be taken to articulate some basic adequacy constraints on the meaning of $\Box $. Carnap’s question thus becomes: under what circumstances do modal consequence relations extending $\vdash _{K}$ force a neighbourhood frame to be Kripkean [Reference Bonnay and Westerståhl13, p. 585]?
For finite frames the constraints imposed by $\vdash _{K}$ suffice to force the intended interpretation of $\Box $ [Reference Bonnay and Westerståhl13, Theorem 12] – as well as of the remaining constants – but this result does not generalize to the infinite case [Reference Bonnay and Westerståhl13, Fact 13]. Here, further semantic constraints are required to limit the space of consistent interpretations to the class of intended ones. Yet, what sort of semantic constraint is well-motivated based on the choice of semantics for modal languages? What is distinctive of the semantics of the modal elements of these languages?
What is characteristic of the meaning of $\Box $ is that it contributes a local flavour to the evaluation of formulas including it: their truth depends on the truth of their subformulas at other ‘nearby’ points of the model as well.Footnote 84 Yet, the set of points that needs to be taken into consideration for the truth of a boxed formula at a point falls (usually) far short of the entire domain of the model. Distinctive of the semantic value of $\Box $ thus seems to be a type of truth-locality – some, but not all, points of the model are relevant for the truth of formulas including it. This locality can be semantically expressed by means of bisimulation invariance:Footnote 85 “[t]he locality of modal logic […] is often framed in terms of invariance under bisimulation: bisimilar worlds satisfy the same modal formulas, so that only the local features of the structure of the Kripke models […] matter to modal satisfaction” [Reference Bonnay and Westerståhl13, p. 598].Footnote 86 Does bisimulation invariance, in conjunction with consistency w.r.t. the consequence relation of modal logics extending K, suffice to yield Kripkeanity?Footnote 87 Near enough: together with a further closure condition, bisimulation invariance indeed ensures Kripkeanity, and thus the intended (type of) interpretation of $\Box $ [Reference Bonnay and Westerståhl13, Theorem 44 and Corollary 45]. It is worth emphasizing that the adoption of a constraint of bisimulation invariance was not arbitrary. Rather, its acceptability was the result of reflection on the underlying motivations of the semantic framework itself – what was characteristic of the meaning of the expressions of modal languages was their locality. Once a particular semantics was chosen, the way that locality could be expressed and captured relative to that framework came with it.
Similar choice-points as in the modal case can also be observed in intuitionistic logic(s): here, too, there is an embarrassment of riches when it comes to adopting a semantics – from Kripke-, over Beth-, topological, and Dragalin semantics to, finally, algebraic semantics, there are plenty of candidates with respect to which Carnap’s question could be asked. Based on results in [Reference Bezhanishvili and Holliday8], Tong and Westerståhl [Reference Tong and Westerståhl102] arrange the above-mentioned semantics in a hierarchy, with Kripke semantics being the most specific, and algebraic semantics being the most general type of semantics for intuitionistic propositional logic.Footnote 88
Carnap’s question takes interestingly different shapes when applied to algebraic semantics as opposed to the other, set-based, semantics. Characteristic of the former is that the semantic values of sentences of the language are elements of the algebra, whereas semantic values of sentences in the latter type of semantics are (specific types of) subsets of the domains of the relevant models. This has consequences for the way the meanings of the logical connectives are conceived: in the algebraic case, interpretations of the logical constants are delivered by the respective algebra ‘directly’. To informatively ask Carnap’s question in this case one would, therefore, have to ask why (and whether it is only) Heyting algebras (that) constitute the intended interpretation of the intuitionistic connectives. In the case of non-algebraic semantics, the interpretations of the connectives are not part of the model but provided, so to say, from the outside: conjunction is interpreted as set-intersection, negation as set-complementation, etc. Here, then, we need not consider alternative (types of) models but can ask whether the same class of models, when the connectives are interpreted by different set-theoretic operations, remains consistent w.r.t. intuitionistic propositional consequence $\vdash _{IPC}$.
Tong and Westerståhl [Reference Tong and Westerståhl102] show that Carnap’s Problem is successfully resolved in the case of intuitionistic propositional logic by assuming compositionality and consistency with $\vdash _{IPC}$ for all the semantics considered. The result for set-based semantics is in fact a special case of the more general result for algebraic semantics. They first recall the fact that an algebraic interpretation of the language of intuitionistic logic is consistent with $\vdash _{IPC}$ iff it is a Heyting algebra [Reference Tong and Westerståhl102, Fact 3.4]. They then show that the Heyting-algebra interpretation of the connectives over the domain of an algebra is the unique interpretation over that domain consistent with $\vdash _{IPC}$ [Reference Tong and Westerståhl102, Theorem 3.6].Footnote 89 From this it follows that compositionality and consistency with $\vdash _{IPC}$ suffice to uniquely determine the standard interpretations of the intuitionistic connectives over all set-based semantics considered by Tong and Westerståhl, which include Kripke-, Beth-, and topological semantics [Reference Tong and Westerståhl102, Corollary 3.8].
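To get a feel for the role of Heyting algebras here, one can verify on a small example that the three-element chain $0 < u < 1$ with the Gödel operations is a Heyting algebra, i.e., that implication is the residual of meet: $a \wedge x \leq b$ iff $x \leq a \rightarrow b$. A sketch (the example is standard and not taken from [Reference Tong and Westerståhl102]):

```python
# Three-element chain 0 < u < 1, with u encoded as 0.5.
VALS = [0, 0.5, 1]

def meet(x, y): return min(x, y)
def imp(x, y):  return 1 if x <= y else y  # Heyting implication on a chain

# Residuation law characterizing Heyting algebras:
#   a ∧ x ≤ b  iff  x ≤ a → b
assert all((meet(a, x) <= b) == (x <= imp(a, b))
           for a in VALS for b in VALS for x in VALS)
print("residuation holds: the 3-chain is a Heyting algebra")
```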
This result is remarkable for a variety of reasons. On the one hand, it demonstrates the surprising robustness of intuitionistic consequence w.r.t. determining the intended extensions for its logical operators across a variety of semantics. On the other hand, the successful determination of the intended interpretations relies essentially on consistency with full consequence: consistency with ‘mere’ theorems is insufficient to achieve the desired determination [Reference Tong and Westerståhl102, Example 3.10]. Moreover, the categoricity of the intuitionistic connectives is, in general, not modular: while the consequence relation over the full intuitionistic language ensures unique determination for all connectives, this result does not continue to hold when considering arbitrary fragments of the language [Reference Tong and Westerståhl102, 175ff.].
5 Semantic strategies for solving Carnap’s problem II (the quantificational case)
Quantifiers introduce additional levels of semantic complexity, for they perform operations on sub-sentential components, thus unveiling a much more fine-grained structure than could be captured and expressed in the propositional case. Here, (Comp) and (Non-Triv) prove insufficient to pin down the intended meanings of the usual quantifiers. However, at the level of quantification other constraints emerge as natural candidates for adoption in the attempt to resolve Carnapian underdetermination.
5.1 Invariance
What distinguishes a quantifier from a ‘qualifier’, i.e., a mere run-of-the-mill second-order predicate (a predicate of (first-order) predicates), is that the former should only be sensitive to quantitative, i.e., cardinality-based, properties of its arguments. Within the set-theoretic framework of the background theory for the definition of quantifiers, cardinality is captured in terms of bijections.Footnote 90 Sets M and N have the same cardinality if there exists a bijection between them. A permutation $\pi $ is a bijection from a set M to itself. A permutation of a set M naturally induces a permutation of objects from the type hierarchy over M.Footnote 91 An object o from the type-hierarchy over a domain M is permutation-invariant if $\pi [o] = o$ for all permutations $\pi $ of M. A permutation-invariant object is thus an object that is insensitive to (model-internal) qualitative features – it is only capable of detecting distinctions on the basis of differing cardinality.Footnote 92 Since (local) quantifiers are second-order predicates over domains, demanding that they be permutation-invariant naturally captures their nature as quantifiers, thereby distinguishing them from other, non-quantificational predicates like, for example, ‘is a colour’. The requirement that quantifiers be permutation- or, more generally, bijection-invariant,Footnote 93 is further supported by considerations pertaining to their logicality: invariance has long been regarded as an at least necessary feature of the logicality of a notion, capturing the idea that logical operations are insensitive to the identity of objects and thus uninfluenced by ‘empirical’ features.Footnote 94 Thus, for something to be a quantifier or a logical notion, it ought to be at least permutation-invariant:
- (Perm) Quantifiers are permutation-invariant.
Bonnay and Westerståhl observe that the only permutation-invariant principal filter over a domain M is the maximal principal filter $\{M\}$ [Reference Bonnay and Westerståhl12, p. 730]. Thus, permutation-invariance, in addition to (Comp) and (Non-Triv), suffices to fix the intended interpretation of the universal, and thereby also the existential, quantifier.Footnote 95 This is a very welcome result, given the role played by permutation-invariance in a theory of (logical) quantification. Since being a (logical) quantifier means being permutation-invariant, it appears to follow from the very nature of quantification that the standard universal and existential quantifiers of FOL are uniquely determined.Footnote 96
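The observation can be verified by brute force on a small domain: among the principal filters over $M = \{0, 1, 2\}$ (the choice of domain is arbitrary), only $\{M\}$ is invariant under all permutations of M. A sketch:

```python
from itertools import combinations, permutations

M = {0, 1, 2}

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def principal_filter(A):
    # The principal filter generated by a nonempty A ⊆ M: all supersets of A.
    return frozenset(X for X in subsets(M) if A <= X)

def invariant(F):
    # π acts pointwise on neighbourhood sets and elementwise on the family F.
    for pi in permutations(sorted(M)):
        p = dict(zip(sorted(M), pi))
        if frozenset(frozenset(p[x] for x in X) for X in F) != F:
            return False
    return True

# Generators A = ∅ are excluded: the 'filter' would be the whole powerset.
invariant_filters = [principal_filter(A) for A in subsets(M)
                     if A and invariant(principal_filter(A))]
print(invariant_filters)  # only the filter {M}, generated by M itself
```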
5.2 Generalized quantification
The universal and existential quantifiers of FOL are at the lower end of a class of possible first-order quantifiers. Bonnay and Westerståhl’s result guarantees that considering them as (logical) quantifiers, and thus demanding that they be permutation-invariant, suffices to determine their standard interpretations. This also yields the determinacy of notions definable in terms of them, in particular of the finite cardinality quantifiers of the form $\exists _{n}$ (‘there are at least n’). How far beyond $\forall $ and $\exists $ does this strategy generalize? That is, for which other quantifiers is (Perm) the only constraint needed to uniquely ‘fix’ their interpretation?
The study of the unique determinability of generalized quantifiers beyond $\forall $ and $\exists $ is still in its beginningsFootnote 97 but already allows several interesting observations. We say that a type $\langle 1 \rangle $-quantifier $\mathcal {Q}$ is generalized elementarily definable, EC$_{\Delta }$ for short, if there exists a set $\Delta $ of sentences of FOL of the form $\varphi (P)$, whose only non-logical symbol is the predicate letter P of adicity $1$, s.t. for all models $\mathcal {M} = \langle M, X \rangle $: $\mathcal {M} \in \mathcal {Q}$ iff $\mathcal {M} \models \Delta $. In other words, where $\Delta $ is such a set of sentences of FOL and $Mod(\Delta ) = \{\mathcal {M} \: | \: \mathcal {M} \models \Delta \}$, $\mathcal {Q}$ is EC$_{\Delta }$ if there exists $\Delta $, s.t. $\mathcal {Q} = Mod(\Delta )$. When $\mathcal {Q} = Mod(\Delta )$ for some appropriate set of sentences $\Delta $ we denote it by $\mathcal {Q}_{\Delta }$. We then observe the following:
Observation 5.1. Let $\mathcal {Q} = \mathcal {Q}_{\Delta }$ for some set of sentences $\Delta $ of FOL and assume that $\mathcal {Q}_{\Delta }$ interprets Q, i.e., $\mathcal {M} \models Qx Px$ iff $\mathcal {M} \in \mathcal {Q}_{\Delta }$. Then,

- (a) $QxPx \models \varphi $ for all $\varphi \in \Delta $,
- (b) $\Delta \models Qx Px.$
Proof For (a), suppose that $\mathcal {M} \models Qx Px$. That means that $\mathcal {M} \in \mathcal {Q}_{\Delta } = Mod(\Delta )$. Hence, $\mathcal {M} \models \Delta $. For (b), suppose that $\mathcal {M} \models \Delta $. Hence $\mathcal {M} \in Mod(\Delta ) = \mathcal {Q}_{\Delta }$ and therefore $\mathcal {M} \models Qx Px$.
Let $\mathcal {M} \models ^{\mathcal {Q}} \varphi $ mean that $\mathcal {M} \models \varphi $ when Q is interpreted by $\mathcal {Q}$ and designate with $\models _{\mathcal {Q}}$ the resulting model-theoretic consequence relation. From the observation above it then follows that:

Proposition 5.2. If $\mathcal {Q} = \mathcal {Q}_{\Delta }$ for a set $\Delta $ of sentences of FOL, then $\mathcal {Q}$ is uniquely determinable.
Proof Suppose that $\mathcal {Q}'$ is consistent with $\models _{\mathcal {Q}}$. Let $\mathcal {M} \in \mathcal {Q}'$. Thus, $\mathcal {M} \models ^{\mathcal {Q}'} QxPx$. By (a) above we know that $Qx Px \models _{\mathcal {Q}_{\Delta }} \varphi $ for all $\varphi \in \Delta $. Since $\mathcal {Q}'$ is consistent with $\models _{\mathcal {Q}_{\Delta }}$, it follows that $\mathcal {M} \models ^{\mathcal {Q}'} \varphi $ for all $\varphi \in \Delta $. But then $\mathcal {M} \in Mod(\Delta ) = \mathcal {Q}_{\Delta }$. For the other direction, suppose that $\mathcal {M} \in \mathcal {Q}_{\Delta }$, but $\mathcal {M} \notin \mathcal {Q}'$. Thus, $\mathcal {M} \not \models ^{\mathcal {Q}'} Qx Px$. Since $\mathcal {M} \in \mathcal {Q}_{\Delta } = Mod(\Delta )$ we have that $\mathcal {M} \models ^{\mathcal {Q}_{\Delta }} \Delta $. However, since all $\varphi \in \Delta $ are sentences of FOL we also have that $\mathcal {M} \models ^{\mathcal {Q}'} \Delta $. Since $\mathcal {Q}'$ is consistent with $\models _{\mathcal {Q}_{\Delta }}$ it follows, by (b), that $\mathcal {M} \models ^{\mathcal {Q}'} Qx Px$ – contradiction. Hence, $\mathcal {M} \in \mathcal {Q}'$. Therefore, $\mathcal {Q}' = \mathcal {Q}_{\Delta }$.
Since $\mathcal {Q}_{0} = \{\langle M, X \rangle \: | \: \aleph _{0} \leq |X| \} = Mod(\Delta )$, where $\Delta = \{\exists _{n} x Px \: | \: n \in \mathbb {N} \}$, it follows that the quantifier there are infinitely many is uniquely determinable by its associated consequence relation $\models _{\mathcal {Q}_{0}}$.Footnote 98 Furthermore, it was observed by Dag Westerståhl (p.c.) that the property of unique determinability is preserved under the operation of complementation, where the complement $\mathcal {Q}^{c}$ of a (type $\langle 1 \rangle $) quantifier $\mathcal {Q}$ is such that $\langle M, X \rangle \in \mathcal {Q}^{c}$ iff $\langle M, X^{c} \rangle \in \mathcal {Q}$, with $X^{c} = M - X$.Footnote 99 Since $\mathcal {Q}_{fin} = \{\langle M, X \rangle \: | \: |X| < \aleph _{0} \} = \mathcal {Q}^{c}_{0}$, the unique determinability of the quantifier there are finitely many follows as well. Hence, (Perm) suffices to ensure the unique determinability of quantifiers going beyond FOL.
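The complementation operation is easy to state concretely. A sketch using the illustrative quantifier ‘there are at least two’ as a finite stand-in (the quantifier and model are invented for illustration):

```python
# A type <1> quantifier represented as a predicate on pairs (M, X), X ⊆ M.
def at_least_two(M, X):
    # illustrative quantifier: 'there are at least two'
    return len(X) >= 2

def complement(Q):
    # Q^c as defined in the text: <M, X> ∈ Q^c iff <M, M - X> ∈ Q
    return lambda M, X: Q(M, M - X)

Qc = complement(at_least_two)
M = {0, 1, 2, 3}
print(Qc(M, {0}))        # True: M - {0} = {1, 2, 3} has at least two elements
print(Qc(M, {0, 1, 2}))  # False: M - {0, 1, 2} = {3} does not

# Complementation is involutive: (Q^c)^c agrees with Q on every argument.
assert all(complement(Qc)(M, X) == at_least_two(M, X)
           for X in [set(), {0}, {0, 1}, {0, 1, 2}, M])
```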
How much further does the unique determinability of type $\langle 1 \rangle $ quantifiers extend? Several observations suggest that it stops at $\mathcal {Q}_{1}$: it already follows from results proven in [Reference Keisler53] that a quantifier $\mathcal {Q}$ is consistent with the complete axiomatization of $\mathscr {L}(Q_{1})$ – FOL extended with the quantifier $\mathcal {Q}_{1}$ – as long as $\mathcal {Q} = \{\langle M, X \rangle \: | \: \aleph _{\alpha } \leq |X|\}$ for some regular cardinal $\aleph _{\alpha }$. Hence, it is immediately apparent that $\mathcal {Q}_{1}$ is severely underdetermined by its associated consequence relation.Footnote 100 This underdetermination continues into higher cardinalities: based on results in [Reference Keisler, van Rootselaar and Staal52] it is possible to show that no quantifier of the form $\mathcal {Q} = \{\langle M, X \rangle \: | \: \aleph _{\alpha } \leq |X|\}$ for a strong singular limit cardinal $\aleph _{\alpha }$ is uniquely determinable by any consequence relation over its language. Making strong set-theoretic assumptions, the underdetermination can be shown to affect further quantifiers: let $\mathcal {Q}_{\alpha } = \{\langle M, X \rangle \: | \: \aleph _{\alpha } \leq |X|\}$. Then, assuming V = L, it is possible to show that no quantifier of the form $\mathcal {Q}_{\alpha + 1}$ is uniquely determinable by a consequence relation over its language.Footnote 101 Further quantifiers of different types corroborate the failure of unique determination (see [Reference Speitel94] for examples). What is particularly noteworthy is the coming apart of completeness and unique determinability, as demonstrated by $\mathcal {Q}_{0}$ and $\mathcal {Q}_{1}$. Whereas the logic of FOL extended with the quantifier $\mathcal {Q}_{0}$ is incomplete, $\mathcal {Q}_{0}$ is uniquely determined by $\models _{\mathcal {Q}_{0}}$. On the other hand, FOL extended with the quantifier $\mathcal {Q}_{1}$ possesses a complete recursively enumerable axiomatization, yet $\mathcal {Q}_{1}$ is not uniquely determined by $\models _{\mathcal {Q}_{1}}$. This not only demonstrates the limits of permutation-invariance in reducing admissible interpretations but also undermines the sometimes implicitly assumed access to reference and denotation by the inferentialist on the basis of completeness and soundness results. It furthermore supports the Carnapian claim that for a ‘full formalization of logic’ both completeness and categoricity are required, as these results demonstrate that completeness of a logical system is neither necessary nor sufficient for the categoricity of its logical notions.
5.3 Higher-order quantification
For the first-order case we are left with an interesting situation: while (Perm) successfully determines the intended meanings of the quantifiers of FOL and beyond, it does not suffice to resolve underdetermination for all generalized quantifiers, no matter how well-behaved their respective logics are. How does the strategy that brought at least partial success in the context of first-order languages fare with respect to second- and higher-order languages?
Murzi and Topey [Reference Murzi and Topey72] claim that the first-order strategy generalizes to cover second- as well as higher-order universal and existential quantifiers.Footnote 102 The non-standard, unintended, interpretations of the second-order quantifiers are constituted by so-called general or Henkin-interpretations, in which the quantifiers range over suitably closed subsets of the set of all relations over the first-order domain. These interpretations are inferentially indistinguishable from the intended full interpretation of the quantifiers, according to which they range over all relations over the first-order domain.Footnote 103 By what mechanism, then, might the full interpretation of the quantifiers be secured?
Murzi and Topey [Reference Murzi and Topey72] show that this can be achieved by means of the same mechanism that ultimately secured the standard interpretations of the first-order quantifiers. In the first-order case, what ensured that the standard interpretations of the universal and existential quantifier were determined was their permutation-invariance under permutations of their range (i.e., of the first-order domain). In the second-order case one should, analogously, demand permutation-invariance w.r.t. the appropriate range of the second-order quantifiers. This range does not consist of a domain M of a model $\mathcal {M}$, however, but rather of the powerset of M, i.e., $\mathcal {P}(M)$:
Notice that, while the permutation invariance of the interpretation of the first-order quantifier $\forall $ amounts to the invariance of its range under all permutations of the domain M, what the permutation invariance of the interpretation of $\forall _{2}$ requires is somewhat different. Since we are now quantifying over relations rather than objects, the range of $\forall _{2}$, when it binds a variable of arity n, must remain invariant under all permutations, not of M itself, but of $\mathcal {P}(M^{n})$ – i.e., the set of n-ary relations on M. [Reference Murzi and Topey72, fn. 37]
This, however, essentially reduces the second-order case to the situation of the first-order case. The relations over the domain are treated as objects in their own right that can be mapped directly, so to speak, to other relations. Just as in the first-order case, if any relation is left out of the range of the second-order quantifiers, that interpretation will not be permutation-invariant, by the result of [Reference Bonnay and Westerståhl12]. Hence, permutation-invariance of the second-order quantifiers ensures their intended, full interpretation: “insofar as permutation invariance is a necessary condition for logicality, and insofar as the second-order quantifiers are genuinely logical, the rules for the second-order quantifiers are simply incompatible with any restricted interpretation” [Reference Murzi and Topey72, p. 3411]. Moreover, this strategy can be replicated at any finite order, thereby guaranteeing standard interpretations of higher-order quantifiers as well.
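The combinatorial core of this reduction can be checked directly on a toy domain: under the full permutation group of any set, the only invariant collections of its elements are the empty one and the full one. The following Python sketch (our illustration, not from the cited work) verifies this for the four subsets of a two-element domain, i.e., for the possible ranges of a monadic second-order quantifier.

```python
from itertools import combinations, permutations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Toy first-order domain M; a monadic second-order quantifier ranges over
# some collection of elements of P(M).
M = {0, 1}
PM = powerset(M)  # the 4 subsets of M

def invariant_ranges(universe):
    """Collections of elements of `universe` fixed by every permutation
    of `universe` -- permuting P(M) directly, as in the second-order
    invariance demand discussed above."""
    invariant = []
    for candidate in powerset(universe):
        if all(frozenset(dict(zip(universe, p))[x] for x in candidate) == candidate
               for p in permutations(universe)):
            invariant.append(candidate)
    return invariant

inv = invariant_ranges(PM)
# Only the empty range and all of P(M) are invariant: omitting any subset
# of M from the quantifier's range destroys permutation invariance.
```

Since the degenerate empty range is excluded on independent grounds, the full range is the only invariant option, mirroring the argument in the text.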
Note, however, that the perspective here has, ever so slightly, shifted. For the permutation-invariance demand has been modified to apply directly to objects in $\mathcal {P}(M)$, rather than the permutations being induced via permutations of M. Not only is this a marked departure from the Tarskian picture of logicality [Reference Tarski100], but, given the inherent instability of the powerset operation – resulting from its non-absolute nature – and related questions concerning the notion of ‘all subsets’, one might wonder whether some further indeterminacy might be looming in the background here.
6 Outlook: Philosophical consequences of Carnap’s problem
Carnap [Reference Carnap20] demonstrated that several logical facts about the standard logical constants of propositional and first-order logic remain undecided and underdetermined by the usual formulations of these logics. The question Carnap raises is, however, much broader: whether inferential characterizations of a logic uniquely determine that logic’s intended semantics can be asked of any logical system, and Carnap’s Problem arises for most of them. Importantly, it arises even for systems for which the usual adequacy theorems, meant to ensure a match between proof-theoretic and model-theoretic characterizations of consequence, hold: some logically relevant aspects may nonetheless remain indeterminate. Carnap’s question therefore reveals an interesting perspective from which to consider what semantic information is contained in inferences, and which sorts of facts are left out.
This has philosophical ramifications. The fact that inferential patterns succeed in determining intended values for several constants at once, but not for any of the involved constants in isolation – as was the case for several of the intuitionistic operators, for example – throws into serious doubt the widely held assumption that the meaning of logical operators is atomistic, i.e., that it can be fully and sufficiently characterized independently of any other non-schematic expressions of the language. Moreover, the fact that some semantic universals succeed in significantly reducing underdetermination suggests that these semantic constraints are part and parcel of the (philosophical) theory underlying the logic. This impression is further strengthened by the observation that constraints that can be motivated on the basis of the philosophical interpretation of a logical theory lead to unique determinability of that logic’s operators (see, e.g., [Reference Tabakçı99]).
Carnap’s Problem is not merely a mathematical curiosity in the foundations of logic. It has significant repercussions for theories of meaning that rely on the methods of formal logic. It further impacts philosophical debates at the intersection of logic, mathematics, and philosophy. In [Reference Bonnay, Speitel, Sagi and Woods11], a criterion of logicality was motivated which used the insights provided by Carnap’s Problem to delineate a core of logical operations grounding a particularly stable and reliable set of inferential patterns. Speitel [Reference Speitel95] argued, on the basis of uniquely determinable notions, for the possibility of determinate access to the natural number structure. All these direct and indirect repercussions will, we hope, further stimulate interest in Carnap’s Problem, rehabilitating Carnap’s own ambitions to grant the same importance to the unique determinability of logical notions as was given to the completeness of logical systems.
7 Appendix
This short appendix clarifies the connection between Bonnay and Westerståhl’s [Reference Bonnay and Westerståhl12] main result and the way it is stated in the context of this article.Footnote 104
The following definitions are adaptations of the definitions from [Reference Bonnay and Westerståhl12]:
Definition 7.1. Let $\mathcal {M}$ be a first-order model with domain M and ${q_{\forall } \subseteq \mathcal {P}(M)}$. A weak model (for the universal quantifier) is a tuple $\langle \mathcal {M}, q_{\forall } \rangle $.
Definition 7.2. Let $\langle \mathcal {M}, q_{\forall } \rangle $ be a weak model. Satisfaction in $\langle \mathcal {M}, q_{\forall } \rangle $ is defined as usual, except for the quantifier clause: $\langle \mathcal {M}, q_{\forall } \rangle \models \forall x \varphi (x)$ iff $\{ a \in M \: | \: \langle \mathcal {M}, q_{\forall } \rangle \models \varphi (a) \} \in q_{\forall }$.
Definition 7.3. A model $\mathcal {M}$ is consistent with a consequence relation $\vdash $ iff, whenever $\Gamma \vdash \varphi $ and $\mathcal {M} \models \gamma $ for all $\gamma \in \Gamma $, then $\mathcal {M} \models \varphi $.
In the following, let $\mathscr {L}^{*}$ be a purely relational language that contains predicate variables.Footnote 105
Bonnay and Westerståhl [Reference Bonnay and Westerståhl12] then establish the following result.
Theorem 7.4 [Reference Bonnay and Westerståhl12]. A weak model $\langle \mathcal {M}, q_{\forall } \rangle $ is consistent with $\vdash _{FOL}$ (over $\mathscr {L}^{*}$) iff $q_{\forall }$ is a principal filter over M.
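To make the filter characterization concrete, here is a small Python sketch on a toy three-element domain (the helper names are ours, not the article’s): a principal filter over M is the collection of all supersets of some fixed generator set, and the standard reading of $\forall $ corresponds to the principal filter generated by M itself.

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def principal_filter(generator, M):
    """The principal filter over M generated by `generator`:
    all subsets of M that include `generator`."""
    return frozenset(X for X in powerset(M) if generator <= X)

M = frozenset({0, 1, 2})

# The standard interpretation of the universal quantifier, q = {M},
# is the principal filter generated by M itself.
standard_q = principal_filter(M, M)

# A non-standard interpretation that Theorem 7.4 still permits:
# 'for all' read as 'for all elements of {0, 1}' -- the principal
# filter generated by {0, 1}.
restricted_q = principal_filter(frozenset({0, 1}), M)
```

The second interpretation illustrates the residual underdetermination the theorem leaves open: it validates all of FOL while reading $\forall $ restrictedly.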
Arguably, the treatment of quantifier-interpretations via weak models introduces an asymmetry between the treatment of the propositional connectives and that of the quantifiers as ‘fixed’ expressions in the context of logical languages. For just as the interpretation of the propositional connectives was conceived of globally, as consisting of classes of valuations, so quantifiers should be thought of as Lindström quantifiers. For this reason, we adapt the setting as follows.
Definition 7.5. A global (type $\langle 1 \rangle $) quantifier is a class $\mathcal {Q} \subseteq \{\langle M, X \rangle \: | \: M \text { is a set and } X \subseteq M \}$.
Definition 7.6. Let $\mathcal {Q}$ be a global quantifier. The local quantifier on a model $\mathcal {M}$, $\mathcal {Q}^{\mathcal {M}}$, corresponding to $\mathcal {Q}$, is the set $\mathcal {Q}^{\mathcal {M}} = \{X \: | \: \langle M, X \rangle \in \mathcal {Q} \}$, where M is the domain of $\mathcal {M}$.
Note that the interpretation of $\mathcal {Q}^{\mathcal {M}}$ depends solely on $\mathcal {Q}$ and the domain of $\mathcal {M}$, and is independent of any further elements of the signature of $\mathcal {M}$. That is:
Observation 7.7. Let $\mathcal {Q}$ be a global quantifier and $\mathcal {M}_{1}, \mathcal {M}_{2}$ be models, s.t. $M_{1} = M_{2}$. Then $\mathcal {Q}^{\mathcal {M}_{1}} = \mathcal {Q}^{\mathcal {M}_{2}}$.
Definition 7.8. Let $\mathcal {Q}$ be a global quantifier interpreting $\forall $ and $\mathcal {M}$ be a model. $\mathcal {M} \models ^{\mathcal {Q}} \forall x \varphi (x)$ iff $\{ a \in M \: | \: \mathcal {M} \models ^{\mathcal {Q}} \varphi (a) \} \in \mathcal {Q}^{\mathcal {M}}$.
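On a finite model this satisfaction clause can be computed directly: form the extension of the formula and test membership in the local quantifier. A minimal Python sketch (function names are our illustrative assumptions):

```python
from itertools import combinations

def local_quantifier(global_q, M):
    """Q^M = { X | <M, X> in Q }, with the global quantifier Q given
    as a membership test on pairs <M, X>."""
    subsets = [frozenset(c) for r in range(len(M) + 1)
               for c in combinations(sorted(M), r)]
    return frozenset(X for X in subsets if global_q(frozenset(M), X))

# The standard global universal quantifier: Q = { <M, X> | X = M }.
standard_forall = lambda M, X: X == M

def satisfies_forall(M, q_M, phi):
    """M |=^Q forall x phi(x) iff {a in M | phi(a)} is in Q^M."""
    return frozenset(a for a in M if phi(a)) in q_M

M = {0, 1, 2}
q_M = local_quantifier(standard_forall, M)  # == { {0, 1, 2} }

everywhere = satisfies_forall(M, q_M, lambda a: a >= 0)   # True
not_everywhere = satisfies_forall(M, q_M, lambda a: a > 0)  # False
```

With the standard global quantifier, the local quantifier on every model is the singleton containing the domain, recovering the usual truth conditions for $\forall $.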
Definition 7.9. $\Gamma \models _{\mathcal {Q}} \varphi $ iff, for all $\mathcal {M}$, whenever $\mathcal {M} \models ^{\mathcal {Q}} \gamma $ for all $\gamma \in \Gamma $, then also $\mathcal {M} \models ^{\mathcal {Q}} \varphi $.
Definition 7.10. A (global) quantifier $\mathcal {Q}$ is consistent with a consequence relation $\vdash $ iff $\vdash \: \subseteq \: \models _{\mathcal {Q}}$.
Lemma 7.11. Let $\mathcal {Q}$ be a global quantifier interpreting $\forall $ and $\mathcal {M}$ be a model. $\mathcal {M} \models ^{\mathcal {Q}} \varphi $ iff $\langle \mathcal {M}, \mathcal {Q}^{\mathcal {M}} \rangle \models \varphi $.
Proof The proof proceeds by induction on the complexity of $\varphi $. The propositional cases are standard. For $\varphi := \forall x \psi (x)$ we have: $\mathcal {M} \models ^{\mathcal {Q}} \forall x \psi (x)$ iff $\{ a \in M \: | \: \mathcal {M} \models ^{\mathcal {Q}} \psi (a) \} \in \mathcal {Q}^{\mathcal {M}}$ iff (by the induction hypothesis) $\{ a \in M \: | \: \langle \mathcal {M}, \mathcal {Q}^{\mathcal {M}} \rangle \models \psi (a) \} \in \mathcal {Q}^{\mathcal {M}}$ iff $\langle \mathcal {M}, \mathcal {Q}^{\mathcal {M}} \rangle \models \varphi $.
Now let $\mathscr {L}$ be a purely relational first-order language (without predicate variables). Then:
Theorem 7.12. A (global) quantifier $\mathcal {Q}$ interpreting $\forall $ is consistent with $\vdash _{FOL}$ (over $\mathscr {L}$) iff, for all $\mathcal {M}$, $\mathcal {Q}^{\mathcal {M}}$ is a principal filter over M.
Proof The right-to-left direction follows directly from Bonnay and Westerståhl’s original proof: let $\mathcal {M}$ be a model and $\mathcal {Q}^{\mathcal {M}}$ a principal filter over M. Let $\Gamma \vdash _{FOL} \varphi $ and assume that $\mathcal {M} \models ^{\mathcal {Q}} \gamma $ for all $\gamma \in \Gamma $. By Lemma 7.11 it follows that $\langle \mathcal {M}, \mathcal {Q}^{\mathcal {M}} \rangle \models \gamma $ for all $\gamma \in \Gamma $. Then, by Theorem 7.4, ${\langle \mathcal {M}, \mathcal {Q}^{\mathcal {M}} \rangle \models \varphi} $ and thus, by Lemma 7.11 again, $\mathcal {M} \models ^{\mathcal {Q}} \varphi $ as well. Hence, $\vdash _{FOL} \: \subseteq \: \models _{\mathcal {Q}}$.
For the left-to-right direction assume that $\mathcal {Q}$, interpreting $\forall $, is consistent with $\vdash _{FOL}$, yet that there exists $\mathcal {M}$, s.t. $\mathcal {Q}^{\mathcal {M}}$ is not a principal filter over M. From Bonnay and Westerståhl’s result we know that this must be due to some set(s) being undefinable over $\mathscr {L}$ (as this possibility is ruled out when all sets are rendered definable through the addition of predicate variables). However, we can easily move to an expansion $\mathcal {M}^{*}$ of $\mathcal {M}$ where precisely these sets are named by predicate constants of the expanded signature. Since $M = M^{*}$ it follows from Observation 7.7 that $\mathcal {Q}^{\mathcal {M}} = \mathcal {Q}^{\mathcal {M}^{*}}$. Yet, as soon as sets that prevent $\mathcal {Q}^{\mathcal {M}}$ from being a principal filter become definable, we can find $\Gamma \cup \{\varphi \}$, s.t. $\Gamma \vdash _{FOL} \varphi $ but $\Gamma \not \models _{\mathcal {Q}} \varphi $ and thus $\vdash _{FOL} \: \not \subseteq \: \models _{\mathcal {Q}}$, i.e., $\mathcal {Q}$, interpreting $\forall $, is not consistent with $\vdash _{FOL}$, contrary to assumption.
As a concrete example, assume that $\mathcal {Q}^{\mathcal {M}}$ was not closed under supersets; i.e., assume there were sets $X, Y \subseteq M$, s.t. $X \subseteq Y$, $X \in \mathcal {Q}^{\mathcal {M}}$, but $Y \notin \mathcal {Q}^{\mathcal {M}}$. Since $\mathcal {Q}^{\mathcal {M}} = \mathcal {Q}^{\mathcal {M}^{*}}$ we also have that $X \in \mathcal {Q}^{\mathcal {M}^{*}}$, but $Y \notin \mathcal {Q}^{\mathcal {M}^{*}}$. Now let $\mathcal {M}^{*}$ be an expansion of $\mathcal {M}$ containing two additional predicate constants P, R, s.t. $P^{\mathcal {M}^{*}} = X$ and $R^{\mathcal {M}^{*}} = Y$. Note that $\forall x \varphi (x) \vdash _{FOL} \forall x (\varphi (x) \vee \psi (x))$. But now we have that $\mathcal {M}^{*} \models ^{\mathcal {Q}} \forall x Px$, yet $\mathcal {M}^{*} \not \models ^{\mathcal {Q}} \forall x (Px \vee Rx)$. Hence, $\forall x Px \not \models _{\mathcal {Q}} \forall x (Px \vee Rx)$ and thus $\mathcal {Q}$, interpreting $\forall $, is not consistent with $\vdash _{FOL}$.
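The counterexample just given can be run mechanically. The following Python sketch instantiates it on a three-element domain (the particular sets are our illustrative choices): an interpretation that is not closed under supersets verifies the premise $\forall x\, Px$ but refutes the FOL-derivable conclusion $\forall x\,(Px \vee Rx)$.

```python
M = frozenset({0, 1, 2})
X = frozenset({0, 1})      # X is in Q^M ...
Y = frozenset({0, 1, 2})   # ... X is a subset of Y, yet Y is not in Q^M
q_M = {X}                  # not closed under supersets, hence no principal filter

P_ext = X   # extension of the new predicate constant P in the expansion M*
R_ext = Y   # extension of R; note P_ext | R_ext == Y

def forall_Q(extension):
    """M* |=^Q forall x phi(x) iff the extension of phi is in Q^M."""
    return extension in q_M

premise = forall_Q(P_ext)             # forall x Px          -> True
conclusion = forall_Q(P_ext | R_ext)  # forall x (Px v Rx)   -> False
# FOL proves: forall x Px |- forall x (Px v Rx), so this Q is not
# consistent with |-_FOL.
```

The check confirms the failure of the superset-closure condition manifests as the failure of a single FOL-valid inference, exactly as in the proof above.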
Thus, instead of internalizing definability facts as Bonnay & Westerståhl do by means of including predicate variables in the language, the same effect is achieved in the current setting by conceiving of quantifier meaning as global and forcing $\mathcal {Q}^{\mathcal {M}}$ to be identical over all models with the same domain.
Acknowledgements
The author expresses special thanks to Denis Bonnay and Dag Westerståhl for discussion of and guidance in obtaining several of the results mentioned in this article, Gila Sher for feedback on the material and its relevance to questions of logicality, Elke Brendel and Constantin Brîncus for discussion, and S. Kaan Tabakçı for fruitful discussions and ongoing work on aspects related to Carnap’s Problem in multi-valued and non-classical settings. The author further thanks audiences of the Bologna-Bonn-Padua Research Seminar and the Logic Colloquium 2024 for helpful comments and discussion of the material of the article.
Funding
Work on this article was supported by an Argelander Starter-Kit Grant of the University of Bonn, as well as an Institute Vienna Circle Research Fellowship during a stay at the Institute Vienna Circle in 2025.
