1 Introduction
A logic
${\mathbf {L}}$
is said to enjoy variable sharing iff whenever
$A\to B$
is a theorem of
${\mathbf {L}}$
, then A and B share a propositional atom. An important early result in the history of relevant logic was Belnap’s proof that sublogics of the relevant logic
${\mathbf {R}}$
enjoyed variable sharing.Footnote
1
But variable sharing, it turns out, comes in many flavors. One example of this is strong variable sharing—also due to Belnap—which requires that the shared atom p have the same polarity in both A and B. Another, due to Brady [Reference Brady7], is depth relevance, which requires the shared atom to occur in the scope of the same number of conditionals in A as in B.Footnote
2
One can combine these to get the strong depth relevance of Logan [Reference Logan24]. Ferguson and Logan [Reference Ferguson and Logan18] introduced and defended two yet more restrictive versions of variable sharing, which were shown to be enjoyed by the weak relevant logics
$\mathbf {B}$
and
$\mathbf {BM}$
. We will present these logics in §2.
While variable sharing properties are of special interest to relevant logicians, they also arise in other areas of logic and philosophy. Anderson and Belnap, the progenitors of relevant logic, defend variable sharing properties as enforcing the existence of “common meaning content” [Reference Anderson and Belnap1, p. 33] between antecedent and consequent, an attractive interpretation of which is overlap of topic. If one follows, e.g., Belnap’s suggestion that
We would wish to reject
$A\rightarrow B$
if A and B have nothing to do with one another (Belnap [Reference Belnap4, p. 144])
then among the clearest cases of two sentences’ “having nothing to do with one another” is the case where the sentences are about different topics.Footnote 3 This link to topic has been receiving increased attention recently; correspondences between variable sharing properties and overlap of topic have been discussed by, among others, Yablo [Reference Yablo45] and Berto [Reference Berto5]. But much work in this area has been governed either explicitly or implicitly by two principles of topic transparency that can be described as follows:
Negation Transparency: For any sentence
$\Phi $
, the topics of
$\Phi $
and
$\neg \Phi $
are identical.
Conditional Transparency: For an intensional conditional
$\rightarrow $
and sentences
$\Phi ,\Psi $
the topic of
$\Phi \rightarrow \Psi $
is the fusion of the topic of
$\Phi $
and the topic of
$\Psi .$
In many quarters, these assumptions have been challenged. The deductive system
$\mathsf {AC}$
, introduced by Angell [Reference Angell2], implicitly violates Negation Transparency, something made clear by Fine’s [Reference Fine19] analysis of subject-matter in
$\mathsf {AC}$
, and Angell [Reference Angell, Norman and Sylvan3, p. 121] gives an argument to this effect. Meanwhile, Berto [Reference Berto5] concedes that topic transparency is questionable in intensional contexts—pushing against Conditional Transparency—an idea which is explored at length by Ferguson [Reference Ferguson12]. If variable sharing is understood, following Fine, Yablo, Berto, and others, as a mechanism requiring overlap of topic, then the adoption or rejection of such principles will be reflected in the types of variable sharing features to be adopted; rejecting Negation Transparency, for example, means that the appearance of p in an antecedent and
$\neg p$
in a consequent ought to be insufficient to guarantee “common meaning content.” (Indeed, one can argue that Negation Transparency is already implicitly rejected by
$\mathbf {R}$
, insofar as
$\mathbf {R}$
has the strong variable sharing property requiring that some atomic letter appears with the same sign.)
Ferguson and Logan [Reference Ferguson and Logan18] argued that the novel forms of variable sharing described in that paper were natural consequences of challenging these features. If one takes negation and the relevant conditional to be topic transformative, then ensuring relevance—in the sense of topic overlap—must require a stronger standard than to merely ensure the mutual occurrence of some atomic formula in antecedent and consequent positions. What is required instead is that the atomic formula shared between antecedent and consequent in any provable conditional occur in each position nested within the same configuration of instances of
$\neg $
and
$\rightarrow $
.
The variation of these properties to be examined in this work involves a simple insight concerning Conditional Transparency: the topic of a conditional is sensitive to chirality, that is, the contribution to the topic of a conditional made by an immediate subformula is influenced by whether it appears in the antecedent or consequent position. The opposite view can be summarized as a thesis of:
Chiral Transparency: For an intensional conditional
$\rightarrow $
and sentences
$\Phi ,\Psi $
the topic of
$\Phi \rightarrow \Psi $
is the same as that of
$\Psi \rightarrow \Phi .$
There is precedent for the rejection of such a thesis. With respect to its reflection in formal systems, Ferguson [Reference Ferguson and Logan18] showed that a subsystem of Parry’s analytic implication distinguishes the topics of formulas
$\varphi \rightarrow \psi $
and
$\psi \rightarrow \varphi $
, resulting in a variable sharing property that rejects the relevance of, e.g.,
$p\rightarrow q$
to
$q\rightarrow p$
. On a philosophical level, we also may recall a Yablovian defense of an emphasis on chirality in assessing the overall topic of an intensional conditional. Yablo [Reference Robles and Méndez36] makes a succinct but compelling argument that chirality influences topic in the case of predicates, noting that insofar as “Man Bites Dog” touches on a more interesting topic than the mundane “Dog Bites Man,” the topics of the two must be distinct. As argued in e.g., Ferguson [Reference Ferguson12, Reference Ferguson13], this Yablovian argument can be adapted to show that pairs of conditional sentences that are converses of one another might part ways with respect to topic as well.Footnote
4
As a consequence, a view of topic especially sensitive to the influence of intensional connectives will require a very strong flavor of variable sharing.
In addition to variable sharing properties, relevant logics, and indeed most common logics, are closed under uniform substitution of formulas for atoms.Footnote 5 Just as there are strengthenings of variable sharing, one can strengthen closure under substitutions by allowing non-uniform substitutions.Footnote 6 One such example is depth substitutions, studied by Logan [Reference Logan24], where the formula assigned to an (occurrence of an) atom can vary with the depth of that (occurrence of that) atom. Many relevant logics that enjoy the depth relevance property are, as Logan [Reference Logan24] showed, also closed under depth substitutions, and so are hyperformal, to use Logan’s term.Footnote 7 From the point of view of the logic, two occurrences of the same atom at different depths may be treated for all intents and purposes as though they were distinct atoms. Further developing the themes above, we will consider additional forms of non-uniform substitutions, giving rise to another form of hyperformality, and show that logics that are closed under these also enjoy related variable sharing properties.
We will close this introduction with a brief overview of the paper. In the next section, we will briefly define the standard relevant logics of interest,
${{\mathbf {BM}}} $
and
${{\mathbf {B}}}$
. Then, in §3, we will define the key concept of a lericone sequence. We show that
${{\mathbf {BM}}} $
and
${{\mathbf {B}}}$
are closed under certain classes of lericone-sensitive substitutions, and, following this, we discuss the philosophical significance of our results, and variable sharing properties more generally. In §4, we will study the largest sublogic of classical logic that is closed under lericone-sensitive substitutions, which we call
${{\mathbf {CLV}}}$
. We prove that
${{\mathbf {CLV}}}$
enjoys variable sharing, using a new proof technique, along with a related system
$\mathbf {CLV}^{\sim }$
closed under a more restricted class of substitutions. We show that the consequence relation for
${{\mathbf {CLV}}}$
is compact, and in §5, we give tableau systems for
${{\mathbf {CLV}}}$
and
$\mathbf {CLV}^{\sim }$
, whose adequacy we prove. Finally, we close in §6 with some directions for future work.
2 Setup
We work in a standard propositional language
$\mathcal {L}$
formulated using the set of atomic formulas
$\mathsf {At}=\{p_i\}_{i=1}^\infty $
and the connectives
$\land $
,
$\lor $
,
$\to $
, and
$\neg $
each with their expected arities. We identify (as a matter of convenience and not as a matter of philosophical conviction) logics with their sets of theorems. There are two logics we will be concerned with throughout:
$\mathbf {BM}$
and
$\mathbf {B}$
.
The logic
$\mathbf {BM}$
received its most extensive examination by Fuhrmann [Reference Fuhrmann20] and was discussed by Priest and Sylvan [Reference Priest and Sylvan32].Footnote
8
It has received a spate of recent interest, being mentioned not only by Ferguson and Logan [Reference Standefer, Sedlár, Standefer and Tedder40], but also by Standefer [Reference Standefer38, Reference Standefer39], Tedder and Bilková [Reference Tedder and Bilková43], and Ferenz and Tedder [Reference Ferenz and Tedder10]. It is axiomatized as follows:
-
(A1)
$A\to A$
-
(A2)
$A\land B\to A,A\land B\to B$
-
(A3)
$A\to A\lor B, B\to A\lor B$
-
(A4)
$A\land (B\lor C)\to (A\land B)\lor (A\land C)$
-
(A5)
$((A\to B)\land (A\to C))\to (A\to (B\land C))$
-
(A6)
$((A\to C)\land (B\to C))\to ((A\lor B)\to C)$
-
(A7)
$\neg (A\land B)\to (\neg A\lor \neg B)$
-
(A8)
$(\neg A\land \neg B)\to \neg (A\lor B)$
-
(R1)
$A,B\Rightarrow A\land B$
-
(R2)
$A,A\to B\Rightarrow B$
-
(R3)
$A\to B\Rightarrow \neg B\to \neg A$
-
(R4)
$A\to B,C\to D\Rightarrow (B\to C)\to (A\to D).$
The logic
$\mathbf {B}$
is
$\mathbf {BM}$
with one new axiom and one new rule:
-
(A9)
$\neg \neg A\to A$
-
(R5)
$A\to \neg B\Rightarrow B\to \neg A.$
Note that with these additions, Axioms A7 and A8 as well as rule R3 are superfluous.
$\mathbf {B}$
was originally thought of, by Routley et al. [Reference Routley, Plumwood, Meyer and Brady37], as something like a basement level relevant logic. It has received a good deal of philosophical attention over the years, and plays an important role in both Read [Reference Read34] and Logan [Reference Logan27].
3 Lericones
Definition 1. We define the set of lericone sequences,
$\mathsf {LRCN}$
, as follows:
-
• The empty sequence,
$\varepsilon $ , is a lericone sequence.
-
• The one-member sequence c is a lericone sequence.
-
• If
$\overline {x}$ is a lericone sequence, so are
$l\,\overline {x}$ ,
$r\overline {x}$ , and
$n\overline {x}$ .
-
• Nothing else is a lericone sequence.
We let
$\mathsf {LRN}$
be the subset of all c-free members of
$\mathsf {LRCN}$
—equivalently,
$\mathsf {LRN}$
is the set of all finite sequences from
$\{l,r,n\}$
. The word ‘lericone’ is a mnemonic portmanteau: It stands for ‘Left, Right, Conditional, Negation’. This connection is explained by the next definition.
Definition 2. The set of partial lericone parsing trees of a formula A is defined as follows:
-
(1)
$\underset {\varepsilon }{A}$ is a partial lericone parsing tree for A.
-
(2) Given a (possibly empty) tree T and
$*\in \{\land ,\lor \}$ , if
is a partial lericone parsing tree for A, then so is
-
(3) Given a (possibly empty) tree T, if
is a partial lericone parsing tree for A, then so is
-
(4) Given a (possibly empty) tree T and
$\overline {x}\neq \varepsilon $ , if
is a partial lericone parsing tree for A, then so is
-
(5) Given a (possibly empty) tree T, if
is a partial lericone parsing tree for A, then so is
Definition 3.
The lericone parsing tree for A,
$\mathsf {lpt}(A)$
is the maximal partial lericone parsing tree for A.
Clause (3) in Definition 2 requires a bit of comment. We are, as comparison of clauses (3) and (4) makes clear, treating conditionals differently depending on whether they occur in a purely extensional context—which is to say, as subformulas of a formula whose lericone sequence is
$\varepsilon $
—or in an intensional context. There are a number of ways to justify this difference in treatment.
One justification is this: variable sharing results just are results about how atomic subformulas of A and atomic subformulas of B are related when
$A\to B$
is a theorem. What variable sharing results aren’t are results about atomic formulas that occur as subformulas of the formula A in
$A\to B$
and atomic formulas that occur as subformulas of the formula B in
$A\to B$
. As an example, consider the variable sharing result proved in Entailment Volume 1 which, with slight changes of vocabulary, is the following.
Theorem 1 (Anderson and Belnap [Reference Anderson and Belnap1]).
If
$A\to B$
is a theorem of the logic E, then some variable occurs as an antecedent part of both A and B, or else as a consequent part of both A and B.
Note that as stated, this theorem ignores the conditional connecting A and B in
$A\to B$
: the theorem isn’t that there are occurrences of some atomic formula p in A and in B such that, in the formula
$A\to B$
, those occurrences are either both positive or both negative. Indeed, that result is false, and trivially so:
$p\to p$
is a theorem of E but the first p occurs as an antecedent part of
$p\to p$
and the second p occurs as a consequent part of
$p\to p$
.
Thus, in writing rules (3) and (4) so as to ignore conditionals that occur in extensional contexts, we’re not doing anything unusual—this is just the sort of thing one does when providing strong variable sharing results.
Of course, it being the thing that’s done is different from it being the thing that should be done. One might wonder whether the latter holds. We claim it does, but will have to delay the explanation until §3.3.
Example 1. Each of the following is a partial lericone parsing tree for
$\neg p\to (p\to q)$
; the rightmost one is
$\mathsf {lpt}(\neg p\to (p\to q))$
:

When we need to talk about an occurrence of B as a subformula of A, we will often write
$A[B]$
as shorthand for ‘the occurrence of B as a subformula of A that we’re interested in’ since otherwise the discussion will become unwieldy in places. If A has been written out, we will generally specify (when specification is needed) which occurrence we’re interested in by underlining the one we’re after. Note that the sequences occurring under the formulas in
$\mathsf {lpt}(A)$
implicitly define a function mapping each occurrence of B as a subformula of A to a lericone sequence. We write ‘
$\mathsf {lrcn}(A[B])$
’ to mean ‘the lericone sequence assigned via this function to the occurrence of B we are interested in’.
Example 2. We can list the value of the
$\mathsf {lrcn}$
function at each atom-occurrence in the preceding example as follows:
-
•
$\mathsf {lrcn}(\neg \underline {p}\to (p\to q))=nc$
-
•
$\mathsf {lrcn}(\neg p\to (\underline {p}\to q))=lc$
-
•
$\mathsf {lrcn}(\neg p\to (p\to \underline {q}))=rc.$
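For readers who find an operational description helpful, the following sketch computes the $\mathsf {lrcn}$ value of every atom occurrence. It is our own illustration, not part of the paper's formal apparatus: formulas are encoded as nested Python tuples, with '&', 'v', '~', and '->' standing in for $\land $, $\lor $, $\neg $, and $\to $, and atoms encoded as strings.

```python
# Illustrative sketch of Definitions 2-3 (our encoding, not the paper's):
# formulas are nested tuples, e.g. ('->', ('~', 'p'), ('->', 'p', 'q'))
# for the formula of Example 1; atoms are strings.

def lrcn_occurrences(formula, seq=""):
    """Yield (atom, lericone sequence) for each atom occurrence, left to right.
    Sequences are strings over {'l','r','c','n'}, innermost connective first."""
    if isinstance(formula, str):                 # an atom occurrence
        yield (formula, seq)
        return
    op, *args = formula
    if op in ('&', 'v'):                         # extensional: sequence unchanged
        for arg in args:
            yield from lrcn_occurrences(arg, seq)
    elif op == '~':                              # negation prepends 'n'
        yield from lrcn_occurrences(args[0], 'n' + seq)
    elif op == '->':
        if seq == "":                            # conditional in extensional context
            yield from lrcn_occurrences(args[0], 'c')
            yield from lrcn_occurrences(args[1], 'c')
        else:                                    # intensional context: prepend 'l'/'r'
            yield from lrcn_occurrences(args[0], 'l' + seq)
            yield from lrcn_occurrences(args[1], 'r' + seq)

# Example 2 recomputed:
A = ('->', ('~', 'p'), ('->', 'p', 'q'))
print(list(lrcn_occurrences(A)))   # [('p', 'nc'), ('p', 'lc'), ('q', 'rc')]
```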
Definition 4. For
$\overline {x}\in \mathsf {LRN}$
, we define the c-transform of
$\overline {x}$
,
$t(\overline {x})$
as follows:
$t(\overline {x})=\left \{\begin {array}{rl} \varepsilon & \text { if }\ \overline {x}=\varepsilon \\ \overline {y}c & \text { if }\ \overline {x}=\overline {y}l\ \text { or }\ \overline {x}=\overline {y}r \\ \overline {y}n & \text { if }\ \overline {x}=\overline {y}n \end {array}\right .$
Example 3. Write
$A[B]\to C$
to mean that the instance of B we are interested in occurs in
$A\to C$
as a subformula of A, and similarly for
$C\to A[B]$
, which we will distinguish from
$(C\to A)[B]$
, where we do not take B’s location to be specified. Here’s an observation: if
$\mathsf {lrcn}(A[B]\to C)=\overline {x}c$
or
$\mathsf {lrcn}(C\to A[B])=\overline {x}c$
, then
$\mathsf {lrcn}(A[B])=t(\overline {x})$
. We won’t prove this—though it’s a good exercise and can be proved by a straightforward induction on A—and will instead examine a few examples.
-
Example 3a: Here we take
$A[B]$ to be
$\underline {p}\to p$ and C to be
$p$ . Now note that
$\mathsf {lrcn}((\underline {p}\to p)\to p)=lc$ while
$\mathsf {lrcn}(\underline {p}\to p)=c$ . Since
$c=t(l)$ , this confirms the claim.
-
Example 3b: Here we take
$A[B]$ to be
$\neg (p\to \underline {p})$ and C to be
$q$ . Now note that
$\mathsf {lrcn}(q\to \neg (p\to \underline {p}))=rnc$ while
$\mathsf {lrcn}(\neg (p\to \underline {p}))=rn$ . Since
$t(rn)=rn$ , this confirms the claim.
We leave the proof of the following fact about c-transforms to the reader.
Lemma 1.
-
•
$t(l\overline {x})=\left \{\begin {array}{rl} c & \text { if }\ \overline {x}=\varepsilon \\ lt(\overline {x}) & \text { otherwise } \end {array}\right .$
-
•
$t(r\overline {x})=\left \{\begin {array}{rl} c & \text { if }\ \overline {x}=\varepsilon \\ rt(\overline {x}) & \text { otherwise } \end {array}\right .$
-
•
$t(n\overline {x})=nt(\overline {x})$ .
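For instance, Lemma 1 gives $t(nrl)=nt(rl)=nrt(l)=nrc$ and, together with $t(rn)=rn$ from Example 3b, $t(lrn)=lt(rn)=lrn$; in effect, the rightmost letter of a sequence is replaced by c when it is l or r, and left alone when it is n.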
Much of our interest in this paper concerns substitutions that are sensitive to lericones. To help settle notation and intuitions, we will first define (plain) substitutions.
Definition 5. (Plain substitutions)
A plain substitution is a function
$\mathsf {At}\to \mathcal {L}$
. Given a plain substitution
$\sigma $
, we extend it to a function
$\mathcal {L}\to \mathcal {L}$
(which we will also call
$\sigma $
) as follows:
-
•
$\sigma (A\land B)=\sigma (A)\land \sigma (B)$ .
-
•
$\sigma (A\lor B)=\sigma (A)\lor \sigma (B)$ .
-
•
$\sigma (\neg A)=\neg \sigma (A)$ .
-
•
$\sigma (A\to B)=\sigma (A)\to \sigma (B)$
Definition 6. (Lericone substitutions)
A lericone substitution is a function
$\mathsf {LRCN}\times \mathsf {At}\to \mathcal {L}$
. Given a lericone substitution
$\sigma $
we extend it to a function
$\mathsf {LRCN}\times \mathcal {L}\to \mathcal {L}$
(which we also call
$\sigma $
) as follows:
-
•
$\sigma (\overline {x},A\land B)=\sigma (\overline {x},A)\land \sigma (\overline {x},B)$ .
-
•
$\sigma (\overline {x},A\lor B)=\sigma (\overline {x},A)\lor \sigma (\overline {x},B)$ .
-
•
$\sigma (\overline {x},\neg A)=\neg \sigma (n\overline {x},A)$ .
-
•
$\sigma (\overline {x},A\to B)=\left \{ \begin {array}{@{}rl} \sigma (c,A)\to \sigma (c,B) & \text { if }\ \overline {x}=\varepsilon \\ \sigma (l\overline {x},A)\to \sigma (r\overline {x},B) & \text { otherwise}. \end {array}\right .$
We can see plain substitutions as a special class of lericone substitutions, namely those in which
$\sigma (\overline {x}, A)=\sigma (\overline {y}, A)$
, for all
$\overline {x},\overline {y}\in \mathsf {LRCN}$
.
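Continuing the tuple encoding used in the sketch above (again our own illustration, not the paper's apparatus), the extension of a lericone substitution in Definition 6 can be rendered as follows; a lericone substitution is modelled as a Python function from a sequence and an atom to a formula.

```python
# Sketch of Definition 6 under the tuple encoding introduced earlier.
# A lericone substitution is a function sub(seq, atom) -> formula.

def apply_lericone(sub, seq, formula):
    """Extension of the lericone substitution sub to arbitrary formulas."""
    if isinstance(formula, str):                      # atom: use sub directly
        return sub(seq, formula)
    op, *args = formula
    if op in ('&', 'v'):
        return (op, apply_lericone(sub, seq, args[0]),
                    apply_lericone(sub, seq, args[1]))
    if op == '~':
        return ('~', apply_lericone(sub, 'n' + seq, args[0]))
    if op == '->':
        if seq == "":                                 # top-level / extensional context
            return ('->', apply_lericone(sub, 'c', args[0]),
                          apply_lericone(sub, 'c', args[1]))
        return ('->', apply_lericone(sub, 'l' + seq, args[0]),
                      apply_lericone(sub, 'r' + seq, args[1]))

# A plain substitution is the special case that ignores the sequence argument:
plain = lambda seq, p: ('~', p)      # send every atom to its negation
print(apply_lericone(plain, "", ('->', 'p', 'p')))
# ('->', ('~', 'p'), ('~', 'p'))
```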
We will adapt an important definition from Leach-Krouse et al. [Reference Leach-Krouse, Logan and Worley23], that of atomic injective substitutions. As we will see below, atomic injective substitutions play a crucial role in our results.
Definition 7. We say that a lericone substitution is atomic when its range is a subset of
$\mathsf {At}$
. We say that a lericone substitution is atomic injective when it is atomic and injective as a two-place function.
It’s important to note that all it means when we say a given lericone substitution is atomic is that it maps each member of
$\mathsf {LRCN}\times \mathsf {At}$
to a member of
$\mathsf {At}$
. It most definitely does not mean that it(s extension) maps each member of
$\mathsf {LRCN}\times \mathcal {L}$
to a member of
$\mathsf {At}$
—indeed, a moment’s reflection makes it clear that no lericone substitution does that. We also emphasize that, for a given atomic injective lericone substitution
$\iota $
, if
$\overline {x}\neq \overline {y}$
, then
$\iota (\overline {x},p)\neq \iota (\overline {y}, p)$
.
It will be useful on occasion in what follows to have a specific atomic injective substitution on hand. So we construct one via what is essentially Gödel coding as follows. First, let
$g(l)=1$
,
$g(r)=2$
,
$g(c)=3$
, and
$g(n)=4$
. Let
$\pi _i$
be the ith prime. For
$\overline {x}=x_1\dots x_n\in \mathsf {LRCN}$
, let
$g(\overline {x})=\prod _{i=2}^{n+1}\pi _i^{g(x_{i-1})}$
. Finally, define
$g(\overline {x},p_i)=p_{2^ig(\overline {x})}$
. We take it to be clear that g is atomic injective.
Example 4. Consider the formula
$\neg p_1\to (p_1\to p_1)$
. The lericone sequences for the three occurrences of
$p_1$
, from left to right, are
$nc$
,
$lc$
, and
$rc$
, respectively. Consequently,
$g(nc,p_1)=p_{2\cdot 3^{4}\cdot 5^{3}}=p_{20250},\qquad g(lc,p_1)=p_{2\cdot 3\cdot 5^{3}}=p_{750},\qquad g(rc,p_1)=p_{2\cdot 3^{2}\cdot 5^{3}}=p_{2250},$
with the result that the three occurrences of
$p_1$
are mapped to different atoms.
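The arithmetic behind Example 4 can be checked mechanically. The sketch below is ours; it assumes the same string encoding of sequences as in the earlier sketches, with atoms given by their indices.

```python
# Sketch of the Gödel-style atomic injective substitution g from the text.
LETTER_CODE = {'l': 1, 'r': 2, 'c': 3, 'n': 4}

def nth_prime(n):
    """Return the n-th prime, 1-indexed: 2, 3, 5, 7, ..."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def g_index(seq, i):
    """Index j such that g(seq, p_i) = p_j, following the definition in the text."""
    code = 1
    for k, letter in enumerate(seq, start=2):    # the product runs over pi_2, pi_3, ...
        code *= nth_prime(k) ** LETTER_CODE[letter]
    return 2 ** i * code

# Example 4 recomputed: the three occurrences of p_1 in ~p_1 -> (p_1 -> p_1)
print(g_index('nc', 1), g_index('lc', 1), g_index('rc', 1))   # 20250 750 2250
```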
Lemma 2. If
$\sigma $
and
$\tau $
are lericone substitutions and for all
$p\in \mathsf {At}$
and all
$\overline {x}\in \mathsf {LRN}$
we have that
$\tau (\overline {x}c,p)=\sigma (t(\overline {x}),p)$
, then in fact for all
$A\in \mathcal {L}$
and all
$\overline {x}\in \mathsf {LRN}$
we have that
$\tau (\overline {x}c,A)=\sigma (t(\overline {x}),A)$
.
Proof. By induction on A. The hypothesis gives the base case. The inductive cases for conjunction and disjunction are immediate from IH.
For negations note that
$\tau (\overline {x}c,\neg A)=\neg \tau (n\overline {x}c,A)$
. But by IH, this is
$\neg \sigma (t(n\overline {x}),A)=\neg \sigma (nt(\overline {x}),A)=\sigma (t(\overline {x}),\neg A)$
.
For conditionals first note that
$\tau (\overline {x}c,A\to B)=\tau (l\overline {x}c,A)\to \tau (r\overline {x}c,B)$
, and by IH this is
$\sigma (t(l\overline {x}),A)\to \sigma (t(r\overline {x}),B)$
. We now consider two cases.
If
$\overline {x}=\varepsilon $
, then by the above equality,
$\tau (c,A\to B)=\sigma (t(l),A)\to \sigma (t(r), B)$
. But
$t(l)=t(r)=c$
, so this becomes
$\sigma (c,A)\to \sigma (c,B)=\sigma (\varepsilon ,A\to B)=\sigma (t(\varepsilon ),A\to B)$
as required.
If
$\overline {x}\neq \varepsilon $
, then by the above equality,
$\tau (\overline {x}c,A\to B)=\sigma (t(l\overline {x}),A)\to \sigma (t(r\overline {x}),B)$
. By Lemma 1,
$t(l\overline {x})=lt(\overline {x})$
and
$t(r\overline {x})=rt(\overline {x})$
. So this becomes
$\sigma (lt(\overline {x}),A)\to \sigma (rt(\overline {x}),B)=\sigma (t(\overline {x}),A\to B)$
as required.
Definition 8. If
$\sigma $
is a lericone substitution, then we define the lericone substitution
$t(\sigma )$
as follows:
$t(\sigma )(\overline {y},p)=\left \{\begin {array}{rl} \sigma (t(\overline {x}),p) & \text { if }\ \overline {y}=\overline {x}c\ \text {for some }\overline {x}\in \mathsf {LRN} \\ \sigma (\overline {y},p) & \text { otherwise } \end {array}\right .$
Corollary 1. (Corollary to Lemma 2)
For all
$\overline {x}\in \mathsf {LRN}$
and all
$A\in \mathcal {L}$
,
$t(\sigma )(\overline {x}c,A)=\sigma (t(\overline {x}),A)$
.
Note that in the definition of
$t(\sigma )$
, nothing important is happening with the ‘otherwise’ clause and in fact this can be chosen arbitrarily with no changes in what follows. Note also that
$t(\sigma )$
is, in effect, a lericone substitution that simply treats each sequence as though its terminal ‘c’, if it has one, isn’t there. So it, in a certain sense, ignores parts of the sequence it’s being handed. We will also need to do the opposite, and add in an additional sequence sometimes. To that end, we have the following lemma whose proof is sufficiently like the proof of Lemma 2 that we leave it to the reader.
Lemma 3. Let
$\sigma $
and
$\tau $
be lericone substitutions,
$\overline {y}\in \mathsf {LRN}$
, and let
$\tau (\overline {x}c,p)=\sigma (\overline {xy}c,p)$
for all
$p\in \mathsf {At}$
and
$\overline {x}\in \mathsf {LRN}$
. Then for all
$\overline {x}\in \mathsf {LRN}$
and all A,
$\tau (\overline {x}c,A)=\sigma (\overline {xy}c,A)$
.
Definition 9. If
$\sigma $
is a lericone substitution and
$\overline {y}\in \mathsf {LRN}$
, then we define the lericone substitution
$\sigma ^{\overline {y}}$
as follows:
$\sigma ^{\overline {y}}(\overline {z},p)=\left \{\begin {array}{rl} \sigma (\overline {x}\,\overline {y}c,p) & \text { if }\ \overline {z}=\overline {x}c\ \text {for some }\overline {x}\in \mathsf {LRN} \\ \sigma (\overline {z},p) & \text { otherwise } \end {array}\right .$
Corollary 2. (Corollary to Lemma 3)
For all
$\overline {x}\in \mathsf {LRN}$
and all
$A\in \mathcal {L}$
,
$\sigma ^{\overline {y}}(\overline {x}c,A)=\sigma (\overline {xy}c,A)$
.
Theorem 2. If
$A'$
is a theorem of
$\mathbf {BM}$
, then for all lericone substitutions
$\sigma $
,
$\sigma (\varepsilon ,A')$
is a theorem of
$\mathbf {BM}$
as well.
Proof. By induction on the derivation of
$A'$
.Footnote
9
One can quickly verify by inspection that for each axiom, applying a lericone substitution returns an instance of the same axiom. We will look at two examples to illustrate. To see that applying lericone substitutions to (A7) results in more instances of (A7), note that the
$\mathsf {LRCN}$
-sequence for each displayed A and B is
$nc$
, so
$\sigma (\varepsilon , \neg (A\land B)\to (\neg A\lor \neg B))=\neg (\sigma (nc, A)\land \sigma (nc, B))\to (\neg \sigma (nc, A)\lor \neg \sigma (nc, B))$
, which is an instance of (A7). Similarly, for (A5), note that the lericone sequences for the displayed occurrences of A are all
$lc$
and the lericone sequences for the displayed occurrences of B and C are all
$rc$
. By reasoning similar to that for (A7), the result of applying a lericone substitution to (A5) is another instance of (A5).
The case where the last rule applied in the proof was R1 is immediate. Suppose the last rule was R2, and let
$\sigma $
be a lericone substitution. By IH applied to A,
$\sigma (\varepsilon ,A)$
is a theorem of
$\mathbf {BM}$
.Footnote
10
By IH applied to
$A\to B$
,
$t(\sigma )(\varepsilon ,A\to B)$
is a theorem of
$\mathbf {BM}$
. So
$t(\sigma )(c,A)\to t(\sigma )(c,B)$
is a theorem of
$\mathbf {BM}$
. But by Lemma 2,
$t(\sigma )(c,A)=\sigma (\varepsilon ,A)$
and
$t(\sigma )(c,B)=\sigma (\varepsilon ,B)$
. So
$\sigma (\varepsilon ,A)\to \sigma (\varepsilon ,B)$
is a theorem of
$\mathbf {BM}$
. Thus,
$\sigma (\varepsilon ,B)$
is a theorem of
$\mathbf {BM}$
, as required.
Suppose the last rule applied was R3 and let
$\sigma $
be a lericone substitution. By IH,
$\sigma ^n(\varepsilon ,A\to B)$
is a theorem of
$\mathbf {BM}$
. Thus,
$\sigma ^n(c,A)\to \sigma ^n(c,B)$
is a theorem of
$\mathbf {BM}$
. But then by Lemma 3,
$\sigma (nc,A)\to \sigma (nc,B)$
is a theorem of
$\mathbf {BM}$
, whence so also is
$\neg \sigma (nc,B)\to \neg \sigma (nc,A)=\sigma (\varepsilon ,\neg B\to \neg A)$
as required.
Suppose the last rule applied was R4 and let
$\sigma $
be a lericone substitution. By IH, both
$\sigma ^l(\varepsilon ,A\to B)$
and
$\sigma ^r(\varepsilon ,C\to D)$
are theorems of
$\mathbf {BM}$
. Thus,
$\sigma ^l(c,A)\to \sigma ^l(c,B)=\sigma (lc,A)\to \sigma (lc,B)$
and
$\sigma ^r(c,C)\to \sigma ^r(c,D)=\sigma (rc,C)\to \sigma (rc,D)$
are theorems. But then so is the following:
$(\sigma (lc,B)\to \sigma (rc,C))\to (\sigma (lc,A)\to \sigma (rc,D)).$
But this just is
$\sigma (\varepsilon ,(B\to C)\to (A\to D))$
.
Corollary 3. If
$A\to B$
is a theorem of
$\mathbf {BM}$
, then there is a variable
$p$
, an
$\overline {x}\in \mathsf {LRN}$
, and occurrences
$A[p]$
of
$p$
in A and
$B[p]$
of
$p$
in B so that
$\mathsf {lrcn}(A[p]\to B)=\mathsf {lrcn}(A\to B[p])=\overline {x}c$
. Thus,
$p$
occurs under the same lrn-sequence in both A and B.
Proof. Let
$A\to B$
be a theorem of
$\mathbf {BM}$
and g be the Gödel substitution defined above. By Theorem 2,
$g(\varepsilon ,A\to B)$
is a theorem of
$\mathbf {BM}$
. So
$g(\varepsilon ,A\to B)=g(c,A)\to g(c,B)$
is also a theorem of $\mathbf {R}$, since $\mathbf {BM}$ is a sublogic of $\mathbf {R}$. So, by the variable sharing property of $\mathbf {R}$, some variable, say $p_{2^ig(\overline {x})}$, occurs in both
$g(c,A)$
and
$g(c,B)$
. It follows that
$p_i$
occurs under the
$\mathsf {LRN}$
-sequence
$\overline {x}$
in both A and B and thus that
$\mathsf {lrcn}(A[p]\to B)=\mathsf {lrcn}(A\to B[p])=\overline {x}c$
.
We note here that this proof of Corollary 3 is parasitic on the existing variable sharing result for
$\mathbf {R}$
. We will show below that this parasitism is not essential.
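To see how much this adds to ordinary variable sharing, consider the contraposition formula $(p\to q)\to (\neg q\to \neg p)$. Its antecedent and consequent share both atoms, but p occurs under the $\mathsf {LRN}$-sequences l and $nr$ and q under r and $nl$, so no atom occurs under the same sequence on both sides; Corollary 3 thus tells us at once that this formula is not a theorem of $\mathbf {BM}$ (the unprovability of this formula in $\mathbf {B}$ is discussed further in §3.2).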
3.1
${{\mathbf {B}}}$
is Closed Under Faithful Lericone Substitutions
$\mathbf {BM}$
, as mentioned, is ignominiously located in the subbasement of the relevant world. Here, we will show that a very mild modification of the above result lets us push the result from the subbasement to the basement proper.
Definition 10. Define
$\sim '$
to be the relation containing all and only pairs of the form
$\langle \overline {x}nn\overline {y}, \overline {xy} \rangle $
. Let
$\sim $
be the equivalence relation generated by
$\sim '$
,
$\underline {LRCN}$
be the set of equivalence classes of
$\mathsf {LRCN}$
under
$\sim $
, and
$\underline {LRN}$
be the set of equivalence classes of
$\mathsf {LRN}$
under
$\sim $
. We say that a lericone substitution
$\sigma $
is faithful when
$\sigma (\overline {x},A)=\sigma (\overline {y},A)$
whenever
$\overline {x}\sim \overline {y}$
.
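To illustrate: $nnc\sim c$ and $lnnrc\sim lrc$, while $nc\not \sim c$. Since $\sim '$ simply deletes an adjacent pair of n’s, two sequences are $\sim $-equivalent exactly when they reduce to the same sequence once adjacent pairs of n’s are repeatedly cancelled; a faithful substitution is thus one that cannot see the difference made by inserting or removing a double negation.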
Theorem 3. If
$A'$
is a theorem of
$\mathbf {B}$
, then for all faithful lericone substitutions
$\sigma $
,
$\sigma (\varepsilon ,A')$
is a theorem of
$\mathbf {B}$
as well.
Proof. By induction on the proof of
$A'$
. For axioms and rules in
$\mathbf {BM}$
, the result follows from Theorem 2. We’re thus left to check the two new additions.
For the new axiom, we let
$\sigma $
be a faithful lericone substitution and compute as follows:
$\sigma (\varepsilon ,\neg \neg A\to A)=\sigma (c,\neg \neg A)\to \sigma (c,A)=\neg \sigma (nc,\neg A)\to \sigma (c,A)=\neg \neg \sigma (nnc,A)\to \sigma (c,A).$
But since
$\sigma $
is faithful,
$\sigma (nnc,A)=\sigma (c,A)$
. Thus, the final line of the computation again records an instance of a
$\mathbf {B}$
-axiom.
Now suppose the last rule used in the proof was R5. Then
$A'=D\to \neg C$
, and the last step in the proof concluded this from
$C\to \neg D$
. Let
$\sigma $
be a faithful lericone substitution. By IH,
$\sigma ^n(\varepsilon ,C\to \neg D)$
is a theorem of
$\mathbf {B}$
so
$\sigma ^n(c,C)\to \neg \sigma ^n(nc,D)$
is a theorem of
$\mathbf {B}$
. Thus,
$\sigma (nc,C)\to \neg \sigma (nnc,D)$
is a theorem of
$\mathbf {B}$
. So by an application of R5,
$\sigma (nnc,D)\to \neg \sigma (nc,C)$
is a theorem of
$\mathbf {B}$
. Since
$\sigma $
is faithful,
$\sigma (nnc,D)=\sigma (c,D)$
. Thus,
$\sigma (c,D)\to \neg \sigma (nc,C)=\sigma (\varepsilon ,D\to \neg C)$
is a theorem of
$\mathbf {B}$
as required.
We end by stating without proof the corresponding variable sharing result.
Corollary 4. If
$A\to B$
is a theorem of
$\mathbf {B}$
, then there is a variable
$p$
, an
$\overline {x}\in \underline {LRN}$
, and occurrences
$A[p]$
of
$p$
in A and
$B[p]$
of
$p$
in B so that
$\mathsf {lrcn}(A[p]\to B)\sim \mathsf {lrcn}(A\to B[p])\sim \overline {x}c$
. Thus,
$p$
occurs under equivalent
$\mathsf {LRN}$
-sequences in both A and B.
3.2 Philosophical Reflections
Variable sharing is multifarious. The above results complement a myriad of distinct and increasingly fine-grained notions of variable sharing properties. It’s natural to wonder whether the proliferation is justified. We think it is, and will review a handful of reasons for thinking this to be so.
For one, variable sharing was historically taken to be a necessary but not sufficient condition on being a relevant logic.Footnote 11 Standefer [Reference Standefer, Sedlár, Standefer and Tedder40] has proposed using variable sharing, with one other plausible condition, to define the class of relevant logics. If we take this on board, different types of variable sharing would seem to give us different flavors of relevance.
In a bit more detail, Standefer [Reference Standefer, Sedlár, Standefer and Tedder40] proposes a three-component definition for the class of relevant logics. The first component is the definition of a logic: a logic is a set of formulas closed under plain substitutions. The second component is a definition of the relevant portion: a logic
${\mathbf {L}}$
is a proto-relevant logic iff it satisfies the variable sharing criterion, namely, that if
$A\to B\in {\mathbf {L}}$
, then A and B share an atom. For the third component, Standefer defines the relevant logics to be the proto-relevant logics that are closed under modus ponens, (R2) above, and adjunction, (R1) above.
By this definition all the standard relevant logics—including the ones that Anderson and Belnap discussed, such as
${\mathbf {R}}$
and
${\mathbf {E}}$
—count as relevant logics. But so do many other logics not typically included in the relevant family. For example, linear logic, many connexive logics, and even some non-transitive logics are included in the class, while classical and intuitionistic logic are excluded. Importantly, however, the definition includes logics that are properly stronger than
${\mathbf {R}}$
, as well as some that are incomparable with
${\mathbf {R}}$
, such as
${\mathbf {TMingle}}$
.Footnote
12
In fact, below we will isolate another relevant logic incomparable with
${\mathbf {R}}$
, as well as with
${\mathbf {TMingle}}$
.
Towards the end of his paper, Standefer considers that we might isolate different flavors of relevance by modifying the second component of his definition so as to require some stronger sort of variable sharing. Thus, for example, we might call a set of formulas a lericone-relevant logic if it is closed under plain substitutions, satisfies the
$\mathsf {LRCN}$
-variable sharing criterion, and is closed under modus ponens and adjunction. But what we see here is that there is an alternative way we might introduce new flavors of relevance: Rather than strengthen in the second component we can strengthen in the first.
In more detail, while there is nothing wrong with saying that (e.g.,) a lericone-relevant logic is a set of formulas that is closed under plain substitutions, satisfies the
$\mathsf {LRCN}$
-variable sharing criterion, and is closed under modus ponens and adjunction, we might alternatively say that a relevant lericone-hyperformal logic is a set of formulas that is closed under
$\mathsf {LRCN}$
-substitutions, satisfies the ordinary variable-sharing criterion, and is closed under modus ponens and adjunction. As we will see, it turns out that all lericone-hyperformal logics that are sublogics of classical logic,
${\mathbf {CL}}$
, are lericone-relevant logics.
The upshot of this is a general way to identify classes of relevant logics via different senses of formal relevance. The logics obtained via the use of lericone substitutions are well-behaved, albeit weak. Valid implications in these logics ensure a very tight connection between antecedent and consequent, much tighter than basic variable sharing, or even depth variable sharing.
Second, we can turn our attention to Anderson and Belnap’s [Reference Anderson and Belnap1, p. 33] explication of relevance as “common meaning content”. As one intuitive way of failing to share common content is a failure to share topic, these increasingly fine-grained notions of variable sharing line up very naturally with the burgeoning theory of topic as investigated by Yablo [Reference Yablo45] and Berto [Reference Berto5]. As discussed by Ferguson [Reference Ferguson, Sedlár, Standefer and Tedder16] and Ferguson and Kadlecikova [Reference Ferguson, Kadlecikova and Sedlár17], lericone relevance reflects an acknowledgement that the topic of a subsentence is affected by features of the intensional connectives in which it is nested and that its topic-theoretic contribution to the complex in which it appears must be evaluated in situ.
In
$\mathbf {B}$
, for example, the formula
$(A\rightarrow B)\rightarrow (\neg B\rightarrow \neg A)$
is not provable. In the presence of rule (R3), the failure of the entailment is not a matter of truth conditions, but can be attributed rather to a lack of common topic between
$A\rightarrow B$
and
${\neg B\rightarrow \neg A}$
, which would be a downstream consequence of a rejection of either Negation Transparency or Chiral Transparency. If one accepts the arguments of Ferguson [Reference Ferguson and Sedlár14, Reference Ferguson15] suggesting that contraposition is not a topic-preserving operation, then this would recommend the topic theory implicit in
$\mathbf {B}$
.
Likewise, the failure of theoremhood of
$\neg \neg A\rightarrow A$
in
$\mathbf {BM}$
signals the failure of a thesis concerning the invariance of topic under applications of double negation, namely, the failure of the following:
Involutive Transparency: For any sentence
$\Phi $
, the topics of
$\Phi $
and
$\neg \neg \Phi $
are identical.
$\mathbf {BM}$
, in failing to identify the propositions A and
$\neg \neg A$
as having common content, appears to part ways with
$\mathbf {B}$
on grounds of involutive transparency in this sense.
Such correspondences in weak relevant logics suggest the value of a very fine-grained and uniform hierarchy of variable sharing properties, as these correspondences align particular classes of relevant logics with particular linguistic views concerning topic transparency. If one’s preferred theory of topic accepts Involutive and Chiral Transparency but rejects Conditional Transparency, (e.g.,) the results of Logan [Reference Logan24] suggest that a notion of relevance tailored to this theory of topic might be exhibited by the class of strong depth relevant logics. As the theory of topic matures—and determinations are made concerning various transparency theses—a panoply of variable sharing properties may help justify the selection of individual relevant logics.
Finally, as we saw in the introduction, there are good reasons for demanding something like lericone-hyperformality. What we will see below in §4.3 is that lericone-hyperformality is almost enough on its own to ensure a logic enjoys ordinary variable sharing. So in fact, not only is hyperformality a novel path to new flavors of relevance, it’s actually a novel path to relevance full stop.
3.3 On ‘c’ Again
As we noted above, while our differential treatment of ‘c’s looks to be a glaring oddity, it is in fact the normal thing one does when proving variable sharing results. We promised then that we’d return to the question of whether it was what one should do. Here, we make good on the promise to return.
The point to observe here is this: if one of your goals is to prove an invariance result along the lines of Theorem 2 or Theorem 3, then some sort of differential treatment for initial conditionals seems to be strictly required. This is not a precise claim, so we can’t exactly prove it. What we can do is give a useful demonstration by showing that the ‘c-free’ analogue of closure under lericone substitutions is incompatible with even the barest whiff of logic. To that end, consider the following definition.
Definition 11 (LRN substitutions).
An lrn substitution is a function
$\mathsf {LRN}\times \mathsf {At}\to \mathcal {L}$
. Given an lrn substitution
$\sigma $
we extend it to a function
$\mathsf {LRN}\times \mathcal {L}\to \mathcal {L}$
(which we also call
$\sigma $
) as follows:
-
•
$\sigma (\overline {x},A\land B)=\sigma (\overline {x},A)\land \sigma (\overline {x},B)$ .
-
•
$\sigma (\overline {x},A\lor B)=\sigma (\overline {x},A)\lor \sigma (\overline {x},B)$ .
-
•
$\sigma (\overline {x},\neg A)=\neg \sigma (n\overline {x},A)$ .
-
•
$\sigma (\overline {x},A\to B)=\sigma (l\overline {x},A)\to \sigma (r\overline {x},B)$
Note that this definition is to
$\mathsf {LRN}$
what Definition 6 is to
$\mathsf {LRCN}$
.
Theorem 4. Let X be a set of sentences that contains the sentence
$p\to p$
and is closed under both modus ponens and lrn substitutions. Then for every sentence B,
$B\in X$
—that is, X is trivial.
Proof. Choose an lrn substitution
$\sigma $
such that
$\sigma (l,p)=p\to p$
and
$\sigma (r,p)=B$
. Then
$\sigma (\varepsilon ,p\to p)=(p\to p)\to B$
. So since
$p\to p\in X$
and X is closed under modus ponens and lrn substitutions,
$B\in X$
.
4 Classical Lericone Invariant Logic
We have introduced and motivated the concepts of a lericone and lericone-sensitive substitutions. The natural next question is what logic one gets from these concepts. In this section, we will formulate this question in a precise way, in terms of closure under lericone-sensitive substitutions. We will define a class of assignments that are, likewise, sensitive to lericones and show that the lericone-invariant fragment of
${\mathbf {CL}}$
coincides with the set of formulas valid with respect to these assignments. Following this, we will show that the resulting logic enjoys variable sharing, in fact a new form of variable sharing, and so this logic is a new relevant logic. Its consequence relation is, additionally, compact. While we do not have a Hilbert axiomatization of this logic, we will provide a simple tableau system for it, which will be shown sound and complete.
4.1 Characterizing
$\mathbf {CLV}$
part 1: The Syntactic Characterization
The logic we are interested in will be called
$\mathbf {CLV}$
. We will provide, in the next subsection, a semantic definition of
$\mathbf {CLV}$
. In this section, we provide a strictly syntactic characterization of a set of sentences that we will later show is identical to
$\mathbf {CLV}$
. In particular, here we show that for arbitrary sets of sentences X (and thus in particular for the set of theorems
${\mathbf {CL}}$
of classical logic) there is a largest subset of X closed under lericone substitutions. The fact that
$\mathbf {CLV}$
can be characterized in both of these ways is interesting in its own right, but also comes in handy when providing metatheory for the tableau system we will provide for
$\mathbf {CLV}$
in the next section.
Definition 12. For any set of formulas X, say that X is closed under lericone substitutions when for all lericone substitutions
$\sigma $
, if
$A\in X$
, then
$\sigma (\varepsilon ,A)\in X$
.
Definition 13.
$X^{\mathsf {LRCN}}=\{A:\sigma (\varepsilon ,A)\in X\text { for all lericone substitutions }\sigma \}$
It will be useful below to have a way of ‘composing’ two lericone substitutions. This is not strictly possible since the domain of a lericone substitution does not match its range. But there is an analogue to composition readily available, defined as follows.
Definition 14. Let
$\sigma $
and
$\tau $
be lericone substitutions. Define
$(\sigma \star \tau )$
to be the lericone substitution
$\langle \overline {x},p\rangle \mapsto \sigma (\overline {x},\tau (\overline {x},p))$
.
Note (and this is something to watch out for below) that while this definitely does define a lericone substitution, it’s not at all obvious from the definition alone that the extension of
$\sigma \star \tau $
to complex formulas agrees with first applying (the extension of)
$\tau $
and then applying
$\sigma $
. That this is so is proved in the following lemma:
Lemma 4.
$(\sigma \star \tau )(\overline {x},A)=\sigma (\overline {x},\tau (\overline {x},A))$
for all formulas A.
Proof. By induction on A. If A is an atom, the result is immediate from the definition of
$(\sigma \star \tau )$
. We examine one of the two conditional cases and leave the remainder of the cases to the reader.
So suppose
$\overline {x}\neq \varepsilon $
. Then
$(\sigma \star \tau )(\overline {x},A\to B)=(\sigma \star \tau )(l\overline {x},A)\to (\sigma \star \tau )(r\overline {x},B)$
. By IH,
$(\sigma \star \tau )(l\overline {x},A)=\sigma (l\overline {x},\tau (l\overline {x},A))$
and
$(\sigma \star \tau )(r\overline {x},B)=\sigma (r\overline {x},\tau (r\overline {x},B))$
. Thus, we have that
$(\sigma \star \tau )(\overline {x},A\to B)=\sigma (l\overline {x},\tau (l\overline {x},A))\to \sigma (r\overline {x},\tau (r\overline {x},B))=\sigma (\overline {x},\tau (l\overline {x},A)\to \tau (r\overline {x},B))=\sigma (\overline {x},\tau (\overline {x},A\to B)).$
Lemma 5.
$X^{\mathsf {LRCN}}$
is closed under lericone substitutions and if
$Y\subseteq X$
is closed under lericone substitutions, then
$Y\subseteq X^{\mathsf {LRCN}}$
. Thus,
$X^{\mathsf {LRCN}}$
is the largest subset of X closed under lericone substitutions.
Proof. Much as in the proof of Theorem 3 of Leach-Krouse et al. [Reference Leach-Krouse, Logan and Worley23]. Explicitly, to show that
$X^{\mathsf {LRCN}}$
is closed under lericone substitutions, let
$A\in X^{\mathsf {LRCN}}$
and
$\sigma $
be a lericone substitution. To see that
$\sigma (\varepsilon ,A)\in X^{\mathsf {LRCN}}$
, we need to show that for all lericone substitutions
$\tau $
,
$\tau (\varepsilon ,\sigma (\varepsilon ,A))\in X^{\mathsf {LRCN}}$
. But by Lemma 4,
$\tau (\varepsilon ,\sigma (\varepsilon ,A))=\tau \star \sigma (\varepsilon ,A)$
. And since
$\tau \star \sigma $
is a lericone substitution and
$A\in X^{\mathsf {LRCN}}$
, it follows that
$\tau \star \sigma (\varepsilon ,A)\in X$
as required.
Now note that if
$Y\subseteq X$
is closed under lericone substitutions and
$A\in Y$
, then for all
$\sigma$
,
$\sigma(\varepsilon,A)\in Y\subseteq X$
, so
$A\in X^{\mathsf {LRCN}}$
, so
$Y\subseteq X^{\mathsf {LRCN}}$
. Thus,
$X^{\mathsf {LRCN}}$
contains every subset of X that is closed under lericone substitutions.
4.2 Characterizing
$\mathbf {CLV}$
part 2: The Semantic Characterization
We will now provide a semantic characterization of
$\mathbf {CLV}$
. We do this by ‘twisting’ the usual semantics for classical logic by allowing it to vary across lericone sequences.
Definition 15. A lericone-sensitive truth-value assignment (from here on just an assignment) is a function
$f:\mathsf {LRCN}\times \mathsf {At}\longrightarrow \{0,1\}$
. We extend such an assignment to a function (which we also call
$f$
)
$\mathsf {LRCN}\times \mathcal {L}\longrightarrow \{0,1\}$
as follows:
-
•
$f(\overline {x},A\land B)=\inf (f(\overline {x},A), f(\overline {x},B))$
-
•
$f(\overline {x},A\lor B)=\mathrm{sup} (f(\overline {x},A), f(\overline {x},B))$
-
•
$f(\overline {x},\neg A)=1-f(n\overline {x},A)$ .
-
•
$f(\overline {x},A\to B)=\left \{ \begin {array}{rl} \mathrm{sup} (1-f(c,A),f(c,B)) & \text { if }\ \overline {x} = \varepsilon \\ \mathrm{sup} (1-f(l\overline {x},A),f(r\overline {x},B)) & \text { otherwise.} \end {array} \right .$
Given an assignment f, a set of sentences X, and a sentence A, we say that f satisfies the argument from X to A just if either
$f(\varepsilon ,B)=0$
for some
$B\in X$
or
$f(\varepsilon ,A)=1$
. We say that the argument from X to A is valid, and write $X\vDash _{\mathsf {LRCN}} A$, just if it is satisfied by every assignment f. We say that A is classically lericone valid when
$\emptyset \vDash _{\mathsf {LRCN}} A$
, and we write
$\mathbf {CLV}$
for the set of classically lericone valid formulas.
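Since only finitely many pairs in $\mathsf {LRCN}\times \mathsf {At}$ are consulted when evaluating a given formula, membership in $\mathbf {CLV}$ can be checked by brute force. The following sketch (ours, reusing the tuple encoding and the lrcn_occurrences function from the earlier sketches) does exactly that and confirms two facts recorded below: $(p\to q)\lor (q\to r)$ is in $\mathbf {CLV}$, while the contraction axiom of $\mathbf {R}$ is not.

```python
# Brute-force CLV-membership check (a sketch, not the paper's method).
from itertools import product

def eval_f(f, seq, formula):
    """Extension of an assignment f (a dict on (sequence, atom) pairs), per Definition 15."""
    if isinstance(formula, str):
        return f[(seq, formula)]
    op, *args = formula
    if op == '&':
        return min(eval_f(f, seq, args[0]), eval_f(f, seq, args[1]))
    if op == 'v':
        return max(eval_f(f, seq, args[0]), eval_f(f, seq, args[1]))
    if op == '~':
        return 1 - eval_f(f, 'n' + seq, args[0])
    if op == '->':
        la, ra = ('c', 'c') if seq == "" else ('l' + seq, 'r' + seq)
        return max(1 - eval_f(f, la, args[0]), eval_f(f, ra, args[1]))

def in_CLV(formula):
    """True iff formula gets value 1 under every lericone-sensitive assignment."""
    pairs = sorted(set(lrcn_occurrences(formula)))     # the (atom, sequence) pairs that matter
    for values in product((0, 1), repeat=len(pairs)):
        f = {(s, p): v for (p, s), v in zip(pairs, values)}
        if eval_f(f, "", formula) == 0:
            return False
    return True

print(in_CLV(('v', ('->', 'p', 'q'), ('->', 'q', 'r'))))                # True
print(in_CLV(('->', ('->', 'p', ('->', 'p', 'q')), ('->', 'p', 'q'))))  # False
```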
Our goal now is to show that
$\mathbf {CLV}={\mathbf {CL}}^{\mathsf {LRCN}}$
. In preparation for that, we will prove some lemmas.
Definition 16. Let
$\sigma $
be a lericone substitution and
$f$
be an assignment. We define the assignment
$(f\bullet \sigma )$
by
$(f\bullet \sigma )(\overline {x},p)=f(\overline {x},\sigma (\overline {x},p))$
.
Lemma 6. For all
$A\in \mathcal {L}$
,
$(f\bullet \sigma )(\overline {x},A)=f(\overline {x},\sigma (\overline {x},A))$
.
Proof. By induction on A. The base case is immediate from the definition of
$(f\bullet \sigma )$
. We consider only the
$\varepsilon $
part of the conditional case and leave the rest to the reader.
For that case, we compute as follows:
$(f\bullet \sigma )(\varepsilon ,A\to B)=\mathrm{sup} (1-(f\bullet \sigma )(c,A),(f\bullet \sigma )(c,B))=\mathrm{sup} (1-f(c,\sigma (c,A)),f(c,\sigma (c,B)))=f(\varepsilon ,\sigma (c,A)\to \sigma (c,B))=f(\varepsilon ,\sigma (\varepsilon ,A\to B)).$
Lemma 7.
$\mathbf {CLV}$
is closed under lericone substitutions: If
$A\in \mathbf {CLV}$
and
$\sigma (\varepsilon ,A)=B$
, then
$B\in \mathbf {CLV}$
.
Proof. We prove the contrapositive, supposing that
$B\not \in \mathbf {CLV}$
and targeting the disjunction that either
$A\notin \mathbf {CLV}$
or
$\sigma (\varepsilon ,A)\neq B$
for demonstration. By supposition that
$B\notin \mathbf {CLV}$
, there is an assignment f such that
$f(\varepsilon , B)=0$
. Now, either
${\sigma (\varepsilon ,A)= B}$
or
$\sigma (\varepsilon ,A)\neq B$
; in the latter case, the target disjunction is satisfied, so assume that
$\sigma (\varepsilon ,A)= B$
. By Lemma 6,
$(f\bullet \sigma )(\varepsilon ,A)=f(\varepsilon ,\sigma (\varepsilon ,A))=f(\varepsilon ,B)=0$
. Thus,
${A\not \in \mathbf {CLV}}$
, satisfying the target disjunction.
Lemmas 5 and 7 establish that
$\mathbf {CLV}\subseteq {{\mathbf {CL}}}^{\mathsf {LRCN}}$
. Let us turn to the converse. A few definitions and lemmas will speed us along.
Definition 17 (Skeletons).
Let A be a formula.
$B\in \mathcal {L}$
is a skeleton of
$A\in \mathcal {L}$
iff for some injective atomic lericone substitution
$\iota $
,
$\iota (\varepsilon , A)=B$
. B is a skeleton iff there is some formula A such that B is a skeleton of A.
Clearly every formula has many skeletons.
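To illustrate: in $p\to p$ both occurrences of p carry the lericone sequence c, so every skeleton of $p\to p$ has the form $q\to q$; in $\neg \neg p\to p$ the two occurrences of p carry the sequences $nnc$ and c, so every skeleton has the form $\neg \neg q\to r$ with $q\neq r$.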
Definition 18. Let
$\iota $
be an atomic injective lericone substitution. We define the plain substitution
$\iota ^{-1}$
as follows:
$\iota ^{-1}(p)=\left \{\begin {array}{rl} q & \text { if }\ \iota (\overline {x},q)=p\ \text {for some }\overline {x}\in \mathsf {LRCN} \\ p & \text { otherwise } \end {array}\right .$
Lemma 8.
$\iota ^{-1}$
is well-defined and for all formulas A and all
$\overline {x}\in \mathsf {lrcn}$
,
$\iota ^{-1}(\iota (\overline {x},A))=A$
.
Proof. Suppose that
$\iota (\overline {y},q)=\iota (\overline {z},r)$
. Then since
$\iota $
is injective,
$q=r$
. Thus,
$\iota ^{-1}$
is well-defined.
To see that
$\iota ^{-1}(\iota (\overline {x},A))=A$
, a straightforward induction on A suffices.
Lemma 9. Let X be a set of formulas. Then, for every
$A\in \mathcal {L}$
, and every skeleton B of A,
$A\in X^{\mathsf {LRCN}}$
iff
$B\in X^{\mathsf {LRCN}}$
.
Proof. The ‘only if’ direction is immediate. For the other direction, suppose
${B\in X^{\mathsf {LRCN}}}$
, for some skeleton B of A. By definition, there is an atomic injective
$\iota $
such that
$\iota (\varepsilon ,A)=B$
.
Since
$X^{\mathsf {LRCN}}$
is closed under lericone substitutions, it’s closed under plain substitutions as well. Thus,
$\iota ^{-1}(\iota (\varepsilon ,A))=A\in X^{\mathsf {LRCN}}$
as required.
Lemma 10.
${{\mathbf {CL}}}^{\mathsf {LRCN}}$
is contained in
$\mathbf {CLV}$
.
Proof. Suppose
$B\not \in \mathbf {CLV}$
. By Lemma 9, we may without loss of generality assume that B is a skeleton. Since
$B\not \in \mathbf {CLV}$
, there is an assignment f such that
$f(\varepsilon ,B)=0$
. Since
$B$
is a skeleton, we can define a
${{\mathbf {CL}}}$
-assignment g as follows:
$g(p)=\left \{\begin {array}{rl} f(\overline {x},p) & \text { if }\ p\ \text {occurs in }B\ \text {and }\ \mathsf {lrcn}(B[p])=\overline {x} \\ f(\varepsilon ,p) & \text { otherwise } \end {array}\right .$
By an induction on B we can show that
$f(\varepsilon , B)=g(B)=0$
. By definition,
${{{\mathbf {CL}}}^{\mathsf {LRCN}}\subseteq {{\mathbf {CL}}}}$
, so
$g(B)=0$
implies
$B\not \in {{\mathbf {CL}}}^{\mathsf {LRCN}}$
.
Thus, we’ve now established
Theorem 5.
${{\mathbf {CL}}}^{\mathsf {LRCN}}=\mathbf {CLV}$
.
4.3
$\mathbf {CLV}$
is a Relevant Logic
$\mathbf {CLV}$
is stronger than the logic
${{\mathbf {BM}}}$
. The formula
$(p\to q)\lor (q\to r)$
is contained in the former but not in the latter. While this formula is not attractive from a relevant-logical point of view, it is surprising that some of the paradoxes of implication survive even the stringent standards of lericone-sensitive assignments.
On the other hand, given that it does in fact contain some of the paradoxes of material implication, and seems to do so in virtue of its being quite close in spirit to classical logic, it comes as quite a surprise to find (as we are about to show) that
$\mathbf {CLV}$
nonetheless enjoys the variable sharing property. To prove this, we will need a definition and a lemma. First, the definition of polarity of an
$\mathsf {LRN}$
-sequence.
Definition 19 (Polarity).
The polarity of
$\overline {x}\in \mathsf {LRN}$
is defined as follows.
-
• The polarity of
$\varepsilon $ is positive.
-
• If the polarity of
$\overline {y}$ is positive(negative), then the polarity of
$n\overline {y}$ is negative(positive).
-
• If the polarity of
$\overline {y}$ is positive(negative), then the polarity of
$l\overline {y}$ is negative(positive).
-
• If the polarity of
$\overline {y}$ is positive(negative), then the polarity of
$r\overline {y}$ is positive(negative).
This definition of polarity matches the usual definition, when one is not considering c.Footnote 13 We will only be concerned with polarity in contexts where c will not arise.
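For example, the sequences $l$ and $lr$ are negative while $ll$ and $nl$ are positive, matching the familiar facts that an antecedent part of an antecedent part, or a negated atom within an antecedent, occurs positively.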
Definition 20. Suppose A and C share no atoms. We define the following functions:

Note that aside from the initial caveat, the choice of C in the above definition is unimportant, and that it works just as well to define
$f^-_{A}=1-f^+_{A}$
. But the intuitive connection between the lemma here and the theorem below is better if we put the definition this way. Finally, note that since A and C don’t share any atoms, no subformula of A is a subformula of C.
Lemma 11. If B is a subformula of A, then
$f^+_A(\overline {x}c,B)=1$
if
$\overline {x}$
is positive and
$\mathsf {lrcn}((A\to C)[B])=\overline {x}c$
and
$f^+_A(\overline {x}c,B)=0$
if
$\overline {x}$
is negative and
$\mathsf {lrcn}((A\to C)[B])=\overline {x}c$
.
Proof. By induction on B. If
$B=p$
is an atom, the result is immediate from the definition of
$f^+_A$
.
For conjunctions note that
$f^+_A(\overline {x}c,B_1\land B_2)=\inf (f^+_A(\overline {x}c,B_1),f^+_A(\overline {x}c,B_2))$
and if
$\mathsf {lrcn}((A\to C)[B_1\land B_2])=\overline {x}c$
, then
$\mathsf {lrcn}((A\to C)[B_1])=\overline {x}c$
and
$\mathsf {lrcn}((A\to C)[B_2])=\overline {x}c$
. So if
$\overline {x}$
is positive, then by IH
$f^+_A(\overline {x}c,B_i)=1$
and thus
$f^+_A(\overline {x}c,B_1\land B_2)=1$
. On the other hand, if
$\overline {x}$
is negative, then by IH
$f^+_A(\overline {x}c,B_i)=0$
and thus
$f^+_A(\overline {x}c,B_1\land B_2)=0$
. Mutatis mutandis the same arguments work for disjunctions.
For negations, note that
$f^+_A(\overline {x}c,\neg B')=1-f^+_A(n\overline {x}c,B')$
. If
$\mathsf {lrcn}((A\to C)[\neg B'])=\overline {x}c$
, then
$\mathsf {lrcn}((A\to C)[B'])=n\overline {x}c$
. So if
$\overline {x}$
is positive, then by IH
$f^+_A(n\overline {x}c,B')=0$
and thus
$f^+_A(\overline {x}c,\neg B')=1$
. On the other hand, if
$\overline {x}$
is negative, then by IH
$f^+_A(n\overline {x}c,B')=1$
and thus
$f^+_A(\overline {x}c,\neg B')=0$
.
Finally, for conditionals note that
$f^+_A(\overline {x}c,B_1\to B_2)=\mathrm{sup} (1-f^+_A(l\overline {x}c,B_1), f^+_A(r\overline {x}c,B_2))$
. If
$\mathsf {lrcn}((A\to C)[B_1\to B_2])=\overline {x}c$
, then
$\mathsf {lrcn}((A\to C)[B_1])=l\overline {x}c$
and
$\mathsf {lrcn}((A\to C)[B_2])=r\overline {x}c$
. So if
$\overline {x}$
is positive, then by IH
$f^+_A(l\overline {x}c,B_1)=0$
and thus
$f^+_A(\overline {x}c, B_1\to B_2)=1$
. On the other hand, if
$\overline {x}$
is negative, then by IH
$f^+_A(l\overline {x}c,B_1)=1$
and
$f^+_A(r\overline {x}c,B_2)=0$
and thus
$f^+_A(\overline {x}c,B_1\to B_2)=0$
.
Lemma 12. If B is a subformula of A, then
$f^-_A(\overline {x}c,B)=0$
if
$\overline {x}$
is positive and
$\mathsf {lrcn}((A\to C)[B])=\overline {x}c$
and
$f^-_A(\overline {x}c,B)=1$
if
$\overline {x}$
is negative and
$\mathsf {lrcn}((A\to C)[B])=\overline {x}c$
.
Proof. By induction on B. If
$B=p$
is an atom, the result is immediate from the definition of
$f^-_A$
.
For conjunctions note that
$f^-_A(\overline {x}c,B_1\land B_2)=\inf (f^-_A(\overline {x}c,B_1),f^-_A(\overline {x}c,B_2))$
and if
$\mathsf {lrcn}((A\to C)[B_1\land B_2])=\overline {x}c$
, then
$\mathsf {lrcn}((A\to C)[B_1])=\overline {x}c$
and
$\mathsf {lrcn}((A\to C)[B_2])=\overline {x}c$
. So if
$\overline {x}$
is positive, then by IH
$f^-_A(\overline {x}c,B_i)=0$
and thus
$f^-_A(\overline {x}c,B_1\land B_2)=0$
. On the other hand, if
$\overline {x}$
is negative, then by IH
$f^-_A(\overline {x}c,B_i)=1$
and thus
$f^-_A(\overline {x}c,B_1\land B_2)=1$
. Mutatis mutandis the same arguments work for disjunctions.
For negations, note that
$f^-_A(\overline {x}c,\neg B')=1-f^-_A(n\overline {x}c,B')$
. If
$\mathsf {lrcn}((A\to C)[\neg B'])=\overline {x}c$
, then
$\mathsf {lrcn}((A\to C)[B'])=n\overline {x}c$
. So if
$\overline {x}$
is positive, then by IH
$f^-_A(n\overline {x}c,B')=1$
and thus
$f^-_A(\overline {x}c,\neg B')=0$
. On the other hand, if
$\overline {x}$
is negative, then by IH
$f^-_A(n\overline {x}c,B')=0$
and thus
$f^-_A(\overline {x}c,\neg B')=1$
.
Finally, for conditionals note that
$f^-_A(\overline {x}c,B_1\to B_2)=\mathrm{sup} (1-f^-_A(l\overline {x}c,B_1), f^-_A(r\overline {x}c,B_2))$
. If
$\mathsf {lrcn}((A\to C)[B_1\to B_2])=\overline {x}c$
, then
$\mathsf {lrcn}((A\to C)[B_1])=l\overline {x}c$
and
$\mathsf {lrcn}((A\to C)[B_2])=r\overline {x}c$
. So if
$\overline {x}$
is positive, then by IH
$f^-_A(l\overline {x}c,B_1)=1$
and
$f^-_A(r\overline {x}c,B_2)=0$
and thus
$f^-_A(\overline {x}c,B_1\to B_2)=0$
. On the other hand, if
$\overline {x}$
is negative, then by IH
$f^-_A(l\overline {x}c,B_1)=0$
and thus
$f^-_A(\overline {x}c,B_1\to B_2)=1$
.
Lemma 13. Suppose that A and B share no atoms. Define the assignment
$h$
by
$h(\overline {x},p)=\left \{\begin {array}{rl} f^-_B(\overline {x},p) & \text { if }\ p\ \text {occurs in }B \\ f^+_A(\overline {x},p) & \text { otherwise } \end {array}\right .$
Then if C is a subformula of A then
${h(\mathsf {lrcn}((A{\kern-1pt}\to{\kern-1pt} B)[C]),C){\kern-1pt}={\kern-1pt}f^+_A(\mathsf {lrcn}((A{\kern-1pt}\to{\kern-1pt} B)[C]),C)}$
and if C is a subformula of B then
$h(\mathsf {lrcn}((A\to B)[C]),C)=f^-_B(\mathsf {lrcn}((A\to B)[C]),C)$
Proof. By induction on C, separately for each conclusion. For atoms the result is immediate from the definition of h. We sample a selection of the remaining clauses and leave the rest to the reader.
Suppose
$C=D_1\land D_2$
is a subformula of A, and say
$\mathsf {lrcn}((A\to B)[C])=\overline {x}c$
. As
$\mathsf {lrcn}((A\to B)[C])=\mathsf {lrcn}((A\to B)[D_i])$
, it follows that

By IH,

which was to be proved.
Suppose
$C=D_1\lor D_2$
is a subformula of B, and say
$\mathsf {lrcn}((A\to B)[C])=\overline {x}c$
. Then, as
$\mathsf {lrcn}((A\to B)[C])=\mathsf {lrcn}((A\to B)[D_i])$
,

By IH,

which completes the case.
Suppose
$C=\neg D$
is a subformula of A, and say
$\mathsf {lrcn}((A\to B)[C])=\overline {x}c$
. As
$n\overline {x}c=\mathsf {lrcn}((A\to B)[D])$
, it follows that

The transition from the first to the second line is justified by IH, and the remainder are justified by the definition of assignments.
Suppose
$C=D_1\to D_2$
is a subformula of B, and say
$\mathsf {lrcn}((A\to B)[C])=\overline {x}c$
. As
$l\overline {x}c=\mathsf {lrcn}((A\to B)[D_1])$
and
$r\overline {x}c=\mathsf {lrcn}((A\to B)[D_2])$
, it follows that

Therefore,

By IH, this implies that the right-hand side is identical to

which in turn is identical to
$f^-_B(\mathsf {lrcn}((A\to B)[C]),C)$
, as desired.
With this lemma in hand, we can prove that
$\mathbf {CLV}$
enjoys variable sharing.
Theorem 6.
$\mathbf {CLV}$
enjoys the variable sharing property.
Proof. Suppose that A and B share no atoms and define h as above. By the preceding lemmas,
$h(c,A)=1$
and
$h(c,B)=0$
. It follows that
$h(\varepsilon , A\to B)=0$
, and thus
${A\to B\not \in \mathbf {CLV}}$
. Contraposing, if
$A\to B\in \mathbf {CLV}$
, then A and B share an atom.
Inspection of the proof reveals that we can strengthen the result to the following new form of variable sharing.
Corollary 5. If
$A\to B\in \mathbf {CLV}$
, then A and B share an atom with the same lericone sequence in
$A\to B$
.
Proof. Suppose that A and B do not share an atom with the same lericone sequence. Let
$A'\to B'$
be a skeleton that results from applying an injective atomic lericone substitution to
$A\to B$
. It follows that
$A'$
and
$B'$
do not share any atoms. By the previous theorem,
$A'\to B'\not \in \mathbf {CLV}$
. As
$\mathbf {CLV}$
is closed under lericone substitutions, it follows that
${A\to B\not \in \mathbf {CLV}}$
. Therefore, by contraposing, if
$A\to B\in \mathbf {CLV}$
, then A and B share an atom with the same lericone sequence in
$A\to B$
.
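Corollary 5 strengthens variable sharing to the sharing of an atom together with its lericone sequence. Under the same tuple-and-string encoding as the `evaluate` sketch above, the sequence of every atom occurrence can be computed and the strengthened property tested directly; the helper names below are ours, not the paper's.

```python
def lrcn_positions(formula, seq=''):
    """Yield (lericone sequence, atom) pairs for every atom occurrence,
    using the same prepending convention as the evaluate sketch above."""
    kind = formula[0]
    if kind == 'at':
        yield seq, formula[1]
    elif kind == 'not':
        yield from lrcn_positions(formula[1], 'n' + seq)
    elif kind in ('and', 'or'):
        yield from lrcn_positions(formula[1], seq)
        yield from lrcn_positions(formula[2], seq)
    elif kind == 'imp':
        l, r = ('c', 'c') if seq == '' else ('l' + seq, 'r' + seq)
        yield from lrcn_positions(formula[1], l)
        yield from lrcn_positions(formula[2], r)

def shares_atom_with_same_sequence(A, B):
    """Test the property of Corollary 5 for a candidate theorem A -> B:
    within A -> B, both A and B sit under the top-level marker c."""
    positions_A = set(lrcn_positions(A, 'c'))
    positions_B = set(lrcn_positions(B, 'c'))
    return bool(positions_A & positions_B)
```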
Since
$\mathbf {CLV}$
enjoys variable sharing, it is a relevant logic. We note that, since
$\mathbf {CLV}$
contains
$(p\to q)\lor (q\to r)$
, it is incomparable with the best known relevant logic, Anderson and Belnap’s logic
${{\mathbf {R}}}$
, which contains violations of lericone substitution closure, such as
$(p\to (p\to q))\to (p\to q)$
. In fact, it is incomparable with the logic
${{\mathbf {RMingle}}}$
, obtained from
${{\mathbf {R}}}$
by the addition of
$A\to (A\to A)$
, since the same disjunction is not a theorem of ${{\mathbf {RMingle}}}$. As a consequence,
$\mathbf {CLV}$
is also incomparable with
${{\mathbf {TMingle}}}$
, which enjoys the variable sharing property, unlike
${{\mathbf {RMingle}}}$
.Footnote
14
Closure under lericone substitution identifies a different subfamily of the relevant logics from the usual suspects.
Two observations are worth making at this point. First, in light of Theorem 6, the proofs of Corollary 3 and Corollary 4 need no longer be parasitic on variable sharing results for
$\mathbf {R}$
—we need only observe that since the logics in question are lericone-invariant sublogics of classical logic, they are ipso facto sublogics of
$\mathbf {CLV}$
. This fact is sufficient to carry, in the proofs of the above corollaries, the weight formerly carried by known variable sharing results for
$\mathbf {R}$
.
Second, note that while we have defined an entire consequence relation
$\vDash _{\mathsf {LRCN}}$
, our relevance results hold only for the set of validities of this relation. The relation as a whole is irrelevant in the way Tarskian consequence relations usually are; e.g. for all A and B,
$A\vDash _{\mathsf {LRCN}}B\to B$
.Footnote
15
One might wonder, then, why we define the consequence relation at all. We give three reasons.
First, it was quite natural to do so and didn’t require much additional spilled ink. Second, the point was to present something that, in spite of being lericone invariant, differed as little as possible from classical logic. This way of presenting it seemed more of that flavor than the alternative—though we acknowledge that this is a matter of taste.
But third and perhaps most importantly, we hope that by presenting the matter this way, the incongruousness of the relevance of the set of validities of
$\vDash _{\mathsf {LRCN}}$
and the deep invalidity of the consequence relation itself might motivate some of our colleagues who are bothered by such things to cook up a better alternative. This seems, in fact, to have already borne fruit; see, e.g., Logan and Worley [Reference Logan and Worley28].
4.4
$\vDash _{\mathsf {LRCN}}$
is compact
We end the ‘nice features of
$\mathbf {CLV}$
and related systems’ theme by showing that
$\vDash _{\mathsf {LRCN}}$
is compact. To begin, we establish some notation. For a lericone substitution
$\sigma $
, say that
$\sigma (\overline {x},X)=\{\sigma (\overline {x},B):B\in X\}$
. Where
$\sigma $
is a lericone substitution, for a set of formulas X, let
$X^\sigma =\sigma (\varepsilon ,X)$
, and for a formula A,
$A^\sigma =\sigma (\varepsilon ,A)$
.
Lemma 14. Suppose
$\iota $
is an injective atomic lericone substitution, that
$X=Y^\iota $
, for some set of formulas Y, and that
$A=B^\iota $
, for some formula B. If
$X\vDash _{{{\mathbf {CL}}}}A$
, then
$X\vDash _{\mathsf {LRCN}} A$
.
Proof. Suppose
$X\not \vDash _{\mathsf {LRCN}} A$
. So, there is a
$\mathsf {LRCN}$
-assignment f such that
${f(\varepsilon , B)=1}$
, for each
$B\in X$
, and
$f(\varepsilon ,A)=0$
. We want to construct a
${{\mathbf {CL}}}$
-assignment g “matching” f. First, note that for every atom p in
$X\cup \{A\}$
, for all formulas
$B_1,B_2\in X\cup \{A\}$
,
$\mathsf {lrcn}(B_1[q_1])=\mathsf {lrcn}(B_2[q_2])$
, where
$q_1$
and
$q_2$
are occurrences of p. For suppose otherwise, that
$\mathsf {lrcn}(B_1[q_1])\neq \mathsf {lrcn}(B_2[q_2])$
, and say that
$\overline {x}=\mathsf {lrcn}(B_1[q_1])$
and
$\overline {y}=\mathsf {lrcn}(B_2[q_2])$
. Then for some
$r,s\in \mathsf {At}$
,
$\iota (\overline {x},r)=p$
and
$\iota (\overline {y},s)=p$
, which contradicts the injectivity of
$\iota $
. Therefore, each occurrence of p in
$X\cup \{A\}$
has the same
$\mathsf {lrcn}$
-sequence, which we will denote by
$\mathsf {lrcn}(p)$
.
Define the
${{\mathbf {CL}}}$
-assignment g by defining
$g(p)=f(\mathsf {lrcn}(p),p).$
The function g is well-defined because for each
$p\in \mathsf {At}$
,
$\mathsf {lrcn}(p)$
is well-defined. We claim that for
$B\in X\cup \{A\}$
,
$g(B)=f(\varepsilon , B)$
. By an induction on structure, for each
$B\in X\cup \{A\}$
, for each subformula C of B,
$g(C)=f(\mathsf {lrcn}(B[C]), C)$
.
It follows that
$g(B)=1$
, for all
$B\in X$
, and
$g(A)=0$
. Therefore,
$X\not \vDash _{{{\mathbf {CL}}}}A$
, as desired.
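The key step in Lemma 14 is that, after an injective atomic lericone substitution, every atom occurs with a single lericone sequence, so a lericone countermodel can be collapsed into a classical one by reading each atom off at its unique sequence. The following sketch, reusing the `lrcn_positions` helper from the earlier example, illustrates the construction; the function name and error handling are our own additions.

```python
def classical_from_lericone(f_atom, formulas):
    """Collapse a lericone-sensitive atomic assignment into a classical
    valuation g(p) = f(lrcn(p), p), assuming every atom occurring in
    `formulas` has exactly one lericone sequence."""
    sequence_of = {}
    for B in formulas:
        for seq, p in lrcn_positions(B):
            if sequence_of.setdefault(p, seq) != seq:
                raise ValueError(f'atom {p!r} occurs with two lericone sequences')
    return {p: f_atom(seq, p) for p, seq in sequence_of.items()}
```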
Next, we note a lemma whose proof is straightforward.
Lemma 15. Suppose that
$\iota $
is an injective atomic lericone substitution, that
$X=Y^\iota $
, for some
$Y\subseteq \mathcal {L}$
, and that
$A=B^\iota $
, for some
$B\in \mathcal {L}$
. If
$X\vDash _{\mathsf {LRCN}}A$
, then
$X\vDash _{{{\mathbf {CL}}}} A$
.
Proof. The proof is left to the reader.
Putting the preceding lemmas together we have the following corollary.
Corollary 6. Suppose
$\iota $
is an injective atomic lericone substitution, that
$X=Y^\iota $
, for some set of formulas Y, and that
$A=B^\iota $
, for some formula B. Then,
$X\vDash _{\mathsf {LRCN}}A$
iff
$X\vDash _{{{\mathbf {CL}}}} A$
.
We will note another straightforward lemma.
Lemma 16. The following are equivalent, for all
$X\subseteq \mathcal {L}$
and all
$A\in \mathcal {L}$
.
-
•
$X\vDash _{\mathsf {LRCN}} A$ .
-
•
$X^\iota \vDash _{\mathsf {LRCN}} A^\iota $ , for all atomic injective
$\iota $ .
Proof. The proof is left to the reader.
Putting these pieces together, we get compactness for
$\vDash _{\mathsf {LRCN}}$
.
Theorem 7. For
$X\subseteq \mathcal {L}$
and
$A\in \mathcal {L}$
, if
$X\vDash _{\mathsf {LRCN}}A$
, then for some finite
$Y\subseteq X$
,
$Y\vDash _{\mathsf {LRCN}}A$
.
Proof. Suppose
$X\vDash _{\mathsf {LRCN}}A$
. By Lemma 16, we may assume without loss of generality that X and A are images under some atomic injective lericone substitution
$\iota $
. It then follows by Corollary 6 that
$X\vDash _{{{\mathbf {CL}}}}A$
. By the compactness of classical logic, for some finite
$Y\subseteq X$
,
$Y\vDash _{{{\mathbf {CL}}}}A$
. By Corollary 6,
$Y\vDash _{\mathsf {LRCN}}A$
, as desired.
4.5 Faithful
$\mathbf {CLV}$
Recall the distinction between faithful lericone substitutions and lericone substitutions simpliciter; this distinction, we showed, is reflected in the relationship between
$\mathbf {B}$
(closed under faithful substitutions) and
$\mathbf {BM}$
(closed under all lericone substitutions). By encoding Involutive Transparency in its axioms,
$\mathbf {B}$
acts as a sort of faithful counterpart to
$\mathbf {BM}$
.
This leads us to turn our attention to such a faithful counterpart of
$\mathbf {CLV}$
, a logic
$\mathbf {CLV}^{\sim }$
that bears to
$\mathbf {CLV}$
the same relationship that
$\mathbf {B}$
bears to
$\mathbf {BM}$
. If we write
$X\vDash _{\mathsf {LRCN}^{\sim }} A$
to indicate that, for all faithful assignments f, $f(\varepsilon ,A)=1$ whenever $f(\varepsilon ,B)=1$ for every $B\in X$, we induce a new notion of validity and may write
$\mathbf {CLV}^{\sim }$
for the set of formulas valid with respect to all faithful assignments.
In order to introduce the system, consider several definitions.
Definition 21.
$X^{\mathsf {LRCN}^{\sim }}=\lbrace A:\sigma (\varepsilon ,A)\in X\mbox { for all faithful lericone substitutions }\sigma \rbrace $
Definition 22. A set of sentences X is closed under faithful lericone substitutions when for all faithful lericone substitutions
$\sigma $
, if
$A\in X$
then
$\sigma (\varepsilon ,A)\in X$
.
Recall the definition of the operation
$\star $
. We make a few observations. Note first that Lemma 4 holds a fortiori in case
$\sigma $
and
$\tau $
are faithful lericone substitutions. This observation allows us to infer the following lemma.
Lemma 17. If
$\sigma ,\tau $
are faithful lericone substitutions, then
$\sigma \star \tau $
is a faithful lericone substitution.
Proof. Suppose
$\sigma ,\tau $
to be faithful and suppose that
$\overline {x}\sim \overline {y}$
. Then by Lemma 4,
$(\sigma \star \tau )(\overline {x},A)$
is
$\sigma (\overline {x},\tau (\overline {x},A))$
. By assumption that
$\sigma $
and
$\tau $
are faithful,
$\sigma (\overline {x},\tau (\overline {x},A))=\sigma (\overline {x},\tau (\overline {y},A))=\sigma (\overline {y},\tau (\overline {y},A))$
, which, by Lemma 4 once more, is equal to
$(\sigma \star \tau )(\overline {y},A)$
. As A,
$\overline {x}$
, and
$\overline {y}$
were arbitrary, we conclude that
$\sigma \star \tau $
is faithful.
Knowing that
$\sigma \star \tau $
is faithful in case
$\sigma $
and
$\tau $
are allows us to modify the logic of the proof of Lemma 5 to infer the following.
Lemma 18. For a set of formulas X,
$X^{\mathsf {LRCN}^{\sim }}$
is the largest subset of X closed under faithful lericone substitutions.
Definition 23. A faithful lericone-sensitive truth-value assignment is a lericone-sensitive truth-value assignment such that for all
$\overline {x}\overline {y}\in \mathsf {LRCN}$
and formulas
$p\in \mathsf {At}$
,
$f(\overline {x}\overline {y},p)=f(\overline {x}nn\overline {y},p)$
.
Lemma 19. If
$f$
is a faithful lericone-sensitive truth-value assignment, then for all $\overline {x}\overline {y}\in \mathsf {LRCN}$ and all
$A\in \mathcal {L}$
,
$f(\overline {x}\overline {y},A)=f(\overline {x}nn\overline {y},A)$.
Proof. By induction on A.
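Definition 23 says that a faithful assignment cannot distinguish a sequence from the result of inserting a redundant $nn$ pair into it. One simple way to obtain such an assignment, anticipating the faithful reduct $\rho$ of Definition 30 below, is to evaluate atoms only on sequences with all adjacent $nn$ pairs elided. The wrapper below is our own sketch under that assumption, not a definition from the paper.

```python
def remove_nn(seq):
    """Delete adjacent 'nn' pairs until none remain (cf. the faithful
    reduct of Definition 30 below)."""
    while 'nn' in seq:
        seq = seq.replace('nn', '', 1)
    return seq

def make_faithful(f_atom):
    """Wrap an atomic assignment so that its value depends only on the
    nn-elided sequence, guaranteeing the condition of Definition 23."""
    return lambda seq, p: f_atom(remove_nn(seq), p)
```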
As Lemma 6 holds a fortiori in case f is a faithful lericone-sensitive assignment and
$\sigma $
is a faithful lericone substitution, we can infer the following lemma almost for free.
Lemma 20.
$\mathbf {CLV}^{\sim }$
is closed under faithful lericone substitutions, i.e.,
$(\mathbf {CLV}^{\sim })^{\mathsf {LRCN}^{\sim }} = \mathbf {CLV}^{\sim }$
.
This allows us to infer that
$\mathbf {CLV}^{\sim }\subseteq \mathbf {CL}^{\mathsf {LRCN}^{\sim }}$
. To show the converse, we note again that many lemmas established in pursuit of showing the relationship between
$\mathbf {CLV}$
and
$\mathbf {CL}^{\mathsf {LRCN}}$
carry over immediately to the case of faithful lericone substitutions.
Definition 24. Let
$B\in \mathcal {L}$
be a skeleton of a formula A. We call B a faithful skeleton if there exists an injective atomic faithful lericone substitution
$\sigma $
such that
$\sigma (\varepsilon , A)=B$
.
Lemma 21. For X a set of formulas, for all
$A\in \mathcal {L}$
and every faithful skeleton B of A,
$A\in X^{\mathsf {LRCN}^{\sim }}$
iff
$B\in X^{\mathsf {LRCN}^{\sim }}$
.
Proof. Just as every plain substitution is a lericone substitution, so, too, is every plain substitution a faithful lericone substitution. Consequently, selecting an atomic injective faithful
$\sigma $
such that
$\sigma (\varepsilon ,A)=B$
allows us to rehearse the reasoning of the proof of Lemma 9 in the case of
$X^{\mathsf {LRCN}^{\sim }}$
.
The reasoning of the proof of Lemma 10—that from any lericone counterexample f, one can construct a classical counterexample g—applies a fortiori to faithful lericone assignments f, allowing us to infer that
$\mathbf {CL}^{\mathsf {LRCN}^{\sim }}\subseteq \mathbf {CLV}^{\sim }$
, whence we may conclude the following.
Theorem 8.
$\mathbf {CL}^{\mathsf {LRCN}^{\sim }}=\mathbf {CLV}^{\sim }$
We can note the distinctness of
$\mathbf {CLV}$
and
$\mathbf {CLV}^{\sim }$
by observing that
$A\rightarrow \neg \neg A$
is not valid in
$\mathbf {CLV}$
although it is valid in
$\mathbf {CLV}^{\sim }$
. Consequently,
$\mathbf {CLV}\subsetneq \mathbf {CLV}^{\sim }$
. For this reason, the fact that
$\mathbf {CLV}$
is a relevant logic does not immediately entail that
$\mathbf {CLV}^{\sim }$
is a relevant logic. We start the task of showing it to be relevant via the following lemma.
Lemma 22.
$\bar {x}nn\bar {y}$
is positive (negative) iff
$\bar {x}\bar {y}$
is positive (negative).
This means that with respect to the subformulas that matter, the assignment h defined in Lemma 13 is a faithful assignment. But it need not be faithful in general, requiring that we show how to produce a faithful version of h.
Lemma 23. Define the assignment
$h^{\sim }$
by

Then
$h^{\sim }$
both has the properties described in Lemma 13 and is faithful.
Proof. The same induction as in Lemma 13 carries over to establish that the same holds for
$h^{\sim }$
. To show that
$h^{\sim }$
is faithful, note that if p occurs in A (respectively, B) and
$\bar {x}\sim \bar {y}$
, then
$f^{+}_{A}(\bar {x},p)=f^{+}_{A}(\bar {y},p)$
(respectively,
$f^{-}_{B}(\bar {x},p)=f^{-}_{B}(\bar {y},p)$
). Otherwise,
$h^{\sim }(\bar {x},p)=1=h^{\sim }(\bar {y},p)$
independently of choice of
$\bar {x}$
.
Following the argument of Theorem 6 but noting that
$h^{\sim }$
is faithful allows us to infer that:
Theorem 9.
$\mathbf {CLV}^{\sim }$
enjoys the variable sharing property.
This, moreover, allows us to infer that
$\mathbf {CLV}^{\sim }$
has a slightly weaker variable sharing property of faithful lericone relevance.
Corollary 7. If
$A\rightarrow B\in \mathbf {CLV}^{\sim }$
then A and B share an atom
$p$
with equivalent
lericone
sequences in both A and B.
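Corollary 7 weakens Corollary 5 from identity of lericone sequences to equivalence up to elision of $nn$ pairs. Reusing the helpers sketched earlier (`lrcn_positions` and `remove_nn`), the faithful version of the check can be written as follows; the function name is our own.

```python
def shares_atom_with_equivalent_sequence(A, B):
    """Test the property of Corollary 7: a shared atom whose lericone
    sequences in A and in B agree up to elision of nn pairs."""
    positions_A = {(remove_nn(s), p) for s, p in lrcn_positions(A, 'c')}
    positions_B = {(remove_nn(s), p) for s, p in lrcn_positions(B, 'c')}
    return bool(positions_A & positions_B)
```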
5 Tableaux for Classical Lericone Consequence
We take the preceding to demonstrate that
$\mathbf {CLV}$
is an interesting system that deserves the attention of logicians, particularly relevant logicians. But as it stands, we have not provided anything resembling proof theory for
$\mathbf {CLV}$
. We conclude the paper by providing a tableau system for
$\vDash _{\mathsf {LRCN}}$
and developing the usual metatheory for it.

5.1 Tableaux for
$\mathbf {CLV}$
Where
$X\cup \{A\}$
is a set of formulas, we call an expression of the form
a sequent. Given a sequent
, its initial tableau has a single branch containing, for each
$B\in X$
, a triple of the form
$\langle \varepsilon ,1,B\rangle $
and also containing the triple
$\langle \varepsilon ,0,A\rangle $
. A tableau can be extended according to the following rules:

A branch on a tableau closes when there is a lericone sequence
$\overline {x}$
and a formula A so that both
$\langle \overline {x},0,A\rangle $
and
$\langle \overline {x},1,A\rangle $
occur somewhere on the branch. Otherwise, the branch is open. A tableau is closed when all of its branches are closed. We say that
$X\vdash A$
when the initial tableau for the corresponding sequent eventually closes.
We will now turn to showing that
$X\vdash A$
iff
$X\vDash _{\mathsf {LRCN}} A$
. To that end, we first introduce some definitions.
Definition 25. A set of triples is a set all of whose members have the form
$\langle \overline {x},i,A\rangle $
with
$\overline {x}\in \mathsf {LRCN}$
,
$i\in \{0,1\}$
, and
$A\in \mathcal {L}$
.
Definition 26. A lericone assignment
$f$
conforms to the set of triples S just if
$\langle \overline {x},i,A\rangle \in S$
only if
$f(\overline {x},A)=i$
.
Definition 27. Let S be a set of triples. We define the
$\langle \overline {x},i,A\rangle $
-extensions of S as follows: if
$A\in \mathsf {At}$
or
$\langle \overline {x},i,A\rangle \not \in S$
, then S is the only
$\langle \overline {x},i,A\rangle $
-extension of S. Otherwise,

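The extensions can be read off the assignment clauses used throughout; the sketch below records one such reading in executable form (the signed-triple encoding, the branching pattern, and the name `expand` are our assumptions and should be checked against the official cases displayed above). Each call returns a list of alternatives, one per child branch, each alternative being a list of new triples.

```python
def expand(triple):
    """Alternative extensions of a signed triple <seq, i, formula>."""
    seq, i, phi = triple
    kind = phi[0]
    if kind == 'at':
        return [[]]                                   # nothing to add
    if kind == 'not':
        return [[('n' + seq, 1 - i, phi[1])]]
    if kind == 'and':
        if i == 1:
            return [[(seq, 1, phi[1]), (seq, 1, phi[2])]]
        return [[(seq, 0, phi[1])], [(seq, 0, phi[2])]]
    if kind == 'or':
        if i == 1:
            return [[(seq, 1, phi[1])], [(seq, 1, phi[2])]]
        return [[(seq, 0, phi[1]), (seq, 0, phi[2])]]
    if kind == 'imp':
        l, r = ('c', 'c') if seq == '' else ('l' + seq, 'r' + seq)
        if i == 1:
            return [[(l, 0, phi[1])], [(r, 1, phi[2])]]
        return [[(l, 1, phi[1]), (r, 0, phi[2])]]
    raise ValueError(f'unknown connective {kind!r}')
```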
Lemma 24. Let S be a set of triples and
$\langle \overline {x},i,A\rangle \in S$
. Then
$f$
conforms to S iff
$f$
conforms to at least one
$\langle \overline {x},i,A\rangle $
-extension of S.
Proof. By examining cases. The atomic case is immediate. We examine two of the four conditional cases and leave all other cases to the reader.
Suppose
$\langle \overline {x},i,A\rangle =\langle \varepsilon ,0,B_1\to B_2\rangle $
. Since f conforms to S,
$f(\varepsilon ,B_1\to B_2)=0$
. Thus,
$\mathrm{sup} (1-f(c,B_1),f(c,B_2))=0$
. It follows that
$f(c,B_1)=1$
and
$f(c,B_2)=0$
. So f conforms to
$S\cup \{\langle c,1,B_1\rangle ,\langle c,0,B_2\rangle \}$
as required.
Suppose
$\overline {x}\neq \varepsilon $
and
$\langle \overline {x},i,A\rangle =\langle \overline {x},1,B_1\to B_2\rangle $
. Since f conforms to S,
$f(\overline {x},B_1\to B_2)=1$
. Thus,
$\mathrm{sup} (1-f(l\overline {x},B_1),f(r\overline {x},B_2))=1$
. So either
$f(l\overline {x},B_1)=0$
or
$f(r\overline {x},B_2)=1$
. Thus, f either conforms to
$S\cup \{\langle l\overline {x},0,B_1\rangle \}$
or conforms to
$S\cup \{\langle r\overline {x},1,B_2\rangle \}$
, as required.
Theorem 10 (Soundness).
If
$X\vdash A$
, then
$X\vDash _{\mathsf {LRCN}} A$
.
Proof. Suppose
$X\vdash A$
. Let f be any lericone assignment such that
$f(\varepsilon ,B)=1$
for all
$B\in X$
. If
$f(\varepsilon ,A)=0$
, then f conforms to the initial tableau for the corresponding sequent. So by the preceding lemma, in every extension of that initial tableau, f conforms to the set of triples on at least one branch. But since
$X\vdash A$
, all branches eventually contain both
$\langle \overline {x},1,C\rangle $
and
$\langle \overline {x},0,C\rangle $
, which no assignment can conform to. So
$f(\varepsilon ,A)\neq 0$
, as required.
For completeness, we need two further definitions:
Definition 28. A set of triples S is saturated when
$\langle \overline {x},i,A\rangle \in S$
only if S contains one of its
$\langle \overline {x},i,A\rangle $
-extensions.
Definition 29. A tableau is complete just if for each of its branches, the set of triples on that branch is saturated.
Notice that if
$X\not \vdash A$
, then there will be an open branch on each completed tableau for the corresponding sequent.
Theorem 11 (Completeness).
If
$X\not \vdash A$
, then
$X\not \vDash _{\mathsf {LRCN}} A$
.
Proof. Choose a completed tableau for the sequent and an open branch
$\mathcal {B}$
on it. Let
$f_{\mathcal {B}}(\overline {x},p)=1$
iff
$\langle \overline {x},1,p\rangle \in \mathcal {B}$
. By induction on C, one sees that if
$\langle \overline {x},i,C\rangle \in \mathcal {B}$
, then
$f_{\mathcal {B}}(\overline {x},C)=i$
. So since
$\{\langle \varepsilon ,1,D\rangle \mid D\in X\}\subseteq \mathcal {B}$
and
$\langle \varepsilon ,0,A\rangle \in \mathcal {B}$
, we have $f_{\mathcal {B}}(\varepsilon ,D)=1$ for all $D\in X$ and $f_{\mathcal {B}}(\varepsilon ,A)=0$. Thus,
$X\not \vDash _{\mathsf {LRCN}} A$
.
In virtue of offering proof theory via tableaux, we can make two further observations concerning decidability:Footnote 16
Theorem 12 (Decidability of
$\vdash $
).
For finite sets of premises, the consequence relation
$\vdash $
is decidable.
Proof. To determine whether
$X\vdash A$
holds, construct the initial tableau for the corresponding sequent and apply the rules exhaustively. This process is guaranteed to terminate in a finite number of steps with one of two conditions obtaining: either each branch is closed (in which case
$X\vdash A$
holds) or there is a branch that cannot be closed (in which case
$X\vdash A$
fails). Consequently, the tableau calculus provides a decision procedure for the consequence relation
$\vdash $
.
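For readers who want to experiment, the decision procedure described in this proof can be rendered directly on top of the `expand` sketch above; `derivable` and its helpers are our own hypothetical names, and the exhaustive strategy below relies on the fact, implicit in the proof, that every rule produces strictly smaller formulas.

```python
def derivable(premises, conclusion):
    """Sketch of the decision procedure of Theorem 12: expand every triple
    and check that every resulting branch closes."""
    initial = [('', 1, B) for B in premises] + [('', 0, conclusion)]

    def closed(branch):
        # a branch closes when it contains <x, 0, C> and <x, 1, C>
        return any((s, 1 - i, phi) in branch for (s, i, phi) in branch)

    def all_branches_close(branch, todo):
        if closed(branch):
            return True
        if not todo:
            return False            # a completed open branch
        head, rest = todo[0], todo[1:]
        return all(all_branches_close(branch | set(new), rest + new)
                   for new in expand(head))

    return all_branches_close(set(initial), initial)
```

On this encoding, `derivable([], ('imp', ('at', 'p'), ('at', 'p')))` closes immediately, while `derivable([], ('imp', ('at', 'p'), ('at', 'q')))` leaves an open branch, in line with Theorem 6.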
As a corollary, we have a similar result about the set of theorems
$\mathbf {CLV}$
:
Corollary 8 (Decidability of
$\mathbf {CLV}$
).
The set
$\mathbf {CLV}$
is decidable.
Proof.
$A\in \mathbf {CLV}$
iff
$\varnothing \vdash A$
, whence this follows as a corollary of Theorem 12.
Now, we move on to examine proof theory for the faithful version of this system.
5.2 Tableaux for Faithful
$\mathbf {CLV}$
Let us return to the semantically-defined consequence relation of
$\mathbf {CLV}^{\sim }$
to provide a tableau calculus for this system as well. We can provide a tableau calculus for the faithful version of
$\mathbf {CLV}$
by adding to the foregoing calculus the following rule:
Faithfulness Rule

We say that
$X\vdash ^{\sim }A$
holds when the initial tableau for
closes for tableaux including the Faithfulness Rule.
We could also offer an alternative method of defining the tableau calculus by stating that a branch in the calculus omitting the Faithfulness Rule faithfully closes when there are lericone sequences
$\bar {x}\sim \bar {y}$
such that both
$\langle \bar {x},0,A\rangle $
and
$\langle \bar {y},1,A\rangle $
appear on the branch and say that a tableau omitting the Faithfulness Rule faithfully closes if every branch faithfully closes. Then we could say that
$X\vdash ^{\sim }_{2}A$
holds when every completed tableau omitting the Faithfulness Rule faithfully closes.
Definition 30. Let
$\bar {x}$
be a lericone sequence and define its faithful reduct
$\rho (\bar {x})$
recursively by saying that
$\rho (\overline {x})=\overline {x}$
if
$\overline {x}$
contains no occurrences of
$nn$
and
$\rho (\overline {x}nn\overline {y})=\rho (\overline {x}\overline {y})$
otherwise.
The following is immediate.
Lemma 25.
$\bar {x}\sim \bar {y}$
iff
$\rho (\bar {x})=\rho (\bar {y})$
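Lemma 25 means that faithful closure of a branch can be tested by first normalizing every sequence with the reduct $\rho$. The sketch below reuses `remove_nn` from the earlier example as a stand-in for $\rho$; the name `faithfully_closed` is ours.

```python
def faithfully_closed(branch):
    """A branch faithfully closes when it contains <x, 1, C> and <y, 0, C>
    with x ~ y; by Lemma 25 this holds iff the nn-elided sequences match."""
    reduced = {(remove_nn(s), i, phi) for (s, i, phi) in branch}
    return any((s, 1 - i, phi) in reduced for (s, i, phi) in reduced)
```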
In support of proving soundness, we will establish two lemmas.
Lemma 26.
$X\vdash ^{\sim }A$
iff
$X\vdash ^{\sim }_{2}A$
Proof. For left-to-right, we argue contrapositively. Consider a completed tableau for the sequent, constructed without the Faithfulness Rule, with a branch
$\mathcal {B}$
that is not faithfully closed, i.e., for which there are no nodes
$\langle \bar {x},1,B\rangle $
and
$\langle \bar {y},0,B\rangle $
on
$\mathcal {B}$
for which
$\bar {x}\sim \bar {y}$
. By Lemma 25, there are no such
$\bar {x},\bar {y}$
where
$\rho (\bar {x})=\rho (\bar {y})$
. Consequently, no number of applications of the Faithfulness Rule will yield a
$\bar {z}$
such that
$\langle \bar {z},1,B\rangle $
and
$\langle \bar {z},0,B\rangle $
are on an extension of that branch, i.e.,
$X\nvdash ^{\sim }A$
.
For right-to-left, take a completed tableau for the sequent, constructed without the Faithfulness Rule, all of whose branches faithfully close. Then there exist
$\langle \bar {x},1,B\rangle $
and
$\langle \bar {y},0,B\rangle $
on any branch
$\mathcal {B}$
such that
$\bar {x}\sim \bar {y}$
. One can then apply the Faithfulness Rule to each node finitely many times to yield
$\langle \rho (\bar {x}),1,B\rangle $
and
$\langle \rho (\bar {y}),0,B\rangle $
on an extended branch
$\mathcal {B}'$
without any novel branching. By Lemma 25,
$\rho (\bar {x})=\rho (\bar {y})$
whence the extended branch
$\mathcal {B}'$
closes simpliciter.
Lemma 27. A faithful
$f$
cannot conform to a set of triples including two triples
$\langle \bar {x},1,C\rangle $
and
$\langle \bar {y},0,C\rangle $
such that
$\bar {x}\sim \bar {y}$
.
Proof. By definition of f’s conforming to S, if f were to conform to both
$\langle \bar {x},1,C\rangle $
and
$\langle \bar {y},0,C\rangle $
, then
$f(\bar {x},C)=1$
and
$f(\bar {y},C)=0$
, contradicting the faithfulness of f.
This allows us to prove soundness of the tableau system with respect to
$\mathsf {LRCN}^{\sim }$
:
Theorem 13 (Soundness).
If
$X\vdash ^{\sim } A$
, then
$X\vDash _{\mathsf {LRCN}^{\sim }} A$
.
Proof. Suppose that
$X\vdash ^{\sim }A$
and suppose for contradiction that f is a faithful assignment such that
$f(\epsilon ,B)=1$
for each
$B\in X$
and
$f(\epsilon ,A)=0$
. Following the reasoning of Theorem 10, in every extension of the initial tableau for the corresponding sequent in the calculus without the Faithfulness Rule, f conforms to the set of triples on at least one branch. But by Lemma 26, every branch of a completed such tableau includes some nodes
$\langle \bar {x},1,C\rangle $
and
$\langle \bar {y},0,C\rangle $
where
$\bar {x}\sim \bar {y}$
. Since f conforms to one of these branches, $f(\bar {x},C)=1$ and $f(\bar {y},C)=0$, so $f(\bar {x},C)\neq f(\bar {y},C)$, contradicting the faithfulness of f per Lemma 27.
To prove completeness, we again navigate a handful of definitions and lemmas.
Definition 31. A lericone-sensitive assignment
$f$
is atomically faithful if for all atoms
$p$
and sequences
$\bar {x}\sim \bar {y}$
,
$f(\bar {x},p)=f(\bar {y},p)$
.
Lemma 28. A lericone-sensitive assignment
$f$
is atomically faithful if and only if it is faithful simpliciter.
Proof. Right-to-left is immediate. Left-to-right follows from an induction on complexity of formulas, e.g., supposing that
$\bar {x}\sim \bar {y}$
then:

Lemma 29. Let X be a set of formulas with signed atoms
$\mathsf {At}(X)=\lbrace \langle \bar {x},p\rangle \mid \bar {x}=\mathsf {lrcn}(B[p])\mbox { for some }B\in X\rbrace $
and let $f$ be a lericone-sensitive assignment that is atomically faithful with respect to $\mathsf {At}(X)$. Then there exists a faithful
$f'$
such that
$f'(\varepsilon ,B)=f(\varepsilon ,B)$
for all
$B\in X$
.
Proof. Define
$f'$
so that

Then if f is atomically faithful, so is
$f'$
and moreover, by Lemma 28,
$f'$
is faithful without qualification.
Definition 32. Let S be a set of triples. Its faithful closure
$S^{\sim }$
is the set
$\lbrace \langle \bar {x},i,A\rangle \mid \langle \bar {y},i,A\rangle \in S\mbox { and }\bar {x}\sim \bar {y}\rbrace $
.
Lemma 30. Let
$\mathcal {B}$
be an open branch in a saturated, complete tableau in the system including the Faithfulness Rule. Then
$\mathcal {B}^{\sim }$
includes no inconsistent pairs of triples.
Proof. We prove this by contraposition. Suppose that there are triples
$\langle \bar {x},1,C\rangle $
and
$\langle \bar {x},0,C\rangle $
in
$\mathcal {B}^{\sim }$
. Then as
$\bar {x}$
is a finite string, these tuples entered the closure of
$\mathcal {B}$
by finitely many occasions of eliding substrings
$nn$
given initial tuples
$\langle \bar {y},1,C\rangle $
and
$\langle \bar {y},0,C\rangle $
from the branch
$\mathcal {B}$
. But by completeness of the branch, corresponding applications of the Faithfulness Rule must have led to
$\langle \bar {x},1,C\rangle $
and
$\langle \bar {x},0,C\rangle $
having appeared in
$\mathcal {B}$
itself, contradicting its openness.
Together, the foregoing lemmas set up a proof of the completeness of the tableau system with the Faithfulness Rule with respect to
$\mathsf {LRCN}^{\sim }$
.
Theorem 14 (Completeness).
If
$X\nvdash ^{\sim } A$
, then
$X\nvDash _{\mathsf {LRCN}^{\sim }} A$
.
Proof. Suppose that
$X\nvdash ^{\sim }A$
and, choosing an open branch $\mathcal {B}$ on a completed tableau, take its faithful closure $\mathcal {B}^{\sim }$. As before, we define a valuation with
$f_{\mathcal {B}^{\sim }}(\bar {x},p)=1$
precisely when
$\langle \bar {x},1,p\rangle \in \mathcal {B}^{\sim }$
; by reasoning similar to that used in the proof of completeness of
$\mathbf {CLV}$
, we have $f_{\mathcal {B}^{\sim }}(\varepsilon ,B)=1$ for each $B\in X$ and $f_{\mathcal {B}^{\sim }}(\varepsilon ,A)=0$. What remains to be shown is that we can find a faithful
$f_{\mathcal {B}^{\sim }}'$
witnessing this fact. By Lemma 29, however, we are guaranteed that one exists, whence
$X\nvDash _{\mathsf {LRCN}^{\sim }}A$
.
As in the previous section, we can also follow the proof steps of decidability of
$\vdash $
and
$\mathbf {CLV}$
from Theorem 12 and Corollary 8, respectively, to show decidability of the analogous faithful consequence relation
$\vdash ^{\sim }$
and set
$\mathbf {CLV}^{\sim }$
.
Theorem 15 (Decidability of
$\vdash ^{\sim }$
).
For finite sets of premises, the consequence relation
$\vdash ^{\sim }$
is decidable.
Corollary 9 (Decidability of
$\mathbf {CLV}^{\sim }$
).
The set
$\mathbf {CLV}^{\sim }$
is decidable.
6 Conclusion
We introduced a novel and very fine-grained form of hyperformality, lericone-hyperformality. We showed that two well-known relevant logics are in fact lericone-hyperformal. As usual, this gave us lericone variable sharing as a corollary. Consequently, variable sharing for
${{\mathbf {BM}}} $
and
${{\mathbf {B}}}$
can be freed from parasitism on variable sharing results for
$\mathbf {R}$
.
We then introduced a new relevant logic that was also lericone-hyperformal—in fact, what we introduced was the largest lericone-hyperformal sublogic of classical logic—and a “faithful” sister system. These logics were defined semantically via the mechanism of lericone-sensitive assignments. We also provided proof systems for these logics in the form of tableau systems.
There are many natural extensions of this work to be considered. Most pressing to us is actually giving a decent axiomatization of
$\mathbf {CLV}$
and
$\mathbf {CLV}^{\sim }$
, and perhaps providing an alternative proof system for
$\models _{\mathsf {LRCN}}$
. It’s also not clear how far we might extend the proof of our variable sharing result for
$\mathbf {CLV}$
. It doesn’t work when tracking depth alone à la Brady [Reference Brady7]—the largest sublogic of classical logic closed under depth substitutions will still contain all substitutions of
$p\to (q\lor \neg q)$
, so it doesn’t enjoy variable sharing. But perhaps signed depth (à la Logan [Reference Logan24]) would do the job. We’re not sure. It would be good to know whether there is a single strongest lericone-relevant sublogic of
${{\mathbf {CL}}}$
or whether there are multiple, incomparable such logics, as is the case with sublogics of
${{\mathbf {CL}}}$
satisfying the variable-sharing criterion.Footnote
17
Finally, extensions of the vocabulary being considered naturally present opportunities for doing more syntax-tracking. Both modal extensions and extensions with constants in the
$\mathsf {t},\mathsf {f},\top ,\bot $
family are interesting in this regard.
Acknowledgments
We would like to thank Rohan French, Lloyd Humberstone, Tore Fjetland Øgaard, Blane Worley, the members of the Logic Group at Kansas State University, and two referees from this journal for comments and discussion that improved this paper.
Funding
Shawn Standefer’s research was supported by the Sprout Career Development Grant NTU-114L7850 from National Taiwan University.