## 1 Introduction

Paraconsistent logics employ various strategies to reject the rule known as “explosion” ( $A,\neg A\vdash B$ ), according to which one may infer any conclusion from contradictory premises. Such a strategy must come at the expense of some classically-valid logical principle. In Graham Priest’s $Logic\ of\ Paradox$ (LP), all classical theorems are preserved, but many classically-valid inferences such as the disjunctive syllogism ( $A,\neg A\vee B\vdash B$ ) fail to hold [Reference Priest10]. Yet, Priest’s paraconsistent agenda has never been revisionist in nature. As he himself describes this agenda:

“By and large, it has accepted that the reasoning of classical mathematics is correct. What it has wished to do is to reject the excrescence $ex\ contradictione\ quodlibet$ , which does not appear to be an integral part of classical reasoning, but merely leads to trouble when reasoning ventures into the transconsistent…The problem is therefore posed as to how to account for the apparently acceptable but invalid classical reasoning.” [Reference Priest12, p. 221]

To solve this problem, Priest recommends a specific strategy that “stems from the observation that counter-examples to inferences such as the disjunctive syllogism occur only in the transconsistent. Hence, provided we stay within the domain of the consistent, which classical reasoning of course does (by and large), classical logic is perfectly acceptable” [Reference Priest12, p. 222]. That is to say, Priest recommends adopting all of classical reasoning, to the extent that our reasoning remains free of contradiction throughout. To that end, he introduces the logic $Minimally\ inconsistent\ LP$ (MiLP) that aims to recapture all of classical reasoning in cases where no inconsistencies are involved [Reference Priest11, Reference Priest12, pp. 221–230].

Priest’s characterization of MiLP is carried out in purely model-theoretic terms. Consequently, his account leaves open questions such as “What is the exact difference between LP and MiLP?” and “How does MiLP block explosion even though it vindicates all of classical reasoning in consistent situations?” In the present paper, I aim to conduct a proof-theoretic analysis of MiLP, shedding technical as well as conceptual light on such issues.^{1}

The paper proceeds as follows. In Section 2, I introduce both LP and MiLP model-theoretically, and highlight certain properties of the latter. In Section 3, I propose a sequent system for MiLP in the form of the sequent calculus of LP (introduced in [Reference Beall1]), augmented with natural deduction-style rules for assuming and discharging sequents. I provide conceptual justification for this approach, and prove that the resultant system is both sound and complete (along with related results). In Section 4, I point out one philosophical implication of my analysis: it can be used in response to a criticism of MiLP put forward by Beall [Reference Beall2]. [Nota bene: For simplicity, my discussion in this paper is confined to propositional MiLP. While I do believe that the analysis below can in principle be applied to first-order MiLP, the latter task requires quite a few adjustments and so it is left for a follow-up project.]

## 2 LP and MiLP: model-theoretic accounts

Conceptually, LP is based on the idea that sentences may be assigned not only one truth value—namely, “true” or “false”—but also both of these values. Model-theoretically, this approach is cashed out by $strong\ Kleene\ valuations$. Let the values $1,0$ stand for “true” and “false,” respectively, and let the value $\frac {1}{2}$ stand for “both true and false” (what is sometimes referred to in the literature as a “glut”). A function $v:\mathcal {L}\to \{0,\frac {1}{2},1\}$ is a strong Kleene valuation if it is based on the truth tables^{2}:

| $A$ | $\neg A$ |
| --- | --- |
| $1$ | $0$ |
| $\frac {1}{2}$ | $\frac {1}{2}$ |
| $0$ | $1$ |

| $\wedge$ | $1$ | $\frac {1}{2}$ | $0$ |
| --- | --- | --- | --- |
| $1$ | $1$ | $\frac {1}{2}$ | $0$ |
| $\frac {1}{2}$ | $\frac {1}{2}$ | $\frac {1}{2}$ | $0$ |
| $0$ | $0$ | $0$ | $0$ |

| $\vee$ | $1$ | $\frac {1}{2}$ | $0$ |
| --- | --- | --- | --- |
| $1$ | $1$ | $1$ | $1$ |
| $\frac {1}{2}$ | $1$ | $\frac {1}{2}$ | $\frac {1}{2}$ |
| $0$ | $1$ | $\frac {1}{2}$ | $0$ |

An LP model will be identified with such a valuation function. Regarding logical consequence, LP has two designated values: $\frac {1}{2}$ and $1$. Thus, a set of conclusions $\Delta $ LP-follows from some set of premises $\Gamma $ ($\Gamma \models _{LP}\Delta $) if for every model *v*: if $v(\gamma )\in \{\frac {1}{2},1\}$ for all $\gamma \in \Gamma $, then there is some $\delta \in \Delta $ such that $v(\delta )\in \{\frac {1}{2},1\}$.
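Since propositional LP consequence over finitely many atoms can be checked by enumerating all strong Kleene valuations, the definition just given can be sketched in code. The following brute-force checker is purely illustrative (the tuple encoding of formulas and the names `val` and `lp_entails` are my own, not part of the formal apparatus discussed here):

```python
from itertools import product

# Formulas: an atom is a string; compound formulas are tuples
# ('not', A), ('and', A, B), ('or', A, B).

def val(f, v):
    """Strong Kleene value of formula f under valuation v (0, 0.5, or 1)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return 1 - val(f[1], v)                # 1 -> 0, 0.5 -> 0.5, 0 -> 1
    if f[0] == 'and':
        return min(val(f[1], v), val(f[2], v))
    if f[0] == 'or':
        return max(val(f[1], v), val(f[2], v))
    raise ValueError(f)

def lp_entails(premises, conclusions, atoms):
    """Gamma |=_LP Delta: every valuation designating all premises
    designates at least one conclusion (designated values: 0.5 and 1)."""
    for vals in product((0, 0.5, 1), repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(val(g, v) >= 0.5 for g in premises) \
           and not any(val(d, v) >= 0.5 for d in conclusions):
            return False                       # v is a counterexample
    return True

# Excluded middle is LP-valid ...
print(lp_entails([], [('or', 'p', ('not', 'p'))], ['p']))                # True
# ... but the disjunctive syllogism is not:
print(lp_entails(['p', ('or', ('not', 'p'), 'q')], ['q'], ['p', 'q']))   # False
```

A valuation with $v(p)=\frac {1}{2}$, $v(q)=0$ is the counterexample the checker finds in the second call, matching the discussion below.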

Interestingly, LP’s theorems are exactly those of classical logic [Reference Priest10, p. 228]. As for inferences, the situation is different. For example, the disjunctive syllogism $A,\neg A\vee B\models B$ is not LP-valid: any model *v* where $v(A)=\frac {1}{2}$ and $v(B)=0$ is a counterexample. Thus, even though LP and classical logic agree on theorems, LP turns out to be weaker than classical logic when it comes to inferences.

However, inferences like the disjunctive syllogism are essential to classical reasoning, and so we should deny as few instances of them as possible. To meet this challenge, Priest introduces MiLP [Reference Priest11, Reference Priest12, pp. 221–230], whose rationale is that

“given some information from which we have to reason, we can cash out the idea that the situation is no more inconsistent than we are forced to assume by restricting ourselves to those models of the information that are, in some sense, as consistent as possible, given the information—or, as we will say, are minimally inconsistent.” [Reference Priest12, p. 222]

Technically speaking, this idea is laid out as follows. First, we define a strict partial order over models: $v_{1}<v_{2}$ if $v_{1}$ is “more consistent” than $v_{2}$, in the sense that the “glutty” atoms in $v_{1}$ form a proper subset of the “glutty” atoms in $v_{2}$: $\{p\mid v_{1}(p)=\frac {1}{2}\}\subsetneqq \{p\mid v_{2}(p)=\frac {1}{2}\}$. Second, we say that *v* is a $minimal$ model of a given set of premises $\Gamma $ if (i) *v* is a model of $\Gamma $ (i.e., $v(\gamma )\in \{\frac {1}{2},1\}$ for all $\gamma \in \Gamma $) and (ii) for all $v^{\prime }<v$, $v^{\prime }$ is $not$ a model of $\Gamma $, namely, there is some $\gamma \in \Gamma $ s.t. $v^{\prime }(\gamma )=0$. With these definitions in mind, we say that $\Delta $ follows from a set $\Gamma $ in MiLP ($\Gamma \models _{m}\Delta $) if for every *v* that is a minimal model of $\Gamma $ there is some $\delta \in \Delta $ s.t. $v(\delta )\in \{\frac {1}{2},1\}$.
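The definition of $\models _{m}$ can likewise be sketched by brute force: enumerate the models of the premises, discard those whose glut set properly contains the glut set of another model, and quantify over what is left. This is an illustrative sketch only (the encoding and the helper names are mine):

```python
from itertools import product

def val(f, v):
    """Strong Kleene value; f is an atom (str) or a ('not'/'and'/'or', ...) tuple."""
    if isinstance(f, str):
        return v[f]
    a = val(f[1], v)
    if f[0] == 'not':
        return 1 - a
    b = val(f[2], v)
    return min(a, b) if f[0] == 'and' else max(a, b)

def minimal_models(premises, atoms):
    """Models of the premises that are minimal in the order
    v1 < v2 iff gluts(v1) is a proper subset of gluts(v2)."""
    ms = []
    for vals in product((0, 0.5, 1), repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(val(g, v) >= 0.5 for g in premises):
            ms.append(v)
    gluts = lambda v: {p for p in v if v[p] == 0.5}
    return [v for v in ms if not any(gluts(u) < gluts(v) for u in ms)]

def milp_entails(premises, conclusions, atoms):
    """Gamma |=_m Delta: every minimal model designates some conclusion."""
    return all(any(val(d, v) >= 0.5 for d in conclusions)
               for v in minimal_models(premises, atoms))

# The disjunctive syllogism is MiLP-valid ...
print(milp_entails(['p', ('or', ('not', 'p'), 'q')], ['q'], ['p', 'q']))  # True
# ... but explosion still fails:
print(milp_entails([('and', 'p', ('not', 'p'))], ['q'], ['p', 'q']))      # False
```

In the first call the only minimal model assigns $1$ to both *p* and *q*, as noted below; in the second, every minimal model gluts *p* but may still assign $0$ to *q*.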

MiLP is “more classical” than LP [Reference Priest12, p. 225]. In effect, in consistent situations—namely, situations where no premise is a glut—MiLP turns out to be in agreement with classical logic. For example, observe that the minimal models of $\{p,\neg p\vee q\}$ (where *p* and *q* are atoms) are those where both *p* and *q* are assigned $1$. Thus, as opposed to LP, MiLP endorses the corresponding instance of the disjunctive syllogism $p,\neg p\vee q\vdash q$. Similarly, one can show that MiLP endorses any instance of any classically-valid inference rule, provided that the premises have a “classical” model, namely, one free of gluts. If there is no such model—say, in the case of the inference $p\wedge \neg p\vdash q$—MiLP turns out to be in agreement with LP.

As Priest points out [Reference Priest12, pp. 224–225], MiLP is a nonmonotonic logic. Here is a quick example: MiLP endorses the inference $r\vee p!\vdash r$ (where both $p,r$ are atoms, and $p!$ abbreviates $p\wedge \neg p$) since every minimal model of the premise is one where *r* is assigned $1$ and *p* is assigned either $1$ or $0$. But the moment we add $p!$ as an independent premise, the conclusion *r* no longer follows, since every *v* that is a minimal model of the premises $\{p!,r\vee p!\}$ has $v(p)=\frac {1}{2}$, which dictates that $r\vee p!$ cannot be assigned $0$ even if $v(r)=0$.
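This nonmonotonicity example can be confirmed mechanically with the same kind of brute-force minimal-model check as above. The sketch below is my own illustration (the encoding and the helper `milp` are not from the text):

```python
from itertools import product

def val(f, v):  # strong Kleene value; f is an atom (str) or tuple
    if isinstance(f, str):
        return v[f]
    a = val(f[1], v)
    if f[0] == 'not':
        return 1 - a
    b = val(f[2], v)
    return min(a, b) if f[0] == 'and' else max(a, b)

def milp(premises, conclusions, atoms):
    """Gamma |=_m Delta, checked over minimal models."""
    ms = [dict(zip(atoms, vals))
          for vals in product((0, 0.5, 1), repeat=len(atoms))]
    ms = [v for v in ms if all(val(g, v) >= 0.5 for g in premises)]
    gluts = lambda v: {p for p in v if v[p] == 0.5}
    mins = [v for v in ms if not any(gluts(u) < gluts(v) for u in ms)]
    return all(any(val(d, v) >= 0.5 for d in conclusions) for v in mins)

p_bang = ('and', 'p', ('not', 'p'))        # p! = p AND not-p

# r OR p! |=_m r ...
print(milp([('or', 'r', p_bang)], ['r'], ['p', 'r']))          # True
# ... but adding p! as a premise defeats the inference:
print(milp([p_bang, ('or', 'r', p_bang)], ['r'], ['p', 'r']))  # False
```

Once $p!$ is a premise, every minimal model gluts *p*, so the disjunction is designated regardless of *r*, exactly as described above.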

Unfortunately, the model-theoretic account of MiLP does not fully explain how this logic becomes classical-like in some contexts and LP-like in others. Indeed, only a proof-theoretic analysis may provide us with a comprehensive story as to the inference rules and axioms MiLP endorses. To make the need for such a story apparent, it is enough to point out certain important facts about MiLP that, to the best of my knowledge, have not even been mentioned in the literature. Consider the following two examples:

(1) It is well known that giving up the metarule of Weakening on the left (which dictates that a consequence relation be monotonic) opens up a gap between two kinds of rules: additive and multiplicative.^{3} In particular, it has the effect that the left additive disjunction rule:

$$\dfrac {\Gamma ,A\vdash \Delta \qquad \Gamma ,B\vdash \Delta }{\Gamma ,A\vee B\vdash \Delta }$$

does not entail the left multiplicative disjunction rule:

$$\dfrac {\Gamma ,A\vdash \Delta \qquad \Sigma ,B\vdash \Pi }{\Gamma ,\Sigma ,A\vee B\vdash \Delta ,\Pi }$$
Now, LP and CL, being monotonic, each admit both rules. By contrast, whereas one can easily convince oneself that the additive rule is MiLP-valid, here is a counterexample to the multiplicative rule: $q!,p!\vee r\models _{m}r$ and $p!,q!\vee r\models _{m}r$ , but $p!,q!,(p!\vee r)\vee (q!\vee r)\nvDash _{m}r$ .
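The counterexample to the multiplicative rule can be verified with a brute-force minimal-model checker. The sketch is mine (the encoding and the helper `milp` are illustrative, not part of the paper's apparatus):

```python
from itertools import product

def val(f, v):  # strong Kleene value; f is an atom (str) or tuple
    if isinstance(f, str):
        return v[f]
    a = val(f[1], v)
    if f[0] == 'not':
        return 1 - a
    b = val(f[2], v)
    return min(a, b) if f[0] == 'and' else max(a, b)

def milp(premises, conclusions, atoms):
    """Gamma |=_m Delta, checked over minimal models."""
    ms = [dict(zip(atoms, vals))
          for vals in product((0, 0.5, 1), repeat=len(atoms))]
    ms = [v for v in ms if all(val(g, v) >= 0.5 for g in premises)]
    gluts = lambda v: {p for p in v if v[p] == 0.5}
    mins = [v for v in ms if not any(gluts(u) < gluts(v) for u in ms)]
    return all(any(val(d, v) >= 0.5 for d in conclusions) for v in mins)

p_bang = ('and', 'p', ('not', 'p'))
q_bang = ('and', 'q', ('not', 'q'))
atoms = ['p', 'q', 'r']

# Both premise-sequents of the multiplicative rule hold ...
print(milp([q_bang, ('or', p_bang, 'r')], ['r'], atoms))  # True
print(milp([p_bang, ('or', q_bang, 'r')], ['r'], atoms))  # True
# ... but the multiplicative conclusion fails:
big = ('or', ('or', p_bang, 'r'), ('or', q_bang, 'r'))
print(milp([p_bang, q_bang, big], ['r'], atoms))          # False
```

In the third check both *p* and *q* are glutted by the premises, so the big disjunction is designated automatically and *r* may be $0$ in a minimal model.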

(2) Similarly, giving up left Weakening opens up a gap between multiplicative Cut:

$$\dfrac {\Gamma \vdash A,\Delta \qquad \Sigma ,A\vdash \Pi }{\Gamma ,\Sigma \vdash \Delta ,\Pi }$$

and additive Cut:

$$\dfrac {\Gamma \vdash A,\Delta \qquad \Gamma ,A\vdash \Delta }{\Gamma \vdash \Delta }$$
Clearly, classical logic and LP each admit both rules. However, while MiLP admits additive Cut,^{4} it violates multiplicative Cut: clearly $p!\models _{m}p!\vee r$ and $p!\vee r\models _{m}r$ (as we saw), but $p!\nvDash _{m}r$. This counterexample shows that MiLP is not only nonmonotonic, but also nontransitive.^{5}
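The failure of transitivity can also be confirmed by brute force. Again, this is my own illustrative sketch (the encoding and the helper `milp` are not from the paper):

```python
from itertools import product

def val(f, v):  # strong Kleene value; f is an atom (str) or tuple
    if isinstance(f, str):
        return v[f]
    a = val(f[1], v)
    if f[0] == 'not':
        return 1 - a
    b = val(f[2], v)
    return min(a, b) if f[0] == 'and' else max(a, b)

def milp(premises, conclusions, atoms):
    """Gamma |=_m Delta, checked over minimal models."""
    ms = [dict(zip(atoms, vals))
          for vals in product((0, 0.5, 1), repeat=len(atoms))]
    ms = [v for v in ms if all(val(g, v) >= 0.5 for g in premises)]
    gluts = lambda v: {p for p in v if v[p] == 0.5}
    mins = [v for v in ms if not any(gluts(u) < gluts(v) for u in ms)]
    return all(any(val(d, v) >= 0.5 for d in conclusions) for v in mins)

p_bang = ('and', 'p', ('not', 'p'))

print(milp([p_bang], [('or', p_bang, 'r')], ['p', 'r']))  # True:  p! |=_m p! v r
print(milp([('or', p_bang, 'r')], ['r'], ['p', 'r']))     # True:  p! v r |=_m r
print(milp([p_bang], ['r'], ['p', 'r']))                  # False: p! |/=_m r
```

Chaining the first two entailments via multiplicative Cut would yield the third, which fails, witnessing nontransitivity.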

The above examples make clear the need for a comprehensive proof-theoretic account of MiLP, preferably in the form of a sequent system that is as simple as possible.^{6} That is the task I embark on in the next section.

## 3 A sequent system for MiLP

Here is the sequent calculus of LP, given in [Reference Beall1]:

This system was proven sound and complete with respect to $\models _{LP}$ in [Reference Beall1], following [Reference Priest13, p. 157, Theorem 8.7.9].

Now, as a first step toward having such a system for MiLP, we need to observe that any LP-derivable sequent should be MiLP-derivable, since $\models _{LP}\subset \models _{m}$.^{7} On the other hand, we just saw that multiplicative Cut and left Weakening are not MiLP-valid, even though they are LP-valid as well as classically valid.^{8} This is not a big deal, though, for we can drop left Weakening while replacing multiplicative Cut with additive Cut. Neither change matters much: as Beall shows [Reference Beall1], Weakening (on both sides) and multiplicative Cut are both admissible in the above system. Similarly, we can restrict the axioms to sequents that involve only literals without thereby changing the set of derivable sequents.^{9} On top of these changes, we have to add rules that will allow us to recover classical-like reasoning in cases of consistent premises. In particular, we need machinery that somehow keeps track of the amount of inconsistency in our premises.

A question may arise at this point as to whether measuring inconsistency is a matter of logic to begin with, as such a task necessarily discriminates between atoms, classifying them into the two categories of glutty and non-glutty. Indeed, it is not at all the business of paraconsistent logicians $qua$ logicians to make such distinctions. Beall makes this point quite nicely:

“The classical theorist…lets $logic$ do the work of delivering theories that are consistent or trivial. The classical logician needn’t add any rules to her theory to ensure that $\alpha \wedge \neg \alpha $ collapses the theory into triviality; logic does that. But glut theorists, I suggest, reject (or ought firmly reject) as much. If such collapse is appropriate—if the given theory is to be consistent (up to triviality)—the glut theorist must resort to *non-logical* $rules$ to do the job.” [Reference Beall3, p. 442]

In other words, paraconsistent logics merely allow for the possibility of sentences being glutty, but it is up to one’s theory—indeed, one’s $non$ - $logical$ theory—to determine which sentences are in effect both true and false.

Given the scope of this paper—it being a logic paper—we need not concern ourselves here with the question of how one’s non-logical theory determines which sentences are glutty. Rather, I shall assume that such information is already given, and suggest a way to incorporate it into the above proof system.^{10} As a point of departure, notice that since glutty sentences are distinguished by *non-logical* means, we may safely assume that their $logical$ structure is as primitive as possible, namely, that they are atoms. Indeed, on the basis of the information that certain atoms are gluts—given by one’s non-logical theory—one may appeal to one’s $logical$ theory, so as to establish that other, more complex sentences—e.g., a conjunction of gluts—are gluts too. But as far as non-logical theory is concerned, I suggest, only atoms are specified as gluts.^{11}

In light of this last point, it is natural to assume that the information of which sentences (if any) are gluts is codified in the starting points of our derivations, namely, in the axioms with which we begin.^{12} Thus, I suggest that any axiom $\Gamma \vdash _{m}\Delta $ (where $\vdash _{m}$ designates MiLP-derivability rather than LP-derivability) is to be regarded, among other things, as codifying the information that the members of $\{p\mid p,\neg p\in \Gamma \}$ are specified as gluts by one’s non-logical theory. In this way, we can also measure the inconsistency of $\Gamma \vdash _{m}\Delta $, namely, by the set $\Gamma !=\{p\mid p,\neg p\in \Gamma \}\cup \{\neg p\mid p,\neg p\in \Gamma \}$. Clearly $\Gamma !\subseteq \Gamma $.
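The set $\Gamma !$ is a simple syntactic computation on a literal antecedent. As a small illustration (the list-of-literals encoding and the function name are mine):

```python
def gamma_bang(antecedent):
    """Gamma!: the atoms p with both p and not-p in the antecedent,
    together with their negations. Atoms are strings; a negated atom
    is encoded as ('not', p)."""
    glutted = {f for f in antecedent
               if isinstance(f, str) and ('not', f) in antecedent}
    return glutted | {('not', p) for p in glutted}

# p occurs both plain and negated, so Gamma! = {p, not-p}:
print(gamma_bang(['p', ('not', 'p'), 'q']))  # {'p', ('not', 'p')}
# A consistent antecedent has an empty Gamma!:
print(gamma_bang(['p', 'q']))                # set()
```

Note that $\Gamma !\subseteq \Gamma $ holds by construction: every member of the result already occurs in the antecedent.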

Assume, then, that we would like to begin our derivation with $\Gamma \vdash _{m}\Delta $ . How do we express the assertion that the situation is as consistent as possible, namely, that $only$ the members of $\Gamma !$ are gluts? Here is my suggestion: such an assumption is expressed by any sequent of the form of $\Gamma !,p,\neg p,\Sigma \vdash _{m}\Pi $ , where $p\notin \Gamma !$ and $\Sigma ,\Pi $ are arbitrary. Which is to say, we $assume$ that if the situation were even slightly more inconsistent—i.e., that at least one additional sentence were a glut—things would “explode” and any conclusion would follow, no matter what our other premises would have been.

How do we incorporate such an assumption into our calculus? After all, an assumption of the form $\Gamma !,p,\neg p,\Sigma \vdash _{m}\Pi $ need not be derivable, even though the axiom that licenses it, namely, $\Gamma \vdash _{m}\Delta $, is derivable.^{13} I would like to draw here on a proposal that originates in Schroeder-Heister [Reference Schroeder-Heister16] and that was recently brought up in Hlobil [Reference Hlobil7] and Murzi and Rossi [Reference Murzi and Rossi9]: to augment a given sequent calculus with natural deduction-style rules for assuming and discharging sequents.^{14} Given an axiom of the form $\Gamma \vdash _{m}\Delta $, I suggest that we may $assume$ any sequent of the form $\Gamma !,p,\neg p,\Sigma \vdash _{m}\Pi $, where $p\notin \Gamma !$. Such an assumption may be $discharged$ when the information it carries is conjoined with the information codified by the sequent that licenses it, namely, $\Gamma \vdash _{m}\Delta $. To be concrete, I suggest that $\Gamma !,p,\neg p,\Sigma \vdash _{m}\Pi $ may be discharged whenever a sequent whose subderivation makes use of this assumption serves as a premise along with a sequent whose own subderivation makes use of $\Gamma \vdash _{m}\Delta $.

To be more precise, observe that only a two-premise rule is a possible candidate for discharging an assumption in this way, and that such a rule has to have premise-sequents with different antecedents. The only rule that meets these requirements is $L\vee $. Therefore, I suggest that the discharging rule takes the form of the disjunction left rule, which I shall call the “disjunction discharging” (DD) rule. This is a rule of the form:

$$\dfrac {\begin{array}{ccc}\Sigma \vdash _{m}\Pi & \quad & [\Sigma !,p,\neg p,\Theta \vdash _{m}\Lambda ]^{1}\\ \vdots & & \vdots \\ \Gamma ,A\vdash _{m}\Delta & & \Gamma ,B\vdash _{m}\Delta \end{array}}{\Gamma ,A\vee B\vdash _{m}\Delta }\ \mathrm {DD},1$$
where (i) $p\notin \Sigma !$ , (ii) $\Theta ,\Lambda $ are arbitrary, and (iii) the two subderivations marked by the three dots—those of $\Gamma ,A\vdash _{m}\Delta $ and $\Gamma ,B\vdash _{m}\Delta $ from $\Sigma \vdash _{m}\Pi $ and $\Sigma !,p,\neg p,\Theta \vdash _{m}\Lambda $ , respectively—may involve more than one leaf (in addition to $\Sigma \vdash _{m}\Pi $ and $\Sigma !,p,\neg p,\Theta \vdash _{m}\Lambda $ , respectively). The assumption is presented inside square brackets, and the superscript $1$ is not part of the syntax, but rather meant to keep track of the assumptions that need to be discharged. For the same reason, all of our assumptions need to be stated explicitly, and so we do not allow vacuous discharges. On the other hand, we do allow multiple discharges of the same assumption, as well as a discharge of several assumptions at once.

At this point, a few examples would be illuminating. Suppose that we begin our derivation with $p,q\vdash _{m}q$. This sequent codifies the information that our non-logical theory has concluded that the situation is classical, namely, that no atom is a glut ($\{p,q\}!=\emptyset $). Thus, we may safely assume the sequent $p,\neg p\vdash _{m}q$ because if *p* is not a glut, it “explodes.” This assumption will be discharged by $p,q\vdash _{m}q$, which generates the following derivation:

$$\dfrac {[p,\neg p\vdash _{m}q]^{1}\qquad p,q\vdash _{m}q}{p,\neg p\vee q\vdash _{m}q}\ \mathrm {DD},1$$
By contrast, suppose we begin a derivation with $p,\neg p,q\vdash _{m}q$, which codifies the information that *p* is a glut ($\{p,\neg p,q\}!=\{p,\neg p\}$). In this situation, we cannot make the assumption $p,\neg p\vdash _{m}q$: *p* is specified as a glut by our non-logical theory, and so it is not explosive. For this reason, we cannot derive $p,\neg p,\neg p\vee q\vdash _{m}q$. Thus, we may derive as many instances of the disjunctive syllogism as we like, provided that no gluts are involved in our premises.

Similarly, suppose that our derivation begins with $r\vdash _{m}r$, which codifies the information that no sentence is a glut ($\{r\}!=\emptyset $). So we may safely assume $p,\neg p\vdash _{m}r$ and discharge it in the following way:

$$\dfrac {r\vdash _{m}r\qquad \dfrac {[p,\neg p\vdash _{m}r]^{1}}{p\wedge \neg p\vdash _{m}r}\ L\wedge }{r\vee p!\vdash _{m}r}\ \mathrm {DD},1$$
On the other hand, we cannot derive $p,\neg p,r\vee p!\vdash _{m}r$ because the assumption $p,\neg p\vdash _{m}r$ cannot be discharged by $p,\neg p,r\vdash _{m}r$, since *p* is codified as a glut by this axiom. For this reason, $p!,r\vee p!\vdash _{m}r$ is not a derivable sequent.

Here is another example. We begin with $r\vdash _{m}r$ , which licenses the two assumptions $p,\neg p\vdash _{m}r$ and $q,\neg q\vdash _{m}r$ , which are then both discharged at once, by one application of DD:

And here is a proof of an equivalent sequent, where the assumptions are discharged step-wise:

We could also assume and discharge the same sequent multiple times:

In summary, the sequent system for MiLP results from making the following changes in the above LP sequent calculus: (i) replacing multiplicative Cut with additive Cut, (ii) dropping left Weakening, (iii) restricting axioms to sequents that contain only literals, and (iv) adding the possibility to assume and discharge sequents by applications of DD. We say that a sequent $\Gamma \vdash _{m}\Delta $ is MiLP-derivable if it can be derived via a proof tree where (i) every step is an application of one of the system’s rules, and (ii) every assumption is discharged at some point. Here is a full presentation of the resultant system:

Theorem 1. All the rules except for DD are sound. Moreover, a model *v* is a minimal model of the conclusion-sequent of any such rule only if *v* is a minimal model of at least one premise-sequent of that rule.

Remark. Observe that, strictly speaking, DD is also sound: if the premise-sequents are MiLP-valid, then so is the conclusion-sequent. It’s just that this rule allows us to integrate into our proof tree $invalid$ assumptions. Therefore, the soundness proof for proof trees that make use of DD will be treated separately (see Theorem 4).

Proof. By induction on proof height. For height $1$ , notice that the axioms are satisfied by all LP models, and so they must be satisfied by all the minimal models of their antecedents.

Likewise, it is transparent that all the right rules (including the structural rule RW) are sound. One only needs to observe that these rules are all $separable$; i.e., the antecedents are kept fixed in all the premise-sequents and the conclusion-sequent of each such rule. Hence, a model *v* is a minimal model of the conclusion-sequent of each such rule iff *v* is a minimal model of all the premise-sequents. Consequently, since all these rules are LP-sound, they must be MiLP-sound.

Here is the proof that additive Cut is sound: assume to the contrary that $\Gamma ,A\models _{m}\Delta $ and $\Gamma \models _{m}A,\Delta $, but $\Gamma \nvDash _{m}\Delta $. Namely, there is some *v* that is a minimal model of $\Gamma $ such that $v(\delta )=0$ for all $\delta \in \Delta $. Since $\Gamma \models _{m}A,\Delta $, it follows that $v(A)\in \{\frac {1}{2},1\}$. But then *v* must be a minimal model of $\Gamma ,A$: it is a minimal model of $\Gamma $, that is, for every $v^{\prime }<v$ there is some $\gamma \in \Gamma $ such that $v^{\prime }(\gamma )=0$, and so, in particular, any such $v^{\prime }$ is not a model of $\Gamma ,A$. Since $\Gamma ,A\models _{m}\Delta $ we get that there is some $\delta \in \Delta $ such that $v(\delta )\in \{\frac {1}{2},1\}$, contradicting our assumption.

It is rather easy to verify that all one-premise left rules are sound, since the set of minimal models of the conclusion-sequent of each such rule is exactly the set of minimal models of the premise-sequent. For example, consider $L\neg \neg $. We assume that in every minimal model of $\Gamma ,A$ there is some $\delta \in \Delta $ that is assigned either $1$ or $\frac {1}{2}$. But *v* is a minimal model of $\Gamma ,A$ iff it is a minimal model of $\Gamma ,\neg \neg A$, and so the condition is met.

A similar consideration shows that $L\wedge $ is sound, since a model *v* is a minimal model of $\Gamma ,A,B$ iff *v* is a minimal model of $\Gamma ,A\wedge B$.

It remains to deal with $L\vee $. Assume that $\Gamma ,A\vee B\nvDash _{m}\Delta $. Namely, there is a minimal model *v* of $\Gamma ,A\vee B$ such that $v(\delta )=0$ for all $\delta \in \Delta $. Now, $v(A\vee B)\in \{\frac {1}{2},1\}$, and so either $v(A)\in \{\frac {1}{2},1\}$ or $v(B)\in \{\frac {1}{2},1\}$. It follows that *v* is a minimal model either of $\Gamma ,A$, or of $\Gamma ,B$ (otherwise, it wouldn’t be a minimal model of $\Gamma ,A\vee B$). Thus, either $\Gamma ,A\nvDash _{m}\Delta $ or $\Gamma ,B\nvDash _{m}\Delta $, as required.

Now, the following notation will prove useful: let $MC(\Gamma \vdash _{m}\Delta )$ be the set of all MiLP counterexample models of $\Gamma \vdash _{m}\Delta $, namely:

$$MC(\Gamma \vdash _{m}\Delta )=\{v\mid v\text { is a minimal model of }\Gamma \text { and }v(\delta )=0\text { for all }\delta \in \Delta \}.$$

With this notation, we can introduce the following corollary:

Corollary 2. Assume that $\Gamma \vdash _{m}\Delta $ can be derived from $\Sigma _{1}\vdash _{m}\Pi _{1},\ldots ,\Sigma _{n}\vdash _{m}\Pi _{n}$ , where the latter sequents are not necessarily axioms. Then $MC(\Gamma \vdash _{m}\Delta )\subseteq \bigcup _{i=1}^{n}MC(\Sigma _{i}\vdash _{m}\Pi _{i})$ .

Proof. Follows directly from Theorem 1: by soundness, every minimal counterexample model of $\Gamma \vdash _{m}\Delta $ is a minimal model of one of the premise-sequents in the last step of the derivation, which is also a minimal counterexample model of a premise-sequent in the previous step, and so on.

Lemma 3. Assume that we derive $\Gamma ,A\vee B\vdash _{m}\Delta $ by DD, where (i) $\Gamma ,A\vdash _{m}\Delta $ is the discharging sequent, (ii) $\Gamma ,B\vdash _{m}\Delta $ is the discharged sequent, and (iii) DD is applicable since the subderivation of $\Gamma ,A\vdash _{m}\Delta $ makes use of the axiom $\Sigma \vdash _{m}\Pi $ and the subderivation of $\Gamma ,B\vdash _{m}\Delta $ makes use of the assumption $\Sigma !,p,\neg p,\Theta \vdash _{m}\Lambda $ licensed by $\Sigma \vdash _{m}\Pi $. Then,

$$MC(\Gamma ,A\vee B\vdash _{m}\Delta )\subseteq \bigl (MC(\Gamma ,A\vdash _{m}\Delta )\cup MC(\Gamma ,B\vdash _{m}\Delta )\bigr )\setminus MC(\Sigma !,p,\neg p,\Theta \vdash _{m}\Lambda ).$$
Proof. By Corollary 2, $MC(\Gamma ,A\vee B\vdash _{m}\Delta )\subseteq MC(\Gamma ,A\vdash _{m}\Delta )\cup MC(\Gamma ,B\vdash _{m}\Delta )$. So it remains to show that if $v\in MC(\Sigma !,p,\neg p,\Theta \vdash _{m}\Lambda )$, then $v\notin MC(\Gamma ,A\vee B\vdash _{m}\Delta )$. Assume that $v\in MC(\Sigma !,p,\neg p,\Theta \vdash _{m}\Lambda )$. By definition, for every atom *q*, if $q\in \Sigma !\cup \{p\}$ then $v(q)=\frac {1}{2}$. Now, consider any $v^{\prime }$ that is a minimal model of the axiom $\Sigma \vdash _{m}\Pi $. By definition, $v^{\prime }(q)=\frac {1}{2}$ iff $q\in \Sigma !$. Therefore, $v^{\prime }<v$. Moreover, $v^{\prime }$ is clearly an $LP$-model of $\Gamma ,A\vee B$.^{15} Thus, whether or not $v^{\prime }$ is itself a minimal model of $\Gamma ,A\vee B$, it is in any case a model of $\Gamma ,A\vee B$ with $v^{\prime }<v$, and so *v* cannot be a minimal model of $\Gamma ,A\vee B$. Consequently, $v\notin MC(\Gamma ,A\vee B\vdash _{m}\Delta )$, as required.

Theorem 4. Soundness: If $\Gamma \vdash _{m}\Delta $ is derivable then $\Gamma \models _{m}\Delta $ .

Proof. If the proof doesn’t make any use of DD, then $\Gamma \models _{m}\Delta $ by Theorem 1. It remains to deal with proof trees that do make use of DD, as well as of underivable assumptions. The idea behind the proof is simple: by Lemma 3, each time we discharge an assumption we “get rid” of all its minimal counterexample models. Since $\Gamma \vdash _{m}\Delta $ is derivable, any assumption in the proof tree is discharged at some point. Hence, $\Gamma \vdash _{m}\Delta $ has no minimal counterexample models, namely, $\Gamma \models _{m}\Delta $ .

Assume then that the proof of $\Gamma \vdash _{m}\Delta $ makes use of the assumptions $\Sigma _{1}\vdash _{m}\Pi _{1},\ldots ,\Sigma _{n}\vdash _{m}\Pi _{n}$. Without loss of generality, we can assume that any assumption is made and discharged only once in the tree, and that $\Sigma _{i}!\nsubseteq \Sigma _{j}!$ for all $i\neq j$ ($1\leq i,j\leq n$). (Otherwise, we only need to care about assumptions that are “minimal,” and for each such assumption find the “lowest” point in the tree where it is discharged.) Let $\Lambda _{i}\vdash _{m}\Theta _{i}$ be the conclusion-sequent of the application of DD that discharges $\Sigma _{i}\vdash _{m}\Pi _{i}$ ($1\leq i\leq n$). By Lemma 3, for every $v\in MC(\Sigma _{i}\vdash _{m}\Pi _{i})$, we have $v\notin MC(\Lambda _{i}\vdash _{m}\Theta _{i})$. Moreover, *v* is not a counterexample model of any sequent below $\Lambda _{i}\vdash _{m}\Theta _{i}$ in the tree, because Theorem 1 guarantees that the sequent rules do not “add” such counterexample models.^{16} Therefore, for all $1\leq i\leq n$, if $v\in MC(\Sigma _{i}\vdash _{m}\Pi _{i})$ then $v\notin MC(\Gamma \vdash _{m}\Delta )$. But Corollary 2 gives us $MC(\Gamma \vdash _{m}\Delta )\subseteq \bigcup _{i=1}^{n}MC(\Sigma _{i}\vdash _{m}\Pi _{i})$, and so $MC(\Gamma \vdash _{m}\Delta )=\emptyset $, as required.

For simplicity, I shall now drop RW and additive Cut, and prove completeness without them. That will also allow us to obtain admissibility results for these rules.

Theorem 5. Completeness: If $\Gamma \models _{m}\Delta $ then $\Gamma \vdash _{m}\Delta $ is derivable.

Proof. I shall use a slight variation of the method known as proof search. Given an underivable sequent $\Gamma \nvdash _{m}\Delta $ , we first decompose it by applying the rules of MiLP “in reverse.” For all the rules except for $L\vee ,R\wedge $ and DD, such an application results in a sequent with one less-complex formula. For $L\vee ,R\wedge $ and DD, on the other hand, we have to bifurcate the proof tree into two branches, one for each premise-sequent. Since $\Gamma \cup \Delta $ is finite, the decomposition process must come to an end after finitely many steps. At that point, we have a tree whose leaves are all sequents that contain only literals. For, if a given sequent contains a formula that is not a literal, this formula can be further decomposed by the MiLP rules.
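The decomposition phase of this proof search can be sketched in code. The rule set below is reconstructed from the rule names used in this section ($L\wedge $, $L\vee $, $L\neg \neg $, $L\neg \vee $, $L\neg \wedge $ and their right-hand duals), and the tuple encoding of formulas is my own:

```python
def is_literal(f):
    """An atom, or the negation of an atom."""
    return isinstance(f, str) or (f[0] == 'not' and isinstance(f[1], str))

def decompose(gamma, delta):
    """Apply the LP/MiLP rules 'in reverse' until every leaf sequent
    contains only literals. Returns the list of leaf sequents."""
    for i, f in enumerate(gamma):
        if is_literal(f):
            continue
        rest = gamma[:i] + gamma[i + 1:]
        if f[0] == 'and':                                 # L-and
            return decompose(rest + [f[1], f[2]], delta)
        if f[0] == 'or':                                  # L-or: branch
            return (decompose(rest + [f[1]], delta)
                    + decompose(rest + [f[2]], delta))
        g = f[1]                                          # f = ('not', g)
        if g[0] == 'not':                                 # L-not-not
            return decompose(rest + [g[1]], delta)
        if g[0] == 'or':                                  # L-not-or
            return decompose(rest + [('not', g[1]), ('not', g[2])], delta)
        return (decompose(rest + [('not', g[1])], delta)  # L-not-and: branch
                + decompose(rest + [('not', g[2])], delta))
    for i, f in enumerate(delta):
        if is_literal(f):
            continue
        rest = delta[:i] + delta[i + 1:]
        if f[0] == 'or':                                  # R-or
            return decompose(gamma, rest + [f[1], f[2]])
        if f[0] == 'and':                                 # R-and: branch
            return (decompose(gamma, rest + [f[1]])
                    + decompose(gamma, rest + [f[2]]))
        g = f[1]
        if g[0] == 'not':                                 # R-not-not
            return decompose(gamma, rest + [g[1]])
        if g[0] == 'and':                                 # R-not-and
            return decompose(gamma, rest + [('not', g[1]), ('not', g[2])])
        return (decompose(gamma, rest + [('not', g[1])])  # R-not-or: branch
                + decompose(gamma, rest + [('not', g[2])]))
    return [(gamma, delta)]                               # literal-only leaf

# Decomposing the disjunctive syllogism yields two literal leaves:
print(decompose(['p', ('or', ('not', 'p'), 'q')], ['q']))
```

The two leaves produced for the disjunctive syllogism are $p,\neg p\vdash q$ and $p,q\vdash q$, matching the examples of Section 3.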

As a second step, we consider the leaves in the resultant tree, looking for instances of the axioms of MiLP. Call a branch “closed” if it ends with a leaf with an axiom, and “unclosed” otherwise.

As a third step, we consider the unclosed leaves, looking for assumptions that may be discharged by any axiom pinpointed in the previous step. Recall that, given such an axiom $\Phi \vdash _{m}\Psi $, the assumptions it may discharge are of the form $\Phi !,p,\neg p,\Sigma \vdash _{m}\Pi $, where $p\notin \Phi !$ and $\Sigma ,\Pi $ are arbitrary. Call a branch that ends with such an assumption “semi-closed.”

Now, if all the branches in our tree are either closed or semi-closed, we get a proof of $\Gamma \vdash _{m}\Delta $. But we have assumed that this sequent is underivable, and so there must be at least one branch in the tree that is neither closed nor semi-closed; among such branches, there must be one whose leaf is minimal with respect to $<$. Thus, there must be some branch that ends with a sequent $\Gamma _{0}\vdash _{m}\Delta _{0}$ such that, for any other branch that ends with an axiom $\Sigma \vdash _{m}\Pi $, we have $\Sigma !\nsubseteq \Gamma _{0}!$; otherwise $\Gamma _{0}\vdash _{m}\Delta _{0}$ would have been declared an assumption that $\Sigma \vdash _{m}\Pi $ may discharge in the third step. For the same reason, $\Sigma !\nsubseteq \Gamma _{0}!$ for any assumption $\Sigma \vdash _{m}\Pi $ pinpointed in the third step, since any such assumption is “more inconsistent” than the axiom that licenses it, and that axiom, in turn, cannot be “less inconsistent” than $\Gamma _{0}\vdash _{m}\Delta _{0}$. In conclusion, $\Gamma _{0}\vdash _{m}\Delta _{0}$ is minimal with respect to $<$ among all the leaves in the tree: there is no other leaf whose antecedent-gluts form a proper subset of $\Gamma _{0}!$.

As explained above, all the members of $\Gamma _{0}\cup \Delta _{0}$ are literals, and since $\Gamma _{0}\vdash _{m}\Delta _{0}$ is not an axiom, $\Gamma _{0}\cap \Delta _{0}=\emptyset $, and there is no *p* such that $p,\neg p\in \Delta _{0}$. Thus, the model *v* defined by

$$v(p)=\begin {cases}\frac {1}{2} & \text {if }p,\neg p\in \Gamma _{0},\\ 1 & \text {if }p\in \Gamma _{0}\text { and }\neg p\notin \Gamma _{0}\text {, or }\neg p\in \Delta _{0},\\ 0 & \text {otherwise}\end {cases}$$

is clearly a minimal counterexample model of $\Gamma _{0}\vdash _{m}\Delta _{0}$. In symbols, $v\in MC(\Gamma _{0}\vdash _{m}\Delta _{0})$.
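One way to render this countermodel construction concretely: glutted atoms of $\Gamma _{0}$ get $\frac {1}{2}$, atoms that occur positively in $\Gamma _{0}$ or negated in $\Delta _{0}$ get $1$, and everything else gets $0$. The sketch below is my own rendering of that idea (encoding and names are illustrative):

```python
def countermodel(gamma0, delta0, atoms):
    """Build a counterexample valuation for a literal sequent that is
    not an axiom: gamma0 and delta0 are lists of literals, where an
    atom is a string and a negated atom is ('not', p)."""
    v = {}
    for p in atoms:
        if p in gamma0 and ('not', p) in gamma0:
            v[p] = 0.5                       # glutted in Gamma0
        elif p in gamma0 or ('not', p) in delta0:
            v[p] = 1
        else:
            v[p] = 0
    return v

def lit_val(f, v):
    """Strong Kleene value of a literal."""
    return v[f] if isinstance(f, str) else 1 - v[f[1]]

# Gamma0 = {p, not-p, not-q}, Delta0 = {q, r}: literal, not an axiom.
g0 = ['p', ('not', 'p'), ('not', 'q')]
d0 = ['q', 'r']
v = countermodel(g0, d0, ['p', 'q', 'r'])
print(v)                                            # {'p': 0.5, 'q': 0, 'r': 0}
print(all(lit_val(f, v) >= 0.5 for f in g0))        # True: premises designated
print(all(lit_val(f, v) == 0 for f in d0))          # True: conclusions get 0
```

Since the glutted atoms of the valuation are exactly those glutted in $\Gamma _{0}$, un-glutting any of them falsifies a premise, which is why the valuation is a minimal model of $\Gamma _{0}$.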

Next, we have to show that $v\in MC(\Gamma \vdash _{m}\Delta )$. We do that by proving that *v* is a minimal counterexample model of each sequent in the branch that ends with $\Gamma _{0}\vdash _{m}\Delta _{0}$. The proof is done by backward induction on the length of the branch of $\Gamma _{0}\vdash _{m}\Delta _{0}$. In the base case, namely, where the branch is of length $1$, $\Gamma \vdash _{m}\Delta $ is simply $\Gamma _{0}\vdash _{m}\Delta _{0}$, and we’re done. If $\Gamma \vdash _{m}\Delta $ is derived by $R\neg \neg $, then $\Delta $ is of the form $\Delta ^{\prime },\neg \neg A$ and the sequent above $\Gamma \vdash _{m}\Delta ^{\prime },\neg \neg A$ in the tree is $\Gamma \vdash _{m}\Delta ^{\prime },A$. By the inductive hypothesis, $v\in MC(\Gamma \vdash _{m}\Delta ^{\prime },A)$; that is, *v* is a minimal model of $\Gamma $ such that $v(\delta )=0$ for all $\delta \in \Delta ^{\prime }\cup \{A\}$. In particular, $v(A)=0$ and so $v(\neg \neg A)=0$. Consequently, $v\in MC(\Gamma \vdash _{m}\Delta ^{\prime },\neg \neg A)$, as required. The proofs for $R\neg \wedge $, $R\neg \vee $, and $R\vee $ are analogous.

If $\Gamma \vdash _{m}\Delta $ is derived by $R\wedge $ then $\Delta =\Delta ^{\prime },A\wedge B$ , and $\Gamma \vdash _{m}\Delta $ is derived from $\Gamma \vdash _{m}\Delta ^{\prime },A$ and $\Gamma \vdash _{m}\Delta ^{\prime },B$ . Without loss of generality, assume that $\Gamma \vdash _{m}A,\Delta ^{\prime }$ is in the branch of $\Gamma _{0}\vdash _{m}\Delta _{0}$ . By the inductive hypothesis, $v\in MC(\Gamma \vdash _{m}A,\Delta ^{\prime })$ and so for all $\delta \in \Delta ^{\prime }\cup \{A\}$ we have $v(\delta )=0$ . In particular, $v(A)=0$ and so $v(A\wedge B)=0$ . Consequently, $v\in MC(\Gamma \vdash _{m}\Delta ^{\prime },A\wedge B)$ , as required.

If $\Gamma \vdash _{m}\Delta $ is derived by $L\neg \neg $ , then $\Gamma $ is of the form $\Gamma ^{\prime },\neg \neg A$ and $\Gamma \vdash _{m}\Delta $ is derived from $\Gamma ^{\prime },A\vdash _{m}\Delta $ . By the inductive hypothesis, $v\in MC(\Gamma ^{\prime },A\vdash _{m}\Delta )$ . In that case, clearly $v\in MC(\Gamma ^{\prime },\neg \neg A\vdash _{m}\Delta )$ , as required. The proofs for the cases of $L\neg \vee $ , $L\neg \wedge $ , and $L\wedge $ are analogous.

It remains to prove the case where $\Gamma \vdash _{m}\Delta $ is derived by $L\vee $ or by DD. In that case, $\Gamma $ is of the form $\Gamma ^{\prime },A\vee B$, and $\Gamma \vdash _{m}\Delta $ is derived from $\Gamma ^{\prime },A\vdash _{m}\Delta $ and $\Gamma ^{\prime },B\vdash _{m}\Delta $. Without loss of generality, assume that $\Gamma ^{\prime },A\vdash _{m}\Delta $ is in the branch of $\Gamma _{0}\vdash _{m}\Delta _{0}$. By the inductive hypothesis, $v\in MC(\Gamma ^{\prime },A\vdash _{m}\Delta )$; that is, *v* is a minimal model of $\Gamma ^{\prime },A$ such that $v(\delta )=0$ for all $\delta \in \Delta $. Moreover, it is rather easy to verify that for any $v^{\prime }$ that is a minimal model of $\Gamma ^{\prime },B$ there is a leaf $\Sigma \vdash _{m}\Pi $ in the tree (among the leaves that take part in the subderivation of $\Gamma ^{\prime },B\vdash _{m}\Delta $) such that for every atom *p*: $v^{\prime }(p)=\frac {1}{2}$ iff $p\in \Sigma !$.Footnote ^{17} But we already saw that for any such $\Sigma $: $\Sigma !\nsubseteq \Gamma _{0}!$, since $\Gamma _{0}\vdash _{m}\Delta _{0}$ is minimal with respect to $<$ among all the leaves in the tree. Therefore, there is no $v^{\prime }<v$ that is a minimal model of $\Gamma ^{\prime },B$. Consequently, *v* is a minimal model not only of $\Gamma ^{\prime },A$, but also of $\Gamma ^{\prime },A\vee B$. As we saw, $v(\delta )=0$ for all $\delta \in \Delta $, and so $v\in MC(\Gamma ^{\prime },A\vee B\vdash _{m}\Delta )$, as required.
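
As the proof above indicates, $L\vee $ and DD share the same premise–conclusion shape,

$$\frac{\Gamma ^{\prime },A\vdash _{m}\Delta \qquad \Gamma ^{\prime },B\vdash _{m}\Delta }{\Gamma ^{\prime },A\vee B\vdash _{m}\Delta }$$

and the induction step treats them uniformly: it needs only the premise lying on the branch of $\Gamma _{0}\vdash _{m}\Delta _{0}$, together with the minimality argument just given.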

Corollary 6. RW and additive Cut are admissible.

Proof. As explained above, the completeness proof doesn’t make use of these rules.

## 4 Beall’s criticism

In this section, I point out a philosophical implication of the above analysis: it can be used in response to a criticism of MiLP put forward by Beall [Reference Beall2]. To understand the criticism, a few words on reassurance are in order. We saw that MiLP is stronger than LP. Therefore, a suspicion arises that in some situations MiLP will prove too strong for paraconsistent reasoning. Let *X* be some set of sentences, and let $X^{LP},X^{m}$, and $X^{CL}$ be the closures of *X* under the consequence relations of LP, MiLP, and classical logic, respectively. Clearly $X^{LP}\subseteq X^{m}\subseteq X^{CL}$, but the suspicion is that for some *X*, it will turn out that $X^{m}$ is trivial whereas $X^{LP}$ is not. If this suspicion were to be confirmed, then MiLP, unlike LP, would in certain cases unnecessarily collapse into triviality.

To deal with this problem, Priest proves what he calls “reassurance,” namely, that there is no *X* such that $X^{m}$ is trivial but $X^{LP}$ isn’tFootnote ^{18} [Reference Priest11, Reference Priest12, pp. 221–230]. However, Beall points out that a collapse into triviality is but the limiting case of theoretical badness [Reference Beall2, p. 519]. Indeed, a trivial theory contains all sentences; and while it thereby contains all truths, it also contains every untruth. But theoretical badness can also occur if the proof system at hand allows one to prove an untrue sentence from true sentences. To prevent this from happening, the system at hand has to meet $general\ reassurance$, i.e., the condition that if $X^{LP}$ does not contain any untrue sentence (given that the sentences in *X* are all true), then neither does $X^{m}$.

Yet, Beall contends, MiLP fails to meet general reassurance. For as we saw, ${r\vee p!\vdash _{m}r}$ (where *r* is an atom). Suppose then that *r* is some untrue sentence and that *p* is a glut. Hence, $p!$ is true, which makes $p!\vee r$ also true, independently of *r*’s truth value. Beall describes a “real-life” example of such a failure:

“[W]e are convinced, by the powerful parade of philosophical or scientific discovery, that some apparent absurdity is true, but we remain unsure about which one – unsure about the witness for our existential claim or, simplifying, disjunction. (A common analogous case: theists convince us that some god or other exists, but we remain unconvinced as to which one. Similarly, we might be convinced that something outrageous that Richard Routley said is true though unsure of the witness.) Now, among the apparent absurdities are not only contradictions but non-contradictory absurdities (e.g. ‘Priest is a scrambled egg’). In the simplest such case, our theory contains some disjunction of gluts and non-glutty absurdities without containing any witness for the disjunction. But this is enough for the problem: MiLP delivers *r* from the given disjunction, even where *r* is untrue.” [Reference Beall2, pp. 520–521]

Whereas MiLP licenses the inference from $p!\vee r$ to *r*, LP forbids us from making that inference. The way Beall sees it, whereas LP meets general reassurance, MiLP violates it, and so it does not provide a reliable way to recover classical reasoning within a paraconsistent framework.

It is worth pointing out that Priest himself responds to the criticism simply by giving up general reassurance. MiLP is a nonmonotonic, i.e., inductive logic, he contends, and so:

“[I]t is precisely the definition of such logics that they may lead us from truth to untruth. The point is as old as Hume (‘The sun has risen every day so far. So the sun will rise tomorrow.’) and as new as that much over-worked member of the spheniscidae (‘Tweety is a bird. So Tweety flies.’) If they did not have this property, these logics would be deductive logics, which they are not. This is not a bug of such logics; it is a feature. Such logics do not preserve truth, by definition.” [Reference Priest14, p. 740]

But this is a defective solution: not only does it evade responding directly to Beall’s worry, but its justification for doing so is baseless. As was pointed out by Restall [Reference Restall15], a nonmonotonic logic need not lead from truth to untruth,Footnote ^{19} and so one cannot justify MiLP’s alleged leading from truth to untruth on the basis that it is nonmonotonic.

However, Beall’s criticism rests on shaky grounds. The fact of the matter is that MiLP $does$ meet general reassurance, as guaranteed by the above soundness theorem (Theorem 4). To see how it does so in the case of Beall’s example, consider the derivation of the sequent he is referring to:

On Beall’s story, *p* is supposed to be a glut, and *r* an apparent absurdity. By contrast, the above derivation is based on the information (provided by our non-logical theory) that neither sentence is glutty: that is exactly the information codified in the derivation’s starting point, i.e., the axiom $r\vdash _{m}r$. Indeed, only because *p* is $assumed$ to behave classically—expressed by the sequent where the pair $p,\neg p$ “explodes”—can we derive $r\vee p!\vdash _{m}r$ in the first place. Thus, contrary to Beall’s story, the above derivation actually manifests the MiLP agenda, namely, that unless established otherwise, any sentence may safely be assumed to be non-glutty.

## 5 Conclusion

The main purpose of the present paper was to provide a simple sound and complete sequent system for MiLP, in light of which we gained a comprehensive account of how, and to what extent, this logic recovers classical reasoning. The above account also reveals to us that Beall’s criticism rests on shaky grounds, as it is not in line with how MiLP derivations are carried out. In effect, the criticism is not even in line with how such derivations get started.

## Acknowledgments

This paper was supported by a Minerva fellowship at Freie Universität Berlin. The paper benefited from fruitful discussions with Robert Brandom, Ulf Hlobil, Dan Kaplan, Shuhei Shimamura, and Ryan Simonelli.