I did a bit of reorganization and added more examples at paraconsistent logic, including some comments about linear logic.
Here I copy an old discussion from that page:
I made a big change here; I would argue that the failure of $\bot \vdash B$ means that ‘$\bot$’ simply doesn't mean $\bot$; but in any case, I've always seen the definition given in terms of negation. In particular, dual-intuitionistic logic has $\bot \vdash B$ (just as intuitionistic logic has $B \vdash \top$) but is still considered paraconsistent.
Finn: Hmm. (I presume you meant the ’)’ to come before ’but is still…’; as it stands that last statement is false.) The definitions I’ve seen correspond to what I wrote, but you make a good point – if we want to think of $LJ^{op}$ as paraconsistent then the definition by means of ex falso quodlibet does seem wrong. If this is the standard definition, then I certainly won’t object – I’ll just avoid relying on philosophers for information about logic in future.
Toby: Yeah, I'm sure about the standard; our links agree with me too. (And thanks for catching my parenthesis.) As for philosophers, they're not always as precise as mathematicians; that may be the problem. Not to mention, there's a tendency not to include $\bot$ as a logical constant but instead to simply define it as $A \wedge \neg{A}$ (after proving that these are all equivalent, but ignoring the possibility of an empty model), which leads to conflating the two versions. (In a paraconsistent logic where $A \wedge \neg{A} \equiv B \wedge \neg{B}$ need not hold, one ought to catch the inapplicability of this definition, but maybe not.)
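A concrete way to see the gap (my own example — the closed-set semantics is standard for the co-Heyting negation mentioned later in this thread, but not taken from this discussion): in the co-Heyting algebra of closed subsets of $\mathbb{R}$, negation is the closure of the complement, so $A \wedge \neg A$ is the boundary $\partial A$, which is usually not $\bot$:

```latex
\neg A \;=\; \overline{A^{c}}, \qquad A \wedge \neg A \;=\; \partial A,
\qquad\text{e.g. } A = [0,1]:\quad \partial A = \{0,1\} \;\neq\; \emptyset \;=\; \bot
```

So $\bot \vdash B$ holds trivially while $A \wedge \neg A \vdash B$ fails (take $B = \emptyset$), which is exactly the conflation at issue.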
Finn: I think you’ve hit the nail on the head there: I think ’ex falso quodlibet’ means $\bot\vdash A$ (as you said above, this says that ’$\bot$’ means $\bot$, the least truth-value), but it seems my sources may have meant $A\wedge\neg A\vdash B$, without saying so. This is why I never cared much for what’s called ’philosophical logic’ – even though I’m a philosophy graduate studying logic (albeit in a computer science department), it was always too fuzzy for my tastes.
Thanks for clearing this up. Obviously your version should stand.
Toby: You're welcome. But your fancy Latin reminds me that what we have here is technically really the combination of the law of non-contradiction (any contradiction is false: $A \wedge \neg{A} \equiv \bot$) and ex falso quodlibet (anything follows from falsehood: $\bot \vdash B$), so I changed the name above.
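To make the factorization explicit (just restating the two principles with a cut):

```latex
\frac{A \wedge \neg A \;\vdash\; \bot \qquad \bot \;\vdash\; B}
     {A \wedge \neg A \;\vdash\; B}\ (\mathrm{cut})
```

so rejecting either factor is already enough to block ex contradictione quodlibet.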
Finn: Right – I was going to mention that (no, really), but forgot. Perhaps we should incorporate some of this discussion into the article proper, to help resolve any imprecision in other sources. I’ll do that, if you’d rather not, but not now – it’s way past bedtime here in GMT-land.
I added a reference to Mortensen’s book Inconsistent Mathematics.
A few years ago Toby dropped a query box at paraconsistent logic asking about relevance logic:
But why doesn’t the Lewis independent argument work?
My impression, from the small amount of reading I’ve done, is that “no sequent can be valid unless the hypothesis and conclusion have a variable in common” is not a rule of relevance logic but a desideratum. Mortensen’s book, which I happen to have in front of me, says “A logic is said to be relevant if whenever $A\to B$ is a theorem of the logic then $A$ and $B$ share an atomic sentence in common”. A particular relevance logic itself is not defined by this property, but by axioms and rules that are later observed to have that property, and so it may happen that such a logic is also paraconsistent. Does that seem right?
I would like to understand relevant and paraconsistent logics better from a categorical point of view. I feel like I understand linear logic reasonably well, and in some sense it seems like the “universal solution” to paraconsistency, that allows all of the problematic sequents but de-couples their connectives to avoid the conclusion of ECQ. That makes me feel that other paraconsistent logics ought to be viewable as fragments of linear logic, perhaps with extra structural rules or axioms. But whenever I try to read stuff written about paraconsistent or relevant logics, I get stuck because they are nearly always presented “Hilbert-style” as a long list of axioms with one or two rules, whereas I only know how to think in rule-based systems like natural deduction or sequent calculus. Is there any reference that describes the common paraconsistent/relevant logics in such a style?
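To illustrate the role of the structural rules here (my own reconstruction in a standard two-sided sequent calculus, not from the discussion): the classical derivation of ex contradictione quodlibet uses the contradictory hypothesis twice, via contraction, and then weakening on the right.

```latex
\frac{\dfrac{\dfrac{\dfrac{A \vdash A}{A,\ \neg A \;\vdash\;}\ (\neg L)}
            {A \wedge \neg A,\ A \wedge \neg A \;\vdash\;}\ (\wedge L,\ \wedge L)}
      {A \wedge \neg A \;\vdash\;}\ (\text{contraction})}
     {A \wedge \neg A \;\vdash\; B}\ (\text{weakening } R)
```

In linear logic the two uses of the hypothesis come apart: the additive $A \& A^\perp$ permits only one projection, while the multiplicative $A \otimes A^\perp$ entails only the multiplicative $\bot$, not an arbitrary $B$.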
You might try the following one from Kosta Dosen though it discusses just a fragment.
Thanks Thomas! That’s a start. Their basic idea seems to be to identify “relevance logic” with a substructural one having contraction but not weakening, hence corresponding to monoidal categories having diagonals but not projections. So in a sense, these are dual to semicartesian monoidal categories — although somewhat different in that weakening without contraction makes the monoidal unit the terminal object (in linear logic language, $\mathbf{1} = \top$) whereas contraction without weakening doesn’t make the binary monoidal product coincide with the cartesian product (in linear logic language, $\otimes \neq \&$). And they consider adding cartesian products and coproducts plus a monoidal internal-hom (confusingly notated $\to$ rather than $\multimap$), but not any sort of negation, nor anything like $\bot$ or $\invamp$.
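Sketching that asymmetry proof-theoretically (my own gloss): weakening makes every formula entail $\mathbf{1}$, so $\mathbf{1}$ and $\top$ become interderivable, whereas contraction gives only one direction of a comparison between $\&$ and $\otimes$:

```latex
\frac{\vdash \mathbf{1}}{\top \;\vdash\; \mathbf{1}}\ (\text{weakening})
\qquad\qquad
\frac{\dfrac{\dfrac{A \vdash A \qquad B \vdash B}{A,\ B \;\vdash\; A \otimes B}\ (\otimes R)}
      {A \& B,\ A \& B \;\vdash\; A \otimes B}\ (\& L,\ \& L)}
     {A \& B \;\vdash\; A \otimes B}\ (\text{contraction})
```

Since $\mathbf{1} \vdash \top$ holds already, weakening gives $\mathbf{1} \dashv\vdash \top$; but the converse $A \otimes B \vdash A \& B$ would itself need weakening, so under contraction alone $\otimes$ and $\&$ remain distinct.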
This is a nice picture, but what’s not clear to me is how this logic of a “relevance monoidal category” corresponds to the various axiomatic systems that relevance-logicians tend to write down, which seem to have a lot more variety than one would expect from this categorical picture. In particular, relevance-logicians and paraconsistentists frequently seem to confuse the additive and multiplicative connectives — as well as giving the multiplicatives second-class status if they make any appearance at all, so the connection to a monoidal-category framework in which the multiplicative conjunction is central seems murky to me.
That’s an interesting article by Došen and Petrić. I’m especially interested in the reference Helman, Completeness of the Normal Typed Fragment of the $\lambda$-system U, which they say gives an interpretation of “relevant lambda terms” in pointed sets.
I don’t know very much about relevance logics besides linear logic, but recently I got interested in an old reference by Grisha Mints (cited in his “Closed categories and the theory of proofs”):
This paper is in Russian, which I don’t read, but a colleague who reads a bit of it (Danko Ilik) helped me to get the gist. Mints gives an intriguing sequent calculus on p.3/p.92 for the negation-free fragment of the relevance logic “R” extended with an S4 box modality (“N”). Contraction is built into the system through some careful use of punctuation, but to me it looks a lot like intuitionistic linear logic! In particular, I thought that the “N” modality was acting like a “!” to add the option of weakening a formula…but then again I don’t read Russian, and Danko and I couldn’t quite figure out whether that’s what is really going on. Maybe someone with better Russian knowledge can answer that?
In any case, though, I found it interesting to learn that there are relevance logics which do distinguish two forms of conjunction. The logic R is due to Anderson and Belnap, and apparently they referred to the multiplicative conjunction $A\circ B$ as “intensional” conjunction.
Since we had no such entry yet, I began relevance logic. ’Relevant logic’, an alternative expression, is redirected there.
@#5: I must confess I’ve never got too deeply into relevance logic. Concerning non-standard negations you might see if some of the references at co-Heyting negation can do something for you. In this context the natural idea is to try to model a paraconsistent logic by using Lawvere’s operations on the subtopos lattice but unfortunately (at least to me) these seem to be notoriously difficult to calculate.
I’ve also encountered some relevance/paraconsistent logicians who call the multiplicative conjunction “fusion”.
I created relevance monoidal category, with a more abstract definition that I think is equivalent to Došen and Petrić’s.
Noam, did you and Danko understand enough of the Russian to be able to say definitely what is going on if we ignore the $N$ modality? I agree that it looks a lot like ILL with $\otimes,\&,\oplus,\multimap$ (notated $\circ,\&,\vee,\to$), though without reading any Russian myself I can’t make heads or tails of the punctuation. Could we say that the relevance logic R (without the modality $N$) coincides with the logic of ILL with contraction added? That would be very nice.
Looking at the paper again and trying to remember my discussion with Danko, I think the punctuation is actually just to deal with the modal parts. The sentence after the logical rules of the sequent calculus says something like “where $\{\Omega\}$ denotes $\langle\Omega\rangle$ when in the scope of $\lceil\,\rceil$, and denotes $\Omega$ otherwise”, so I think if we concentrate on the $\circ,\&,\vee,\to$ fragment it’s safe to ignore the braces and all the structural rules involving the other forms of punctuation. Then it looks like you just have the usual ILL rules plus the contraction rule $(C)$. This reading is also suggested by a nice quote from the end of Mints’ “Closed categories and the theory of proofs”, which is what made me originally want to dig up the paper:
We note that the theorem about the normal form for canonical morphisms $f : A \to B$ for $A,B$ which do not contain $I$, in symmetric monoidal closed categories, is equivalent to the one proved in [Mints 1972] on cut elimination for a formal system which is obtained by excluding the contraction rule from the $(0,\supset)$-fragment of the relevant logic $R$. Actually [Mints 1972] deals with the calculus $R^+$, a positive fragment of $R$, containing as connectives $(0,\supset,\vee,\&)$ [sic, but should that be $\circ$ instead of 0?], as well as the contraction rule. However the theorem on cut-elimination in [Mints 1972] is formulated in a way that gives us the result we are interested in: reductions do not introduce applications of the contraction rule (cf. also [Mints 1968]). Thus the problem posed in [Mac Lane 1976, Bull. of the AMS], of obtaining the theorems in [Kelly & Mac Lane 1971] by direct application of Gentzen methods, was solved before it was set. The author learnt of the work [Kelly & Mac Lane 1971] only from [Mac Lane 1976].
Great. And it looks like on p2/p91 he has a Hilbert-style axiomatization that looks a lot like the logic R, although he’s added a couple of rules to do with $\circ$, omitted the negation rules (since he’s considering a positive fragment), and changed the distribution rule to look more linear (probably equivalent).
I would be more confident if I understood how to translate between these two presentations. I can see how to prove all the Hilbert axioms from the sequent calculus, except for distribution which I don’t see any way to get; the natural distribution of linear logic would involve the multiplicative conjunction rather than the additive one. I’m also confused by the rule called “adjunction” in the Hilbert-style version, $A,B\vdash A\& B$, which doesn’t seem to be valid under the usual interpretation of comma as multiplicative. And I’m even less sure how you would go the other way: show that all the rules of sequent calculus are derivable from the axioms and the few rules of the Hilbert system?
By the way the paper was published in an English translation as
You might want to have a look at
as well.
Re #11, no, there isn’t any way to prove distributivity in either form $A \& (B \oplus C) \vdash (A \& B) \oplus C$ or $A \& (B \oplus C) \vdash (A \& B) \oplus (A \& C)$ in linear logic, even with contraction. Huh. I guess the most likely explanation is that the extra punctuation in Mints’ system really is playing an important role in axiomatizing the non-modal connectives (at least for $\&$). So much for reading a paper without understanding the language! (Re #12, it would be great to get hold of that English translation!)
I’m familiar with the issue of (non)distributivity of $\&$ over $\oplus$ also showing up in comparisons of linear logic with BI (bunched implications) logic. BI is motivated in part by a Kripke/presheaf semantics over a “monoid of resources”, where the idea is that the usual “intuitionistic” conjunction and disjunction ($w \models A \wedge B$ iff $w \models A$ and $w \models B$; $w \models A \vee B$ iff $w \models A$ or $w \models B$) can coexist with a multiplicative conjunction defined by convolution ($w \models A * B$ iff $\exists w_1,w_2.\, w = w_1\circ w_2$ and $w_1 \models A$ and $w_2 \models B$). The sequent calculus for BI is defined using two separate forms of structural punctuation: both a comma “,” internalizing $*$ and a semicolon “;” internalizing $\wedge$. The rules of weakening and contraction can be freely applied for the semicolon, and this lets you capture distributivity of $\wedge$ over $\vee$ in contrast to the situation for linear logic.
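A toy executable version of that Kripke clause may help (all names here and the choice of monoid $(\mathbb{N},+,0)$ are illustrative, not from O’Hearn and Pym): propositions are modeled extensionally as sets of worlds, the additive connectives are pointwise, and $*$ is convolution over decompositions of the current world.

```python
# A small, illustrative model of BI's Kripke clauses over the monoid (N, +, 0).
# Propositions are modeled extensionally as sets of worlds (here 0..7).
WORLDS = range(8)

def conj(A, B):
    """Additive conjunction: w |= A /\\ B iff w |= A and w |= B."""
    return {w for w in WORLDS if w in A and w in B}

def disj(A, B):
    """Additive disjunction, also pointwise."""
    return {w for w in WORLDS if w in A or w in B}

def star(A, B):
    """Multiplicative conjunction by convolution:
    w |= A * B iff w = w1 + w2 with w1 |= A and w2 |= B."""
    return {w for w in WORLDS
            if any(w1 in A and (w - w1) in B for w1 in range(w + 1))}

A, B, C = {1, 2}, {2, 3}, {4}
# The pointwise connectives distribute as in intuitionistic logic:
assert conj(A, disj(B, C)) == disj(conj(A, B), conj(A, C))
# ...while * is a genuinely different conjunction from /\:
print(sorted(star(A, B)))   # → [3, 4, 5]
print(sorted(conj(A, B)))   # → [2]
```

The pointwise clauses make $\wedge$/$\vee$ an ordinary Heyting-style pair on sets of worlds (so distributivity is automatic), while $*$ depends on the monoid structure.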
I can see a 2 page preview of the English translation on the Springer web site. After giving the Hilbert axioms he says
We describe the Gentzen variant $G R^+$ of the calculus $R^+$. The formulas of this calculus are: 1) the formulas of $R^+$; 2) expressions of the form $⌜\Gamma⌝$, where $\Gamma$ is a list (possibly empty) of formulas of $G R^+$; 3) expressions of the form $\langle\Gamma\rangle$, where $\Gamma$ is a list (possibly empty) of formulas of $G R^+$.
Sequents are expressions of the form $\Gamma\to C$, where $\Gamma$ is a list of formulas of $G R^+$, and $C$ is a formula of $R^+$. The positive part of a list $\Gamma$ is any occurrence of a formula in $\Gamma$ that is not in the scope of logical connectives. If $V$ is [in?] the positive part of a list $\Gamma$ and $\Delta$ is a formula or list of formulas, then $\Gamma_V[\Delta]$ denotes the result of the substitution of $\Delta$ for $V$. Wherever there is no risk of confusion we omit any mention of an occurrence of $V$ and simply write the result of the substitution in the form $\Gamma[\Delta]$.
Maybe if we combine that with your
where $\{\Omega\}$ denotes $\langle\Omega\rangle$ when in the scope of $⌜\;⌝$ and denotes $\Omega$ otherwise.
we can figure out what his punctuation is doing. Given his comments about “positive parts”, the fact that the rules only act on formulas in the “positive part”, and that his two kinds of brackets can’t occur on the RHS of the sequent, it seems to me like his brackets should really be regarded as structuring the context rather than as an actual operation on formulas. So perhaps it is actually similar to BI? In particular, his rule (Y) seems to be permitting weakening inside $⌜\;⌝$.
(Does one really say that a judgmental structure like comma or semicolon “internalizes” the corresponding connective? My natural inclination would have been to say rather that the connective internalizes the judgmental structure, since the connective is something inside the logic and the judgmental structure is outside of it.)
Given his comments about “positive parts”, the fact that the rules only act on formulas in the “positive part”, and that his two kinds of brackets can’t occur on the RHS of the sequent, it seems to me like his brackets should really be regarded as structuring the context rather than as an actual operation on formulas. So perhaps it is actually similar to BI?
Yes, I think you’re right. I managed to get a hold of that English translation, and see that after the proof of Lemma 1 (the translation from $R^+$ to $GR^+$, on p.94 of the Russian version) and before the proof of Lemma 2 (the translation from $GR^+$ to $R^+$), he describes how to translate sequents:
In this case, for $n \gt 0$ the formula $\langle A_1,\dots,A_n \rangle$ is replaced by $(A_1 \circ \dots \circ A_n)$ and $⌜ A_1,\dots,A_n ⌝$ by $(A_1 \& \dots \& A_n)$. The formula $\langle\,\rangle$ is replaced by $\&_{i\le m}(p_i \to p_i)$ or $N\&_{i\le m}(p_i \to p_i)$ depending on whether $N$ does or does not occur in the deduction. Here $p_1,\dots,p_m$ is the complete list of variables occurring in the given deduction. The formula $⌜ \, ⌝$ cannot occur in a $GR^+$ deduction, because it does not occur in axioms and is not induced by the deduction rules.
So it does seem similar to the approach in BI, and I’m guessing probably similar to Michael Dunn’s sequent calculus or the sequent calculus based on “Display Logic” alluded to in the SEP article.
Does one really say that a judgmental structure like comma or semicolon “internalizes” the corresponding connective? My natural inclination would have been to say rather that the connective internalizes the judgmental structure, since the connective is something inside the logic and the judgmental structure is outside of it.
Right, I was being loose, and usually one says that the connective internalizes the judgmental structure. Though in systems with a lot of punctuation (cf. Display Logic) sometimes it’s hard to keep track of who’s internalizing whom.
Back to your question about the “adjunction” rule $A,B \vdash A\& B$, which Mints calls “composition”, I think I understand what’s going on. Like the rule of modus ponens in Hilbert-style systems, this should be read as “if $A$ is a theorem and $B$ is a theorem then $A\& B$ is a theorem”. It should not be read as an implication $A\circ B \to A\&B$, which would indeed break the relevance interpretation by composing with $A\&B \to A$ and $A\&B \to B$.
Now I have a copy of the translation too.
The formula $\langle\,\rangle$ is replaced by $\&_{i\le m}(p_i \to p_i)$ or $N\&_{i\le m}(p_i \to p_i)$ depending on whether $N$ does or does not occur in the deduction
This seems pretty weird. I guess part of what’s going on is that he has to work around the unaccountable omission of the nullary conjunctions $\mathbf{1},\top$ in his logic. But $p_i \to p_i$ should already be $\mathbf{1}$, since his $\to$ is $\multimap$, so why does he want to $\&$ together $m$ of them? Overall his system feels a bit ad hoc; I guess I should go look up Dunn’s version and “Display logic”.
Like the rule of modus ponens in Hilbert-style systems, this should be read as “if $A$ is a theorem and $B$ is a theorem then $A\& B$ is a theorem”.
Ah, I see: the symbol $\vdash$ in a Hilbert-style system means something different from in a sequent calculus. The notation would match better if we wrote the “adjunction/composition” and “modus ponens” rules as, well, rules:
$\frac{A \quad B}{A\& B} \qquad \frac{A \quad A\to B}{B}$

the symbol $\vdash$ in a Hilbert-style system means something different from in a sequent calculus.
I guess this was already remarked upon at deductive system, but I just created Hilbert system including a more extended discussion.
Re: #16, of course $\mathbf{1}$ is logically equivalent to $\mathbf{1}\& \mathbf{1}$ (any positive finite number of times), since $\&$ has diagonals and projections. So maybe his big $\&$ is just to avoid treating any variable specially.
Also, I finally looked back at the BI paper linked in #13, and I see a number of other references to sequent calculi for relevant logics as predecessors. They appear to claim that BI differs from the positive fragment of the relevant logic R (in Dunn’s sequent calculus form, which I have not yet looked up) exactly in that (1) positive R has contraction for the multiplicative conjunction, while (2) BI has the multiplicative true, the additive false, and the additive implication. So BI with (2) removed and (1) added ought to be another sequent calculus for R, which is probably more or less like Mints’s, but cleaner and more modern.
Oh, it’s nice to see that O’Hearn and Pym’s paper has an explicit discussion of relevance logics – I read it a long time ago and did not notice/remember that. Yes, I believe their claim that R can be seen as a fragment of BI with contraction added for the multiplicative conjunction “$*$”. It might be worth creating a page for BI itself, though I don’t have time for that right now (still, I’ll go ahead and add a link to O’Hearn and Pym’s paper to relevance logic).
I was thinking making a page for BI too. As a matter of terminology, does “BI” still generally refer specifically to their exact system with multiplicative and additive conjunction, additive disjunction, and multiplicative and additive implication? Or has it become a more general term for logics whose contexts have “multiple commas”?
Do you know of any BI-like sequent calculus that incorporates negation, hopefully leading to the full logic R (not just its “positive part”)? I guess one obvious approach would be to allow multiple formulas on the right, but it’s not obvious to me how that should work. Should the right-hand contexts be additive or multiplicative, or bunched? How does an involutive negation that moves formulas back and forth interact with bunched contexts?
I see that this presentation of R does not have LEM for the additive disjunction ($A\vee\neg A$); instead it has DNE $\neg\neg A\multimap A$ for the relevant/multiplicative implication, which is probably at least closely related to “multiplicative LEM” $A\invamp\neg A$. But I feel like I’ve seen other relevant logics that do have additive LEM, though I can’t find any right now.
In my (limited) experience “BI” refers to the specific system in their paper, which is also closely tied to separation logic (an extension of Hoare logic used to reason about memory separation and memory sharing in programs with pointer structures). But I noticed that there is a discussion of “bunches” in Stephen Read’s Relevant Logic: A Philosophical Examination of Inference, which is cited in O’Hearn and Pym’s paper, so it’s likely that they got their terminology from there. I’m also pretty sure that I’ve heard people more generally talking about a sequent calculus having “bunches” when contexts are not merely lists but have some non-trivial tree structure.
Re #22, I remember seeing an attempt at such a “multiple conclusion BI” in David Pym’s book, The Semantics and Proof Theory of the Logic of Bunched Implications, but that it had various shortcomings. From a few years ago, there is also Classical BI: Its Semantics and Proof Theory by Brotherston and Calcagno. Their approach is based on Display Logic, but I am not at all familiar with the details.
Why not start stubs for terms we don’t yet have entries for? So separation logic.
Here’s a master’s thesis on bunched implications (BI) and separation logic. Lars Birkedal is the head of the group I am in now.
Ok, I created bunched logic.
On p17, Brotherston and Calcagno say
one of Belnap’s original applications of display logic was in giving a display calculus for the full relevant logic R
However, Belnap’s 1982 display logic paper says instead on p23/p397
The problem of finding a consecution calculus for R without boolean negation remains open. The difficulty is that relevance logic has things to say about boolean conjunction and disjunction, so that we need the boolean family to help us out; but the techniques of this paper do not permit enjoying boolean conjunction and disjunction without carrying along boolean negation as well.
I gather that by “boolean” he means the same as “additive”. (What is a “consecution calculus”?)
Another reference I stumbled over: A consecution calculus for positive relevant implication with necessity by Belnap, Gupta, and Dunn. Their introduction says
JMD’s result for the system without necessity was presented in a colloquium at the University of Pittsburgh in the spring of 1968… In the meantime, Minc in 1972… proved essentially the same theorem we report below, i.e., cut for the system with necessity.
We should first note an obviously needed correction [to Minc]: cut cannot be proved for the system he describes unless, in the statement in the structural rules, variables for formulas of $R_+$ are replaced by variables for what below we call ’antecedents’, i.e., formulas of the consecution calculus $GR_+$. Otherwise case 1.3 of [7] cannot go through. In the terminology developed below, Section 6, the rules of [7] as stated are not all closed under parametric substitution. For example, it is crucial to be able to contract not only pairs of single formulas, but pairs of lists of such formulas — see [1], p. 390, for an instructive example. But this correction is easily made, after which the proof of [7] is fine. What we offer below is a new proof…
Apparently Minc = Mints and [7] is the paper of his we’ve been looking at.
Ah, but in footnote 12 of the display logic paper, Belnap then says
Let me take this opportunity to correct an error in Belnap et al. (1980): that paper wrongly stated that the rules of Minc (1972) were inaccurate. “Our” mistake was due to my inattentive conflation of Minc’s clearly expressed distinction between “formula” and “formula of $R+$“.
I guess a “consecution calculus” is another word for “sequent calculus”.
Re #27: I’m not exactly sure what the issue is. So Belnap’s calculus for R also includes an additive negation (both as a structural connective and a logical connective), and he says that it’s difficult to remove it (does he mean both the structural and the logical, or just the structural connective?). But isn’t it still conservative over R?
Re #28: that’s good to know.
I presume that he means just the structural connective; you should be able to remove the logical one by just, well, removing it and its associated rules, right? He does claim that it’s conservative over R, or at least that a Hilbert system containing additive negation is conservative over R, and presumably this is proven in the references he cites.
But I think it’s reasonable to be incompletely satisfied with a sequent calculus for a logic that requires introducing a new structural connective not present in the original logic. Categorically, for instance, I would expect models of R to be categories with a monoidal structure that is both star-autonomous and relevance and that are also distributive categories, and I see no reason every such category would contain an “additive negation”. A faithful sequent calculus for R ought to be a syntax for morphisms in such categories, so it shouldn’t contain anything not interpretable therein.
These papers are over 30 years old, of course. I wonder whether any progress has been made since then.
I added a mention of paraconsistent set theory, and its difficulties with Curry’s paradox, to paraconsistent logic.
From #23:
I remember seeing an attempt at such a “multiple conclusion BI” in David Pym’s book, The Semantics and Proof Theory of the Logic of Bunched Implications, but that it had various shortcomings.
Finally tracked down a copy of this book. His multiple conclusions are also bunched, with comma and semicolon there representing multiplicative and additive disjunctions (there written $\#$ and $\vee$), and he has both an additive and multiplicative negation, written $\neg$ and $\sim$, both of which are classical (we have $A\vee \neg A$ and also $A \# \sim A$).
His rules for the negations, conjunctions, and disjunctions all make sense, although I think there is a typo in his rule $\wedge R$; he has
$\frac{\Gamma \vdash \phi;\Delta \qquad \Gamma' \vdash \psi;\Delta'}{\Gamma;\Gamma' \vdash \Delta(\phi\wedge \psi); \Delta'(\phi\wedge\psi)}$

which doesn’t even parse for me; in the conclusion $\Delta(-)$ and $\Delta'(-)$ are “bunches with holes” whereas in the premises they are not. So it seems to me that the premises should be $\Gamma \vdash \Delta(\phi)$ and $\Gamma' \vdash \Delta'(\psi)$; this also would be a precise dualization of his rule $\vee L$.
He claims (without proof) that this calculus has cut-elimination, but that cut-elimination fails when certain rules for the implications $\multimap$ (which BI people write with a $\ast$ instead of the $\circ$ in $\multimap$, but I don’t know how to make that symbol here) and $\to$ are added. Unfortunately he gives no indication of what goes wrong with cut-elimination.
It seems to me that since both $\neg$ and $\sim$ are classical, we ought to be able to define $(A\multimap B) = (\sim A \# B)$ and $(A\to B) = (\neg A \vee B)$, and indeed he says that “it may readily be seen that $(A\multimap B) = (\sim A \# B)$”, although it’s not clear to me exactly what this means before $\multimap$ has been defined. In any case this definition of $\multimap$ would apparently not validate the rules he wants for it; for instance, his $\multimap L$ rule is
$\frac{\Gamma' \vdash \Delta(\phi) \qquad \Gamma(\psi) \vdash \Delta'}{\Gamma(\Gamma',\phi\multimap\psi) \vdash \Delta(\Delta')}$

whereas his rule $\# L$ only allows introduction at the “top of a bunch”:
$\frac{\Gamma,\phi \vdash \Delta \qquad \Gamma',\psi\vdash \Delta'}{\Gamma,\Gamma',\phi \# \psi \vdash \Delta,\Delta'}$

so it would only yield a similarly restricted left-rule for $\multimap$:
$\frac{\Gamma' \vdash \Delta,\phi \qquad \Gamma,\psi\vdash \Delta'}{\Gamma,\Gamma',\phi \multimap \psi \vdash \Delta,\Delta'}$

He does mention that “we should be able to recover cut-elimination” with such a restriction on $\multimap L$, but says that “such a system would be of questionable semantic value”. I have no idea what that means, and he gives no further explanation.
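Spelling out how such a restricted rule would arise from defining $(\phi\multimap\psi)$ as $(\sim\phi \# \psi)$ (my reconstruction, assuming the expected rule moving $\sim$ across the turnstile):

```latex
\frac{\dfrac{\Gamma' \;\vdash\; \Delta,\ \phi}{\Gamma',\ \sim\phi \;\vdash\; \Delta}\ (\sim L)
      \qquad \Gamma,\ \psi \;\vdash\; \Delta'}
     {\Gamma',\ \Gamma,\ \sim\phi \,\#\, \psi \;\vdash\; \Delta,\ \Delta'}\ (\# L)
```

which introduces $\sim\phi \# \psi$, i.e. $\phi\multimap\psi$, only at the top of a bunch, matching the restricted $\multimap L$.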
So it seems to me that there is a very reasonable “doubly-classical BI” with two conjunctions, two disjunctions, two implications (with the weaker rule above), and two negations. If we restrict attention to the additive conjunction and disjunction and the multiplicative implication and negation, and add contraction for the comma, this again looks a lot like classical R. Unlike the display logic approach, it seems more like we can really ignore the additive negation, since it has no direct structural-connective analogue, and we can certainly ignore the additive implication. The multiplicative conjunction and disjunction are present as structural connectives whether or not we internalize them, but I would think those shouldn’t be as problematic to relevantists, since they can be defined in terms of the multiplicative implication and negation. However, it’s not obvious to me whether the resulting system induces exactly the same relations between these connectives that hold in classical R, or exactly what its categorical semantics will look like.
Nevermind