Nevermind

From #23:

I remember seeing an attempt at such a “multiple conclusion BI” in David Pym’s book, The Semantics and Proof Theory of the Logic of Bunched Implications, but that it had various shortcomings.

Finally tracked down a copy of this book. His multiple conclusions are also bunched, with comma and semicolon there representing multiplicative and additive disjunctions (there written $\#$ and $\vee$), and he has both an additive and a multiplicative negation, written $\neg$ and $\sim$, both of which are classical (we have $A\vee \neg A$ and also $A \# \sim A$).

His rules for the negations, conjunctions, and disjunctions all make sense, although I think there is a typo in his rule $\wedge R$; he has

$$\frac{\Gamma \vdash \phi;\Delta \qquad \Gamma' \vdash \psi;\Delta'}{\Gamma;\Gamma' \vdash \Delta(\phi\wedge \psi); \Delta'(\phi\wedge\psi)}$$

which doesn’t even parse for me: in the conclusion $\Delta(-)$ and $\Delta'(-)$ are “bunches with holes”, whereas in the premises they are not. So it seems to me that the premises should be $\Gamma \vdash \Delta(\phi)$ and $\Gamma' \vdash \Delta'(\psi)$; this also would be a precise dualization of his rule $\vee L$.

He claims (without proof) that this calculus has cut-elimination, but that cut-elimination fails when certain rules for the implications $\multimap$ (which BI people write with a $\ast$ instead of the $\circ$ in $\multimap$, but I don’t know how to make that symbol here) and $\to$ are added. Unfortunately he gives no indication of what goes wrong with cut-elimination.

It seems to me that since both $\neg$ and $\sim$ are classical, we ought to be able to define $(A\multimap B) = (\sim A \# B)$ and $(A\to B) = (\neg A \vee B)$, and indeed he says that “it may readily be seen that $(A\multimap B) = (\sim A \# B)$”, although it’s not clear to me exactly what this means *before* $\multimap$ has been defined. In any case this definition of $\multimap$ would apparently not validate the rules he wants for it; for instance, his $\multimap L$ rule is

whereas his rule $\# L$ only allows introduction at the “top of a bunch”:

$$\frac{\Gamma,\phi \vdash \Delta \qquad \Gamma',\psi\vdash \Delta'}{\Gamma,\Gamma',\phi \# \psi \vdash \Delta,\Delta'}$$

so it would only yield a similarly restricted left rule for $\multimap$:

$$\frac{\Gamma' \vdash \Delta,\phi \qquad \Gamma,\psi\vdash \Delta'}{\Gamma,\Gamma',\phi \multimap \psi \vdash \Delta,\Delta'}$$

He does mention that “we should be able to recover cut-elimination” with such a restriction on $\multimap L$, but says that “such a system would be of questionable semantic value”. I have no idea what that means, and he gives no further explanation.
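In fact, taking the tentative definition $(\phi\multimap \psi) := (\sim\phi \# \psi)$, this restricted rule really is derivable from $\# L$, assuming a multiplicative-negation left rule $\sim L$ of the form one would expect (the following is my sketch, not a derivation quoted from the book):

$$\frac{\dfrac{\Gamma' \vdash \Delta,\phi}{\Gamma',\sim\phi \vdash \Delta}\,{\sim}L \qquad \Gamma,\psi\vdash \Delta'}{\Gamma',\Gamma,\sim\phi \# \psi \vdash \Delta,\Delta'}\,\# L$$

whose conclusion is, up to commutativity of the comma, exactly $\Gamma,\Gamma',\phi\multimap\psi \vdash \Delta,\Delta'$.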

So it seems to me that there is a very reasonable “doubly-classical BI” with two conjunctions, two disjunctions, two implications (with the weaker rule above), and two negations. If we restrict attention to the additive conjunction and disjunction and the multiplicative implication and negation, and add contraction for the comma, this again looks a lot like classical R. Unlike the display logic approach, it seems more like we can really ignore the additive negation, since it has no direct structural-connective analogue, and we can certainly ignore the additive implication. The multiplicative conjunction and disjunction are present as structural connectives whether or not we internalize them, but I would think those shouldn’t be as problematic to relevantists, since they can be defined in terms of the multiplicative implication and negation. However, it’s not obvious to me whether the resulting system induces exactly the same relations between these connectives that hold in classical R, or exactly what its categorical semantics will look like.

I added a mention of paraconsistent set theory, and its difficulties with Curry’s paradox, to paraconsistent logic.

I presume that he means just the structural connective; you should be able to remove the logical one by just, well, removing it and its associated rules, right? He does claim that it’s conservative over R, or at least that a Hilbert system containing additive negation is conservative over R, and presumably this is proven in the references he cites.

But I think it’s reasonable to be incompletely satisfied with a sequent calculus for a logic that requires introducing a new structural connective not present in the original logic. Categorically, for instance, I would expect models of R to be categories with a monoidal structure that is both star-autonomous and relevance and that are also distributive categories, and I see no reason every such category would contain an “additive negation”. A faithful sequent calculus for R ought to be a syntax for morphisms in such categories, so it shouldn’t contain anything not interpretable therein.

These papers are over 30 years old, of course. I wonder whether any progress has been made since then.

Re #27: I’m not exactly sure what the issue is. So Belnap’s calculus for R also includes an additive negation (both as a structural connective and as a logical connective), and he says that it’s difficult to remove it (does he mean both the structural and the logical connective, or just the structural one?). But isn’t it still conservative over R?

Re #28: that’s good to know.

Ah, but in footnote 12 of the display logic paper, Belnap then says

Let me take this opportunity to correct an error in Belnap et al. (1980): that paper *wrongly* stated that the rules of Minc (1972) were inaccurate. “Our” mistake was due to *my* inattentive conflation of Minc’s clearly expressed distinction between “formula” and “formula of $R+$”.

I guess a “consecution calculus” is another word for “sequent calculus”.

Another reference I stumbled over: A consecution calculus for positive relevant implication with necessity by Belnap, Gupta, and Dunn. Their introduction says

JMD’s result for the system *without* necessity was presented in a colloquium at the University of Pittsburgh in the spring of 1968… In the meantime, Minc in 1972… proved essentially the same theorem we report below, i.e., cut for the system *with* necessity. We should first note an obviously needed correction [to Minc]: cut cannot be proved for the system he describes unless, in the statement in the structural rules, variables for formulas of $R_+$ are replaced by variables for what below we call ’antecedents’, i.e., formulas of the consecution calculus $GR_+$. Otherwise case 1.3 of [7] cannot go through. In the terminology developed below, Section 6, the rules of [7] as stated are not all closed under parametric substitution. For example, it is crucial to be able to contract not only pairs of single formulas, but pairs of lists of such formulas — see [1], p. 390, for an instructive example. But this correction is easily made, after which the proof of [7] is fine. What we offer below is a new proof…

Apparently Minc = Mints and [7] is the paper of his we’ve been looking at.

On p17, Brotherston and Calcagno say

one of Belnap’s original applications of display logic was in giving a display calculus for the full relevant logic R

However, Belnap’s 1982 display logic paper says instead on p23/p397

The problem of finding a consecution calculus for R without boolean negation remains open. The difficulty is that relevance logic has things to say about boolean conjunction and disjunction, so that we need the boolean family to help us out; but the techniques of this paper do not permit enjoying boolean conjunction and disjunction without carrying along boolean negation as well.

I gather that by “boolean” he means the same as “additive”. (What is a “consecution calculus”?)

Ok, I created bunched logic.

Here’s a master’s thesis on bunched implications (BI) and separation logic. Lars Birkedal is the head of the group I am in now.

Why not start stubs for terms we don’t yet have entries for? So separation logic.

In my (limited) experience “BI” refers to the specific system in their paper, which is also closely tied to separation logic (an extension of Hoare logic used to reason about memory separation and memory sharing in programs with pointer structures). But I noticed that there is a discussion of “bunches” in Stephen Read’s Relevant Logic: A Philosophical Examination of Inference, which is cited in O’Hearn and Pym’s paper, so it’s likely that they got their terminology from there. I’m also pretty sure that I’ve heard people more generally talking about a sequent calculus having “bunches” when contexts are not merely lists but have some non-trivial tree structure.

Re #22, I remember seeing an attempt at such a “multiple conclusion BI” in David Pym’s book, The Semantics and Proof Theory of the Logic of Bunched Implications, but that it had various shortcomings. From a few years ago, there is also Classical BI: Its Semantics and Proof Theory by Brotherston and Calcagno. Their approach is based on Display Logic, but I am not at all familiar with the details.

Do you know of any BI-like sequent calculus that incorporates negation, hopefully leading to the full logic R (not just its “positive part”)? I guess one obvious approach would be to allow multiple formulas on the right, but it’s not obvious to me how that should work. Should the right-hand contexts be additive or multiplicative, or bunched? How does an involutive negation that moves formulas back and forth interact with bunched contexts?

I see that this presentation of R does not have LEM for the additive disjunction ($A\vee\neg A$); instead it has DNE $\neg\neg A\multimap A$ for the relevant/multiplicative implication, which is probably at least closely related to “multiplicative LEM” $A\invamp\neg A$. But I feel like I’ve seen other relevant logics that do have additive LEM, though I can’t find any right now.

I was thinking of making a page for BI too. As a matter of terminology, does “BI” still generally refer specifically to their exact system with multiplicative and additive conjunction, additive disjunction, and multiplicative and additive implication? Or has it become a more general term for logics whose contexts have “multiple commas”?

Oh, it’s nice to see that O’Hearn and Pym’s paper has an explicit discussion of relevance logics – I read it a long time ago and did not notice/remember that. Yes, I believe their claim that R can be seen as a fragment of BI with contraction added for the multiplicative conjunction “$*$”. It might be worth creating a page for BI itself, though I don’t have time for that right now (still, I’ll go ahead and add a link to O’Hearn and Pym’s paper to relevance logic).

Also, I finally looked back at the BI paper linked in #13, and I see a number of other references to sequent calculi for relevant logics as predecessors. They appear to claim that BI differs from the positive fragment of the relevant logic R (in Dunn’s sequent calculus form, which I have not yet looked up) exactly in that (1) positive R has contraction for the multiplicative conjunction, while (2) BI has the multiplicative true, the additive false, and the additive implication. So BI with (2) removed and (1) added ought to be another sequent calculus for R, which is probably more or less like Mints’s, but cleaner and more modern.

Re: #16, of course $\mathbf{1}$ is logically equivalent to $\mathbf{1}\& \mathbf{1}$ (any positive finite number of times), since $\&$ has diagonals and projections. So maybe his big $\&$ is just to avoid treating any variable specially.
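To spell out that equivalence (a sketch using the standard additive rules, where $\& R$ shares its context and $\& L$ picks a conjunct):

$$\frac{\mathbf{1} \vdash \mathbf{1} \qquad \mathbf{1} \vdash \mathbf{1}}{\mathbf{1} \vdash \mathbf{1}\&\mathbf{1}}\,\& R \qquad\qquad \frac{\mathbf{1} \vdash \mathbf{1}}{\mathbf{1}\&\mathbf{1} \vdash \mathbf{1}}\,\& L$$

and the $n$-fold case follows by iterating $\& R$.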

the symbol $\vdash$ in a Hilbert-style system means something different from in a sequent calculus.

I guess this was already remarked upon at deductive system, but I just created Hilbert system including a more extended discussion.

Now I have a copy of the translation too.

The formula $\langle\,\rangle$ is replaced by $\&_{i\le m}(p_i \to p_i)$ or $N\&_{i\le m}(p_i \to p_i)$ depending on whether $N$ does or does not occur in the deduction

This seems pretty weird. I guess part of what’s going on is that he has to work around the unaccountable omission of the nullary conjunctions $\mathbf{1},\top$ in his logic. But $p_i \to p_i$ should already be $\mathbf{1}$, since his $\to$ is $\multimap$, so why does he want to $\&$ together $m$ of them? Overall his system feels a bit *ad hoc*; I guess I should go look up Dunn’s version and “Display logic”.

Like the rule of modus ponens in Hilbert-style systems, this should be read as “if $A$ is a theorem and $B$ is a theorem then $A\& B$ is a theorem”.

Ah, I see: the symbol $\vdash$ in a Hilbert-style system means something different from in a sequent calculus. The notation would match better if we wrote the “adjunction/composition” and “modus ponens” rules as, well, *rules*:
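i.e., something like

$$\frac{\vdash A \qquad \vdash B}{\vdash A\& B}\,\text{(adjunction)} \qquad\qquad \frac{\vdash A \qquad \vdash A\to B}{\vdash B}\,\text{(modus ponens)}$$

where the premises and conclusion are theoremhood judgments rather than sequents.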

Given his comments about “positive parts”, the fact that the rules only act on formulas in the “positive part”, and that his two kinds of brackets can’t occur on the RHS of the sequent, it seems to me like his brackets should really be regarded as structuring the context rather than as an actual operation on formulas. So perhaps it is actually similar to BI?

Yes, I think you’re right. I managed to get a hold of that English translation, and see that after the proof of Lemma 1 (the translation from $R^+$ to $GR^+$, on p.94 of the Russian version) and before the proof of Lemma 2 (the translation from $GR^+$ to $R^+$), he describes how to translate sequents:

In this case, for $n \gt 0$ the formula $\langle A_1,\dots,A_n \rangle$ is replaced by $(A_1 \circ \dots \circ A_n)$ and $⌜ A_1,\dots,A_n ⌝$ by $(A_1 \& \dots \& A_n)$. The formula $\langle\,\rangle$ is replaced by $\&_{i\le m}(p_i \to p_i)$ or $N\&_{i\le m}(p_i \to p_i)$ depending on whether $N$ does or does not occur in the deduction. Here $p_1,\dots,p_m$ is the complete list of variables occurring in the given deduction. The formula $⌜ \, ⌝$ cannot occur in a $GR^+$ deduction, because it does not occur in axioms and is not induced by the deduction rules.

So it does seem similar to the approach in BI, and I’m guessing probably similar to Michael Dunn’s sequent calculus or the sequent calculus based on “Display Logic” alluded to in the SEP article.
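For instance (with hypothetical formulas $A,B,C$), reading those translation clauses as stated, the inner $\langle A,B\rangle$ becomes $(A\circ B)$, and the outer corner brackets then turn the list $(A\circ B), C$ into an additive conjunction:

$$⌜\,\langle A,B\rangle,\ C\,⌝ \;\rightsquigarrow\; ((A\circ B)\,\&\, C).$$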

Does one really say that a judgmental structure like comma or semicolon “internalizes” the corresponding connective? My natural inclination would have been to say rather that the connective internalizes the judgmental structure, since the connective is something inside the logic and the judgmental structure is outside of it.

Right, I was being loose, and usually one says that the connective internalizes the judgmental structure. Though in systems with a lot of punctuation (cf. Display Logic) sometimes it’s hard to keep track of who’s internalizing whom.

Back to your question about the “adjunction” rule $A,B \vdash A\& B$, which Mints calls “composition”, I think I understand what’s going on. Like the rule of modus ponens in Hilbert-style systems, this should be read as “if $A$ is a theorem and $B$ is a theorem then $A\& B$ is a theorem”. It should *not* be read as an implication $A\circ B \to A\&B$, which would indeed break the relevance interpretation by composing with $A\&B \to A$ and $A\&B \to B$.
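Concretely, the implication reading would break relevance by composition: from $A\circ B \to A\& B$ and the projection $A\& B\to A$ we would get

$$A\circ B \to A,$$

which discards $B$, i.e., a form of weakening, exactly what R is designed to reject.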

I can see a 2 page preview of the English translation on the Springer web site. After giving the Hilbert axioms he says

We describe the Gentzen variant $G R^+$ of the calculus $R^+$. The formulas of this calculus are: 1) the formulas of $R^+$; 2) expressions of the form $⌜\Gamma⌝$, where $\Gamma$ is a list (possibly empty) of formulas of $G R^+$; 3) expressions of the form $\langle\Gamma\rangle$, where $\Gamma$ is a list (possibly empty) of formulas of $G R^+$.

Sequents are expressions of the form $\Gamma\to C$, where $\Gamma$ is a list of formulas of $G R^+$, and $C$ is a formula of $R^+$. The positive part of a list $\Gamma$ is any occurrence of a formula in $\Gamma$ that is not in the scope of logical connectives. If $V$ is [in?] the positive part of a list $\Gamma$ and $\Delta$ is a formula or list of formulas, then $\Gamma_V[\Delta]$ denotes the result of the substitution of $\Delta$ for $V$. Wherever there is no risk of confusion we omit any mention of an occurrence of $V$ and simply write the result of the substitution in the form $\Gamma[\Delta]$.

Maybe if we combine that with your

where $\{\Omega\}$ denotes $\langle\Omega\rangle$ when in the scope of $⌜\;⌝$ and denotes $\Omega$ otherwise.

we can figure out what his punctuation is doing. Given his comments about “positive parts”, the fact that the rules only act on formulas in the “positive part”, and that his two kinds of brackets can’t occur on the RHS of the sequent, it seems to me like his brackets should really be regarded as structuring the context rather than as an actual operation on formulas. So perhaps it is actually similar to BI? In particular, his rule (Y) seems to be permitting weakening inside $⌜\;⌝$.

(Does one really say that a judgmental structure like comma or semicolon “internalizes” the corresponding connective? My natural inclination would have been to say rather that the connective internalizes the judgmental structure, since the connective is something inside the logic and the judgmental structure is outside of it.)

Re #11, no, there isn’t any way to prove distributivity in either form $A \& (B \oplus C) \vdash (A \& B) \oplus C$ or $A \& (B \oplus C) \vdash (A \& B) \oplus (A \& C)$ in linear logic, even with contraction. Huh. I guess the most likely explanation is that the extra punctuation in Mints’ system really is playing an important role in axiomatizing the non-modal connectives (at least for $\&$). So much for reading a paper without understanding the language! (Re #12, it would be great to get a hold of that English translation!)

I’m familiar with the issue of (non)distributivity of $\&$ over $\oplus$ also showing up in comparisons of linear logic with BI (bunched implications) logic. BI is motivated in part by a Kripke/presheaf semantics over a “monoid of resources”, where the idea is that the usual “intuitionistic” conjunction and disjunction ($w \models A \wedge B$ iff $w \models A$ and $w \models B$; $w \models A \vee B$ iff $w \models A$ or $w \models B$) can coexist with a multiplicative conjunction defined by convolution ($w \models A * B$ iff $\exists w_1,w_2.\, w = w_1\circ w_2$ and $w_1 \models A$ and $w_2 \models B$). The sequent calculus for BI is defined using two separate forms of structural punctuation: both a comma “,” internalizing $*$ and a semicolon “;” internalizing $\wedge$. The rules of weakening and contraction can be freely applied for the semicolon, and this lets you capture distributivity of $\wedge$ over $\vee$, in contrast to the situation for linear logic.
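To make the convolution clause concrete, here is a small executable sketch of forcing over a toy partial commutative monoid of resources (the monoid, the cap, and the valuation are all invented for illustration; a real BI model also carries a preorder and monotonicity conditions, which I’m ignoring):

```python
# A toy, executable sketch of BI's "monoid of resources" semantics.
from itertools import product

# Worlds: multisets of two resource types a and b, capped for finiteness.
CAP = 2
WORLDS = [(a, b) for a in range(CAP + 1) for b in range(CAP + 1)]

def compose(w1, w2):
    """Partial commutative monoid: componentwise addition, undefined past the cap."""
    w = (w1[0] + w2[0], w1[1] + w2[1])
    return w if w in WORLDS else None

# A hypothetical valuation of atoms.
VAL = {
    "A": {w for w in WORLDS if w[0] >= 1},  # "owns at least one a"
    "B": {w for w in WORLDS if w[1] >= 1},  # "owns at least one b"
}

def forces(w, phi):
    """Forcing for atoms, additive and/or, and the multiplicative *."""
    op = phi[0]
    if op == "atom":
        return w in VAL[phi[1]]
    if op == "and":    # additive conjunction: pointwise
        return forces(w, phi[1]) and forces(w, phi[2])
    if op == "or":     # additive disjunction: pointwise
        return forces(w, phi[1]) or forces(w, phi[2])
    if op == "star":   # multiplicative conjunction: convolution
        return any(compose(w1, w2) == w
                   and forces(w1, phi[1]) and forces(w2, phi[2])
                   for w1, w2 in product(WORLDS, WORLDS))
    raise ValueError(op)

A, B = ("atom", "A"), ("atom", "B")
print(forces((1, 1), ("star", A, B)))  # True: (1,1) splits as (1,0) + (0,1)
print(forces((1, 0), ("and", A, A)))   # True: additive & reuses the world
print(forces((1, 0), ("star", A, A)))  # False: * needs two disjoint a's
```

The contrast in the last two lines is exactly the resource-sensitivity: $A\& A$ holds wherever $A$ does, while $A * A$ demands two disjoint copies of the resource.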

By the way, the paper was published in an English translation as

- G. E. Mints, *Cut-Elimination Theorem in Relevant Logics*, Journal of Soviet Mathematics **6** (1976) pp. 422–428.

You might want to have a look at

- N. Leslie, E. D. Mares, *CHR - A Constructive Relevant Natural-Deduction Logic*, ENTCS **91** (2004) pp. 158–170.

as well.

Great. And it looks like on p2/p91 he has a Hilbert-style axiomatization that looks a lot like the logic R, although he’s added a couple of rules to do with $\circ$, omitted the negation rules (since he’s considering a positive fragment), and changed the distribution rule to look more linear (probably equivalent).

I would be more confident if I understood how to translate between these two presentations. I can see how to prove all the Hilbert axioms from the sequent calculus, except for distribution which I don’t see any way to get; the natural distribution of linear logic would involve the multiplicative conjunction rather than the additive one. I’m also confused by the rule called “adjunction” in the Hilbert-style version, $A,B\vdash A\& B$, which doesn’t seem to be valid under the usual interpretation of comma as multiplicative. And I’m even less sure how you would go the other way: show that all the rules of sequent calculus are derivable from the axioms and the few rules of the Hilbert system?
