    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeJul 23rd 2012
    • (edited Jul 23rd 2012)

    I did a bit of reorganization and added more examples at paraconsistent logic, including some comments about linear logic.

    Here I copy an old discussion from that page:


    Toby: I made a big change here; I would argue that the failure of $\bot \vdash B$ means that ‘$\bot$’ simply doesn't mean $\bot$; but in any case, I've always seen the definition given in terms of negation. In particular, dual-intuitionistic logic has $\bot \vdash B$ (just as intuitionistic logic has $B \vdash \top$) but is still considered paraconsistent.

    Finn: Hmm. (I presume you meant the ’)’ to come before ’but is still…’; as it stands that last statement is false.) The definitions I’ve seen correspond to what I wrote, but you make a good point – if we want to think of $LJ^{op}$ as paraconsistent then the definition by means of ex falso quodlibet does seem wrong. If this is the standard definition, then I certainly won’t object – I’ll just avoid relying on philosophers for information about logic in future.

    Toby: Yeah, I'm sure about the standard; our links agree with me too. (And thanks for catching my parenthesis.) As for philosophers, they're not always as precise as mathematicians; that may be the problem. Not to mention, there's a tendency not to include $\bot$ as a logical constant but instead to simply define it as $A \wedge \neg A$ (after proving that these are all equivalent, but ignoring the possibility of an empty model), which leads to conflating the two versions. (In a paraconsistent logic where $A \wedge \neg A \equiv B \wedge \neg B$ need not hold, one ought to catch the inapplicability of this definition, but maybe not.)

    Finn: I think you’ve hit the nail on the head there: I think ’ex falso quodlibet’ means $\bot \vdash A$ (as you said above, this says that ’$\bot$’ means $\bot$, the least truth-value), but it seems my sources may have meant $A \wedge \neg A \vdash B$, without saying so. This is why I never cared much for what’s called ’philosophical logic’ – even though I’m a philosophy graduate studying logic (albeit in a computer science department), it was always too fuzzy for my tastes.

    Thanks for clearing this up. Obviously your version should stand.

    Toby: You're welcome. But your fancy Latin reminds me that what we have here is technically really the combination of the law of non-contradiction (any contradiction is false: $A \wedge \neg A \equiv \bot$) and ex falso quodlibet (anything follows from falsehood: $\bot \vdash B$), so I changed the name above.

    Finn: Right – I was going to mention that (no, really), but forgot. Perhaps we should incorporate some of this discussion into the article proper, to help resolve any imprecision in other sources. I’ll do that, if you’d rather not, but not now – it’s way past bedtime here in GMT-land.
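    (In sequent notation, the decomposition just described is a single cut: from non-contradiction and ex falso quodlibet one recovers ex contradictione quodlibet,

    $$\frac{A \wedge \neg A \vdash \bot \qquad \bot \vdash B}{A \wedge \neg A \vdash B}\ \text{(cut)}$$

    so a paraconsistent logic with cut must fail at least one of the two premise sequents.)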

    • CommentRowNumber2.
    • CommentAuthorDavidRoberts
    • CommentTimeJul 25th 2012

    I added a reference to Mortensen’s book Inconsistent Mathematics.

    • CommentRowNumber3.
    • CommentAuthorMike Shulman
    • CommentTimeApr 26th 2016

    A few years ago Toby dropped a query box at paraconsistent logic asking about relevance logic:

    But why doesn’t the Lewis independent argument work?

    My impression, from the small amount of reading I’ve done, is that “no sequent can be valid unless the hypothesis and conclusion have a variable in common” is not a rule of relevance logic but a desideratum. Mortensen’s book, which I happen to have in front of me, says “A logic is said to be relevant if whenever $A \to B$ is a theorem of the logic then $A$ and $B$ share an atomic sentence in common”. A particular relevance logic itself is not defined by this property, but by axioms and rules that are later observed to have that property, and so it may happen that such a logic is also paraconsistent. Does that seem right?

    I would like to understand relevant and paraconsistent logics better from a categorical point of view. I feel like I understand linear logic reasonably well, and in some sense it seems like the “universal solution” to paraconsistency, one that allows all of the problematic sequents but de-couples their connectives to avoid the conclusion of ECQ. That makes me feel that other paraconsistent logics ought to be viewable as fragments of linear logic, perhaps with extra structural rules or axioms. But whenever I try to read stuff written about paraconsistent or relevant logics, I get stuck because they are nearly always presented “Hilbert-style” as a long list of axioms with one or two rules, whereas I only know how to think in rule-based systems like natural deduction or sequent calculus. Is there any reference that describes the common paraconsistent/relevant logics in such a style?
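    (To illustrate the de-coupling, here is a sketch in standard linear-logic notation, reading $\neg$ as linear negation: each step of the Lewis argument is valid for exactly one reading of the disjunction, so no single disjunction supports the whole argument.

    $$\begin{array}{ll} A \,\&\, \neg A \vdash A \oplus B & \text{projection then injection: valid} \\ \neg A \otimes (A \invamp B) \vdash B & \text{disjunctive syllogism for } \invamp\text{: valid} \\ A \vdash A \invamp B & \text{addition for } \invamp\text{: fails without weakening} \\ \neg A \otimes (A \oplus B) \vdash B & \text{disjunctive syllogism for } \oplus\text{: fails, since } \neg A \otimes A \vdash \bot \text{ but } \bot \nvdash B \end{array}$$
    )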

    • CommentRowNumber4.
    • CommentAuthorThomas Holder
    • CommentTimeApr 26th 2016
    • (edited Apr 26th 2016)

    You might try the following one from Kosta Došen, though it discusses just a fragment.

    • CommentRowNumber5.
    • CommentAuthorMike Shulman
    • CommentTimeApr 27th 2016

    Thanks Thomas! That’s a start. Their basic idea seems to be to identify “relevance logic” with a substructural one having contraction but not weakening, hence corresponding to monoidal categories having diagonals but not projections. So in a sense, these are dual to semicartesian monoidal categories — although somewhat different in that weakening without contraction makes the monoidal unit the terminal object (in linear logic language, $\mathbf{1} = \top$) whereas contraction without weakening doesn’t make the binary monoidal product coincide with the cartesian product (in linear logic language, $\otimes \neq \&$). And they consider adding cartesian products and coproducts plus a monoidal internal-hom (confusingly notated $\to$ rather than $\multimap$), but not any sort of negation, nor anything like $\bot$ or $\invamp$.
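    (To spell out the asymmetry in sequent terms, a quick check using only the standard rules: weakening alone already makes $\mathbf{1}$ terminal,

    $$\frac{\ \vdash \mathbf{1}\ }{A \vdash \mathbf{1}}\ \text{(weakening)}$$

    whereas contraction alone gives only the diagonal,

    $$\frac{\dfrac{A \vdash A \qquad A \vdash A}{A,\, A \vdash A \otimes A}\ (\otimes R)}{A \vdash A \otimes A}\ \text{(contraction)}$$

    while a projection $A \otimes B \vdash A$ would be precisely an instance of the missing weakening.)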

    This is a nice picture, but what’s not clear to me is how this logic of a “relevance monoidal category” corresponds to the various axiomatic systems that relevance-logicians tend to write down, which seem to have a lot more variety than one would expect from this categorical picture. In particular, relevance-logicians and paraconsistentists frequently seem to confuse the additive and multiplicative connectives, as well as giving the multiplicatives second-class status if they make any appearance at all, so the connection to a monoidal-category framework in which the multiplicative conjunction is central seems murky to me.

    • CommentRowNumber6.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeApr 27th 2016

    That’s an interesting article by Došen and Petrić. I’m especially interested in the reference Helman, Completeness of the Normal Typed Fragment of the $\lambda$-system U, which they say gives an interpretation of “relevant lambda terms” in pointed sets.

    I don’t know very much about relevance logics besides linear logic, but recently I got interested in an old reference by Grisha Mints (cited in his “Closed categories and the theory of proofs”):

    • G. E. Mints. Cut-elimination theorem for relevant logics, Zap. Nauchn. Sem. LOMI, 1972, Volume 32, Pages 90–97. (math-net.ru)

    This paper is in Russian, which I don’t read, but a colleague who reads a bit of it (Danko Ilik) helped me to get the gist. Mints gives an intriguing sequent calculus on p.3/p.92 for the negation-free fragment of the relevance logic “R” extended with an S4 box modality (“N”). Contraction is built into the system through some careful use of punctuation, but to me it looks a lot like intuitionistic linear logic! In particular, I thought that the “N” modality was acting like a “!” to add the option of weakening a formula… but then again I don’t read Russian, and Danko and I couldn’t quite figure out whether that’s what is really going on. Maybe someone with better Russian knowledge can answer that?

    In any case, though, I found it interesting to learn that there are relevance logics which do distinguish two forms of conjunction. The logic R is due to Anderson and Belnap, and apparently they referred to the multiplicative conjunction $A \circ B$ as “intensional” conjunction.

    • CommentRowNumber7.
    • CommentAuthorDavid_Corfield
    • CommentTimeApr 27th 2016

    Since we had no such entry yet, I began relevance logic. ’Relevant logic’, an alternative expression, is redirected there.

    • CommentRowNumber8.
    • CommentAuthorThomas Holder
    • CommentTimeApr 27th 2016
    • (edited Apr 27th 2016)

    @#5: I must confess I’ve never got too deeply into relevance logic. Concerning non-standard negations, you might see if some of the references at co-Heyting negation can do something for you. In this context the natural idea is to try to model a paraconsistent logic by using Lawvere’s operations on the subtopos lattice, but unfortunately these seem (to me at least) to be notoriously difficult to calculate.

    • CommentRowNumber9.
    • CommentAuthorMike Shulman
    • CommentTimeApr 27th 2016

    I’ve also encountered some relevance/paraconsistent logicians who call the multiplicative conjunction “fusion”.

    I created relevance monoidal category, with a more abstract definition that I think is equivalent to Došen and Petrić’s.

    Noam, did you and Danko understand enough of the Russian to be able to say definitely what is going on if we ignore the $N$ modality? I agree that it looks a lot like ILL with $\otimes, \&, \oplus, \multimap$ (notated $\circ, \&, \vee, \to$), though without reading any Russian myself I can’t make heads or tails of the punctuation. Could we say that the relevance logic R (without the modality $N$) coincides with the logic of ILL with contraction added? That would be very nice.
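    (For reference, “ILL with contraction added” means adjoining the usual structural rule

    $$\frac{\Gamma,\, A,\, A \vdash C}{\Gamma,\, A \vdash C}\ (C)$$

    to the sequent calculus of ILL.)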

    • CommentRowNumber10.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeApr 27th 2016

    Looking at the paper again and trying to remember my discussion with Danko, I think the punctuation is actually just to deal with the modal parts. The sentence after the logical rules of the sequent calculus says something like “where $\{\Omega\}$ denotes $\langle\Omega\rangle$ when in the scope of $\ulcorner\;\urcorner$, and denotes $\Omega$ otherwise”, so I think if we concentrate on the $\circ, \&, \vee, \to$ fragment it’s safe to ignore the braces and all the structural rules involving the other forms of punctuation. Then it looks like you just have the usual ILL rules plus the contraction rule $(C)$. This reading is also suggested by a nice quote from the end of Mints’ “Closed categories and the theory of proofs”, which is what made me originally want to dig up the paper:

    We note that the theorem about the normal form for canonical morphisms $f : A \to B$ for $A, B$ which do not contain $I$, in symmetric monoidal closed categories, is equivalent to the one proved in [Mints 1972] on cut elimination for a formal system which is obtained by excluding the contraction rule from the $(0,\supset)$-fragment of the relevant logic $R$. Actually [Mints 1972] deals with the calculus $R^+$, a positive fragment of $R$, containing as connectives $(0,\supset,\vee,\&)$ [sic, but should that be $\circ$ instead of $0$?], as well as the contraction rule. However the theorem on cut-elimination in [Mints 1972] is formulated in a way that gives us the result we are interested in: reductions do not introduce applications of the contraction rule (cf. also [Mints 1968]). Thus the problem posed in [Mac Lane 1976, Bull. of the AMS], of obtaining the theorems in [Kelly & Mac Lane 1971] by direct application of Gentzen methods, was solved before it was set. The author learnt of the work [Kelly & Mac Lane 1971] only from [Mac Lane 1976].

    • CommentRowNumber11.
    • CommentAuthorMike Shulman
    • CommentTimeApr 27th 2016

    Great. And it looks like on p2/p91 he has a Hilbert-style axiomatization that looks a lot like the logic R, although he’s added a couple of rules to do with $\circ$, omitted the negation rules (since he’s considering a positive fragment), and changed the distribution rule to look more linear (probably equivalent).

    I would be more confident if I understood how to translate between these two presentations. I can see how to prove all the Hilbert axioms from the sequent calculus, except for distribution, which I don’t see any way to get; the natural distribution of linear logic would involve the multiplicative conjunction rather than the additive one. I’m also confused by the rule called “adjunction” in the Hilbert-style version, $A, B \vdash A \& B$, which doesn’t seem to be valid under the usual interpretation of the comma as multiplicative. And I’m even less sure how you would go the other way: show that all the rules of the sequent calculus are derivable from the axioms and the few rules of the Hilbert system?

    • CommentRowNumber12.
    • CommentAuthorThomas Holder
    • CommentTimeApr 27th 2016
    • (edited Apr 29th 2016)

    By the way, the paper was published in an English translation as

    • G. E. Mints, Cut-Elimination Theorem in Relevant Logics, Journal of Soviet Mathematics 6 (1976) pp. 422–428.

    You might want to have a look at

    • N. Leslie, E. D. Mares, CHR - A Constructive Relevant Natural-Deduction Logic, ENTCS 91 (2004) pp. 158–170.

    as well.

    • CommentRowNumber13.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeApr 27th 2016

    Re #11, no, there isn’t any way to prove distributivity in either form $A \& (B \oplus C) \vdash (A \& B) \oplus C$ or $A \& (B \oplus C) \vdash (A \& B) \oplus (A \& C)$ in linear logic, even with contraction. Huh. I guess the most likely explanation is that the extra punctuation in Mints’ system really is playing an important role in axiomatizing the non-modal connectives (at least for $\&$). So much for reading a paper without understanding the language! (Re #12, it would be great to get a hold of that English translation!)

    I’m familiar with the issue of (non)distributivity of $\&$ over $\oplus$ also showing up in comparisons of linear logic with BI (bunched implications) logic. BI is motivated in part by a Kripke/presheaf semantics over a “monoid of resources”, where the idea is that the usual “intuitionistic” conjunction and disjunction ($w \models A \wedge B$ iff $w \models A$ and $w \models B$; $w \models A \vee B$ iff $w \models A$ or $w \models B$) can coexist with a multiplicative conjunction defined by convolution ($w \models A * B$ iff $\exists w_1, w_2.\ w = w_1 \circ w_2$ and $w_1 \models A$ and $w_2 \models B$). The sequent calculus for BI is defined using two separate forms of structural punctuation: both a comma “,” internalizing $*$ and a semicolon “;” internalizing $\wedge$. The rules of weakening and contraction can be freely applied for the semicolon, and this lets you capture distributivity of $\wedge$ over $\vee$, in contrast to the situation for linear logic.
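    To make the convolution clause concrete, here is a minimal model checker for the $\wedge, \vee, *$ fragment over a toy commutative monoid of resources. This is only an illustrative sketch (the monoid, the valuation, and all names are invented for the example), and it omits the ordering/persistence conditions that full BI models carry.

        -- A minimal model checker for the {∧, ∨, *} fragment of BI over a toy
        -- commutative monoid of resources, here (Z/4, +, 0).  Everything here
        -- is invented for illustration; full BI models also carry an ordering
        -- and a persistence condition that this sketch omits.

        data Form
          = Atom String
          | Conj Form Form   -- additive conjunction  A ∧ B
          | Disj Form Form   -- additive disjunction  A ∨ B
          | Star Form Form   -- multiplicative conjunction  A * B
          deriving Show

        -- The worlds ("resources") and the monoid operation.
        worlds :: [Int]
        worlds = [0 .. 3]

        op :: Int -> Int -> Int
        op w1 w2 = (w1 + w2) `mod` 4

        -- A valuation assigns to each atom the set of worlds where it holds.
        type Val = String -> [Int]

        -- The forcing clauses, exactly as in the semantics described above.
        forces :: Val -> Int -> Form -> Bool
        forces v w (Atom p)   = w `elem` v p
        forces v w (Conj a b) = forces v w a && forces v w b
        forces v w (Disj a b) = forces v w a || forces v w b
        -- w ⊨ A * B  iff  ∃ w1 w2.  w = w1 ∘ w2  and  w1 ⊨ A  and  w2 ⊨ B
        forces v w (Star a b) =
          or [ forces v w1 a && forces v w2 b
             | w1 <- worlds, w2 <- worlds, op w1 w2 == w ]

        main :: IO ()
        main = do
          let val "p" = [1]
              val "q" = [2]
              val _   = []
          -- p * q holds exactly at 3 = 1 + 2, while p ∧ q holds nowhere:
          print [ w | w <- worlds, forces val w (Star (Atom "p") (Atom "q")) ]  -- [3]
          print [ w | w <- worlds, forces val w (Conj (Atom "p") (Atom "q")) ]  -- []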

    • CommentRowNumber14.
    • CommentAuthorMike Shulman
    • CommentTimeApr 27th 2016

    I can see a 2-page preview of the English translation on the Springer web site. After giving the Hilbert axioms he says

    We describe the Gentzen variant $GR^+$ of the calculus $R^+$. The formulas of this calculus are: 1) the formulas of $R^+$; 2) expressions of the form $\ulcorner\Gamma\urcorner$, where $\Gamma$ is a list (possibly empty) of formulas of $GR^+$; 3) expressions of the form $\langle\Gamma\rangle$, where $\Gamma$ is a list (possibly empty) of formulas of $GR^+$.

    Sequents are expressions of the form $\Gamma \to C$, where $\Gamma$ is a list of formulas of $GR^+$, and $C$ is a formula of $R^+$. The positive part of a list $\Gamma$ is any occurrence of a formula in $\Gamma$ that is not in the scope of logical connectives. If $V$ is [in?] the positive part of a list $\Gamma$ and $\Delta$ is a formula or list of formulas, then $\Gamma_V[\Delta]$ denotes the result of the substitution of $\Delta$ for $V$. Wherever there is no risk of confusion we omit any mention of an occurrence of $V$ and simply write the result of the substitution in the form $\Gamma[\Delta]$.

    Maybe if we combine that with your

    where $\{\Omega\}$ denotes $\langle\Omega\rangle$ when in the scope of $\ulcorner\;\urcorner$ and denotes $\Omega$ otherwise.

    we can figure out what his punctuation is doing. Given his comments about “positive parts”, the fact that the rules only act on formulas in the “positive part”, and that his two kinds of brackets can’t occur on the RHS of the sequent, it seems to me like his brackets should really be regarded as structuring the context rather than as an actual operation on formulas. So perhaps it is actually similar to BI? In particular, his rule (Y) seems to be permitting weakening inside $\ulcorner\;\urcorner$.

    (Does one really say that a judgmental structure like comma or semicolon “internalizes” the corresponding connective? My natural inclination would have been to say rather that the connective internalizes the judgmental structure, since the connective is something inside the logic and the judgmental structure is outside of it.)

    • CommentRowNumber15.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeApr 28th 2016
    • (edited Apr 28th 2016)

    Given his comments about “positive parts”, the fact that the rules only act on formulas in the “positive part”, and that his two kinds of brackets can’t occur on the RHS of the sequent, it seems to me like his brackets should really be regarded as structuring the context rather than as an actual operation on formulas. So perhaps it is actually similar to BI?

    Yes, I think you’re right. I managed to get a hold of that English translation, and see that after the proof of Lemma 1 (the translation from $R^+$ to $GR^+$, on p.94 of the Russian version) and before the proof of Lemma 2 (the translation from $GR^+$ to $R^+$), he describes how to translate sequents:

    In this case, for $n \gt 0$ the formula $\langle A_1,\dots,A_n \rangle$ is replaced by $(A_1 \circ \dots \circ A_n)$ and $\ulcorner A_1,\dots,A_n \urcorner$ by $(A_1 \& \dots \& A_n)$. The formula $\langle\,\rangle$ is replaced by $\&_{i\le m}(p_i \to p_i)$ or $N\&_{i\le m}(p_i \to p_i)$ depending on whether $N$ does or does not occur in the deduction. Here $p_1,\dots,p_m$ is the complete list of variables occurring in the given deduction. The formula $\ulcorner\,\urcorner$ cannot occur in a $GR^+$ deduction, because it does not occur in axioms and is not induced by the deduction rules.
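    So, for instance, a nested bunch like $\langle A, \ulcorner B, C \urcorner \rangle$ would translate to the formula $A \circ (B \& C)$, if I’m reading the clauses right.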

    So it does seem similar to the approach in BI, and I’m guessing probably similar to Michael Dunn’s sequent calculus or the sequent calculus based on “Display Logic” alluded to in the SEP article.

    Does one really say that a judgmental structure like comma or semicolon “internalizes” the corresponding connective? My natural inclination would have been to say rather that the connective internalizes the judgmental structure, since the connective is something inside the logic and the judgmental structure is outside of it.

    Right, I was being loose, and usually one says that the connective internalizes the judgmental structure. Though in systems with a lot of punctuation (cf. Display Logic) sometimes it’s hard to keep track of who’s internalizing whom.

    Back to your question about the “adjunction” rule $A, B \vdash A \& B$, which Mints calls “composition”, I think I understand what’s going on. Like the rule of modus ponens in Hilbert-style systems, this should be read as “if $A$ is a theorem and $B$ is a theorem then $A \& B$ is a theorem”. It should not be read as an implication $A \circ B \to A \& B$, which would indeed break the relevance interpretation by composing with $A \& B \to A$ and $A \& B \to B$.

    • CommentRowNumber16.
    • CommentAuthorMike Shulman
    • CommentTimeApr 28th 2016

    Now I have a copy of the translation too.

    The formula $\langle\,\rangle$ is replaced by $\&_{i\le m}(p_i \to p_i)$ or $N\&_{i\le m}(p_i \to p_i)$ depending on whether $N$ does or does not occur in the deduction

    This seems pretty weird. I guess part of what’s going on is that he has to work around the unaccountable omission of the nullary conjunctions $\mathbf{1}, \top$ in his logic. But $p_i \to p_i$ should already be $\mathbf{1}$, since his $\to$ is $\multimap$, so why does he want to $\&$ together $m$ of them? Overall his system feels a bit ad hoc; I guess I should go look up Dunn’s version and “Display logic”.

    Like the rule of modus ponens in Hilbert-style systems, this should be read as “if $A$ is a theorem and $B$ is a theorem then $A \& B$ is a theorem”.

    Ah, I see: the symbol $\vdash$ in a Hilbert-style system means something different from what it does in a sequent calculus. The notation would match better if we wrote the “adjunction/composition” and “modus ponens” rules as, well, rules:

    $$\frac{A \quad B}{A\,\&\,B} \qquad \frac{A \quad A\to B}{B}$$
    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeApr 28th 2016

    the symbol $\vdash$ in a Hilbert-style system means something different from what it does in a sequent calculus.

    I guess this was already remarked upon at deductive system, but I just created Hilbert system including a more extended discussion.

    • CommentRowNumber18.
    • CommentAuthorMike Shulman
    • CommentTimeMay 3rd 2016

    Re: #16, of course $\mathbf{1}$ is logically equivalent to $\mathbf{1} \& \mathbf{1}$ (any positive finite number of times), since $\&$ has diagonals and projections. So maybe his big $\&$ is just to avoid treating any variable specially.
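    Explicitly: the projection gives $\mathbf{1} \& \mathbf{1} \vdash \mathbf{1}$, and applying $\&$-introduction to two copies of the identity sequent $\mathbf{1} \vdash \mathbf{1}$ gives $\mathbf{1} \vdash \mathbf{1} \& \mathbf{1}$, with no weakening or contraction needed.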

    • CommentRowNumber19.
    • CommentAuthorMike Shulman
    • CommentTimeMay 3rd 2016

    Also, I finally looked back at the BI paper linked in #13, and I see a number of other references to sequent calculi for relevant logics as predecessors. They appear to claim that BI differs from the positive fragment of the relevant logic R (in Dunn’s sequent calculus form, which I have not yet looked up) exactly in that (1) positive R has contraction for the multiplicative conjunction, while (2) BI has the multiplicative true, the additive false, and the additive implication. So BI with (2) removed and (1) added ought to be another sequent calculus for R, which is probably more or less like Mints’s, but cleaner and more modern.

    • CommentRowNumber20.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeMay 3rd 2016

    Oh, it’s nice to see that O’Hearn and Pym’s paper has an explicit discussion of relevance logics – I read it a long time ago and did not notice/remember that. Yes, I believe their claim that R can be seen as a fragment of BI with contraction added for the multiplicative conjunction “$*$”. It might be worth creating a page for BI itself, though I don’t have time for that right now (still, I’ll go ahead and add a link to O’Hearn and Pym’s paper to relevance logic).

    • CommentRowNumber21.
    • CommentAuthorMike Shulman
    • CommentTimeMay 3rd 2016

    I was thinking of making a page for BI too. As a matter of terminology, does “BI” still generally refer specifically to their exact system with multiplicative and additive conjunction, additive disjunction, and multiplicative and additive implication? Or has it become a more general term for logics whose contexts have “multiple commas”?

    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeMay 3rd 2016

    Do you know of any BI-like sequent calculus that incorporates negation, hopefully leading to the full logic R (not just its “positive part”)? I guess one obvious approach would be to allow multiple formulas on the right, but it’s not obvious to me how that should work. Should the right-hand contexts be additive or multiplicative, or bunched? How does an involutive negation that moves formulas back and forth interact with bunched contexts?

    I see that this presentation of R does not have LEM for the additive disjunction ($A \vee \neg A$); instead it has DNE $\neg\neg A \multimap A$ for the relevant/multiplicative implication, which is probably at least closely related to “multiplicative LEM” $A \invamp \neg A$. But I feel like I’ve seen other relevant logics that do have additive LEM, though I can’t find any right now.
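    (At least in linear-logic notation the relation is immediate: with an involutive negation and $A \multimap B$ defined as $\neg A \invamp B$, we have $\neg\neg A \multimap A = \neg(\neg\neg A) \invamp A = \neg A \invamp A$, which is exactly multiplicative LEM; whether the calculation carries over to R’s own conventions is less clear to me.)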

    • CommentRowNumber23.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeMay 3rd 2016

    In my (limited) experience “BI” refers to the specific system in their paper, which is also closely tied to separation logic (an extension of Hoare logic used to reason about memory separation and memory sharing in programs with pointer structures). But I noticed that there is a discussion of “bunches” in Stephen Read’s Relevant Logic: A Philosophical Examination of Inference, which is cited in O’Hearn and Pym’s paper, so it’s likely that they got their terminology from there. I’m also pretty sure that I’ve heard people more generally talking about a sequent calculus having “bunches” when contexts are not merely lists but have some non-trivial tree structure.

    Re #22, I remember seeing an attempt at such a “multiple conclusion BI” in David Pym’s book, The Semantics and Proof Theory of the Logic of Bunched Implications, but that it had various shortcomings. From a few years ago, there is also Classical BI: Its Semantics and Proof Theory by Brotherston and Calcagno. Their approach is based on Display Logic, but I am not at all familiar with the details.

    • CommentRowNumber24.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 3rd 2016

    Why not start stubs for terms we don’t yet have entries for? So separation logic.

    • CommentRowNumber25.
    • CommentAuthorspitters
    • CommentTimeMay 3rd 2016

    Here’s a master’s thesis on bunched implications (BI) and separation logic. Lars Birkedal is the head of the group I am in now.

    • CommentRowNumber26.
    • CommentAuthorMike Shulman
    • CommentTimeMay 3rd 2016

    Ok, I created bunched logic.

    • CommentRowNumber27.
    • CommentAuthorMike Shulman
    • CommentTimeMay 3rd 2016

    On p17, Brotherston and Calcagno say

    one of Belnap’s original applications of display logic was in giving a display calculus for the full relevant logic R

    However, Belnap’s 1982 display logic paper says instead on p23/p397

    The problem of finding a consecution calculus for R without boolean negation remains open. The difficulty is that relevance logic has things to say about boolean conjunction and disjunction, so that we need the boolean family to help us out; but the techniques of this paper do not permit enjoying boolean conjunction and disjunction without carrying along boolean negation as well.

    I gather that by “boolean” he means the same as “additive”. (What is a “consecution calculus”?)

    • CommentRowNumber28.
    • CommentAuthorMike Shulman
    • CommentTimeMay 3rd 2016

    Another reference I stumbled over: A consecution calculus for positive relevant implication with necessity by Belnap, Gupta, and Dunn. Their introduction says

    JMD’s result for the system without necessity was presented in a colloquium at the University of Pittsburgh in the spring of 1968… In the meantime, Minc in 1972… proved essentially the same theorem we report below, i.e., cut for the system with necessity.

    We should first note an obviously needed correction [to Minc]: cut cannot be proved for the system he describes unless, in the statement in the structural rules, variables for formulas of $R_+$ are replaced by variables for what below we call ’antecedents’, i.e., formulas of the consecution calculus $GR_+$. Otherwise case 1.3 of [7] cannot go through. In the terminology developed below, Section 6, the rules of [7] as stated are not all closed under parametric substitution. For example, it is crucial to be able to contract not only pairs of single formulas, but pairs of lists of such formulas — see [1], p. 390, for an instructive example. But this correction is easily made, after which the proof of [7] is fine. What we offer below is a new proof…

    Apparently Minc = Mints and [7] is the paper of his we’ve been looking at.

    • CommentRowNumber29.
    • CommentAuthorMike Shulman
    • CommentTimeMay 4th 2016

    Ah, but in footnote 12 of the display logic paper, Belnap then says

    Let me take this opportunity to correct an error in Belnap et al. (1980): that paper wrongly stated that the rules of Minc (1972) were inaccurate. “Our” mistake was due to my inattentive conflation of Minc’s clearly expressed distinction between “formula” and “formula of $R_+$”.

    I guess a “consecution calculus” is another word for “sequent calculus”.

    • CommentRowNumber30.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeMay 4th 2016

    Re #27: I’m not exactly sure what the issue is. So Belnap’s calculus for R also includes an additive negation (both as a structural connective and a logical connective), and he says that it’s difficult to remove it (does he mean both the structural and the logical, or just the structural connective?). But isn’t it still conservative over R?

    Re #28: that’s good to know.

    • CommentRowNumber31.
    • CommentAuthorMike Shulman
    • CommentTimeMay 4th 2016

    I presume that he means just the structural connective; you should be able to remove the logical one by just, well, removing it and its associated rules, right? He does claim that it’s conservative over R, or at least that a Hilbert system containing additive negation is conservative over R, and presumably this is proven in the references he cites.

    But I think it’s reasonable to be incompletely satisfied with a sequent calculus for a logic that requires introducing a new structural connective not present in the original logic. Categorically, for instance, I would expect models of R to be categories equipped with a monoidal structure that is both star-autonomous and relevance, and which are also distributive categories; I see no reason every such category would contain an “additive negation”. A faithful sequent calculus for R ought to be a syntax for morphisms in such categories, so it shouldn’t contain anything not interpretable therein.

    These papers are over 30 years old, of course. I wonder whether any progress has been made since then.

    • CommentRowNumber32.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2016

    I added a mention of paraconsistent set theory, and its difficulties with Curry’s paradox, to paraconsistent logic.

    • CommentRowNumber33.
    • CommentAuthorMike Shulman
    • CommentTimeMay 11th 2016

    From #23:

    I remember seeing an attempt at such a “multiple conclusion BI” in David Pym’s book, The Semantics and Proof Theory of the Logic of Bunched Implications, but that it had various shortcomings.

    Finally tracked down a copy of this book. His multiple conclusions are also bunched, with comma and semicolon there representing multiplicative and additive disjunctions (there written $\#$ and $\vee$), and he has both an additive and a multiplicative negation, written $\neg$ and $\sim$, both of which are classical (we have $A \vee \neg A$ and also $A \,\#\, {\sim}A$).

    His rules for the negations, conjunctions, and disjunctions all make sense, although I think there is a typo in his rule $\wedge R$; he has

    $$\frac{\Gamma \vdash \phi;\Delta \qquad \Gamma' \vdash \psi;\Delta'}{\Gamma;\Gamma' \vdash \Delta(\phi\wedge \psi);\, \Delta'(\phi\wedge\psi)}$$

    which doesn’t even parse for me; in the conclusion $\Delta(-)$ and $\Delta'(-)$ are “bunches with holes”, whereas in the premises they are not. So it seems to me that the premises should be $\Gamma \vdash \Delta(\phi)$ and $\Gamma' \vdash \Delta'(\psi)$; this would also be a precise dualization of his rule $\vee L$.
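    That is, presumably the intended rule is

    $$\frac{\Gamma \vdash \Delta(\phi) \qquad \Gamma' \vdash \Delta'(\psi)}{\Gamma;\Gamma' \vdash \Delta(\phi\wedge \psi);\, \Delta'(\phi\wedge\psi)}$$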

    He claims (without proof) that this calculus has cut-elimination, but that cut-elimination fails when certain rules for the implications $\multimap$ (which BI people write with a $\ast$ instead of the $\circ$ in $\multimap$, but I don’t know how to make that symbol here) and $\to$ are added. Unfortunately he gives no indication of what goes wrong with cut-elimination.

    It seems to me that since both $\neg$ and $\sim$ are classical, we ought to be able to define $(A \multimap B) = ({\sim}A \,\#\, B)$ and $(A \to B) = (\neg A \vee B)$, and indeed he says that “it may readily be seen that $(A \multimap B) = ({\sim}A \,\#\, B)$”, although it’s not clear to me exactly what this means before $\multimap$ has been defined. In any case this definition of $\multimap$ would apparently not validate the rules he wants for it; for instance, his $\multimap L$ rule is

    $$\frac{\Gamma' \vdash \Delta(\phi) \qquad \Gamma(\psi) \vdash \Delta'}{\Gamma(\Gamma',\, \phi\multimap\psi) \vdash \Delta(\Delta')}$$

    whereas his rule $\# L$ only allows introduction at the “top of a bunch”:

    $$\frac{\Gamma,\phi \vdash \Delta \qquad \Gamma',\psi \vdash \Delta'}{\Gamma,\Gamma',\, \phi \,\#\, \psi \vdash \Delta,\Delta'}$$

    so it would only yield a similarly restricted left-rule for $\multimap$:

    $$\frac{\Gamma' \vdash \Delta,\phi \qquad \Gamma,\psi \vdash \Delta'}{\Gamma,\Gamma',\, \phi \multimap \psi \vdash \Delta,\Delta'}$$

    He does mention that “we should be able to recover cut-elimination” with such a restriction on $\multimap L$, but says that “such a system would be of questionable semantic value”. I have no idea what that means, and he gives no further explanation.

    So it seems to me that there is a very reasonable “doubly-classical BI” with two conjunctions, two disjunctions, two implications (with the weaker rule above), and two negations. If we restrict attention to the additive conjunction and disjunction and the multiplicative implication and negation, and add contraction for the comma, this again looks a lot like classical R. Unlike the display logic approach, it seems more like we can really ignore the additive negation, since it has no direct structural-connective analogue, and we can certainly ignore the additive implication. The multiplicative conjunction and disjunction are present as structural connectives whether or not we internalize them, but I would think those shouldn’t be as problematic to relevantists, since they can be defined in terms of the multiplicative implication and negation. However, it’s not obvious to me whether the resulting system induces exactly the same relations between these connectives that hold in classical R, or exactly what its categorical semantics will look like.

    • CommentRowNumber34.
    • CommentAuthorDavidRoberts
    • CommentTimeMay 11th 2016
    • (edited May 11th 2016)

    Nevermind

    • CommentRowNumber35.
    • CommentAuthorUrs
    • CommentTimeJun 4th 2022

    just looking at this page due to the recent paraconsistent activity:

    For the record, the aim or conclusion of the paragraph Aristotelian syllogistic (here) remains unclear.

    diff, v20, current