    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeOct 11th 2017

    The linear logic wiki describes several translations of classical logic into linear logic, which translate the classical implication $A \to B$ as the linear $!?A' \multimap ?B'$ or $?(!A' \multimap B')$ or $!(A' \multimap ?B')$, or $!A' \multimap ?!B'$ (where $A'$ and $B'$ translate $A$ and $B$ respectively). But wikipedia claims that it can be translated as $!A' \multimap ?B'$, which is none of these. Is wikipedia wrong?

    • CommentRowNumber2.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeOct 11th 2017

    I’m pretty sure that the translation $!A' \multimap ?B'$ is correct, from the intuition that classical sequent calculus allows weakening/contraction both on the left and on the right. As a sanity check, here is a derivation of Peirce’s law found by llprover:

    ------- Ax
    p --> p
    -------- R?
    p --> ?p
    ----------- W?
    p --> ?q,?p
    ------------ L!
    !p --> ?q,?p
    --------------- R->
     --> !p-> ?q,?p
    ------------------- R!  --------- Ax
     --> ! (!p-> ?q),?p     ?p --> ?p
    --------------------------------- L->
       ! (!p-> ?q)-> ?p --> ?p,?p
       -------------------------- C?
        ! (!p-> ?q)-> ?p --> ?p
        --------------------------- L!
        ! (! (!p-> ?q)-> ?p) --> ?p
        ------------------------------ R->
         --> ! (! (!p-> ?q)-> ?p)-> ?p
    

    The “T” and “Q” translations on the linear logic wiki refer to an old paper by Danos, Joinet and Schellinx, “LKQ and LKT: sequent calculi for second order logic based upon dual linear decompositions of classical implication”.

    • CommentRowNumber3.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeOct 11th 2017
    • (edited Oct 11th 2017)

    By the way, part of the background/motivation for these less “economical” translations T and Q was to obtain non-degenerate semantics of proofs in classical sequent calculus (rather than just classical provability), where “non-degenerate” means “not all proofs are equated under cut-elimination”. The naive interpretation of classical sequent calculus runs into the problem of “Lafont’s critical pair”: given any pair of proofs $\alpha_1, \alpha_2$ of $\Gamma \vdash \Delta$, we right-weaken $\alpha_1$ to a proof of $\Gamma \vdash \Delta, A$, left-weaken $\alpha_2$ to a proof of $A, \Gamma \vdash \Delta$, cut the two resulting proofs together to get a proof of $\Gamma, \Gamma \vdash \Delta, \Delta$, and apply repeated contraction to get a new proof $\beta$ of $\Gamma \vdash \Delta$. Under cut-elimination, $\beta$ can reduce to either $\alpha_1$ or $\alpha_2$, so we must have $[[\alpha_1]] \equiv [[\alpha_2]]$ (at least under a naive semantics of proofs).
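
    Written out as a proof tree (with WR/WL for right/left weakening and $\mathrm{C}^*$ for the repeated contractions), the critical pair is just:

    $$\beta \;=\; \dfrac{\dfrac{\dfrac{\alpha_1}{\Gamma \vdash \Delta, A}\,\mathrm{WR} \qquad \dfrac{\alpha_2}{A, \Gamma \vdash \Delta}\,\mathrm{WL}}{\Gamma, \Gamma \vdash \Delta, \Delta}\,\mathrm{cut}}{\Gamma \vdash \Delta}\,\mathrm{C}^*$$

    Cut-elimination can push the cut into either weakening and discard the other subproof, which is where the two reductions of $\beta$ come from.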

    I think of the DJS paper as an attempt at giving more constructive semantics to classical logic via translation to linear logic, and as a precursor to better ways of understanding such translations via polarities and focusing and all that jazz.

    • CommentRowNumber4.
    • CommentAuthorMike Shulman
    • CommentTimeOct 11th 2017

    Ah, interesting, thanks. Do you know a citation for the $!A' \multimap ?B'$ translation?

    • CommentRowNumber5.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeOct 11th 2017
    • (edited Oct 11th 2017)

    Do you know a citation for the $!A' \multimap ?B'$ translation?

    Actually I’m not sure, I was wondering about that…

    It doesn’t seem to be in Girard’s original linear logic paper (see Sections 5.4 and 5.5 for different translations of classical logic), or in “linear logic: its syntax and semantics”. And I just saw that some notes by Emmanuel Beffara say that “translating $A \Rightarrow B$ into $!A \multimap ?B$ does not work”, so I might be missing something.

    • CommentRowNumber6.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeOct 11th 2017

    Okay, here is a counterexample: $\vdash^{CL} (q \supset r) \supset ((p \supset q) \supset (p \supset r))$ but not $\vdash^{LL} !(!q \multimap ?r) \multimap ?(!(!p \multimap ?q) \multimap ?(!p \multimap ?r))$.

    So in #2 I was thinking about weakening and contraction, but the counterexample is just composition! Whoops!

    • CommentRowNumber7.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeOct 11th 2017
    • (edited Oct 11th 2017)

    One way to see the reason for the failure is to consider a classical cut:

    $$\frac{\Gamma_1 \vdash A, \Delta_1 \quad \Gamma_2, A \vdash \Delta_2}{\Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2}$$

    If we translate classical sequents $\Gamma \vdash \Delta$ by linear sequents $!\Gamma^* \vdash ?\Delta^*$, then we get stuck trying to compose a proof of $!\Gamma_1^* \vdash ?A^*, ?\Delta_1^*$ with a proof of $!\Gamma_2^*, !A^* \vdash ?\Delta_2^*$. On the other hand, if we use the translation $!\Gamma^* \vdash ?!\Delta^*$, then we get a proof of $!\Gamma_1^* \vdash ?!A^*, ?!\Delta_1^*$ and a proof of $!\Gamma_2^*, !A^* \vdash ?!\Delta_2^*$, and we can apply left-promotion on the right premise to get $!\Gamma_2^*, ?!A^* \vdash ?!\Delta_2^*$, and then cut. Dually, if we use the translation $!?\Gamma^* \vdash ?\Delta^*$ then we can apply right-promotion on the left premise and then cut.
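
    In sequent-calculus form the second case is just (writing $?\mathrm{L}$ for left-promotion):

    $$\dfrac{!\Gamma_1^* \vdash ?!A^*, ?!\Delta_1^* \qquad \dfrac{!\Gamma_2^*, !A^* \vdash ?!\Delta_2^*}{!\Gamma_2^*, ?!A^* \vdash ?!\Delta_2^*}\,?\mathrm{L}}{!\Gamma_1^*, !\Gamma_2^* \vdash ?!\Delta_1^*, ?!\Delta_2^*}\,\mathrm{cut}$$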

    • CommentRowNumber8.
    • CommentAuthorMike Shulman
    • CommentTimeOct 11th 2017

    Interesting! How can you be sure that not $\vdash^{LL} !(!q \multimap ?r) \multimap ?(!(!p \multimap ?q) \multimap ?(!p \multimap ?r))$?

    Someone should fix wikipedia, and we should also record this at linear logic.

    The reason I ask is that I’m looking at a particular model in which $!A \multimap ?B$ is an interesting thing, and unless I’m mistaken, in this model such things do compose. I wonder what extra structure this model has that makes that work.

    • CommentRowNumber9.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeOct 11th 2017
    • (edited Oct 11th 2017)

    How can you be sure that not $\vdash^{LL} !(!q \multimap ?r) \multimap ?(!(!p \multimap ?q) \multimap ?(!p \multimap ?r))$?

    Good question. I’m reasoning backwards about all possible cut-free proofs in the sequent calculus. Some of the !s and ?s are useless, so this goal reduces to proving $!q \multimap ?r, !p \multimap ?q, !p \vdash ?r$, which further reduces to proving $!q \multimap ?r, ?q \vdash ?r$ after applying the $\multimap$-left rule, then $!q \multimap ?r, q \vdash ?r$ after a left-promotion, and finally $q \vdash !q$ after another $\multimap$-left rule. But there is no way to prove the goal $q \vdash !q$ in general, so there is no proof of our original sequent.
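
    Schematically, the backwards search goes (each line is the goal remaining after the indicated step):

    $$\begin{aligned} & \vdash\; !(!q \multimap ?r) \multimap ?(!(!p \multimap ?q) \multimap ?(!p \multimap ?r)) \\ \rightsquigarrow\;& !q \multimap ?r,\; !p \multimap ?q,\; !p \;\vdash\; ?r \\ \rightsquigarrow\;& !q \multimap ?r,\; ?q \;\vdash\; ?r \\ \rightsquigarrow\;& !q \multimap ?r,\; q \;\vdash\; ?r \\ \rightsquigarrow\;& q \;\vdash\; !q \end{aligned}$$

    and the final goal is stuck, since the only way to introduce $!q$ on the right in a cut-free proof is promotion, which would need the context to consist of $!$-formulas.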

    The reason I ask is that I’m looking at a particular model in which $!A \multimap ?B$ is an interesting thing, and unless I’m mistaken, in this model such things do compose. I wonder what extra structure this model has that makes that work.

    Yeah, that’s interesting. Does the model let you define morphisms of type $!q \multimap ?r, !p \multimap ?q \vdash !p \multimap ?r$, or does composition work in a different way?

    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeOct 12th 2017

    Unless I got a dual misplaced (which is entirely possible), I think the model doesn’t have $!q \multimap ?r, !p \multimap ?q \vdash !p \multimap ?r$, but instead something like $!(!q \multimap ?r), !(!p \multimap ?q) \vdash !p \multimap ?r$.

    The model is the $\ast$-autonomous category $Sup$, with $!A = D A$ the suplattice of downsets in $A$ ordered by containment. This is the comonad induced on $Sup$ by the adjunction exhibiting it as monadic over $Pos$, i.e. $D A$ is the free suplattice on the poset $A$. Then $?A = (D A^*)^* = (D A^{op})^{op}$, which is the suplattice of upsets in $A$ ordered by reverse containment. I haven’t checked all the details to be sure this acts like $!$, but I think this satisfies at least the law $!(A \times B) \cong !A \otimes !B$, since maps out of both of them classify arbitrary poset-maps out of $A \times B$.
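
    To make the objects concrete, here is a throwaway finite sanity check (toy Python; the poset and helper names like is_downset are ad hoc, not anything from the model itself) enumerating $!A$ and $?A$ for a small poset:

    from itertools import chain, combinations

    # Toy finite poset A: a <= c and b <= c (plus reflexivity).
    elements = {'a', 'b', 'c'}
    leq = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('a', 'c'), ('b', 'c')}

    def subsets(xs):
        xs = sorted(xs)
        return (frozenset(s) for s in
                chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1)))

    def is_downset(S):
        # downward closed: x <= y and y in S implies x in S
        return all(x in S for (x, y) in leq if y in S)

    def is_upset(S):
        # upward closed: x <= y and x in S implies y in S
        return all(y in S for (x, y) in leq if x in S)

    # !A = D A: the downsets of A, ordered by inclusion (joins are unions).
    bang_A = [S for S in subsets(elements) if is_downset(S)]

    # ?A = (D(A^op))^op: the upsets of A, ordered by *reverse* inclusion.
    query_A = [S for S in subsets(elements) if is_upset(S)]

    print(sorted(map(sorted, bang_A)))   # the 5 downsets of A
    print(sorted(map(sorted, query_A)))  # the 5 upsets of A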

    Note that the embedding of intuitionistic logic taking $A \to B$ to $!A \multimap B$ corresponds to building the Kleisli category of $D$, which is the category of suplattices and arbitrary poset-maps between them.

    To see what happens with composition, write $A \multimap B = (A \otimes B^*)^*$, so that

    $$!A \multimap ?B = (!A \otimes (?B)^*)^* = (!A \otimes !(B^*))^* = (!(A \times B^*))^* = ?(A^* \times B),$$

    the suplattice of upsets in $A^* \times B$ ordered by reverse containment. This is also known as the opposite of the poset of $\mathbf{2}$-enriched profunctors $A^{op} \times B \to \mathbf{2}$. Since profunctors can be composed, taking the opposite of this composition map we get a map of posets $(!A \multimap ?B) \times (!B \multimap ?C) \to (!A \multimap ?C)$. Composition of profunctors is cocontinuous in each variable, but this is no longer the case when we take opposites, so this doesn’t extend to a suplattice map on $(!A \multimap ?B) \otimes (!B \multimap ?C)$. But it does of course induce a suplattice map on $!((!A \multimap ?B) \times (!B \multimap ?C))$, which as noted above is equivalent to $!(!A \multimap ?B) \otimes !(!B \multimap ?C)$.
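
    Concretely, thinking of an element of $!A \multimap ?B \cong ?(A^* \times B)$ as a $\mathbf{2}$-valued relation, the composite above is computed by ordinary relational composition; a throwaway finite illustration in the same toy Python style (ad hoc names, monotonicity not enforced):

    # 2-enriched profunctors as boolean relations, composed by
    # (R ; S)(a, c)  =  exists b. R(a, b) and S(b, c).
    R = {('a1', 'b1'), ('a2', 'b2')}   # a relation standing in for A -|-> B
    S = {('b1', 'c1')}                 # a relation standing in for B -|-> C

    def compose(R, S):
        return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

    print(compose(R, S))   # {('a1', 'c1')}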

    This model also has other very special features not present in syntactic linear logic, such as $A \& B = A \oplus B$ and $(!A \multimap B)^* = (!(A^*) \multimap B^*)$, so it’s not too surprising if this is another one of them. But I would like to understand it better. In particular, can the existence of this composition be expressed as a property (or property-like structure) of $D$ among comonads on $\ast$-autonomous categories?

    • CommentRowNumber11.
    • CommentAuthorNoam_Zeilberger

    Re #8, I went ahead and fixed the Wikipedia article, and then looking back through the history realized that I was the one who originally introduced this mistake, way back in June 2009! That adds some (painful) irony to my being “pretty sure” the article’s claim was correct… :-o So thanks, Mike, for bringing this issue up and helping me to correct the errors of my youth!

    • CommentRowNumber12.
    • CommentAuthorMike Shulman
    • CommentTimeOct 18th 2017

    Haha! Thanks.