The linear logic wiki describes several translations of classical logic into linear logic, which translate the classical implication $A\to B$ as the linear $!? A' \multimap ?B'$ or $?(!A' \multimap B')$ or $!(A' \multimap ?B')$, or $!A' \multimap ?!B'$ (where $A'$ and $B'$ translate $A$ and $B$ respectively). But Wikipedia claims that it can be translated as $!A' \multimap ?B'$, which is none of these. Is Wikipedia wrong?
I’m pretty sure that the translation $!A' \multimap ?B'$ is correct, from the intuition that classical sequent calculus allows weakening/contraction both on the left and on the right. As a sanity check, here is a derivation of Peirce’s law found by llprover:
```
              ------- Ax
              p --> p
              -------- R?
              p --> ?p
              ----------- W?
              p --> ?q,?p
              ------------ L!
              !p --> ?q,?p
              --------------- R->
              --> !p-> ?q,?p
              ------------------- R!   --------- Ax
              --> ! (!p-> ?q),?p       ?p --> ?p
              ---------------------------------- L->
                ! (!p-> ?q)-> ?p --> ?p,?p
                -------------------------- C?
                ! (!p-> ?q)-> ?p --> ?p
                --------------------------- L!
                ! (! (!p-> ?q)-> ?p) --> ?p
                ------------------------------ R->
                --> ! (! (!p-> ?q)-> ?p)-> ?p
```
The “T” and “Q” translations on the linear logic wiki are references to an old paper by Danos, Joinet and Schellinx, LKQ and LKT: sequent calculi for second order logic based upon dual linear decompositions of classical implication.
By the way, part of the background/motivation for these less “economical” translations T and Q was to obtain non-degenerate semantics of proofs in classical sequent calculus (rather than just classical provability), where “non-degenerate” means “not all proofs are equated under cut-elimination”. The naive interpretation of classical sequent calculus runs into the problem of “Lafont’s critical pair”: given any pair of proofs $\alpha_1,\alpha_2$ of $\Gamma \vdash \Delta$, we right-weaken $\alpha_1$ to a proof of $\Gamma\vdash \Delta,A$, left-weaken $\alpha_2$ to a proof of $A,\Gamma\vdash \Delta$, cut the two resulting proofs together to get a proof of $\Gamma,\Gamma \vdash \Delta,\Delta$, and apply repeated contraction to get a new proof $\beta$ of $\Gamma \vdash \Delta$. Under cut-elimination, $\beta$ can reduce to either $\alpha_1$ or $\alpha_2$, so we must have $[[ \alpha_1 ]] \equiv [[ \alpha_2 ]]$ (at least under a naive semantics of proofs).
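In rule form, the critical pair just described is the following derivation sketch (weakening both proofs, cutting, then contracting):

$\frac{\dfrac{\alpha_1}{\Gamma \vdash \Delta, A}\;\mathrm{WR} \qquad \dfrac{\alpha_2}{A, \Gamma \vdash \Delta}\;\mathrm{WL}}{\dfrac{\Gamma,\Gamma \vdash \Delta,\Delta}{\Gamma \vdash \Delta}\;\mathrm{C}^*}\;\mathrm{Cut}$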
I think of the DJS paper as an attempt at giving more constructive semantics to classical logic via translation to linear logic, and as a precursor to better ways of understanding such translations via polarities and focusing and all that jazz.
Ah, interesting, thanks. Do you know a citation for the $!A' \multimap ?B'$ translation?
Do you know a citation for the $!A' \multimap ?B'$ translation?
Actually I’m not sure, I was wondering about that…
It doesn’t seem to be in Girard’s original linear logic paper (see Sections 5.4 and 5.5 for different translations of classical logic), or in Linear logic: its syntax and semantics. And I just saw in some notes by Emmanuel Beffara it says that “translating $A \Rightarrow B$ into $!A \multimap ?B$ does not work”, so I might be missing something.
Okay, here is a counterexample: $\vdash^{CL} (q \supset r) \supset ((p \supset q) \supset (p \supset r))$ but not $\vdash^{LL} !(!q \multimap ?r) \multimap ?(!(!p \multimap ?q) \multimap ?(!p \multimap ?r))$.
So in #2 I was thinking about weakening and contraction, but the counterexample is just composition! Whoops!
One way to see the reason for the failure is to consider a classical cut:
$\frac{\Gamma_1 \vdash A,\Delta_1 \quad \Gamma_2, A \vdash \Delta_2}{\Gamma_1,\Gamma_2 \vdash \Delta_1,\Delta_2}$

If we translate classical sequents $\Gamma \vdash \Delta$ by linear sequents $! \Gamma^* \vdash ? \Delta^*$, then we get stuck trying to compose a proof of $!\Gamma_1^* \vdash ?A^*,?\Delta_1^*$ with a proof of $!\Gamma_2^*, !A^* \vdash ?\Delta_2^*$. On the other hand, if we use the translation $! \Gamma^* \vdash ?! \Delta^*$, then we get a proof of $!\Gamma_1^* \vdash ?!A^*,?!\Delta_1^*$ and a proof of $!\Gamma_2^*, !A^* \vdash ?!\Delta_2^*$, and we can apply left-promotion on the right premise to get $!\Gamma_2^*, ?!A^* \vdash ?!\Delta_2^*$, and then cut. Dually, if we use the translation $!? \Gamma^* \vdash ? \Delta^*$ then we can apply right-promotion on the left premise and then cut.
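For instance, with the $!\Gamma^* \vdash ?!\Delta^*$ translation the repaired cut looks like this (a sketch; the left-promotion step $?L$ is legitimate because its context consists of $!$-formulas on the left and $?$-formulas on the right):

$\frac{!\Gamma_1^* \vdash ?!A^*, ?!\Delta_1^* \qquad \dfrac{!\Gamma_2^*, !A^* \vdash ?!\Delta_2^*}{!\Gamma_2^*, ?!A^* \vdash ?!\Delta_2^*}\;?L}{!\Gamma_1^*, !\Gamma_2^* \vdash ?!\Delta_1^*, ?!\Delta_2^*}\;\mathrm{Cut}$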
Interesting! How can you be sure that not $\vdash^{LL} !(!q \multimap ?r) \multimap ?(!(!p \multimap ?q) \multimap ?(!p \multimap ?r))$?
Someone should fix Wikipedia, and we should also record this at linear logic.
The reason I ask is that I’m looking at a particular model in which $!A \multimap ?B$ is an interesting thing, and unless I’m mistaken, in this model such things do compose. I wonder what extra structure this model has that makes that work.
How can you be sure that not $\vdash^{LL} !(!q \multimap ?r) \multimap ?(!(!p \multimap ?q) \multimap ?(!p \multimap ?r))$?
Good question. I’m reasoning backwards about all possible cut-free proofs in the sequent calculus. Some of the !s and ?s are useless, so this goal reduces to proving $!q \multimap ?r, !p \multimap ?q, !p \vdash ?r$, which further reduces to proving $!q \multimap ?r, ?q \vdash ?r$ after applying the $\multimap$-left rule, then $!q \multimap ?r, q \vdash ?r$ after a left-promotion, and finally $q \vdash !q$ after another $\multimap$-left rule. But there is no way to prove the goal $q \vdash !q$ in general, so there is no proof of our original sequent.
The reason I ask is that I’m looking at a particular model in which $!A \multimap ?B$ is an interesting thing, and unless I’m mistaken, in this model such things do compose. I wonder what extra structure this model has that makes that work.
Yeah, that’s interesting. Does the model let you define morphisms of type $!q \multimap ?r, !p \multimap ?q \vdash !p \multimap ?r$, or does composition work in a different way?
Unless I got a dual misplaced (which is entirely possible), I think the model doesn’t have $!q \multimap ?r, !p \multimap ?q \vdash !p \multimap ?r$, but instead something like $!(!q \multimap ?r), !(!p \multimap ?q) \vdash !p \multimap ?r$.
The model is the $\ast$-autonomous category Sup, with $!A = D A$ the suplattice of downsets in $A$ ordered by containment. This is the comonad induced on $Sup$ by the adjunction exhibiting it as monadic over $Pos$, i.e. $D A$ is the free suplattice on the poset $A$. Then $? A = (D A^*)^* = (D A^{op})^{op}$, which is the suplattice of upsets in $A$ ordered by reverse containment. I haven’t checked all the details to be sure this acts like $!$, but I think this satisfies at least the law $!(A\times B) \cong !A \otimes !B$, since maps out of both of them classify arbitrary poset-maps out of $A\times B$.
Note that the embedding of intuitionistic logic taking $A\to B$ to $!A \multimap B$ corresponds to building the Kleisli category of $D$, which is the category of suplattices and arbitrary poset-maps between them.
To see what happens with composition, write $A \multimap B = (A \otimes B^*)^*$, so that
$!A \multimap ?B = (!A \otimes (?B)^*)^* = (!A \otimes !(B^*))^* = (!(A\times B^*))^* = ?(A^* \times B),$

the suplattice of upsets in $A^* \times B$ ordered by reverse containment. This is also known as the opposite of the poset of $\mathbf{2}$-enriched profunctors $A^{op} \times B \to \mathbf{2}$. Since profunctors can be composed, taking the opposite of this composition map we get a map of posets $(!A \multimap ?B) \times (!B \multimap ?C) \to (!A \multimap ?C)$. Composition of profunctors is cocontinuous in each variable, but this is no longer the case when we take opposites, so this doesn’t extend to a suplattice map on $(!A \multimap ?B) \otimes (!B \multimap ?C)$. But it does of course induce a suplattice map on $!((!A \multimap ?B) \times (!B \multimap ?C))$, which as noted above is equivalent to $!(!A \multimap ?B) \otimes !(!B \multimap ?C)$.
This model also has other very special features not present in syntactic linear logic, such as $A\& B = A\oplus B$ and $(!A\multimap B)^* = (!(A^*) \multimap B^*)$, so it’s not too surprising if this is another one of them. But I would like to understand it better. In particular, can the existence of this composition be expressed as a property (or property-like structure) of $D$ among comonads on $\ast$-autonomous categories?
Re #8, I went ahead and fixed the Wikipedia article, and then looking back through the history realized that I was the one who originally introduced this mistake, way back in June 2009! That adds some (painful) irony to my being “pretty sure” the article’s claim was correct… :-o So thanks, Mike, for bringing this issue up and helping me to correct the errors of my youth!
Haha! Thanks.