The linear logic wiki describes several translations of classical logic into linear logic, which translate the classical implication A → B by adding extra exponentials, along the lines of !?A* ⊸ ?B* or !A* ⊸ ?!B* (where A* and B* translate A and B respectively). But wikipedia claims that it can be translated as !A* ⊸ ?B*, which is none of these. Is wikipedia wrong?
I’m pretty sure that the !A ⊸ ?B translation is correct, from the intuition that classical sequent calculus allows weakening/contraction both on the left and on the right. As a sanity check, here is a derivation of the translation of Peirce’s law ((p → q) → p) → p, found by llprover:
             ------- Ax
             p --> p
            -------- R?
            p --> ?p
         ----------- W?
         p --> ?q,?p
        ------------ L!
        !p --> ?q,?p
     --------------- R->
      --> !p-> ?q,?p
   ------------------- R!     --------- Ax
    --> ! (!p-> ?q),?p        ?p --> ?p
   -------------------------------------- L->
       ! (!p-> ?q)-> ?p --> ?p,?p
      --------------------------- C?
        ! (!p-> ?q)-> ?p --> ?p
     ----------------------------- L!
      ! (! (!p-> ?q)-> ?p) --> ?p
    ------------------------------- R->
     --> ! (! (!p-> ?q)-> ?p)-> ?p
The “T” and “Q” translations on the linear logic wiki are references to an old paper by Danos, Joinet and Schellinx, LKQ and LKT: sequent calculi for second order logic based upon dual linear decompositions of classical implication.
By the way, part of the background/motivation for these less “economical” translations T and Q was to obtain non-degenerate semantics of proofs in classical sequent calculus (rather than just classical provability), where “non-degenerate” means “not all proofs are equated under cut-elimination”. The naive interpretation of classical sequent calculus runs into the problem of “Lafont’s critical pair”: given any pair of proofs π₁ and π₂ of Γ ⊢ Δ, we right-weaken π₁ to a proof of Γ ⊢ Δ, A, left-weaken π₂ to a proof of A, Γ ⊢ Δ, cut the two resulting proofs together to get a proof of Γ, Γ ⊢ Δ, Δ, and apply repeated contraction to get a new proof π of Γ ⊢ Δ. Under cut-elimination, π can reduce to either π₁ or π₂, so we must have π₁ = π₂ (at least under a naive semantics of proofs).
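For concreteness, here is the shape of that derivation, where π₁ and π₂ are any two proofs of the same sequent Γ ⊢ Δ and A is an arbitrary formula:

        π₁                 π₂
      Γ ⊢ Δ              Γ ⊢ Δ
    ----------- WR     ----------- WL
     Γ ⊢ Δ, A           A, Γ ⊢ Δ
    ------------------------------ Cut
             Γ, Γ ⊢ Δ, Δ
            ------------- CL/CR (repeated)
                Γ ⊢ Δ

Cut-elimination can push the cut into either weakening, erasing the other branch, which is why the resulting proof π reduces to (a weakened and contracted copy of) either π₁ or π₂.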
I think of the DJS paper as an attempt at giving more constructive semantics to classical logic via translation to linear logic, and as a precursor to better ways of understanding such translations via polarities and focusing and all that jazz.
Ah, interesting, thanks. Do you know a citation for the !A ⊸ ?B translation?
Do you know a citation for the !A ⊸ ?B translation?
Actually I’m not sure, I was wondering about that…
It doesn’t seem to be in Girard’s original linear logic paper (see Sections 5.4 and 5.5 for different translations of classical logic), or in “Linear logic: its syntax and semantics”. And I just saw in some notes by Emmanuel Beffara it says that “translating A → B into !A ⊸ ?B does not work”, so I might be missing something.
Okay, here is a counterexample: p → q, q → r ⊢ p → r is provable classically, but not its translation !(!p ⊸ ?q), !(!q ⊸ ?r) ⊢ ?(!p ⊸ ?r).
So in #2 I was thinking about weakening and contraction, but the counterexample is just composition! Whoops!
One way to see the reason for the failure is to consider a classical cut:

    Γ ⊢ Δ, A      A, Γ′ ⊢ Δ′
   --------------------------- Cut
         Γ, Γ′ ⊢ Δ, Δ′

If we translate classical sequents Γ ⊢ Δ by linear sequents !Γ* ⊢ ?Δ*, then we get stuck trying to compose a proof of !Γ* ⊢ ?Δ*, ?A* with a proof of !A*, !Γ′* ⊢ ?Δ′*. On the other hand, if we use the translation !Γ* ⊢ ?!Δ*, then we get a proof of !Γ* ⊢ ?!Δ*, ?!A* and a proof of !A*, !Γ′* ⊢ ?!Δ′*, and we can apply left-promotion on the right premise to get ?!A*, !Γ′* ⊢ ?!Δ′*, and then cut. Dually, if we use the translation !?Γ* ⊢ ?Δ*, then we can apply right-promotion on the left premise (turning ?A* into !?A*) and then cut.
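For example, under the !Γ* ⊢ ?!Δ* translation the repaired cut looks as follows; the left-promotion is just the ?-left rule, which applies because the remaining context consists of !-formulas on the left and ?-formulas on the right:

                            !A*, !Γ′* ⊢ ?!Δ′*
                           ------------------- L? (left-promotion)
    !Γ* ⊢ ?!Δ*, ?!A*        ?!A*, !Γ′* ⊢ ?!Δ′*
    ------------------------------------------- Cut
              !Γ*, !Γ′* ⊢ ?!Δ*, ?!Δ′*

The dual translation works the same way, with a right-promotion on the left premise instead.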
Interesting! How can you be sure that not !(!p ⊸ ?q), !(!q ⊸ ?r) ⊢ ?(!p ⊸ ?r)?
Someone should fix wikipedia, and we should also record this at the nLab page linear logic.
The reason I ask is that I’m looking at a particular model in which !A ⊸ ?B is an interesting thing, and unless I’m mistaken, in this model such things do compose. I wonder what extra structure this model has that makes that work.
How can you be sure that not !(!p ⊸ ?q), !(!q ⊸ ?r) ⊢ ?(!p ⊸ ?r)?
Good question. I’m reasoning backwards about all possible cut-free proofs in the sequent calculus. Some of the !s and ?s are useless, so this goal reduces to proving !p ⊸ ?q, !(!q ⊸ ?r) ⊢ !p ⊸ ?r (keeping the ! on the second hypothesis, which we will need in a moment), and hence, by the invertible ⊸-right rule, to !p ⊸ ?q, !(!q ⊸ ?r), !p ⊢ ?r. This further reduces to proving ?q, !(!q ⊸ ?r) ⊢ ?r after applying the ⊸-left rule (the other premise being !p ⊢ !p), then to q, !(!q ⊸ ?r) ⊢ ?r after a left-promotion, and finally to q ⊢ !q after a dereliction and another ⊸-left rule (the other premise being ?r ⊢ ?r). But there is no way to prove the goal q ⊢ !q in general, so there is no proof of our original sequent.
The reason I ask is that I’m looking at a particular model in which !A ⊸ ?B is an interesting thing, and unless I’m mistaken, in this model such things do compose. I wonder what extra structure this model has that makes that work.
Yeah, that’s interesting. Does the model let you define morphisms of type (!A ⊸ ?B) ⊗ (!B ⊸ ?C) ⊸ (!A ⊸ ?C), or does composition work in a different way?
Unless I got a dual misplaced (which is entirely possible), I think the model doesn’t have (!A ⊸ ?B) ⊗ (!B ⊸ ?C) ⊸ (!A ⊸ ?C), but instead something like !(!A ⊸ ?B) ⊗ !(!B ⊸ ?C) ⊸ (!A ⊸ ?C).
The model is the ∗-autonomous category Sup, with !A the suplattice of downsets in A ordered by containment. This is the comonad induced on Sup by the adjunction exhibiting it as monadic over Pos, i.e. !A is the free suplattice on the poset A. Then ?A = (!(A^op))^op, which is the suplattice of upsets in A ordered by reverse containment. I haven’t checked all the details to be sure this acts like !, but I think this satisfies at least the law !(A & B) ≅ !A ⊗ !B, since maps out of both of them classify arbitrary poset-maps out of A × B.
Note that the embedding of intuitionistic logic taking A → B to !A ⊸ B corresponds to building the Kleisli category of !, which is the category of suplattices and arbitrary poset-maps between them.
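Here is a quick executable illustration of the free-suplattice description of !, as a rough sketch: the encoding of finite posets and every helper name below (downsets, extend, f) is ad hoc, not notation from the discussion above.

def downsets(elems, leq):
    """All downsets of the finite poset (elems, leq), as frozensets."""
    out = []
    n = len(elems)
    for mask in range(1 << n):
        s = {elems[i] for i in range(n) if mask & (1 << i)}
        if all(y in s for x in s for y in elems if leq(y, x)):
            out.append(frozenset(s))
    return out

# !A for the two-element chain A = {0 < 1} is the three-element chain {} < {0} < {0,1}:
A, leq = [0, 1], (lambda x, y: x <= y)
bang_A = downsets(A, leq)
assert sorted(len(D) for D in bang_A) == [0, 1, 2]

# Freeness: a monotone map f : A -> L into a suplattice L (here L = subsets of
# {'x'} under union) extends to a sup-preserving map !A -> L sending a downset
# to the join of its image...
f = {0: frozenset(), 1: frozenset({'x'})}

def extend(D):
    """The induced sup-map !A -> L: send a downset to the join of its image."""
    return frozenset().union(*(f[a] for a in D))

# ...and the extension agrees with f on principal downsets and preserves joins:
assert extend(frozenset({0})) == f[0]
assert extend(frozenset({0, 1})) == f[1]
assert extend(frozenset({0}) | frozenset({0, 1})) == extend(frozenset({0})) | extend(frozenset({0, 1}))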
To see what happens with composition, write F(A,B) = !A ⊸ ?B, so that F(A,B) is the suplattice of upsets in A^op × B ordered by reverse containment. This is also known as the opposite of the poset of 2-enriched profunctors from A to B. Since profunctors can be composed, taking the opposite of this composition map we get a map of posets F(A,B) × F(B,C) → F(A,C). Composition of profunctors is cocontinuous in each variable, but this is no longer the case when we take opposites, so this doesn’t extend to a suplattice map on F(A,B) ⊗ F(B,C). But it does of course induce a suplattice map on !(F(A,B) & F(B,C)), which as noted above is equivalent to !F(A,B) ⊗ !F(B,C).
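To see the failure of cocontinuity concretely, here is a tiny executable check (a rough sketch in Python; the posets are taken to be discrete so that every relation counts as a profunctor, and all names such as compose, R1, R2, S are ad hoc): composition of 2-enriched profunctors preserves unions, which are the joins in the usual order, but not intersections, which become the joins once we pass to the opposite order used for !A ⊸ ?B.

def compose(R, S):
    """Profunctor composition: (a,c) is in R;S iff there is some b with (a,b) in R and (b,c) in S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

# Two profunctors from the one-element poset {a} to {b1, b2}, and one from {b1, b2} to {c}:
R1 = {('a', 'b1')}
R2 = {('a', 'b2')}
S = {('b1', 'c'), ('b2', 'c')}

# Composition preserves unions, i.e. joins of profunctors in the usual order...
assert compose(R1 | R2, S) == compose(R1, S) | compose(R2, S)

# ...but not intersections, which are the joins after passing to the opposite order:
lhs = compose(R1 & R2, S)                # R1 & R2 is empty, so this is empty
rhs = compose(R1, S) & compose(R2, S)    # both composites are {('a', 'c')}
assert lhs != rhs

This is exactly the obstruction to getting a map out of the tensor F(A,B) ⊗ F(B,C), whereas a map out of !F(A,B) ⊗ !F(B,C) ≅ !(F(A,B) & F(B,C)) only requires monotonicity.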
This model also has other very special features not present in syntactic linear logic, such as 1 ≅ ⊥ and & ≅ ⊕, so it’s not too surprising if this is another one of them. But I would like to understand it better. In particular, can the existence of this composition be expressed as a property (or property-like structure) of ! among comonads on ∗-autonomous categories?
Re #8, I went ahead and fixed the Wikipedia article, and then looking back through the history realized that I was the one who originally introduced this mistake, way back in June 2009! That adds some (painful) irony to my being “pretty sure” the article’s claim was correct… :-o So thanks, Mike, for bringing this issue up and helping me to correct the errors of my youth!
Haha! Thanks.