I incorporated some of my spiel from the blog into the page type theory.
I don't like JA's propositions as types page either, but I don't think that we should let it destroy the links in the Lab. He's already written that he expects it to be replaced, so it would be nice if wiki magic makes links automatically work when we do that.
You probably just missed Mike's removing the link to propositions as types from type theory.
And I shouldn't really say that I ‘don't like’ your page, only that I don't like having it as the page on propositions as types —which I think you anticipated when you wrote ‘if it’s necessary to adjust the title after a while, that’s okay, but this name will do for the time being’ there. So someday I'll at least add some introductory material explaining what I think the propositions as types paradigm is about (or maybe Mike will), and then maybe we'll agree to move your stuff to a subsidiary page. But I'm not trying to start a discussion about that early; I'm just trying to save the link.
I'm thinking that propositions as types done right might be just the thing.
I started doing this ambidextrous thing back in the days when I had group theory profs with opposing prefs — later on I was encouraged to see that a small but valiant band of categorivores like Ernie Manes made a point of applying operators on the right.
So now you know who to blame …
I don't want type theory to link to "propositions as types" for as long as doing so will take people to Jon's page, or, to be honest, any page with Jon's stuff on it. Especially since type theory has been linked from the ncafe and there has recently been discussion about "propositions as types," and I think Jon's page will do nothing more than confuse people and turn them off to the nlab. The idea of propositions as types is confusing enough already. But I didn't feel competent to guess a better name for Jon's page myself, since it doesn't make any sense to me, so instead I removed the link. Would it have been better to rename that page to, say, "Jon Awbrey's propositions as types" and let him rename it again if he has a better name in mind?
Well, Jon just moved his page, so the danger is past.
I said this before: if a page makes no sense to somebody who feels like he should be in a position to follow it, he should at least leave a brief query box indicating this.
This is in the interest of all parties involved:
the author of a page wants to be alerted if people who should be able to follow the page, don't. So that it can be improved on if possible.
the non-expert reader wants to be alerted to material that is regarded as controversial even among the experts. That's only fair.
I mean, that's the point of us putting material here instead of on our private hard-disk. To get the feedback. And critical feedback.
Urs, it's increasingly difficult to believe that Jon Awbrey as author of a page wants to be alerted if people who should be able to follow his page, don't. Regardless, I agree query boxes should be applied where skepticism is warranted. Applied perhaps liberally.
Having Jon Awbrey retitle the page in this way is completely inappropriate and antisocial and against the collaborative spirit of this enterprise, and my patience with him is now right at the breaking point.
Regarding Urs's two listed reasons above:
Todd wrote:
Having Jon Awbrey retitle the page in this way is completely inappropriate and antisocial
I completely disagree. He wrote all of that, none of it is what Mike or I would want to see on that page, and he even wrote in his first draft that he expected that the name would need to be changed. He changed it now to please us! He was trying to be nice.
Although, it would probably be better if he put all of this stuff on a personal web in the first place, since it is idiosyncratic and doesn't particularly mesh with the idiosyncrasies of the rest of us.
Look at the title. It's Jon thumbing his nose at his critics. I find it puerile and antisocial. It just doesn't belong on the nLab IMO.
I agree with you strongly that it would definitely be better relegated to his personal web.
I think that the title is another of his silly puns, referring to his usage of reverse Polish notation; see the end of the middle line of his comment #5 here.
Maybe. That's a generous interpretation. If that is really the explanation, then at a minimum the explanation belongs right at the top of the page. But at this point, I am in no mood to play games.
It never ceases to amaze me how nice you guys are.
For me it is more a combination of laziness and cowardice, a desire to avoid unnecessary confrontation. However, I'm getting more fed up and I think I'm now with David and Todd. Whether or not the title is a pun, it is inappropriate. Jon has provided no evidence that his theories are related to the collaborative enterprise the rest of us are engaged in, and so they don't belong on the main nlab web.
Jon, if you were given a private web, would you accept to have all of your pages moved there?
It looks like Jon moved it back. I put some stuff up top to orient the reader, and I think that it would be perfectly reasonable for anybody to move Jon's stuff to another page as I suggested there.
Some other movements of these pages were obscured for a while by the cache bug; all should be clear now.
Maybe somebody can help me with sorting out reference data concerning the following:
I know of this citation
and I know of this pdf by Martin-Löf, available online, with essentially, but not quite, the same title.
I have trouble seeing if the reference data corresponds to that pdf. Can anyone help?
The pdf you found looks like an early version, compared to the one available at the doi 10.1016/S0049-237X(08)71945-1 – which at least has a longer introduction (there may be other changes as well). Many passages are word-for-word the same, though.
Okay, thanks for checking!!
Some trivia:
Added at type theory a pointer to data type. There I added a pointer to Sale 77, “Primitive Data Types” (and nothing else yet).
That article shows up when one searches for the phrase “A type is a concept”. I happened to wonder which other sources might explicitly introduce types under the name “concept” or “notion” (maybe prefixed with “mathematical”).
Thomas Holder kindly points out further relevant articles on the pre-history of type theory, have added them to the References there (right at the beginning of the list).
I added a link to an online version of Appendix B ’The doctrine of types’ by Russell
Responding to a question by email, I added to type theory a remark that when we construct a “term model” by arbitrarily choosing truth values for the undecided statements, we no longer get a model that proves the completeness theorem.
Thanks for these fixes!
Added Mike’s recent paper to the references
Mike has chosen to allow composition along multiple types at once rather than only along one type at a time, corresponding to a prop rather than a polycategory. To the extent that this type theory is aimed at noncartesian monoidal categories and so perhaps of interest in quantum physics, what difference does the prop/polycategory choice represent there?
We should add it to the table at relation between type theory and category theory, but what to call it?
Despite the temptation to speculate that the funny disjunction $\parr$ has something to do with entanglement, as far as I know there is no known application of polycategories to quantum physics. So this new type theory seems more likely to be applicable therein than standard classical linear logic. But I’d be ecstatic to be proven wrong…
I’m reading Mike’s paper and having some fun with the examples, but I don’t actually understand what’s so bad about using MILL for symmetric monoidal categories.
I thought MILL stood for a fragment of linear logic involving both $\otimes$ and $\multimap$, or at least I feel certain I’ve seen the phrase/acronym used this way in the past.
@Todd: yes, that’s what the page linear logic says too. I didn’t realize multiplicative conjunction disagreed; I agree MILL should be $\otimes,\mathbf{1},\multimap$ since those are all the multiplicative and intuitionistic operations.
My bad, regarding MILL. I just forgot about $\multimap$.
Some typos:
In the step of tr(fg) = tr(gf) that has two “redexes”, it looks like g is applied to the wrong label.
In self-duality of a Frobenius monoid, I think you swapped the components of the definition of eta, based on how you unfold it later.
Re #35:
So far I’ve redone the Frobenius monoids section with unit and tensor types. I also did a version of the “centered” Frobenius axiom proof using unit (of the monoid) rather than counit. I see that the destructuring let bindings are verbose. Is there any other problem I’m supposed to be seeing?
Thanks for the typos, I’m impressed you read it carefully enough to see that! Yes, the verbosity of matching is what this syntax avoids, staying closer to the intuition of types as having “elements” and the elements of a (tensor) product being tuples with “components”.
Is the matching syntax not also based on the intuition of elements? It just gives temporary names to the components.
I don’t have a problem with having projections; I have two minor problems with this type system, that I don’t have with linear logic. One is that without let-binding, terms tend to grow exponentially. Maybe you already know about this sort of thing. For example, to express this term in your system, given generators `f : (X,X) -> X` and `g : X -> (X,X)`:

`x:X |- let (x1,x2) = g(x) in let (x3,x4) = g(f(x1,x2)) in let (x5,x6) = g(f(x3,x4)) in f(x5,x6) : X`
This is, in some sense, not serious, since it’s straightforward to fix with simple metalanguage. But then for performance reasons, you really want to typecheck the metalanguage, so you don’t need to do the exponential expansion. It seems pretty typical to leave this sort of performance issue in papers though. So my problem is just that punting performance issues to a metalanguage is unlike programming language design.
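To see the blowup concretely, here is a quick sketch (my own toy illustration, not part of either type system) comparing the size of the projection-style term with the let-bound one for n rounds of g-then-f:

```python
def term_without_let(n: int) -> str:
    """Projection-style term for n rounds of g-then-f.

    Each round mentions the whole previous subterm twice,
    so the length roughly doubles per round (exponential in n).
    """
    t = "x"
    for _ in range(n):
        t = f"f(g({t}).1,g({t}).2)"
    return t

def term_with_let(n: int) -> str:
    """The same morphism written with let-bindings: length is linear in n."""
    bindings, cur = [], "x"
    for i in range(n):
        bindings.append(f"let (a{i},b{i}) = g({cur}) in")
        cur = f"f(a{i},b{i})"
    return " ".join(bindings + [cur])

# Lengths without let for n = 1..5: 16, 46, 106, 226, 466 — about 2x per step,
# while the let version only grows by a constant amount per round.
```

(The `.1`/`.2` notation here is just ad hoc ASCII for the projections.)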
The other problem is just that I personally find the typing rules very unfamiliar, and I don’t know how to extend the system with type constructors. (Without totally changing at least the derivations.) (Actually, I bet you broke the world record for most complicated individual typing rule!)
It sounds like my goals are different from yours. I’m not designing a programming language; I’m designing a syntax for reasoning about monoidal categories. If desired one could certainly add a let-binding syntax, but in all the actual examples I’ve looked at so far it is unnecessary. The fact is that, in my experience, MILL isn’t used much in practice by people actually working with monoidal categories, as compared to say string diagrams, and I think it’s at least worth exploring whether a different kind of type theory would be more useful. If we never try things that seem unfamiliar at first, we’ll stay stuck in a rut forever.
If you’re interested in implementing it, adding let-bindings, or adding more type constructors, I think that would be awesome! I can certainly see that the distinctive features of this system make all of those things more difficult, or at least more interesting, than in many existing type theories. But those distinctive features are the whole point, so if you get rid of them then you wouldn’t be implementing or extending this system.
I can certainly see that the distinctive features of this system make all of those things more difficult… But those distinctive features are the whole point,
What exactly is the point?
As long as you get projections, do you actually care if it’s based on props or polycategories? Like, what if the projections worked on elements of tensors instead?
How attached are you to unique derivations for the projections rather than translating to a matching representation, which would be the subject of derivations? Relatedly, are you OK with needing equations to commute nested matches?
Probably the weirdest thing about your system is that you need to compare terms to do type checking, although there’s no type dependency on terms. This really seems like something to avoid. What if all function calls are labeled, and the arguments are only on one of the appearances? This would also avoid the exponential blowup.
Would fixing anything I don’t like defeat the point for you? This sounds like complaining but it’s not. Serious questions here.
Also, in the introduction, you have a problem with “covariables”. I haven’t read any of those papers. How similar are C#’s `out` parameters to this idea of covariable? (And what if `out` parameters were single-assignment?)
So you say the problem with covariables is essentially that set theory doesn’t have them. But programming languages do (maybe/potentially). Type theory is not all a set theory imitation club, ya know.
If you’d like to write up a specific alternative theory, then we could write out some examples in it and compare. It’s hard for me to tell what the theories you’re envisioning would look like.
Also, can’t type theory be a big tent? If you want to use a type theory with covariables and matches, but I don’t, can’t we both respect each other’s preferences, and even see that each choice has its own objective advantages and disadvantages? I tried to make that point relating to string diagrams in the paper, but maybe I should explicitly say the same thing about other type theories too.
Re #47: I would rather write down examples first, if they aren’t clear, before I bother to work out rules. #45 was actually multiple different proposals, and I’m asking which you like. I guess I should expand on #45.
Re #48: I’m trying to find out if the advantage(s) you see with your new system can be retained while removing some of the aspects I see as disadvantages. Of course the tent gets bigger whenever people can’t agree. But then newcomers face a tougher decision.
(For reference, to me, any disagreement about how to use type theory for monoidal categories is much less concerning than disagreement about intensional vs extensional dependent type theory. (That is, whether the judgmental equality is extensional.))
Anyway, what about `out` parameters?
First of all, notice that instead of $(M1,M2\,|\,N1,N2)$, you could write `N1; N2; (M1,M2)`. The semicolon forms could be considered alternative formatting for your result tuple. But it can also be considered the unit elimination form.
Re #45:
As long as you get projections, do you actually care if it’s based on props or polycategories? Like, what if the projections worked on elements of tensors instead?
This is about whether to consider the `(M1,M2)` returned as a special result tuple, or the introduction form of tensor. I propose allowing `(M1,M2,M3)` as sugar for `((M1,M2),M3)`, because we will consistently treat tensor as left associative. (Or, of course, we could consistently treat it as right associative.)

Instead of $M_{(1)}$ and $M_{(2)}$, I will write the more keyboard-friendly `M.1` and `M.2`. These are a permitted form (possibly in addition to matching) of elimination for tensor. Of course, if you use `M.1`, you have to use `M.2`, and you cannot otherwise use `M`.
For conceptual symmetry with the sugar `(M1,M2,M3)`, if `M` has type $A1 \otimes A2 \otimes A3$ ($\equiv (A1 \otimes A2) \otimes A3$), then `M.1` is sugar for `M.1.1`, `M.2` for `M.1.2`, and `M.3` for `M.2`.
Of course, I’m not proposing that the sugar for left nested tensors actually be limited to two levels. I’m just showing the representative example.
Note that this use of projections renders your implicit use of comultiplication ambiguous.
…translating to a matching representation, which would be the subject of derivations? Relatedly, are you OK with needing equations to commute nested matches?
The idea here is that projections are sugar entirely. So

`p:A * B |- (p.2,p.1) : B * A`

would get desugared to

`p:A * B |- let (p1,p2) = p in (p2,p1) : B * A`.

Consequently, there would be extra terms that don’t have sugared forms, but are equal to sugared forms by commuting matches. Like in the cyclicity proof:
eps(eta{u}.2,f(g(eta{u}.1))) = let (u1,u2) = eta{u} in eps(eta{v}.2,g(u1)); eps(u2,f(eta{v}.1)) = eps(eta{u}.2,f(eta{v}.1)); eps(eta{v}.2,g(eta{u}.1)) = let (v1,v2) = eta{v} in eps(eta{u}.2,f(v1)); eps(v2,g(eta{u}.1)) = eps(eta{v}.2,g(f(eta{v}.1)))
You know? I don’t think I like that idea anymore.
…you need to compare terms to do type checking, although there’s no type dependency on terms. This really seems like something to avoid. What if all function calls are labeled, and the arguments are only on one of the appearances?
This would keep projections primitive, but change the rules to avoid repeating argument tuples. The statement of the Frobenius axiom (with center) would be:
`x:M,y:M |- (cp{u}(x).1,cp{u}.2 * y) = (cp{v}(x * y).1,cp{v}.2) = (x * cp{w}(y).1,cp{w}.2) : M * M`

It seems relatively easy to add sugar allowing a “default” label, so that none of the labels would be needed in that example. So just `(cp(x * y).1,cp.2)`.

Alternatively, we could think of labels more like variables and say `((v=cp(x * y)).1,v.2)`.
Did you read my discussion of tensor product types with projections on p11? It looks like your first suggestion is just syntactic sugar for that. I’m not sure exactly what it buys you to write `f(M).1` and `f(M).2` instead of $f_{(1)}(M)$ and $f_{(2)}(M)$. (This has nothing to do with polycategories, as far as I can see.)
With your permission, I’ll ignore the idea that you don’t like any more either.
I have trouble reading your third idea; can you please write it as mathematics instead of code?
Oh, and yes of course I could write `N1; N2; (M1,M2)`, but I don’t want to, because part of the problem of many other syntaxes for linear logic that I’ve seen is that they “sequentialize” things that should really be regarded as simultaneous. If you’re thinking of it in terms of “unit elimination” then `N1; N2; (M1,M2)` sounds like “first eliminate N1, then N2, then do (M1,M2)”, which requires a commuting conversion to identify it with “first eliminate N2, then N1, then do (M1,M2)”, whereas simply permuting the scalar terms is a directly definable and intuitive equivalence relation.
Re #51:
Did you read… It looks like your first suggestion is just syntactic sugar for that.
Yes. They are different. Never mind polycategories for now, since everything in #50 works in multicategories.
Morphisms have a single output; you can’t literally project a function call in the proposals of #50. `f(M).1` projects from `f(M)`, which should have a type formed with $\otimes$. You might want to write it as $f(M)_{(1)}$. The point is that this convention is very similar to what you do, but doesn’t need props. In particular, it can conceivably be made compatible with polycategory-based features.
$f_{(1)}(M)$ seems very misleading for this. OTOH mathematicians are apparently quirky enough to write $\sin^2(x)$ for $(\sin(x))^2$. :)
With your permission,
Granted!
Re #52:
It doesn’t have to be a semicolon, if that seems misleading.
Are commuting conversions a big problem? I admit I don’t know much about them. Random point: I think `((N1; M1),(N2; M2))` would be allowed too, in my proposals, and that commuting conversions handle that too. It might seem like excessive notational flexibility, and maybe it is.
Re #51:
I have trouble reading your third idea; can you please write it as mathematics instead of code?
Frobenius axiom with center:
$x:M,y:M \vdash (\Delta^{u}(x)_{(1)},\Delta^{u}_{(2)} y) = (\Delta^{v}(x y)_{(1)},\Delta^{v}_{(2)}) = (x \Delta^{w}(y)_{(1)},\Delta^{w}_{(2)})\;:\;M \otimes M$
Using “default” label:
$x:M,y:M \vdash (\Delta(x)_{(1)},\Delta_{(2)} y) = (\Delta(x y)_{(1)},\Delta_{(2)}) = (x \Delta(y)_{(1)},\Delta_{(2)})\;:\;M \otimes M$
Variable-like labels:
$x:M,y:M \vdash ((u \coloneqq \Delta(x))_{(1)},u_{(2)} y) = ((v \coloneqq \Delta(x y))_{(1)},v_{(2)}) = (x (w \coloneqq \Delta(y))_{(1)},w_{(2)})\;:\;M \otimes M$
Hrm, TeX changes things, huh? The variable-like labels looked really good in fixed-width ASCII.
Of course nothing in my type theory needs props either; it could be done with symmetric monoidal categories directly, just as ordinary cartesian type theory can be (and often is) done with finite-product categories rather than cartesian multicategories. The props are just a technical convenience, a semantic notion that’s closer to the syntax and provides a bridge between it and the “real” semantics in symmetric monoidal categories. I’m not sure why you want to write $f(M)_{(1)}$ instead of $f_{(1)}(M)$ or $\pi_{(1)}(f(M))$, and I’m still failing to see how this proposal differs substantively from mine at the level of syntax.
I don’t like commuting conversions, and I think they’re somewhat icky type-theoretically too, but I’m probably not the best person to explain why. Ask another type theorist. (-:
And I have to say I find all the options in #54 to be pretty ugly and unintuitive compared to my $(x_{(1)}, x_{(2)}y) = ((x y)_{(1)},(x y)_{(2)}) = (x y_{(1)},y_{(2)})$. (And the fixed-width ASCII was even uglier.)
I was using “prop”, “multicategory”, and “polycategory” as names for the judgment structure, not the semantics. My proposals deliberately did not add any truly new semantics.
Yeah the implicit comultiplication really helps with a lot of your examples.
I guess I’m not successfully communicating my concerns. Perhaps they’re unfounded.
Your paper mentioned an $Int$ embedding of traced monoidal categories into compact closed categories. Is there an embedding of symmetric closed monoidal categories into compact closed categories? Maybe it somehow turns out your system doesn’t need extensions. Otherwise adding $\multimap$ seems like a good test.
I was using “prop”, “multicategory”, and “polycategory” as names for the judgment structure,
In other words, I have in mind a type system where either there’s only one consequent or multiple consequents are disjunctive.
OK here’s a thought:
What if $x y = y x$? Then you might expect $((x y)_{(1)},(y x)_{(2)})$ to be well-typed. But it’s not, according to your rules. So in particular, you can’t use $((x y)_{(1)},(x y)_{(2)}) = ((x y)_{(1)},(y x)_{(2)})$ because it’s nonsense. I know this is not a real problem, but it seems unintuitive.
Do you know any way of avoiding the syntactic equality constraints that you’d accept?
If what you want is to write $x:A, y:B \vdash (f(x,y)_{(1)},f(x,y)_{(2)}) : C\otimes D$ instead of $x:A, y:B \vdash (f_{(1)}(x,y),f_{(2)}(x,y)) : (C,D)$, with an $\otimes$ on the right instead of a comma and a variation in syntax, then I don’t really see the point, and it seems offhand as though it would be harder to set up the rules formally. But maybe not; if you want to give it a try I’d be happy to see what you come up with.
Any full monoidal subcategory of a compact closed category is traced, and not every closed symmetric monoidal category is traced, so not every closed symmetric monoidal category can be monoidally embedded in a compact closed one (related). I did think a bit about adding $\multimap$, but it’s tricky. If you can think of a good way to make that easier while not sacrificing the advantages of the current system, I’d love to hear it!
I wouldn’t expect $((x y)_{(1)},(y x)_{(2)})$ to be well-typed; if you duplicate $x y$ you’ll get $(x y)_{(1)}$ and $(x y)_{(2)}$, and if you then substitute along the equality $x y = y x$ you have to do it everywhere, getting $(y x)_{(1)}$ and $(y x)_{(2)}$. You just have to think of the entire list of terms as a unit.
If what you want is to write… then I don’t really see the point,
Well, the point is that I expect it to tolerate extensions better…
I did think a bit about adding $\multimap$, but it’s tricky.
…Like that one. Of course I don’t know yet. It just seems kind of like you lucked out because generators are the only non-base-case term former. With your rules, you derive multiple terms at once, in some sense. So I figure you get an all-base-cases rule, and an everything-else rule. When you add connectives, I figure the complexity will get crazy without some reorganization.
$\multimap$ seems particularly tricky, assuming the intro form is a binder, but the switch to $A \otimes B$ instead of your $(A,B)$ judgment structure is based on my tentative conclusion that the mileage you get out of props doesn’t scale.
so not every closed symmetric monoidal category can be monoidally embedded in a compact closed one
Cool. How resourceful of you to already know that! Maybe you already knew: I asked because I guess such an embedding would let you handle $\multimap$ without adding it as primitive, like you did for traces.
If you can think of a good way to make that easier while not sacrificing the advantages of the current system, I’d love to hear it!
That’s why I’m asking what you think are the important advantages. I’m still trying to find out if you’ll let go of repeated argument tuples. From a type checking perspective, those equality constraints seem really silly. I don’t think they’re a source of deep difficulty, but they shouldn’t be there at all. They make the terms larger (asymptotically, at least, even if your use cases are OK) and give the type checker a lot of extra work.
I’m probably going to think about rules, but I’d really rather think about labels than argument tuples.
if you then substitute along the equality $x y = y x$ you have to do it everywhere,
I understand. But in the cartesian case, that isn’t so. That includes set theory, so this is not in accordance with the set theoretic intuition. My thought was that the intuitive mismatch is minimized by forbidding this repetition. (So substituting everywhere means substituting in one place.)
If by “let go of repeated argument tuples” you mean moving to a syntax like those in #54, then no, I don’t want to do that. I don’t think it would be a good idea to allow writing $((x y)_{(1)},(y x)_{(2)})$, because it’d be hard to figure out (even for a human reader) which terms “match”. I guess your point is that you don’t want to have to write $x y$ twice at all, but to me it’s an important aspect of the syntax that it’s the same thing $x y$ that’s appearing (“duplicated”) in both places.
From a type checking perspective, those equality constraints seem really silly. I don’t think they’re a source of deep difficulty, but they shouldn’t be there at all. They make the terms larger (asymptotically, at least, even if your use cases are OK) and give the type checker a lot of extra work.
To me this betrays a fundamental misunderstanding. The goal of a type theory is not to make things easier for the type checker! The goal of a type theory is to make things easier for the human!
I can agree that there are probably cases where a human would appreciate having a match/let syntax as an option. In principle I wouldn’t object to using a theory with only match/let as an intermediate language and compiling the nice syntax down to it (maybe this is the idea that got discarded in #50?) — although that seems like it would only matter when implementing it in a proof assistant, since mathematically the process of compiling the nice syntax down to the intermediate language would, I expect, already be doing the same work as interpreting it directly into the desired semantics (you’d essentially be interpreting it into the syntactic monoidal category generated by MILL).
I guess one way to handle $\multimap$ would be to just apply the ordinary MILL rule for abstractions when the codomain is unary, and rely on $\otimes$ to tuple up non-unary codomains if necessary.
Re #61: Oh all right. I wasn’t proposing to support $((x y)_{(1)},(y x)_{(2)})$, by the way.
Re #62: If the terms are larger, that gives the human and the type checker extra work. But I can see that you want to keep it as an option.
(maybe this is the idea that got discarded in #50?)
The discarded idea allowed the user to use either matching or projections. The translation between them did not seem intuitive enough to expose to the user.
Re #63: Are you saying you were thinking of having $\multimap$ types with multiple codomains?
Are you saying you were thinking of having $\multimap$ types with multiple codomains?
No, that’s not what I was saying, but that’s another option one might consider.
I just mean have a rule
$\frac{\Gamma, x:A \vdash (M) : (B)}{\Gamma \vdash (\lambda x.M) : (A\multimap B)}$ that only applies when the codomain is unary.
How so?
Hmmm… I suppose you would want to do that. Well, I guess you could try to incorporate it in the generator rule somehow… but probably that rule is about as overextended as possible already, and the best thing to do would be to retreat from terms having unique derivations. Maybe it would work…
I think unique derivations could be regained with a different style of rules motivated by type checking. That’s what I’m planning. There’s a nice way to do linear type checking by passing a state that keeps track of which variables were used. I think it can be modified to handle tensor projections too.
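For concreteness, here is a toy sketch of the kind of state-passing checker I mean (my own illustration, not the rules of the paper; plain linear variables only, no tensor projections yet):

```python
# Toy linearity checker. Terms are nested tuples:
#   ("var", x), ("app", f, arg), ("pair", left, right).
# The checker threads the set of already-consumed variables through the
# term left to right; a variable consumed twice or left over is an error.

def consume(term, ctx, used):
    """Return the updated used-set, or raise on a linearity violation."""
    tag = term[0]
    if tag == "var":
        x = term[1]
        if x not in ctx:
            raise TypeError(f"unknown variable {x}")
        if x in used:
            raise TypeError(f"variable {x} used twice")
        return used | {x}
    if tag == "app":                  # generator applied to one argument
        return consume(term[2], ctx, used)
    if tag == "pair":                 # tensor intro: components share state
        return consume(term[2], ctx, consume(term[1], ctx, used))
    raise ValueError(f"bad term tag {tag}")

def is_linear(term, ctx):
    """True iff every variable of ctx is consumed exactly once."""
    try:
        return consume(term, set(ctx), set()) == set(ctx)
    except TypeError:
        return False
```

Extending this to tensor projections would mean recording, per variable, which projections have been consumed so far, instead of a single used/unused bit.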
I just read Hasegawa’s thesis, in which among other things he gives a multiple-conclusion type theory where both commas represent the same tensor product. (Actually he writes the right-hand commas as $\otimes$, but he considers them to be definitionally associative, so they act like a comma.) His setup allows contraction and weakening, but not naturally: semantically it’s like a linear/nonlinear adjunction but without the right adjoint. He also has a sort of multiple-domain multiple-codomain internal-hom too, although it’s somewhat different because it always lands in the nonlinear world.
It doesn’t have tensor projections, like your system does. The non-natural contraction seems different from implicit comultiplication because the copies aren’t distinguished. So I guess there’s not really much to learn for extending your system?
I suppose it’s a thought to have completely implicit, non-natural weakening, instead of your counit notation. Does that really work? What do you think of that style? Oh wait, that seems ambiguous with your sugar for multiple comultiplications. Or maybe not, because of the counitality and coassociativity axioms!
I got distracted from extending your system. I tried looking up a ready-made solution but didn’t find it. I think I could sit down and work out something probably right, but I haven’t. I’m not that excited about it; I can’t tell if you are.
I think that semantically, having unnatural contraction and weakening is equivalent to saying that every type is equipped with a specified cocommutative comultiplication. So there’s some overlap in the intended semantics, but my system is more flexible because it doesn’t require any or all types to have comultiplications, let alone cocommutative ones.
I can’t imagine how implicit weakening could be unnatural. How do you distinguish implicitly between weakening a variable $x$ and weakening the value $f(x)$?
I’d be rather interested if you can think of a good way to add internal-homs to my system, but I’m not planning to spend any time on such an extension myself. (I am, however, planning to run a group at the ACT 2020 summer school aiming to extend this system to monoidal bicategories.)
How do you distinguish implicitly between weakening a variable $x$ and weakening the value $f(x)$?
Good point. I guess by not implicitly doing anything else, like call $f$. So you’d still need explicit counit.
In other words $x:C \vdash () = (| \epsilon(x)) \neq (| \epsilon(f(x)))\;:\;()$.
Then that’s not stable under substitution.
To handle lollipop (internal-hom), do you want unidirectional inference with annotations on lambdas, or bidirectional typing with an optional annotation/cut rule? Do you want to require eta-long terms?
Well, in principle both would be nice to have – one for simplicity of exposition, the other for convenience of practical use. I would say start with whichever is easier for you.
For your entertainment, I’ll tell you about a tricky situation I ran into involving your tensor projections and generators, combined with $\&$-intro.
For both examples,
$x:A \qquad f:A \to (B_1,B_2)$
Example 1:
$g_1:B_1 \to C \qquad g_2:B_1 \to D \qquad h:B_2 \to E$
$([g_1(f_{(1)}(x)),g_2(f_{(1)}(x))],h(f_{(2)}(x)))\;:\;(C \& D) \otimes E$
Example 2:
$g:(B_1,B_2) \to C \qquad h:A \to D$
$[g(f_{(1)}(x),f_{(2)}(x)),h(x)]\;:\;C \& D$
The problem is whether you use the generator rule on $f$ inside or outside the $\&$-intro. In example 1, you have to use it outside; in example 2, you have to use it inside the left component. But processing both examples left to right, the $f$ is first encountered in the left component, because the $\otimes$-elim is implicit.
I’m not sure if I’m going to handle this.
I guess there’s basically the same problem with $\oplus$-elim.
Actually, I’m not sure about the semantics of a similar example, where the generator can be inside or outside the pair:
$g_1:(B_1,B_2) \to C \qquad g_2:(B_1,B_2) \to D$
$[g_1(f_{(1)}(x),f_{(2)}(x)),g_2(f_{(1)}(x),f_{(2)}(x))]\;:\;C \& D$
Dispensing with tensor projections, this could be written in two ways:
$let\;(b_1,b_2) = f(x)\;in\;[g_1(b_1,b_2),g_2(b_1,b_2)]$
$[let\;(b_1,b_2) = f(x)\;in\;g_1(b_1,b_2),\;let\;(b_1,b_2) = f(x)\;in\;g_2(b_1,b_2)]$
Are these actually equal? Maybe sometimes implicit $\otimes$-elim is just plain wrong, not tricky.
Maybe I’m misunderstanding your notation, but doesn’t that equality simply follow from the naturality of the universal property of the cartesian product?
Probably? If I knew I wouldn’t have to ask. This is $[g_1,g_2] \circ f$ versus $[g_1 \circ f,g_2 \circ f]$. Where brackets are pairing for $\&$, $f:A \to B_1 \otimes B_2$, yada yada. I know they’re the same in a cartesian category where $\otimes = \& = \times$, but I don’t know why, in categorical terms.
So you’re saying they should be equal. OK.
By definition, $[g_1\circ f, g_2\circ f]$ is the unique morphism that produces $g_1\circ f$ and $g_2\circ f$ when composed with the two projections $\pi_1$ and $\pi_2$ respectively. But $[g_1,g_2]\circ f$ also has this property, since $[g_1,g_2]$ produces (by definition) $g_1$ and $g_2$ when composed with $\pi_1$ and $\pi_2$ respectively, and composition is associative. Therefore, they are equal.
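For the record, the same argument as a displayed computation (just a restatement of the proof above, with $\pi_1, \pi_2$ the projections of $\&$):

```latex
\pi_i \circ ([g_1, g_2] \circ f)
  = (\pi_i \circ [g_1, g_2]) \circ f  % associativity of composition
  = g_i \circ f                        % defining property of [g_1, g_2]
\qquad (i = 1, 2)
```

so $[g_1,g_2]\circ f$ satisfies the defining property of $[g_1\circ f,\,g_2\circ f]$, and uniqueness gives the equality.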
I see it clearly now. I was guessing they were equal, I just wasn’t sure. You see? This is why I don’t like category theory much. I’m bad at it. :)
I’m going to put off the additive connectives at first. I have a plan, but the bookkeeping is more complicated, so I should work out the details of MILL first.
As I always say to my calculus students, the more you practice it, the better you’ll get at it, and the more you’ll like it.
Odd: if $C$ is a comonoid, in the context $x:C$, you’d allow $(x_{(1)},x_{(2)})$ and $(x_{(1)},x_{(2)},x_{(3)})$ but not $(x_{(1)},x_{(3)})$, right? Makes type checking yet more complicated. I’m not going to handle that.
Yes, but the latter is just sugar for $(x_{(1)(1)},x_{(1)(2)},x_{(2)})$ or $(x_{(1)},x_{(2)(1)},x_{(2)(2)})$, and I also didn’t try to make that desugaring precise.
Right. I was thinking about whether it’d be convenient to make it not sugar. Not convenient enough for me.
The $(x_{(1)},x_{(3)})$ could be sugar for (the same thing as) $(x_{(1)},x_{(3)} | \epsilon(x_{(2)}))$. Sugar doesn’t need to have good metatheoretical properties.
Good news: I finished a drafty set of rules for $\otimes$, $1$, and $\multimap$ with tensor projections. Don’t set your hopes too high; it’s kind of a mess. Now I just have to figure out how to typeset this on a new page…
OK, I put it up at tensor projection. Please take a look and tell me what you think.