
Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeOct 28th 2009

    I incorporated some of my spiel from the blog into the page type theory.

    • CommentRowNumber2.
    • CommentAuthorTobyBartels
    • CommentTimeOct 28th 2009

    I don't like JA's propositions as types page either, but I don't think that we should let it destroy the links in the Lab. He's already written that he expects it to be replaced, so it would be nice if wiki magic makes links automatically work when we do that.

    • CommentRowNumber3.
    • CommentAuthorJonAwbrey
    • CommentTimeOct 28th 2009
    Sorry, did I miss some discussion somewhere?
    • CommentRowNumber4.
    • CommentAuthorTobyBartels
    • CommentTimeOct 28th 2009

    You probably just missed Mike's removing the link to propositions as types from type theory.

    And I shouldn't really say that I ‘don't like’ your page, only that I don't like having it as the page on propositions as types —which I think you anticipated when you wrote ‘if it’s necessary to adjust the title after a while, that’s okay, but this name will do for the time being’ there. So someday I'll at least add some introductory material explaining what I think the propositions as types paradigm is about (or maybe Mike will), and then maybe we'll agree to move your stuff to a subsidiary page. But I'm not trying to start a discussion about that early; I'm just trying to save the link.

    • CommentRowNumber5.
    • CommentAuthorJonAwbrey
    • CommentTimeOct 28th 2009

    I'm thinking that propositions as types done right might be just the thing.

    I started doing this ambidextrous thing back in the days when I had group theory profs with opposing prefs; later on I was encouraged to see that a small but valiant band of categorevores like Ernie Manes made a point of applying operators on the right.

    So now you know who to blame …

    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeOct 28th 2009

    I don't want type theory to link to "propositions as types" for as long as doing so will take people to Jon's page, or, to be honest, any page with Jon's stuff on it. Especially since type theory has been linked from the ncafe and there has recently been discussion about "propositions as types," and I think Jon's page will do nothing more than confuse people and turn them off to the nlab. The idea of propositions as types is confusing enough already. But I didn't feel competent to guess a better name for Jon's page myself, since it doesn't make any sense to me, so instead I removed the link. Would it have been better to rename that page to, say, "Jon Awbrey's propositions as types" and let him rename it again if he has a better name in mind?

    • CommentRowNumber7.
    • CommentAuthorDavid_Corfield
    • CommentTimeOct 28th 2009
    It never ceases to amaze me how nice you guys are. Where many very able thinkers are placing great ideas in their personal webs as they fear they are yet too tentative for nLab, Jon is happy to fill endless nLab pages with his theories that nobody else understands, and which have never been shown to have anything to do with n-categories. It's just squatting.

    No doubt I will be accused of wanting to create a self-congratulatory country club.
    • CommentRowNumber8.
    • CommentAuthorTobyBartels
    • CommentTimeOct 28th 2009

    Well, Jon just moved his page, so the danger is passed.

    • CommentRowNumber9.
    • CommentAuthorUrs
    • CommentTimeOct 28th 2009
    • (edited Oct 28th 2009)

    I said this before: if a page makes no sense to somebody who feels he should be in a position to follow it, he should at least leave a brief query box indicating this.

    This is in the interest of all parties involved:

    • the author of a page wants to be alerted if people who should be able to follow the page, don't. So that it can be improved on if possible.

    • the non-expert reader wants to be alerted to material that is regarded as controversial even among the experts. That's only fair.

    I mean, that's the point of us putting material here instead of on our private hard-disk. To get the feedback. And critical feedback.

    • CommentRowNumber10.
    • CommentAuthorTodd_Trimble
    • CommentTimeOct 28th 2009

    Urs, it's increasingly difficult to believe that Jon Awbrey as author of a page wants to be alerted if people who should be able to follow his page, don't. Regardless, I agree query boxes should be applied where skepticism is warranted. Applied perhaps liberally.

    Having Jon Awbrey retitle the page in this way is completely inappropriate and antisocial and against the collaborative spirit of this enterprise, and my patience with him is now right at the breaking point.

    • CommentRowNumber11.
    • CommentAuthorTobyBartels
    • CommentTimeOct 28th 2009
    • (edited Oct 28th 2009)

    Regarding Urs's two listed reasons above:

    • I ignored the first, since I knew that Jon knew that already.
    • I should have done it for the second.

    Todd wrote:

    Having Jon Awbrey retitle the page in this way is completely inappropriate and antisocial

    I completely disagree. He wrote all of that, none of it is what Mike or I would want to see on that page, and he even wrote in his first draft that he expected that the name would need to be changed. He changed it now to please us! He was trying to be nice.

    Although, it would probably be better if he had put all of this stuff on a personal web in the first place, since it is idiosyncratic and doesn't particularly mesh with the idiosyncrasies of the rest of us.

    • CommentRowNumber12.
    • CommentAuthorTodd_Trimble
    • CommentTimeOct 28th 2009

    Look at the title. It's Jon thumbing his nose at his critics. I find it puerile and antisocial. It just doesn't belong on the nLab IMO.

    I agree with you strongly that it would definitely be better relegated to his personal web.

    • CommentRowNumber13.
    • CommentAuthorTobyBartels
    • CommentTimeOct 28th 2009
    • (edited Oct 29th 2009)

    I think that the title is another of his silly puns, referring to his usage of reverse Polish notation; see the end of the middle line of his comment #5 here.

    • CommentRowNumber14.
    • CommentAuthorTodd_Trimble
    • CommentTimeOct 29th 2009
    • (edited Oct 29th 2009)

    Maybe. That's a generous interpretation. If that is really the explanation, then at a minimum the explanation belongs right at the top of the page. But at this point, I am in no mood to play games.

    • CommentRowNumber15.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2009

    It never ceases to amaze me how nice you guys are.

    For me it is more a combination of laziness and cowardice, a desire to avoid unnecessary confrontation. However, I'm getting more fed up and I think I'm now with David and Todd. Whether or not the title is a pun, it is inappropriate. Jon has provided no evidence that his theories are related to the collaborative enterprise the rest of us are engaged in, and so they don't belong on the main nlab web.

    Jon, if you were given a private web, would you accept to have all of your pages moved there?

    • CommentRowNumber16.
    • CommentAuthorDavid_Corfield
    • CommentTimeOct 30th 2009
    What the hell is going on here?! Why does the 'propositions as types' link from type theory point to Jon's page? I thought this had been changed. Has the change been reversed?
    • CommentRowNumber17.
    • CommentAuthorTobyBartels
    • CommentTimeOct 30th 2009

    It looks like Jon moved it back. I put some stuff up top to orient the reader, and I think that it would be perfectly reasonable for anybody to move Jon's stuff to another page as I suggested there.

    • CommentRowNumber18.
    • CommentAuthorDavid_Corfield
    • CommentTimeOct 30th 2009
    'Move'? You mean 'delete', surely!
    • CommentRowNumber19.
    • CommentAuthorTobyBartels
    • CommentTimeOct 30th 2009

    Some other movements of these pages were obscured for a while by the cache bug; all should be clear now.

    • CommentRowNumber20.
    • CommentAuthorUrs
    • CommentTimeSep 17th 2013

    Maybe somebody can help me with sorting out reference data concerning the following:

    I know of this citation

    • Per Martin-Löf, An intuitionistic theory of types: predicative part, In Logic Colloquium (1973), ed. H. E. Rose and J. C. Shepherdson (North-Holland, 1974), 73-118.

    and I know of this pdf by Martin-Löf, available online, of essentially, but not quite, the same title.

    I have trouble seeing if the reference data corresponds to that pdf. Can anyone help?

    • CommentRowNumber21.
    • CommentAuthorUlrik
    • CommentTimeSep 17th 2013

    The pdf you found looks like an early version, compared to the one available at the doi 10.1016/S0049-237X(08)71945-1 – which at least has a longer introduction (there may be other changes as well). Many passages are word-for-word the same, though.

    • CommentRowNumber22.
    • CommentAuthorUrs
    • CommentTimeSep 17th 2013

    Okay, thanks for checking!!

    • CommentRowNumber23.
    • CommentAuthorUrs
    • CommentTimeJan 9th 2015
    • (edited Jan 9th 2015)

    Some trivia:

    Added at type theory a pointer to data type. There I added a pointer to Sale 77, “Primitive Data Types” (and nothing else yet).

    That article shows up when one searches for the phrase “A type is a concept”. I happened to wonder which other sources might explicitly introduce types under the name “concept” or “notion” (maybe prefixed with “mathematical”).

    • CommentRowNumber24.
    • CommentAuthorUrs
    • CommentTimeFeb 27th 2015

    Thomas Holder kindly points out further relevant articles on the pre-history of type theory; I have added them to the References there (right at the beginning of the list).

    • CommentRowNumber25.
    • CommentAuthorDavidRoberts
    • CommentTimeFeb 28th 2015

    I added a link to an online version of Appendix B, ’The doctrine of types’, by Russell.

    • CommentRowNumber26.
    • CommentAuthorMike Shulman
    • CommentTimeMar 17th 2016

    Responding to a question by email, I added to type theory a remark that when we construct a “term model” by arbitrarily choosing truth values for the undecided statements, we no longer get a model that proves the completeness theorem.

  1. fixed a tiny typo: represeting -> representing

    anqurvanillapy

    diff, v118, current

  2. fixed a tiny typo: dangling right parenthesis after f(x) in B(x)

    anqurvanillapy

    diff, v119, current

  3. Thanks for these fixes!

    • CommentRowNumber30.
    • CommentAuthorDavid_Corfield
    • CommentTimeNov 10th 2019

    Added Mike’s recent paper to the references

    diff, v120, current

    • CommentRowNumber31.
    • CommentAuthorDavid_Corfield
    • CommentTimeNov 10th 2019

    Mike has chosen to allow composition along multiple types at once rather than only along one type at a time, corresponding to a prop rather than a polycategory. To the extent that this type theory is aimed at noncartesian monoidal categories and so perhaps of interest in quantum physics, what difference does the prop/polycategory choice represent there?

    • CommentRowNumber32.
    • CommentAuthorDavid_Corfield
    • CommentTimeNov 10th 2019

    We should add it to the table at relation between type theory and category theory, but what to call it?

    • CommentRowNumber33.
    • CommentAuthorMike Shulman
    • CommentTimeNov 11th 2019

    Despite the temptation to speculate that the funny disjunction $\parr$ has something to do with entanglement, as far as I know there is no known application of polycategories to quantum physics. So this new type theory seems more likely to be applicable therein than standard classical linear logic. But I’d be ecstatic to be proven wrong…

    • CommentRowNumber34.
    • CommentAuthoratmacen
    • CommentTimeNov 12th 2019

    I’m reading Mike’s paper and having some fun with the examples, but I don’t actually understand what’s so bad about using MILL for symmetric monoidal categories.

    • CommentRowNumber35.
    • CommentAuthorMike Shulman
    • CommentTimeNov 12th 2019

    Re QM and CLL, a bit of googling turns up this and this.

    Matt, try writing the examples in MILL and compare them.

    • CommentRowNumber36.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 12th 2019

    I thought MILL stood for a fragment of linear logic involving both $\otimes$ and $\multimap$, or at least I feel certain I’ve seen the phrase/acronym used this way in the past.

    • CommentRowNumber37.
    • CommentAuthorMike Shulman
    • CommentTimeNov 12th 2019

    @Todd: yes, that’s what the page linear logic says too. I didn’t realize multiplicative conjunction disagreed; I agree MILL should be $\otimes,\mathbf{1},\multimap$ since those are all the multiplicative and intuitionistic operations.

    • CommentRowNumber38.
    • CommentAuthoratmacen
    • CommentTimeNov 12th 2019

    My bad, regarding MILL. I just forgot about $\multimap$.

    Some typos:

    In the step of tr(fg) = tr(gf) that has two “redexes”, it looks like g is applied to the wrong label.

    In self-duality of a Frobenius monoid, I think you swapped the components of the definition of eta, based on how you unfold it later.

    • CommentRowNumber39.
    • CommentAuthoratmacen
    • CommentTimeNov 13th 2019
    • (edited Nov 13th 2019)

    Re #35:

    So far I’ve redone the Frobenius monoids section with unit and tensor types. I also did a version of the “centered” Frobenius axiom proof using unit (of the monoid) rather than counit. I see that the destructuring let bindings are verbose. Is there any other problem I’m supposed to be seeing?

    • CommentRowNumber40.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2019

    Thanks for the typos, I’m impressed you read it carefully enough to see that! Yes, the verbosity of matching is what this syntax avoids, staying closer to the intuition of types as having “elements” and the elements of a (tensor) product being tuples with “components”.

    • CommentRowNumber41.
    • CommentAuthoratmacen
    • CommentTimeNov 13th 2019

    Is the matching syntax not also based on the intuition of elements? It just gives temporary names to the components.

    I don’t have a problem with having projections; I have two minor problems with this type system that I don’t have with linear logic. One is that, without let binding, terms tend to grow exponentially. Maybe you already know about this sort of thing. For example, to express this term in your system, given generators f : (X,X) -> X and g : X -> (X,X):

    x:X
    |-
    let (x1,x2) = g(x) in
    let (x3,x4) = g(f(x1,x2)) in
    let (x5,x6) = g(f(x3,x4)) in
    f(x5,x6) : X

    This is, in some sense, not serious, since it’s straightforward to fix with a simple metalanguage. But then, for performance reasons, you really want to typecheck the metalanguage, so you don’t need to do the exponential expansion. It seems pretty typical to leave this sort of performance issue aside in papers, though. So my problem is just that punting performance issues to a metalanguage is unlike programming language design.
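To make the blowup concrete, here is a toy sketch (the AST encoding and size count are my own hypothetical illustration, not anything from Mike’s paper) that fully inlines the chain above and measures how fast the terms grow:

```python
# Generators: f : (X,X) -> X and g : X -> (X,X), as in the example above.
# Without let-bindings, each stage must repeat the whole preceding term
# inside both projections of g's output, so the inlined term doubles.

def inlined_term(n):
    """Fully inlined result term (a nested tuple AST) after n extra
    rounds of the chain, starting from the variable x.  g's two outputs
    are modelled as projections ('p1', t) and ('p2', t)."""
    t1, t2 = ('p1', ('g', 'x')), ('p2', ('g', 'x'))
    for _ in range(n):
        inner = ('g', ('f', t1, t2))   # g(f(t1, t2)), duplicated below
        t1, t2 = ('p1', inner), ('p2', inner)
    return ('f', t1, t2)

def size(term):
    """Number of symbols in a term."""
    if isinstance(term, str):
        return 1
    return 1 + sum(size(s) for s in term[1:])

sizes = [size(inlined_term(n)) for n in range(5)]
```

Here `sizes` satisfies the recurrence $s_{n+1} = 2 s_n + 5$, so the inlined term is exponential in the number of stages, while the let-bound program above grows by only one line per stage.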

    The other problem is just that I personally find the typing rules very unfamiliar, and I don’t know how to extend the system with type constructors. (Without totally changing at least the derivations.) (Actually, I bet you broke the world record for most complicated individual typing rule!)

    • CommentRowNumber42.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2019

    It sounds like my goals are different from yours. I’m not designing a programming language; I’m designing a syntax for reasoning about monoidal categories. If desired one could certainly add a let-binding syntax, but in all the actual examples I’ve looked at so far it is unnecessary. The fact is that, in my experience, MILL isn’t used much in practice by people actually working with monoidal categories, as compared to say string diagrams, and I think it’s at least worth exploring whether a different kind of type theory would be more useful. If we never try things that seem unfamiliar at first, we’ll stay stuck in a rut forever.

    • CommentRowNumber43.
    • CommentAuthoratmacen
    • CommentTimeNov 14th 2019
    I don’t currently have goals involving linear logic. My problems were about implementing (for a proof assistant) and extending your system, respectively. I guess I don’t understand your goals.
    • CommentRowNumber44.
    • CommentAuthorMike Shulman
    • CommentTimeNov 14th 2019

    If you’re interested in implementing it, adding let-bindings, or adding more type constructors, I think that would be awesome! I can certainly see that the distinctive features of this system make all of those things more difficult, or at least more interesting, than in many existing type theories. But those distinctive features are the whole point, so if you get rid of them then you wouldn’t be implementing or extending this system.

    • CommentRowNumber45.
    • CommentAuthoratmacen
    • CommentTimeNov 15th 2019

    I can certainly see that the distinctive features of this system make all of those things more difficult… But those distinctive features are the whole point,

    What exactly is the point?

    As long as you get projections, do you actually care if it’s based on props or polycategories? Like, what if the projections worked on elements of tensors instead?

    How attached are you to unique derivations for the projections, rather than translating to a matching representation, which would be the subject of derivations? Relatedly, are you OK with needing equations to commute nested matches?

    Probably the weirdest thing about your system is that you need to compare terms to do type checking, although there’s no type dependency on terms. This really seems like something to avoid. What if all function calls were labeled, and the arguments appeared on only one of the occurrences? This would also avoid the exponential blowup.

    Would fixing anything I don’t like defeat the point for you? This sounds like complaining but it’s not. Serious questions here.

    • CommentRowNumber46.
    • CommentAuthoratmacen
    • CommentTimeNov 15th 2019

    Also, in the introduction, you have a problem with “covariables”. I haven’t read any of those papers. How similar are C#’s out parameters to this idea of covariable? (And what if out parameters were single-assignment?)

    So you say the problem with covariables is essentially that set theory doesn’t have them. But programming languages do (maybe/potentially). Type theory is not all a set theory imitation club, ya know.

    • CommentRowNumber47.
    • CommentAuthorMike Shulman
    • CommentTimeNov 15th 2019

    If you’d like to write up a specific alternative theory, then we could write out some examples in it and compare. It’s hard for me to tell what the theories you’re envisioning would look like.

    • CommentRowNumber48.
    • CommentAuthorMike Shulman
    • CommentTimeNov 15th 2019

    Also, can’t type theory be a big tent? If you want to use a type theory with covariables and matches, but I don’t, can’t we both respect each other’s preferences, and even see that each choice has its own objective advantages and disadvantages? I tried to make that point relating to string diagrams in the paper, but maybe I should explicitly say the same thing about other type theories too.

    • CommentRowNumber49.
    • CommentAuthoratmacen
    • CommentTimeNov 16th 2019

    Re #47: I would rather write down examples first, if they aren’t clear, before I bother to work out rules. #45 was actually multiple different proposals, and I’m asking which you like. I guess I should expand on #45.

    Re #48: I’m trying to find out if the advantage(s) you see with your new system can be retained while removing some of the aspects I see as disadvantages. Of course the tent gets bigger whenever people can’t agree. But then newcomers face a tougher decision.

    (For reference, to me, any disagreement about how to use type theory for monoidal categories is much less concerning than disagreement about intensional vs extensional dependent type theory. (That is, whether the judgmental equality is extensional.))

    Anyway, what about out parameters?

    • CommentRowNumber50.
    • CommentAuthoratmacen
    • CommentTimeNov 16th 2019

    First of all, notice that instead of $(M1,M2\,|\,N1,N2)$, you could write N1; N2; (M1,M2). The semicolon forms could be considered alternative formatting for your result tuple. But it can also be considered the unit elimination form.

    Re #45:

    As long as you get projections, do you actually care if it’s based on props or polycategories? Like, what if the projections worked on elements of tensors instead?

    This is about whether to consider the (M1,M2) returned as a special result tuple, or the introduction form of tensor. I propose allowing (M1,M2,M3) as sugar for ((M1,M2),M3), because we will consistently treat tensor as left associative. (Or, of course, we could consistently treat it as right associative.)

    Instead of $M_{(1)}$ and $M_{(2)}$, I will write the more keyboard-friendly M.1 and M.2. These are a permitted form (possibly in addition to matching) of elimination for tensor. Of course, if you use M.1, you have to use M.2, and you cannot otherwise use M.

    For conceptual symmetry with the sugar (M1,M2,M3), if M has type $A1 \otimes A2 \otimes A3$ ($\equiv (A1 \otimes A2) \otimes A3$), then M.1 is sugar for M.1.1, M.2 for M.1.2, and M.3 for M.2.

    Of course, I’m not proposing that the sugar for left nested tensors actually be limited to two levels. I’m just showing the representative example.
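    For what it’s worth, the left-associative convention can be stated mechanically. Here is a small hypothetical helper (my own encoding, nothing official) computing which chain of primitive projections M.i abbreviates on a left-nested n-fold tensor:

    ```python
    def proj_path(i, n):
        """Path of primitive projections (1 = left, 2 = right) that M.i
        desugars to, when M has the left-nested n-fold tensor type
        (...((A1 (x) A2) (x) A3) ... (x) An).  i is the 1-based component."""
        assert 1 <= i <= n
        if n == 1:
            return []          # a 1-fold "tensor" is just M itself
        if i == n:
            return [2]         # last component: one right projection
        return [1] + proj_path(i, n - 1)  # otherwise recurse into the left factor

    # The three-fold case from above: M.1 = M.1.1, M.2 = M.1.2, M.3 = M.2
    paths = [proj_path(i, 3) for i in (1, 2, 3)]
    ```

    For n = 3 this reproduces the desugaring above, and e.g. for n = 4 the second component becomes M.1.1.2.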

    Note that this use of projections renders your implicit use of comultiplication ambiguous.

    …translating to a matching representation, which would be the subject of derivations? Relatedly, are you OK with needing equations to commute nested matches?

    The idea here is that projections are entirely sugar. So

    p:A * B |- (p.2,p.1) : B * A

    would get desugared to

    p:A * B |- let (p1,p2) = p in (p2,p1) : B * A

    Consequently, there would be extra terms that don’t have sugared forms, but are equal to sugared forms by commuting matches. Like in the cyclicity proof:

    eps(eta{u}.2,f(g(eta{u}.1)))
    =
    let (u1,u2) = eta{u} in
    eps(eta{v}.2,g(u1));
    eps(u2,f(eta{v}.1))
    =
    eps(eta{u}.2,f(eta{v}.1));
    eps(eta{v}.2,g(eta{u}.1))
    =
    let (v1,v2) = eta{v} in
    eps(eta{u}.2,f(v1));
    eps(v2,g(eta{u}.1))
    =
    eps(eta{v}.2,g(f(eta{v}.1)))

    You know? I don’t think I like that idea anymore.

    …you need to compare terms to do type checking, although there’s no type dependency on terms. This really seems like something to avoid. What if all function calls are labeled, and the arguments are only on one of the appearances?

    This would keep projections primitive, but change the rules to avoid repeating argument tuples. The statement of the Frobenius axiom (with center) would be:

    x:M,y:M |-
    (cp{u}(x).1,cp{u}.2 * y) =
    (cp{v}(x * y).1,cp{v}.2) =
    (x * cp{w}(y).1,cp{w}.2) : M * M

    It seems relatively easy to add sugar allowing a “default” label, so that none of the labels would be needed in that example. So just (cp(x * y).1,cp.2).

    Alternatively, we could think of labels more like variables and say ((v=cp(x * y)).1,v.2).

    • CommentRowNumber51.
    • CommentAuthorMike Shulman
    • CommentTimeNov 17th 2019

    Did you read my discussion of tensor product types with projections on p. 11? It looks like your first suggestion is just syntactic sugar for that. I’m not sure exactly what it buys you to write f(M).1 and f(M).2 instead of $f_{(1)}(M)$ and $f_{(2)}(M)$. (This has nothing to do with polycategories, as far as I can see.)

    With your permission, I’ll ignore the idea that you don’t like any more either.

    I have trouble reading your third idea; can you please write it as mathematics instead of code?

    • CommentRowNumber52.
    • CommentAuthorMike Shulman
    • CommentTimeNov 17th 2019

    Oh, and yes of course I could write N1; N2; (M1,M2), but I don’t want to, because part of the problem with many other syntaxes for linear logic that I’ve seen is that they “sequentialize” things that should really be regarded as simultaneous. If you’re thinking of it in terms of “unit elimination” then N1; N2; (M1,M2) sounds like “first eliminate N1, then N2, then do (M1,M2)”, which requires a commuting conversion to identify with “first eliminate N2, then N1, then do (M1,M2)”, whereas simply permuting the scalar terms is a directly definable and intuitive equivalence relation.
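    For concreteness (in a hypothetical let-style unit-elimination syntax, not the paper’s), the commuting conversion being avoided is the identification

    ```latex
    \mathsf{let}\;() = N_1\;\mathsf{in}\;\mathsf{let}\;() = N_2\;\mathsf{in}\;(M_1,M_2)
    \;\equiv\;
    \mathsf{let}\;() = N_2\;\mathsf{in}\;\mathsf{let}\;() = N_1\;\mathsf{in}\;(M_1,M_2)
    ```

    whereas in the tuple syntax both sides are literally the same judgment up to permuting the scalar entries of $(M_1,M_2\,|\,N_1,N_2)$.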

    • CommentRowNumber53.
    • CommentAuthoratmacen
    • CommentTimeNov 17th 2019

    Re #51:

    Did you read… It looks like your first suggestion is just syntactic sugar for that.

    Yes. They are different. Never mind polycategories for now, since everything in #50 works in multicategories.

    Morphisms have a single output; you can’t literally project a function call in the proposals of #50. f(M).1 projects from f(M), which should have a type formed with $\otimes$. You might want to write it as $f(M)_{(1)}$. The point is that this convention is very similar to what you do, but doesn’t need props. In particular, it can conceivably be made compatible with polycategory-based features.

    $f_{(1)}(M)$ seems very misleading for this. OTOH mathematicians are apparently quirky enough to write $\sin^2(x)$ for $(\sin(x))^2$. :)

    With your permission,

    Granted!

    Re #52:

    It doesn’t have to be a semicolon, if that seems misleading.

    Are commuting conversions a big problem? I admit I don’t know much about them. Random point: I think ((N1; M1),(N2; M2)) would be allowed too, in my proposals, and that commuting conversions handle that too. It might seem like excessive notational flexibility, and maybe it is.

    • CommentRowNumber54.
    • CommentAuthoratmacen
    • CommentTimeNov 17th 2019
    • (edited Nov 17th 2019)

    Re #51:

    I have trouble reading your third idea; can you please write it as mathematics instead of code?

    Frobenius axiom with center:

    $$x:M,y:M \vdash (\Delta^{u}(x)_{(1)},\Delta^{u}_{(2)} y) = (\Delta^{v}(x y)_{(1)},\Delta^{v}_{(2)}) = (x \Delta^{w}(y)_{(1)},\Delta^{w}_{(2)})\;:\;M \otimes M$$

    Using “default” label:

    $$x:M,y:M \vdash (\Delta(x)_{(1)},\Delta_{(2)} y) = (\Delta(x y)_{(1)},\Delta_{(2)}) = (x \Delta(y)_{(1)},\Delta_{(2)})\;:\;M \otimes M$$

    Variable-like labels:

    $$x:M,y:M \vdash ((u \coloneqq \Delta(x))_{(1)},u_{(2)} y) = ((v \coloneqq \Delta(x y))_{(1)},v_{(2)}) = (x (w \coloneqq \Delta(y))_{(1)},w_{(2)})\;:\;M \otimes M$$

    Hrm, TeX changes things, huh? The variable-like labels looked really good in fixed-width ASCII.

    • CommentRowNumber55.
    • CommentAuthorMike Shulman
    • CommentTimeNov 18th 2019

    Of course nothing in my type theory needs props either; it could be done with symmetric monoidal categories directly, just as ordinary cartesian type theory can be (and often is) done with finite-product categories rather than cartesian multicategories. The props are just a technical convenience, a semantic notion that’s closer to the syntax and provides a bridge between it and the “real” semantics in symmetric monoidal categories. I’m not sure why you want to write $f(M)_{(1)}$ instead of $f_{(1)}(M)$ or $\pi_{(1)}(f(M))$, and I’m still failing to see how this proposal differs substantively from mine at the level of syntax.

    I don’t like commuting conversions, and I think they’re somewhat icky type-theoretically too, but I’m probably not the best person to explain why. Ask another type theorist. (-:

    And I have to say I find all the options in #54 to be pretty ugly and unintuitive compared to my $(x_{(1)}, x_{(2)}y) = ((x y)_{(1)},(x y)_{(2)}) = (x y_{(1)},y_{(2)})$. (And the fixed-width ASCII was even uglier.)

    • CommentRowNumber56.
    • CommentAuthoratmacen
    • CommentTimeNov 18th 2019

    I was using “prop”, “multicategory”, and “polycategory” as names for the judgment structure, not the semantics. My proposals deliberately did not add any truly new semantics.

    Yeah the implicit comultiplication really helps with a lot of your examples.

    I guess I’m not successfully communicating my concerns. Perhaps they’re unfounded.

    Your paper mentioned an $Int$ embedding of traced monoidal categories into compact closed categories. Is there an embedding of symmetric closed monoidal categories into compact closed categories? Maybe it somehow turns out your system doesn’t need extensions. Otherwise adding $\multimap$ seems like a good test.

    • CommentRowNumber57.
    • CommentAuthoratmacen
    • CommentTimeNov 18th 2019

    I was using “prop”, “multicategory”, and “polycategory” as names for the judgment structure,

    In other words, I have in mind a type system where either there’s only one consequent or multiple consequents are disjunctive.

    • CommentRowNumber58.
    • CommentAuthoratmacen
    • CommentTimeNov 18th 2019

    OK here’s a thought:

    What if $x y = y x$? Then you might expect $((x y)_{(1)},(y x)_{(2)})$ to be well-typed. But it’s not, according to your rules. So in particular, you can’t use $((x y)_{(1)},(x y)_{(2)}) = ((x y)_{(1)},(y x)_{(2)})$ because it’s nonsense. I know this is not a real problem, but it seems unintuitive.

    Do you know any way of avoiding the syntactic equality constraints that you’d accept?

    • CommentRowNumber59.
    • CommentAuthorMike Shulman
    • CommentTimeNov 18th 2019

    If what you want is to write x:A, y:B \vdash (f(x,y)_{(1)}, f(x,y)_{(2)}) : C \otimes D instead of x:A, y:B \vdash (f_{(1)}(x,y), f_{(2)}(x,y)) : (C,D), with an \otimes on the right instead of a comma and a variation in syntax, then I don’t really see the point, and it seems offhand as though it would be harder to set up the rules formally. But maybe not; if you want to give it a try I’d be happy to see what you come up with.

    Any full monoidal subcategory of a compact closed category is traced, and not every closed symmetric monoidal category is traced, so not every closed symmetric monoidal category can be monoidally embedded in a compact closed one (related). I did think a bit about adding \multimap, but it’s tricky. If you can think of a good way to make that easier while not sacrificing the advantages of the current system, I’d love to hear it!

    I wouldn’t expect ((x y)_{(1)}, (y x)_{(2)}) to be well-typed; if you duplicate x y you’ll get (x y)_{(1)} and (x y)_{(2)}, and if you then substitute along the equality x y = y x you have to do it everywhere, getting (y x)_{(1)} and (y x)_{(2)}. You just have to think of the entire list of terms as a unit.

    • CommentRowNumber60.
    • CommentAuthoratmacen
    • CommentTimeNov 18th 2019

    If what you want is to write… then I don’t really see the point,

    Well, the point is that I expect it to tolerate extensions better…

    I did think a bit about adding \multimap, but it’s tricky.

    …Like that one. Of course I don’t know yet. It just seems kind of like you lucked out because generators are the only non-base-case term former. With your rules, you derive multiple terms at once, in some sense. So I figure you get an all-base-cases rule and an everything-else rule. When you add connectives, I figure the complexity will get crazy without some reorganization.

    \multimap seems particularly tricky, assuming the intro form is a binder, but the switch to A \otimes B instead of your (A,B) judgment structure is based on my tentative conclusion that the mileage you get out of props doesn’t scale.

    so not every closed symmetric monoidal category can be monoidally embedded in a compact closed one

    Cool. How resourceful of you to already know that! Maybe you already knew: I asked because I guess such an embedding would let you handle \multimap without adding it as primitive, like you did for traces.

    If you can think of a good way to make that easier while not sacrificing the advantages of the current system, I’d love to hear it!

    That’s why I’m asking what you think are the important advantages. I’m still trying to find out if you’ll let go of repeated argument tuples. From a type checking perspective, those equality constraints seem really silly. I don’t think they’re a source of deep difficulty, but they shouldn’t be there at all. They make the terms larger (asymptotically, at least, even if your use cases are OK) and give the type checker a lot of extra work.

    I’m probably going to think about rules, but I’d really rather think about labels than argument tuples.

    if you then substitute along the equality x y = y x you have to do it everywhere,

    I understand. But in the cartesian case, that isn’t so. That includes set theory, so this is not in accordance with the set theoretic intuition. My thought was that the intuitive mismatch is minimized by forbidding this repetition. (So substituting everywhere means substituting in one place.)

    • CommentRowNumber61.
    • CommentAuthorMike Shulman
    • CommentTimeNov 18th 2019

    If by “let go of repeated argument tuples” you mean moving to a syntax like those in #54, then no, I don’t want to do that. I don’t think it would be a good idea to allow writing ((x y)_{(1)}, (y x)_{(2)}), because it’d be hard to figure out (even for a human reader) which terms “match”. I guess your point is that you don’t want to have to write x y twice at all, but to me it’s an important aspect of the syntax that it’s the same thing x y that’s appearing (“duplicated”) in both places.

    • CommentRowNumber62.
    • CommentAuthorMike Shulman
    • CommentTimeNov 18th 2019
    • (edited Nov 18th 2019)

    From a type checking perspective, those equality constraints seem really silly. I don’t think they’re a source of deep difficulty, but they shouldn’t be there at all. They make the terms larger (asymptotically, at least, even if your use cases are OK) and give the type checker a lot of extra work.

    To me this betrays a fundamental misunderstanding. The goal of a type theory is not to make things easier for the type checker! The goal of a type theory is to make things easier for the human!

    I can agree that there are probably cases where a human would appreciate having a match/let syntax as an option. In principle I wouldn’t object to using a theory with only match/let as an intermediate language and compiling the nice syntax down to it (maybe this is the idea that got discarded in #50?) — although that seems like it would only matter when implementing it in a proof assistant, since mathematically the process of compiling the nice syntax down to the intermediate language would, I expect, already be doing the same work as interpreting it directly into the desired semantics (you’d essentially be interpreting it into the syntactic monoidal category generated by MILL).

    • CommentRowNumber63.
    • CommentAuthorMike Shulman
    • CommentTimeNov 18th 2019

    I guess one way to handle \multimap would be to just apply the ordinary MILL rule for abstractions when the codomain is unary, and rely on \otimes to tuple up non-unary codomains if necessary.

    • CommentRowNumber64.
    • CommentAuthoratmacen
    • CommentTimeNov 19th 2019

    Re #61: Oh all right. I wasn’t proposing to support ((x y)_{(1)}, (y x)_{(2)}), by the way.

    Re #62: If the terms are larger, that gives the human and the type checker extra work. But I can see that you want to keep it as an option.

    (maybe this is the idea that got discarded in #50?)

    The discarded idea allowed the user to use either matching or projections. The translation between them did not seem intuitive enough to expose to the user.

    Re #63: Are you saying you were thinking of having \multimap types with multiple codomains?

    • CommentRowNumber65.
    • CommentAuthorMike Shulman
    • CommentTimeNov 19th 2019

    Are you saying you were thinking of having \multimap types with multiple codomains?

    No, that’s not what I was saying, but that’s another option one might consider.

    • CommentRowNumber66.
    • CommentAuthoratmacen
    • CommentTimeNov 19th 2019
    Well then I don’t know what #63 is saying.
    • CommentRowNumber67.
    • CommentAuthorMike Shulman
    • CommentTimeNov 19th 2019

    I just mean have a rule

    \frac{\Gamma, x:A \vdash (M) : (B)}{\Gamma \vdash (\lambda x.M) : (A \multimap B)}

    that only applies when the codomain is unary.

    • CommentRowNumber68.
    • CommentAuthoratmacen
    • CommentTimeNov 20th 2019
    But doesn’t that mess up tensoring?
    • CommentRowNumber69.
    • CommentAuthorMike Shulman
    • CommentTimeNov 20th 2019

    How so?

    • CommentRowNumber70.
    • CommentAuthoratmacen
    • CommentTimeNov 20th 2019
    Well how would you derive a pair of lambdas?
    • CommentRowNumber71.
    • CommentAuthorMike Shulman
    • CommentTimeNov 20th 2019

    Hmmm… I suppose you would want to do that. Well, I guess you could try to incorporate it in the generator rule somehow… but probably that rule is about as overextended as possible already, and the best thing to do would be to retreat from terms-have-unique-derivations. Maybe it would work…

    • CommentRowNumber72.
    • CommentAuthoratmacen
    • CommentTimeNov 21st 2019

    I think unique derivations could be regained with a different style of rules motivated by type checking. That’s what I’m planning. There’s a nice way to do linear type checking by passing a state that keeps track of which variables were used. I think it can be modified to handle tensor projections too.
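
    The state-passing idea can be sketched as follows. This is an illustrative toy checker for a tiny fragment with variables, tensor pairs, and unary generators; the term encoding and the signature `SIG` are hypothetical inventions for the sketch, not the actual system under discussion:

    ```python
    # Toy linear type checker: thread a set of already-consumed variables
    # through the term, left to right. Term formers (hypothetical encoding):
    #   ("var", x), ("pair", t1, t2), ("app", g, t)
    # SIG maps generator names to (argument type, result type).
    SIG = {"f": ("A", "B")}  # assumed example generator f : A -> B

    def check(term, ctx, used):
        """Return (type, used') where used' extends the state `used`."""
        kind = term[0]
        if kind == "var":
            x = term[1]
            if x not in ctx:
                raise TypeError(f"unbound variable {x}")
            if x in used:
                raise TypeError(f"variable {x} used twice (linearity)")
            return ctx[x], used | {x}
        if kind == "pair":
            t1, used = check(term[1], ctx, used)  # state flows left to right
            t2, used = check(term[2], ctx, used)
            return ("tensor", t1, t2), used
        if kind == "app":
            dom, cod = SIG[term[1]]
            t, used = check(term[2], ctx, used)
            if t != dom:
                raise TypeError(f"expected {dom}, got {t}")
            return cod, used
        raise TypeError(f"unknown term former {kind}")

    def typecheck(term, ctx):
        ty, used = check(term, ctx, frozenset())
        if used != set(ctx):  # linearity: every hypothesis must be consumed
            raise TypeError(f"unused variables: {set(ctx) - used}")
        return ty
    ```

    For example, with x:A, y:A, the term (f(x), y) checks at type B ⊗ A, while (x, x) is rejected for using x twice. Handling tensor projections would additionally require tracking which components of each projected tuple have been consumed, which this sketch omits.
    
    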

    • CommentRowNumber73.
    • CommentAuthorMike Shulman
    • CommentTimeNov 30th 2019

    I just read Hasegawa’s thesis, in which among other things he gives a multiple-conclusion type theory where both commas represent the same tensor product. (Actually he writes the right-hand commas as \otimes, but he considers them to be definitionally associative, so they act like a comma.) His setup allows contraction and weakening, but not naturally: semantically it’s like a linear/nonlinear adjunction but without the right adjoint. He also has a sort of multiple-domain multiple-codomain internal-hom too, although it’s somewhat different because it always lands in the nonlinear world.

    • CommentRowNumber74.
    • CommentAuthoratmacen
    • CommentTimeDec 1st 2019

    It doesn’t have tensor projections, like your system does. The non-natural contraction seems different from implicit comultiplication because the copies aren’t distinguished. So I guess there’s not really much to learn for extending your system?

    I suppose it’s a thought to have completely implicit, non-natural weakening, instead of your counit notation. Does that really work? What do you think of that style? Oh wait, that seems ambiguous with your sugar for multiple comultiplications. Or maybe not, because of the counital and coassociative axioms!

    I got distracted from extending your system. I tried looking up a ready-made solution but didn’t find it. I think I could sit down and work out something probably right, but I haven’t. I’m not that excited about it; I can’t tell if you are.

    • CommentRowNumber75.
    • CommentAuthorMike Shulman
    • CommentTimeDec 2nd 2019

    I think that semantically, having unnatural contraction and weakening is equivalent to saying that every type is equipped with a specified cocommutative comultiplication. So there’s some overlap in the intended semantics, but my system is more flexible because it doesn’t require any or all types to have comultiplications, let alone cocommutative ones.
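
    Spelled out (suppressing the unit and associativity isomorphisms), such a structure on a type A amounts to maps satisfying the usual cocommutative comonoid axioms:

    ```latex
    \delta_A : A \to A \otimes A \qquad \epsilon_A : A \to I
    % coassociativity:
    (\delta_A \otimes 1_A) \circ \delta_A \;=\; (1_A \otimes \delta_A) \circ \delta_A
    % counitality:
    (\epsilon_A \otimes 1_A) \circ \delta_A \;=\; 1_A \;=\; (1_A \otimes \epsilon_A) \circ \delta_A
    % cocommutativity, with \sigma the symmetry:
    \sigma_{A,A} \circ \delta_A \;=\; \delta_A
    ```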

    I can’t imagine how implicit weakening could be unnatural. How do you distinguish implicitly between weakening a variable x and weakening the value f(x)?

    I’d be rather interested if you can think of a good way to add internal-homs to my system, but I’m not planning to spend any time on such an extension myself. (I am, however, planning to run a group at the ACT 2020 summer school aiming to extend this system to monoidal bicategories.)

    • CommentRowNumber76.
    • CommentAuthoratmacen
    • CommentTimeDec 2nd 2019

    How do you distinguish implicitly between weakening a variable x and weakening the value f(x)?

    Good point. I guess by not implicitly doing anything else, like calling f. So you’d still need an explicit counit.

    In other words x:C \vdash () = (| \epsilon(x)) \neq (| \epsilon(f(x))) \;:\; ().

    • CommentRowNumber77.
    • CommentAuthorMike Shulman
    • CommentTimeDec 3rd 2019

    Then that’s not stable under substitution.

    • CommentRowNumber78.
    • CommentAuthoratmacen
    • CommentTimeDec 3rd 2019
    I see.
    • CommentRowNumber79.
    • CommentAuthoratmacen
    • CommentTimeDec 5th 2019

    To handle lollipop (internal-hom), do you want unidirectional inference with annotations on lambdas, or bidirectional typing with an optional annotation/cut rule? Do you want to require eta-long terms?

    • CommentRowNumber80.
    • CommentAuthorMike Shulman
    • CommentTimeDec 5th 2019

    Well, in principle both would be nice to have – one for simplicity of exposition, the other for convenience of practical use. I would say start with whichever is easier for you.

    • CommentRowNumber81.
    • CommentAuthoratmacen
    • CommentTimeDec 5th 2019

    For your entertainment, I’ll tell you about a tricky situation I ran into involving your tensor projections and generators, combined with \&-intro.

    For both examples,

    x:A \qquad f:A \to (B_1, B_2)

    Example 1:

    g_1 : B_1 \to C \qquad g_2 : B_1 \to D \qquad h : B_2 \to E

    ([g_1(f_{(1)}(x)), g_2(f_{(1)}(x))], h(f_{(2)}(x))) \;:\; (C \& D) \otimes E

    Example 2:

    g : (B_1, B_2) \to C \qquad h : A \to D

    [g(f_{(1)}(x), f_{(2)}(x)), h(x)] \;:\; C \& D

    The problem is whether you use the generator rule on f inside or outside the \&-intro. In example 1, you have to use it outside; in example 2, you have to use it inside the left component. But processing both examples left to right, the f is first encountered in the left component, because the \otimes-elim is implicit.

    I’m not sure if I’m going to handle this.

    I guess there’s basically the same problem with \oplus-elim.

    • CommentRowNumber82.
    • CommentAuthoratmacen
    • CommentTimeDec 6th 2019

    Actually, I’m not sure about the semantics of a similar example, where the generator can be inside or outside the pair:

    g_1 : (B_1, B_2) \to C \qquad g_2 : (B_1, B_2) \to D

    [g_1(f_{(1)}(x), f_{(2)}(x)), g_2(f_{(1)}(x), f_{(2)}(x))] \;:\; C \& D

    Dispensing with tensor projections, this could be written in two ways:

    let\;(b_1, b_2) = f(x)\;in\;[g_1(b_1, b_2), g_2(b_1, b_2)]

    [let\;(b_1, b_2) = f(x)\;in\;g_1(b_1, b_2),\; let\;(b_1, b_2) = f(x)\;in\;g_2(b_1, b_2)]

    Are these actually equal? Maybe sometimes implicit \otimes-elim is just plain wrong, not tricky.

    • CommentRowNumber83.
    • CommentAuthorMike Shulman
    • CommentTimeDec 6th 2019

    Maybe I’m misunderstanding your notation, but doesn’t that equality simply follow from the naturality of the universal property of the cartesian product?

    • CommentRowNumber84.
    • CommentAuthoratmacen
    • CommentTimeDec 6th 2019

    Probably? If I knew I wouldn’t have to ask. This is [g_1, g_2] \circ f versus [g_1 \circ f, g_2 \circ f]. Where brackets are pairing for \&, f : A \to B_1 \otimes B_2, yada yada. I know they’re the same in a cartesian category where \otimes = \& = \times, but I don’t know why, in categorical terms.

    So you’re saying they should be equal. OK.

    • CommentRowNumber85.
    • CommentAuthorMike Shulman
    • CommentTimeDec 7th 2019

    By definition, [g_1 \circ f, g_2 \circ f] is the unique morphism that produces g_1 \circ f and g_2 \circ f when composed with the two projections \pi_1 and \pi_2 respectively. But [g_1, g_2] \circ f also has this property, since [g_1, g_2] produces (by definition) g_1 and g_2 when composed with \pi_1 and \pi_2 respectively, and composition is associative. Therefore, they are equal.
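
    In symbols, the computation above is just:

    ```latex
    \pi_i \circ ([g_1, g_2] \circ f) \;=\; (\pi_i \circ [g_1, g_2]) \circ f \;=\; g_i \circ f \qquad (i = 1, 2),
    ```

    so [g_1, g_2] \circ f satisfies the universal property characterizing [g_1 \circ f, g_2 \circ f].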

    • CommentRowNumber86.
    • CommentAuthoratmacen
    • CommentTimeDec 7th 2019

    I see it clearly now. I was guessing they were equal, I just wasn’t sure. You see? This is why I don’t like category theory much. I’m bad at it. :)

    I’m going to put off the additive connectives at first. I have a plan, but the bookkeeping is more complicated, so I should work out the details of MILL first.

    • CommentRowNumber87.
    • CommentAuthorMike Shulman
    • CommentTimeDec 7th 2019

    As I always say to my calculus students, the more you practice it, the better you’ll get at it, and the more you’ll like it.

    • CommentRowNumber88.
    • CommentAuthoratmacen
    • CommentTimeDec 8th 2019

    Odd: if C is a comonoid, in the context x:C, you’d allow (x_{(1)}, x_{(2)}) and (x_{(1)}, x_{(2)}, x_{(3)}) but not (x_{(1)}, x_{(3)}), right? Makes type checking yet more complicated. I’m not going to handle that.

    • CommentRowNumber89.
    • CommentAuthorMike Shulman
    • CommentTimeDec 9th 2019

    Yes, but the latter is just sugar for (x_{(1)(1)}, x_{(1)(2)}, x_{(2)}) or (x_{(1)}, x_{(2)(1)}, x_{(2)(2)}), and I also didn’t try to make that desugaring precise.

    • CommentRowNumber90.
    • CommentAuthoratmacen
    • CommentTimeDec 9th 2019

    Right. I was thinking about whether it’d be convenient to make it not sugar. Not convenient enough for me.

    The (x_{(1)}, x_{(3)}) could be sugar for (the same thing as) (x_{(1)}, x_{(3)} | \epsilon(x_{(2)})). Sugar doesn’t need to have good metatheoretical properties.

    • CommentRowNumber91.
    • CommentAuthoratmacen
    • CommentTimeDec 12th 2019

    Good news: I finished a drafty set of rules for \otimes, 1, and \multimap with tensor projections. Don’t set your hopes too high; it’s kind of a mess. Now I just have to figure out how to typeset this on a new page…

    • CommentRowNumber92.
    • CommentAuthoratmacen
    • CommentTimeDec 12th 2019

    OK, I put it up at tensor projection. Please take a look and tell me what you think.

    • CommentRowNumber93.
    • CommentAuthorUrs
    • CommentTimeMay 12th 2021

    added pointer to:

    diff, v124, current

    • CommentRowNumber94.
    • CommentAuthorUrs
    • CommentTimeSep 25th 2022

    I have added pointer to:

    which may be the earliest reference I have seen that actually states the J-rule in recognizable form

    diff, v131, current

    • CommentRowNumber95.
    • CommentAuthorMike Shulman
    • CommentTimeSep 26th 2022

    Martin-Löf’s version isn’t recognizable?

    • CommentRowNumber96.
    • CommentAuthorUrs
    • CommentTimeSep 26th 2022

    Where does he state it clearly? In “An intuitionistic theory of types: predicative part” it appears in mere prose in Sec 1.7, and in some baroque form on p. 109.

    • CommentRowNumber97.
    • CommentAuthorMike Shulman
    • CommentTimeSep 26th 2022

    Page 94, “I-elimination”.

    • CommentRowNumber98.
    • CommentAuthorUrs
    • CommentTimeSep 26th 2022

    I see, thanks. That’s indeed recognizable, with some parsing effort. Did he ever state it in more streamlined form?

    • CommentRowNumber99.
    • CommentAuthorMike Shulman
    • CommentTimeSep 26th 2022

    Probably, but I don’t know where.

    • CommentRowNumber100.
    • CommentAuthorUrs
    • CommentTimeDec 22nd 2022

    I noticed that the list of references here was lacking some glue where it jumped from Russell right to identity types. I have now filled in (here) pointer to:

    But there is lots of room left for the eager type theorist to further expand and beautify the discussion of the literature in this entry…

    diff, v133, current