There is a lot of controversy and confusion surrounding the “initiality theorem/conjecture/principle” of dependent type theory. In another thread Richard suggested getting a group together to work out the details of one such proof and write it up on the nLab. Even though eventually we want a general theorem rather than particular cases, working out a particular case publicly, using modern language and tools, would probably help more people understand the issues and choices involved, and potentially help with the sociological/communication/advertisement problems that the controversy produces. And if we get enough people working on it, each individual workload would be more manageable (a worry, since part of the problem is that such proofs are extremely long and tedious).
I’m envisioning this kind of like a “virtual journal club” with Streicher’s proof for CoC as a primary reference. We would set up regular online meetings, maybe on Zoom (I have a Zoom pro account now, so I can host arbitrarily large and long group meetings and seminars), as well as using one or more nForum threads for discussion, and with the end goal of producing a complete proof on a collection of nLab pages. We could start with a fairly minimal type theory as a target, and then if and when we succeed at that we could start adding on bells and whistles. I think having such a proof written out on the nLab would be valuable, pedagogically, even after someone proves a general version of the theorem. (In particular, we could use hyperlinks and maybe transclusions intelligently to provide multiple levels of understanding of the proof in a way that’s still hard in a paper or book, allowing a reader to see the entire high-level overview and then expand or drill down into details selectively.)
BUT this will only work if we have enough people participating! Richard expressed some interest, and I’d be willing to work on it too and even be the “organizer” if need be, but I’m not going to do all the work. Is anyone else interested? Don’t worry, you don’t need to understand much about the subject yet; I figure we’ll start by getting ourselves all on the same page and figuring out what exactly has to be done.
You should probably advertise this on the HoTT list to reach more potential participants.
I imagine the chance of actual participation would be increased if you just started an $n$Lab page as usual, to be edited jointly and incrementally by whoever feels like joining in, with discussion here on the nForum, as usual, instead of aiming for a regularly meeting online seminar. That would also be more useful for the bystanders who won’t actively participate but are interested in what you guys are discussing.
I would be interested in working out a thorough initiality proof, as an excuse to get more familiar with models of dependent type theory.
How minimal of a type theory are you thinking?
I’d be interested too, though I have no clear idea of what exactly needs to be done.
I’d also be interested in contributing.
Yes, I was certainly going to advertise it on the HoTT list and probably the nCafe too, but before doing that I wanted to find out whether there would be a critical mass of people to make it actually happen. It sounds like the answer is yes! So I’ll put this on the queue to get started, although it may be a little while before it happens, since I have a semester starting next week and then the Voevodsky memorial conference.
Re #3, I was thinking of starting with a theory having maybe $\Pi$-types and nothing else. But I figure making that decision would be one of the first things we’d do as a group.
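For concreteness, here is one minimal sketch of what the raw syntax of such a $\Pi$-types-only theory might look like, using de Bruijn indices and capture-avoiding substitution (all names and representation choices here are ours, just for illustration; exactly these decisions are among the ones the group would need to make):

```python
# Hypothetical sketch: raw syntax for a type theory with Pi-types only,
# using de Bruijn indices. Not a settled design, just an illustration.
from dataclasses import dataclass

class Tm: pass

@dataclass(frozen=True)
class Var(Tm):
    i: int        # de Bruijn index

@dataclass(frozen=True)
class Pi(Tm):
    dom: Tm
    cod: Tm       # codomain lives under one extra binder

@dataclass(frozen=True)
class Lam(Tm):
    dom: Tm       # domain annotation
    body: Tm      # body lives under one extra binder

@dataclass(frozen=True)
class App(Tm):
    fn: Tm
    arg: Tm

def shift(d: int, c: int, t: Tm) -> Tm:
    """Shift free indices >= c up by d."""
    if isinstance(t, Var):
        return Var(t.i + d) if t.i >= c else t
    if isinstance(t, Pi):
        return Pi(shift(d, c, t.dom), shift(d, c + 1, t.cod))
    if isinstance(t, Lam):
        return Lam(shift(d, c, t.dom), shift(d, c + 1, t.body))
    return App(shift(d, c, t.fn), shift(d, c, t.arg))

def subst(j: int, s: Tm, t: Tm) -> Tm:
    """Capture-avoiding substitution t[j := s], lowering indices above j."""
    if isinstance(t, Var):
        if t.i == j:
            return s
        return Var(t.i - 1) if t.i > j else t
    if isinstance(t, Pi):
        return Pi(subst(j, s, t.dom), subst(j + 1, shift(1, 0, s), t.cod))
    if isinstance(t, Lam):
        return Lam(subst(j, s, t.dom), subst(j + 1, shift(1, 0, s), t.body))
    return App(subst(j, s, t.fn), subst(j, s, t.arg))

def beta(t: Tm):
    """A single beta step at the root, if one applies."""
    if isinstance(t, App) and isinstance(t.fn, Lam):
        return subst(0, t.arg, t.fn.body)
    return None
```

Even this tiny fragment already forces decisions (annotated vs. unannotated lambdas, where shifting happens) that an initiality proof has to track carefully.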
Exciting! Just confirming my willingness to contribute.
Mike discusses scones as a good way to establish meta-theoretic properties. Could they be employed here?
Well, the techniques used there involving scones rely on the initiality of syntax.
Yeah I don’t think we need anything complicated like logical relations to prove this: an initiality theorem is the same as a soundness/completeness theorem.
Soundness says the syntax can be interpreted in any of the specified kind of model. Completeness says that the syntax itself forms a model in such a way that the interpretation you use in the soundness theorem extends uniquely to a morphism of models.
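The shape of that statement can be illustrated on a toy fragment (all names here are ours, and the fragment is far simpler than dependent type theory; this is only meant to show the soundness/completeness pattern):

```python
# Hypothetical sketch: "soundness" = syntax interprets into any model;
# "completeness" = the syntax itself forms a model, and the
# interpretation into it is the identity.
from dataclasses import dataclass

class Tm: pass

@dataclass(frozen=True)
class Var(Tm):
    i: int

@dataclass(frozen=True)
class Lam(Tm):
    body: Tm

@dataclass(frozen=True)
class App(Tm):
    fn: Tm
    arg: Tm

class Model:
    """What a 'model' must provide for this tiny fragment."""
    def var(self, i): raise NotImplementedError
    def lam(self, b): raise NotImplementedError
    def app(self, f, x): raise NotImplementedError

def interp(m: Model, t: Tm):
    """Soundness: a structure-preserving interpretation into any model m."""
    if isinstance(t, Var):
        return m.var(t.i)
    if isinstance(t, Lam):
        return m.lam(interp(m, t.body))
    return m.app(interp(m, t.fn), interp(m, t.arg))

class SyntaxModel(Model):
    """Completeness: the syntax is itself a model, and interp into it
    reconstructs the term unchanged."""
    def var(self, i): return Var(i)
    def lam(self, b): return Lam(b)
    def app(self, f, x): return App(f, x)
```

The hard part in the dependently typed case is of course not this skeleton but showing that the interpretation is well-defined on typed syntax and uniquely determined.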
Note that in #12, all occurrences of [[A]], [[B]], etc. come out as (unsatisfied) nLab links: A etc.
I’ve read through #12, and it looks very reasonable! The idea about partial environments seems nice. It’s a little unclear to me where the
bad decisions about treatment of variables and substitution in the syntax
come in, though. Do you mean in the way that you formalised (1) and (2)?
What if we were to take some very simple type theory, and some very simple collection of judgements. Can we work out (1) - (11) explicitly and see where the need for good decisions would enter?
Also, regarding the role of CwAs, what if in the first instance, for simplicity, we just assumed that we had a sufficiently nice category with strict pullbacks? I would think that it is not important initially whether or not such a category actually exists, and that we would still be able to explore whether the overall approach is likely to work out well.
I’m not sure what “strict pullbacks” means, but I think we should stick with a standard notion of categorical model for type theory, so that we really are proving the same kind of theorem that the questions are being asked about. My own preference is for CwFs, but the choice among them is not very important.
I think we should stick with a standard notion of categorical model for type theory, so that we really are proving the same kind of theorem that the questions are being asked about. My own preference is for CwFs, but the choice among them is not very important.
I agree. It just seemed to me that (6) - (11) should go through more or less independently of the choice, so it might be helpful (for some at least) to have in mind, instead of or in addition to a CwA or CwF, a setting where one does not have coherence issues with regard to substitution, even if such a setting may not exist! I.e. suppose that any choices of pullback are equal, not only isomorphic, so one gets strict associativity, etc. But feel free to ignore!
On a different note, is it completely obvious why (1) - (11) fulfill all aspects of initiality if they are carried out? I think that they principally concern how to interpret the proposed initial gadget. Is the uniqueness aspect obvious? And is it obvious how to put the required structure (CwA/CwF/whatever) on the initial gadget? The answer to these two questions may well be “yes”, I have not thought about it much, just raising them for the purposes of discussion.
I would call a CwA/CwF also a setting where one does not have coherence issues with regard to substitution; they are structures in which substitution is by definition strict. The coherence issue is in getting from an ordinary category to one of those.
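To spell out the “strict by definition” point: in a CwF, the action of substitution on types and terms is posited as part of the structure, with equations that hold on the nose. Schematically (a standard sketch, not tied to any one reference):

```latex
% Substitution structure in a CwF (schematic):
\begin{align*}
  & \Gamma \in \mathcal{C} \;\mapsto\; \mathrm{Ty}(\Gamma),
    \qquad A \in \mathrm{Ty}(\Gamma) \;\mapsto\; \mathrm{Tm}(\Gamma, A) \\
  & \sigma : \Delta \to \Gamma \;\Longrightarrow\;
    A[\sigma] \in \mathrm{Ty}(\Delta), \quad
    a[\sigma] \in \mathrm{Tm}(\Delta, A[\sigma]) \\
  & A[\mathrm{id}_\Gamma] = A, \qquad
    A[\sigma \circ \tau] = A[\sigma][\tau]
    \quad \text{(equalities, not isomorphisms)}
\end{align*}
```

In an ordinary category with (chosen) pullbacks, the analogous equations hold only up to coherent isomorphism, which is exactly the coherence problem.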
is it completely obvious why (1) - (11) fulfill all aspects of initiality if they are carried out?
Peter may have something to add, but I would say that no, it is not completely obvious; the other parts will also require proof. Even Streicher doesn’t actually prove uniqueness, he just says something like “it can be easily shown” (and I’m not sure that he even takes the time to assemble the interpretation of syntax into a functor as such). I do think it is fairly clear (in any particular case, at least) how to put the required structure on the putatively-initial gadget, although I suppose there is room for disagreement there, and we would certainly want to write this down carefully as part of the theorem as well.
I’d be interested in this virtual journal club.
I guess one of the early decisions we’ll have to make is whether to use named variables or de Bruijn stuff in the syntax.
Re Mike Shulman #20:
I guess one of the early decisions we’ll have to make is whether to use named variables or de Bruijn stuff in the syntax.
Not necessarily; one can be agnostic between them, by taking as a parameter something like “a monoidal category V with an ‘extend’ operation, and with a monoidal-plus-extension functor to (Sets, + , 0, +1)”. The idea is that V is the category of what I called “contexts-of-variables” in my long sketch above; it axiomatises everything one uses about how contexts work, and you recover de Bruijn indices by taking ob V = N, or named variables by taking ob V = P_fin(Name), with a reasonably-evident coproduct monoidal structure in each case.
Andrej and Philipp and I are trying that out a bit in our formalisation of our general type theories. It’s nice in some ways, but it also adds a fair bit of extra overhead in both formalism and terminology. So I wouldn’t necessarily recommend taking this approach; but it’s at least interesting to think about.
Personally, I’m intrigued by the possibility of using this representation of variable binding, but that might be too exotic for our purposes here.
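A toy version of the idea, for readers following along (the names and the exact interface are ours, not Peter’s axiomatisation): one “context” interface, with a de Bruijn instance where $ob V = N$ and a named-variable instance where $ob V = P_{fin}(Name)$.

```python
# Hypothetical sketch of "contexts-of-variables": one interface, two
# instances. Only the 'extend' operation is modelled here.

class DBCtx:
    """De Bruijn: a context is just its length; variables are indices."""
    def __init__(self, n=0):
        self.n = n

    def extend(self):
        # the new variable is the next index
        return DBCtx(self.n + 1), self.n

class NamedCtx:
    """Named: a context is a finite set of names."""
    def __init__(self, names=frozenset()):
        self.names = names

    def extend(self):
        # pick the first name of the form x0, x1, ... not already used
        i = 0
        while f"x{i}" in self.names:
            i += 1
        x = f"x{i}"
        return NamedCtx(self.names | {x}), x
```

Whatever the right axiomatisation turns out to be, it would have to say enough about `extend` (and about weakening/renaming) for the syntax and the interpretation to be stated uniformly over it.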
a monoidal category V with an ‘extend’ operation, and with a monoidal-plus-extension functor to (Sets, + , 0, +1)
Is there an initial such V? If so, is it familiar?
Combinatory logic certainly does exist for simply typed lambda calculus. I’ve never heard of a dependent combinatory logic; that would be interesting! But I think that would be a whole other project to explore… and in the end, we do want to relate the results back to the variables that people use in practice.
It looks like it has been explored already in this paper. I have found this article summarising it nicely.
It seems you can have a dependently typed lambda calculus pretty easily with only four combinators. You have your $S$ and $K$ as usual, and then $L$ and $X$ ($\Xi$ in the paper) combinators. $X$ is a sort of type annotation, where $X a b$ means you take a type $a$ and get a type $b$. And $L a$ means that $a$ is a type.
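For readers who haven’t seen combinatory reduction lately, here is the ordinary untyped $S$/$K$ baseline that the dependently typed systems extend (the paper’s $L$ and $X$ combinators are not modelled here, since their reduction/typing rules are specific to that work):

```python
# Minimal sketch: untyped S/K combinatory reduction.
#   K x y   ->  x
#   S f g x ->  f x (g x)
from dataclasses import dataclass

class C: pass

@dataclass(frozen=True)
class S(C): pass

@dataclass(frozen=True)
class K(C): pass

@dataclass(frozen=True)
class App(C):
    fn: C
    arg: C

def step(t: C):
    """One leftmost reduction step, or None if no redex is found."""
    # K x y -> x
    if isinstance(t, App) and isinstance(t.fn, App) \
            and isinstance(t.fn.fn, K):
        return t.fn.arg
    # S f g x -> f x (g x)
    if isinstance(t, App) and isinstance(t.fn, App) \
            and isinstance(t.fn.fn, App) and isinstance(t.fn.fn.fn, S):
        f, g, x = t.fn.fn.arg, t.fn.arg, t.arg
        return App(App(f, x), App(g, x))
    if isinstance(t, App):
        r = step(t.fn)
        if r is not None:
            return App(r, t.arg)
        r = step(t.arg)
        if r is not None:
            return App(t.fn, r)
    return None

def nf(t: C) -> C:
    """Normalize by repeated stepping (may diverge on looping terms)."""
    while True:
        r = step(t)
        if r is None:
            return t
        t = r
```

For example, $S K K$ behaves as the identity: $S K K x \to K x (K x) \to x$.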
Re @MikeShulman #23:
a monoidal category V with an ‘extend’ operation, and with a monoidal-plus-extension functor to (Sets, + , 0, +1)
Is there an initial such V? If so, is it familiar?
There is, but it depends on how exactly you axiomatise the “extend” operation (and indeed the “monoidal” part of the axiomatisation is negotiable as well), and I’m not quite sure what is exactly the ‘right’ structure to assume.
That’s why I’m ambivalent about recommending this approach: I feel like if one found the right axiomatisation, it would be well worthwhile, but none of the things I’ve tried seem like they quite hit the spot yet, and I think that’s somehow why the payoff for them isn’t quite justifying the overhead.
But in all the versions I’ve considered, the initial one should be some small (set-)category equivalent (as a category) to FinSet, with its usual inclusion into Set. (And FinSet itself should be an example of such a structure, as well.)