Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeAug 28th 2018

    There is a lot of controversy and confusion surrounding the “initiality theorem/conjecture/principle” of dependent type theory. In another thread Richard suggested getting a group together to work out the details of one such proof and write it up on the nLab. Even though eventually we want a general theorem rather than particular cases, working out a particular case publically, using modern language and tools, would probably help more people understand the issues and choices involved, and potentially help with the sociological/communication/advertisement problems that the controversy produces. And if we get enough people working on it, each individual workload would be more manageable (a worry since part of the problem is that such proofs are extremely long and tedious).

    I’m envisioning this kind of like a “virtual journal club” with Streicher’s proof for CoC as a primary reference. We would set up regular online meetings, maybe on Zoom (I have a Zoom pro account now, so I can host arbitrarily large and long group meetings and seminars), as well as using one or more nForum threads for discussion, and with the end goal of producing a complete proof on a collection of nLab pages. We could start with a fairly minimal type theory as a target, and then if and when we succeed at that we could start adding on bells and whistles. I think having such a proof written out on the nLab would be valuable, pedagogically, even after someone proves a general version of the theorem. (In particular, we could use hyperlinks and maybe transclusions intelligently to provide multiple levels of understanding of the proof in a way that’s still hard in a paper or book, allowing a reader to see the entire high-level overview and then expand or drill down into details selectively.)

    BUT this will only work if we have enough people participating! Richard expressed some interest, and I’d be willing to work on it too and even be the “organizer” if need be, but I’m not going to do all the work. Is anyone else interested? Don’t worry, you don’t need to understand much about the subject yet; I figure we’ll start by getting ourselves all on the same page and figuring out what exactly has to be done.

    • CommentRowNumber2.
    • CommentAuthorUrs
    • CommentTimeAug 28th 2018

    Probably you should advertise this on the HoTT list to reach more potential participants.

    I imagine the chance of actual participation would be increased if you just started an nLab page as usual, to be edited jointly and incrementally by whoever feels like joining in, with discussion here on the nForum, as usual, instead of aiming for a regularly meeting online seminar. That would also be more useful for the bystanders who won’t actively participate but are interested in what you guys are discussing.

    • CommentRowNumber3.
    • CommentAuthormaxsnew
    • CommentTimeAug 28th 2018

    I would be interested in working out a thorough initiality proof, as an excuse to get more familiar with models of dependent type theory.

    How minimal of a type theory are you thinking?

    • CommentRowNumber4.
    • CommentAuthorDavid_Corfield
    • CommentTimeAug 28th 2018

    I’d be interested too, though I have no clear idea of what exactly needs to be done.

    • CommentRowNumber5.
    • CommentAuthorJohn Dougherty
    • CommentTimeAug 28th 2018

    I’d also be interested in contributing.

    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeAug 28th 2018

    Yes, I was certainly going to advertise it on the HoTT list and probably the nCafe too, but before doing that I wanted to find out whether there would be a critical mass of people to make it actually happen. It sounds like the answer is yes! So I’ll put this on the queue to get started, although it may be a little while before it happens, since I have a semester starting next week and then the Voevodsky memorial conference.

    • CommentRowNumber7.
    • CommentAuthorMike Shulman
    • CommentTimeAug 28th 2018

    Re #3, I was thinking of starting with a theory having maybe Π-types and nothing else. But I figure making that decision would be one of the first things we’d do as a group.

  1. Exciting! Just confirming my willingness to contribute.

    • CommentRowNumber9.
    • CommentAuthorDavid_Corfield
    • CommentTimeAug 30th 2018

    Mike discusses scones as a good way to establish meta-theoretic properties. Could they be employed here?

    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeAug 30th 2018

    Well, the techniques used there involving scones rely on the initiality of syntax.

    • CommentRowNumber11.
    • CommentAuthormaxsnew
    • CommentTimeAug 30th 2018

    Yeah I don’t think we need anything complicated like logical relations to prove this: an initiality theorem is the same as a soundness/completeness theorem.

    Soundness says the syntax can be interpreted in any of the specified kind of model. Completeness says that the syntax itself forms a model in such a way that the interpretation you use in the soundness theorem extends uniquely to a morphism of models.

    • CommentRowNumber12.
    • CommentAuthorp_l_lumsdaine
    • CommentTimeSep 2nd 2018
    This sounds like an excellent idea to me! Depending on when it happens, I may well be too busy to participate actively, but to get some participation in upfront, below is a slightly tidied-up copy-paste of a fairly detailed sketch I wrote last year (originally in an email discussion with Voevodsky and a few other people who’d been thinking about it — hopefully they may contribute here as well!) based on an attempt to formalise it a few years earlier.

    ----

    Here’s a sketch of an approach to initiality, based on my notes from an attempt I made a few years ago to formalise it. It was for a fairly small type theory, but I was careful not to rely on the specifics of the constructors included — most of the modifications compared to Streicher’s proof are motivated by that.

    I abandoned the attempt partly because I got distracted by other projects and partly because I had made some bad early decisions about the treatment of variables in the definition of syntax/substitution, which it was clearly going to be quite painful either to fix or to work with. But apart from that, this approach to the induction seemed to be going well, and I’m quite hopeful that it works robustly for most/all standard rules/constructors — though I won’t believe that with confidence until I or someone else has formalised it :-)


    Briefly speaking, the key difference from Streicher’s setting is:

    - all inductions here are structural induction over the syntax — there’s no auxiliary height/depth function involved. In particular, in Streicher’s approach, the partial interpretation function takes an entire judgement as argument; the partial interpretation function here has just a single expression as its main argument (so it can be a purely structural induction), but assumes a “semantic context” is already given; so for e.g. a type expression A, it assumes a “semantic context” Γ is given, and then attempts to evaluate A as a semantic type ⟦A⟧ in that context.

    For the impatient, this is step (7) below.


    Here’s the sketch, starting by fixing some relevant choices about the setup of syntax:

    (1) raw syntax is defined as an inductive family parametrised by (a) syntactic classes, i.e. the two-element type { tm, ty }, and (b) “contexts-of-variables”. Here “contexts-of-variables” could be taken to mean “all subsets of N”, if we want to use named variables, or just “natural numbers” if using de Bruijn indices (where n:N would represent the context of length n, with variables { 0 , …, n–1 }).

    So for instance, if we use de Bruijn indices, the Π constructor in the raw syntax would have type

    Π {n : N} (A : Expr ty n) (B : Expr ty (n+1)) : Expr ty n

    (This is rather related to approaches to syntax like those of e.g. Fiore and collaborators, although still a bit more concrete.)

    (2) Going back to the abstraction of contexts-of-variables, for any cxt-of-vbles γ, write (vars γ) for its actual type of variables — typically a finite set. E.g. if using de Bruijn indices, (vars n) = Fin n = { 0, …, n–1 }

    Now we can construct substitution in two steps, each involving a single induction over expressions. First define “renaming of variables”: for any expression e in context-of-variables γ, and a function f: γ -> γ', get an expression (rename f e), by induction on e. Then define simultaneous substitution: for an expression e over γ, and for each variable i of γ, a term expression (t i) over γ', we get an expression (subst t e) over γ'. This again is a single induction on e; for going under binders, we use the renaming-of-variables operation already defined.

    All the normal lemmas about substitution can be nicely proven using this definition (usually in 2 steps, with the first being a version of the lemma for just renaming).
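    As a concrete illustration of the two-step construction in (2), here is a minimal sketch in Haskell. It uses plain `Int` de Bruijn indices rather than the length-indexed family of (1), so the typing discipline is not enforced, only the algorithms; all names (`Expr`, `rename`, `subst`, etc.) are invented for illustration.

    ```haskell
    -- Raw syntax for a tiny dependent type theory, de Bruijn style:
    -- a variable is the number of binders crossed to reach its binding site.
    data Expr
      = Var Int          -- de Bruijn index
      | Pi  Expr Expr    -- Pi A B, with B under one extra binder
      | Lam Expr Expr    -- lambda with a domain annotation, body under a binder
      | App Expr Expr
      deriving (Eq, Show)

    -- Step 1: renaming of variables, a single structural induction;
    -- the renaming function is shifted when we go under a binder.
    rename :: (Int -> Int) -> Expr -> Expr
    rename f (Var i)   = Var (f i)
    rename f (Pi a b)  = Pi  (rename f a) (rename (shift f) b)
    rename f (Lam a t) = Lam (rename f a) (rename (shift f) t)
    rename f (App u v) = App (rename f u) (rename f v)

    -- Under a binder, variable 0 is the bound one and stays put;
    -- everything else is renamed and moved past the binder.
    shift :: (Int -> Int) -> (Int -> Int)
    shift _ 0 = 0
    shift f i = f (i - 1) + 1

    -- Step 2: simultaneous substitution, again one structural induction;
    -- going under binders uses the renaming operation already defined.
    subst :: (Int -> Expr) -> Expr -> Expr
    subst t (Var i)   = t i
    subst t (Pi a b)  = Pi  (subst t a) (subst (lift t) b)
    subst t (Lam a u) = Lam (subst t a) (subst (lift t) u)
    subst t (App u v) = App (subst t u) (subst t v)

    -- Weakening a substitution under a binder, via `rename`.
    lift :: (Int -> Expr) -> (Int -> Expr)
    lift _ 0 = Var 0
    lift t i = rename (+1) (t (i - 1))

    -- Substituting a single term for variable 0.
    subst1 :: Expr -> Expr -> Expr
    subst1 s = subst (\i -> if i == 0 then s else Var (i - 1))
    ```

    The substitution lemmas then typically go through in the same two-step pattern: prove the renaming version first, then the full statement.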

    (3) I’ll assume the system does not include a judgemental equality judgement for contexts. (This is essentially unnecessary: any contexts that *would* be judgementally equal in the system with that judgement still end up canonically judgementally isomorphic, via the substitution that consists of not renaming variables.)

    (4) The interpretation will be in CwA’s (aka split type-categories). But I will also make use of the presheaf Tm, defined by Tm(X) = Σ (A : Ty(X)), (sections π_A). Of course we could alternatively start from a CwF, with Tm given as primitive. The CwA could be a contextual category/C-system, but we don’t need to assume that.

    (5) Besides partial maps, we’ll also talk about *partial elements* of types. Indeed a partial map X -> Y can just be defined as a map from X to *partial elements of Y*. (This is known in topos theory as the “partial map classifier”. It’s quite fun to program with: it’s a monad, and one can work with it in a Haskell-like monad style — I’ll do that informally below.)
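    In Haskell terms (the analogy the parenthetical itself invites), partial elements of Y are values of `Maybe Y`, and a partial map X -> Y is a Kleisli map X -> Maybe Y; the names below are ad hoc illustrations of that reading, not part of the sketch.

    ```haskell
    -- A partial element of y: either undefined or a value. Maybe is the
    -- Haskell stand-in for the topos-theoretic partial map classifier.
    type Partial y = Maybe y

    -- A partial map x -> y is a total map into partial elements of y.
    type PartialMap x y = x -> Partial y

    -- Kleisli composition: the composite is defined iff both stages are.
    composePartial :: PartialMap y z -> PartialMap x y -> PartialMap x z
    composePartial g f = \x -> f x >>= g

    -- Example: partial predecessor on the naturals, undefined at 0.
    predP :: PartialMap Integer Integer
    predP n = if n > 0 then Just (n - 1) else Nothing
    ```

    The monad structure of `Maybe` is exactly what makes the do-notation style in step (7) below work.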

    (6) Fix a CwA C for the remainder. Now we define:

    - for a cxt-of-vbles γ, a *partial/total environment* for γ on an object X : C is a partial/total map from (vars γ) to Tm X.

    The idea is that this is what one needs to know about the interpretation of a context: for each variable, one can extract its type, and the variable itself as a term of that type.

    (7) Now we get to the induction. For each (e : Expr cl γ), where cl is a syntactic class and γ a context-of-variables, we’ll give:

    - in case cl=ty: for each X:C and partial environment for γ on X, a partial element of Ty X
    - in case cl=tm: for each X:C and partial environment for γ on X, a partial element of Tm X

    With this set up as the goal, the inductive definition is basically straightforward. E.g. for Π γ A B, it’s defined as:

    - given X and a γ-environment E, first attempt to interpret A over X and E; if it succeeds, write the result as ⟦A⟧ : Ty X
    - now consider X.⟦A⟧, and the environment E.⟦A⟧ over (γ+1) on X.⟦A⟧ consisting of pulling back E along π_⟦A⟧, together with mapping the extra variable to ( π_⟦A⟧^* ⟦A⟧, v_⟦A⟧ ), where v_⟦A⟧ is the canonical term for ⟦A⟧.
    - now attempt to interpret B over X.⟦A⟧, E.⟦A⟧. If it succeeds, write the result as ⟦B⟧ : Ty (X.⟦A⟧)
    - now we can take Π ⟦A⟧ ⟦B⟧, in the Π-structure on C, and return this as our result.

    (Note how this “attempt to interpret A; if it succeeds, call the result ⟦A⟧” is exactly like “⟦A⟧ <- interp A” in Haskell’s do-notation…)

    Other constructors are similar. Variables are interpreted exactly by what the environment gives.
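    The Π clause just described can be sketched in Haskell against a deliberately degenerate stand-in for a CwA: here “semantic types” are bare trees and the environment only records each variable’s semantic type. Everything in this block (`Sem`, `Env`, `extend`, `interpTy`) is an illustrative assumption showing the shape of the induction, not a faithful rendering of the CwA structure.

    ```haskell
    -- Toy syntax: variables, one base type, and Pi.
    data Expr = Var Int | Base | Pi Expr Expr deriving (Eq, Show)

    -- Degenerate "semantic types": just trees, standing in for Ty X.
    data Sem = SBase | SPi Sem Sem deriving (Eq, Show)

    -- A partial environment: what (if anything) each variable means.
    type Env = Int -> Maybe Sem

    -- Extending the environment under a binder: variable 0 gets the newly
    -- interpreted domain; the rest are shifted (the analogue of pulling
    -- back along the projection pi_A in step (7)).
    extend :: Env -> Sem -> Env
    extend _   a 0 = Just a
    extend env _ i = env (i - 1)

    -- The partial interpretation: a structural induction on the expression,
    -- written in the Maybe monad exactly as in the do-notation remark above.
    interpTy :: Env -> Expr -> Maybe Sem
    interpTy env (Var i)  = env i        -- variables: read off the environment
    interpTy _   Base     = Just SBase
    interpTy env (Pi a b) = do
      sa <- interpTy env a               -- attempt to interpret A
      sb <- interpTy (extend env sa) b   -- interpret B in the extended context
      return (SPi sa sb)                 -- apply the model's Pi-structure
    ```

    In a real CwA the last line would invoke the Π-structure on C, and the environment would carry genuine terms over an object X rather than bare trees, but the monadic shape of the clause is the same.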

    (7') An alternative at this step: we could assume when interpreting terms that their type is already interpreted, i.e. define the goal type of the induction as

    - in case cl=tm: for each X:C, partial environment for γ on X, and A in Ty X, a partial element of (Tm X A)

    This seems perhaps conceptually nicer than the version above, but that version seemed to work out slightly more easily in practice.

    (8) Having defined the partial interpretation, one can immediately prove a reindexing lemma — not yet about substitution in the syntax, but about reindexing under maps in the CwA, showing that the partial interpretations above are all natural in their (X,E) arguments. This is again a structural induction over expressions. (Alternatively, one could have put that naturality into the goals of the original induction.)

    (9) Now one can extend the partial interpretation from individual type/term expressions to contexts, context morphisms, and full judgements. (This is direct, not by induction.)

    (10) Now one can prove — again by a purely structural induction on expressions — that the partial interpretation respects substitution into expressions.

    (11) Now one is ready to state and prove the totality of the interpretation, by induction on derivations.

    In my earlier attempt, (10) is where I got stuck because of my bad decisions about treatment of variables and substitution in the syntax. So I’d formalised (1)–(9), plus the cases in the proof of totality (11) that didn’t require the substitution lemma from (10). I would very much like to go back to the attempt soon, now that I think I understand how to set up syntax more cleanly (Andrej Bauer and I have a setup of the syntax which I think should work well for it)…


    So, going back to the comparison with Streicher’s proof: the key difference is step (7), the type of the partial interpretation function. The explicit use of *environments* means that the induction is over individual expressions, not entire judgment expressions with a context included, and so can be directly a structural induction, with no auxiliary well-ordering required.
    • CommentRowNumber13.
    • CommentAuthorDavidRoberts
    • CommentTimeSep 3rd 2018

    Note that in #12, all occurrences of [[A]], [[B]] etc come out as (unsatisfied) nLab links: A etc

  2. I’ve read through #12, and it looks very reasonable! The idea about partial environments seems nice. It’s a little unclear to me where the

    bad decisions about treatment of variables and substitution in the syntax

    come in, though. Do you mean in the way that you formalised (1) and (2)?

    What if we were to take some very simple type theory, and some very simple collection of judgements. Can we work out (1) - (11) explicitly and see where the need for good decisions would enter?

    Also, regarding the role of CwAs, what if in the first instance, for simplicity, we just assumed that we had a sufficiently nice category with strict pullbacks? I would think that it is not important initially whether or not such a category actually exists, and that we would still be able to explore whether the overall approach is likely to work out well.

    • CommentRowNumber15.
    • CommentAuthorMike Shulman
    • CommentTimeSep 4th 2018

    I’m not sure what “strict pullbacks” means, but I think we should stick with a standard notion of categorical structure for type theory, so that we really are proving the same kind of theorem that the questions are being asked about. My own preference is for CwFs, but the choice among them is not very important.

    • CommentRowNumber16.
    • CommentAuthorRichard Williamson
    • CommentTimeSep 4th 2018
    • (edited Sep 4th 2018)

    I think we should stick with a standard notion of categorical structure for type theory, so that we really are proving the same kind of theorem that the questions are being asked about. My own preference is for CwFs, but the choice among them is not very important.

    I agree. It just seemed to me that (6) - (11) should go through more or less independently of the choice, so it might be helpful (for some at least) to have in mind, instead of or in addition to a CwA or CwF, a setting where one does not have coherence issues with regard to substitution, even if such a setting may not exist! I.e. suppose that any two choices of pullback are equal, not only isomorphic, so one gets strict associativity, etc. But feel free to ignore!

    On a different note, is it completely obvious why (1) - (11) fulfill all aspects of initiality if they are carried out? I think that they principally concern how to interpret the proposed initial gadget. Is the uniqueness aspect obvious? And is it obvious how to put the required structure (CwA/CwF/whatever) on the initial gadget? The answer to these two questions may well be “yes”, I have not thought about it much, just raising them for the purposes of discussion.

    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeSep 4th 2018

    I would call a CwA/CwF also a setting where one does not have coherence issues with regard to substitution; they are structures in which substitution is by definition strict. The coherence issue is in getting from an ordinary category to one of those.

    is it completely obvious why (1) - (11) fulfill all aspects of initiality if they are carried out?

    Peter may have something to add, but I would say that no, it is not completely obvious; the other parts will also require proof. Even Streicher doesn’t actually prove uniqueness, he just says something like “it can be easily shown” (and I’m not sure that he even takes the time to assemble the interpretation of syntax into a functor as such). I do think it is fairly clear (in any particular case, at least) how to put the required structure on the putatively-initial gadget, although I suppose there is room for disagreement there, and we would certainly want to write this down carefully as part of the theorem as well.

    • CommentRowNumber18.
    • CommentAuthorAli Caglayan
    • CommentTimeSep 4th 2018

    I’d be interested in this virtual journal club.

    • CommentRowNumber19.
    • CommentAuthorp_l_lumsdaine
    • CommentTimeSep 6th 2018
    Like Mike in #17, I feel CwA’s/CwF’s/etc are meant to be “just like the syntax” in terms of what strictness conditions they do/don’t satisfy. So giving initiality with these should involve little if any difficulty with coherence, and the subtler work of making the actual desired models strict can be done entirely on the categorical side, not getting entangled with the syntactic bureaucracy.

    Regarding “bad decisions about treatment of variables and substitution in the syntax”: that was in some of the choices I elided/black-boxed here, and it was long enough ago now that I don’t clearly recall the details. But it was something to do with the interaction between using named variables + alpha-equivalence in setting up the syntax, on the one hand, and on the other hand using setoid-based categorical structures as the target of the interpretation.

    Regarding “is it completely obvious why (1) - (11) fulfill all aspects of initiality if they are carried out?”: Not quite, but I think (1)–(11) is “the hard part”, in that for the remaining components, the most natural direct approach seems to work, if I’m not missing anything.
    • CommentRowNumber20.
    • CommentAuthorMike Shulman
    • CommentTimeSep 6th 2018

    I guess one of the early decisions we’ll have to make is whether to use named variables or de Bruijn stuff in the syntax.

    • CommentRowNumber21.
    • CommentAuthorp_l_lumsdaine
    • CommentTimeSep 8th 2018

    Re Mike Shulman #20:

    I guess one of the early decisions we’ll have to make is whether to use named variables or de Bruijn stuff in the syntax.

    Not necessarily; one can be agnostic between them, by taking as a parameter something like “a monoidal category V with an ‘extend’ operation, and with a monoidal-plus-extension functor to (Sets, + , 0, +1)”. The idea is that V is the category of what I called “contexts-of-variables” in my long sketch above; it axiomatises everything one uses about how contexts work, and you recover de Bruijn indices by taking ob V = N, or named variables by taking ob V = P_fin(Name), with a reasonably-evident coproduct monoidal structure in each case.

    Andrej and Philipp and I are trying that out a bit in our formalisation of our general type theories. It’s nice in some ways, but it also adds a fair bit of extra overhead in both formalism and terminology. So I wouldn’t necessarily recommend taking this approach; but it’s at least interesting to think about.
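    One possible Haskell rendering of this parametrisation, with the two instances just mentioned, might look as follows. This is a hedged sketch: the class and all its names are invented for illustration, and it captures only the type of variables and the “extend” operation, not the monoidal structure or the structured functor to (Sets, +, 0, +1).

    ```haskell
    {-# LANGUAGE TypeFamilies #-}

    -- A context-of-variables representation: each context has an actual
    -- (finite) set of variables and can be extended by one fresh variable.
    -- Hypothetical interface, invented for illustration.
    class CxtOfVars c where
      type VarOf c
      vars   :: c -> [VarOf c]  -- the underlying set of variables
      extend :: c -> c          -- context extension by one variable

    -- De Bruijn flavour: a context is just its length; variables are the
    -- indices 0 .. n-1, as in the long sketch above.
    instance CxtOfVars Int where
      type VarOf Int = Int
      vars n   = [0 .. n - 1]
      extend n = n + 1

    -- Named flavour: a context is a list of distinct names; `extend` picks
    -- a fresh name deterministically (a toy freshness policy).
    newtype Named = Named [String] deriving (Eq, Show)

    instance CxtOfVars Named where
      type VarOf Named = String
      vars (Named xs)   = xs
      extend (Named xs) = Named (xs ++ ["x" ++ show (length xs)])
    ```

    The `vars` method plays the role of the functor to Sets; an honest version would also need the monoidal (concatenation) structure and its compatibility with extension.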

    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeSep 8th 2018

    Personally, I’m intrigued by the possibility of using this representation of variable binding, but that might be too exotic for our purposes here.

    • CommentRowNumber23.
    • CommentAuthorMike Shulman
    • CommentTimeSep 8th 2018

    a monoidal category V with an ‘extend’ operation, and with a monoidal-plus-extension functor to (Sets, + , 0, +1)

    Is there an initial such V? If so, is it familiar?

    • CommentRowNumber24.
    • CommentAuthorAli Caglayan
    • CommentTimeSep 9th 2018
    In untyped lambda calculus you can avoid alpha conversion and de Bruijn indices by using SKI combinators. I am not sure if a similar thing even exists for a typed lambda calculus. There could be combinators for each type, but this would cause problems once we merge type and term levels with dependent types.
    • CommentRowNumber25.
    • CommentAuthorMike Shulman
    • CommentTimeSep 9th 2018

    Combinatory logic certainly does exist for simply typed lambda calculus. I’ve never heard of a dependent combinatory logic; that would be interesting! But I think that would be a whole other project to explore… and in the end, we do want to relate the results back to the variables that people use in practice.

    • CommentRowNumber26.
    • CommentAuthorAli Caglayan
    • CommentTimeSep 9th 2018
    • (edited Sep 9th 2018)

    It looks like it has been explored already in this paper. I have found this article summarising it nicely.

    It seems you can have a dependently typed lambda calculus pretty easily with only 4 combinators. You have your S and K as usual, then we have L and X (Xi in the paper) combinators. X is a sort of type annotation where Xab means you take a type a and get a type b. And La means that a is a type.

    • CommentRowNumber27.
    • CommentAuthorp_l_lumsdaine
    • CommentTimeSep 14th 2018

    Re @MikeShulman #23:

    a monoidal category V with an ‘extend’ operation, and with a monoidal-plus-extension functor to (Sets, + , 0, +1)

    Is there an initial such V? If so, is it familiar?

    There is, but it depends on how exactly you axiomatise the “extend” operation (and indeed the “monoidal” part of the axiomatisation is negotiable as well), and I’m not quite sure what is exactly the ‘right’ structure to assume.

    That’s why I’m ambivalent about recommending this approach: I feel like if one found the right axiomatisation, it would be well worthwhile, but none of the things I’ve tried quite hits the spot yet, and I think that’s why the payoff doesn’t yet justify the overhead.

    But in all the versions I’ve considered, the initial one should be some small (set-)category equivalent (as a category) to FinSet, with its usual inclusion into Set. (And FinSet itself should be an example of such a structure, as well.)