    • CommentRowNumber201.
    • CommentAuthoratmacen
    • CommentTimeNov 13th 2018

    What if we use this transitivity?

    $$\frac{\Gamma \vdash N \Leftarrow A \qquad \Gamma \equiv \Gamma \vdash M \equiv N \,:\, A \equiv A \qquad \Gamma \equiv \Gamma' \vdash N \equiv M' \,:\, A \equiv A'}{\Gamma \equiv \Gamma' \vdash M \equiv M' \,:\, A \equiv A'}$$

    (But I still don’t like doubled contexts.)

    • CommentRowNumber202.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2018

    I agree that if the type synthesis judgments in a set-theoretic presentation don’t respect α, then writing down the same judgments in a HoTT presentation will define a “different” relation. But the two relations will define the same partial functor (viewing α-equivalences as isomorphisms in a (thin) groupoid of terms – in HoTT this would be just a map of types). Back in #193, I think this is what I had in mind:

    be careful to be sure that the checking judgement is admissibly closed under α at the end of the day.

    Because the only rule for checking is to switch modes:

    $$\frac{\Gamma \vdash M \Rightarrow A \qquad \Gamma \vdash A \equiv B \, type}{\Gamma \vdash M \Leftarrow B}$$

    it suffices to show that α-equivalence implies judgmental equality, and this should be a straightforward induction, especially with the doubled congruence rules matching the heterogeneous α-equivalence. Basically what Matt said in #197, although only for the RHS, since there is no conversion rule for LHSs. Respect for α-equivalence on the LHS would I guess be a separate induction over the type synthesis rules (which in a HoTT version would be just ap of the type synthesis function).

    • CommentRowNumber203.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2018

    With intensional type theory (ITT), it’s impossible to express constructions that don’t respect isomorphism, because there are HoTT models where such constructions are impossible. With a general CoV, there are models, like de Bruijn indexes, where operations that don’t respect α equivalence are impossible, but with the abstraction they come back? Oh, because the operations on terms are not part of the abstraction?

    I would say it like this. If we define a general CoV in ITT, it’ll be impossible to express constructions that don’t respect isomorphism, because there are HoTT models where such constructions are impossible. If we define a general CoV in set theory, it is possible to express operations on it that respect α-equivalence in some particular CoVs (de Bruijn indices) but not in other particular CoVs (named variables). This is just like in category theory in general: if all you know is that you have an arbitrary category (in a set-theoretic foundation), you can’t define any operation on its objects that always fails to respect isomorphism, because for all you know the category might happen to be skeletal. But you can define operations like “is equal to a” which, for some particular categories (namely, non-skeletal ones) will fail to respect isomorphism.

    • CommentRowNumber204.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2018

    Regarding equality reflection for heterogeneous equality, I would expect that just as the context for a typing judgment should be “fully general”, the context for an equality judgment should be “fully general”, meaning in this case an arbitrary doubled form Γ ≡ Γ′. But I think I should also modify what I said in #184 to add Γ ≡ Γ′ ⊢ A ≡ A′ type as a presupposition for the judgment Γ ≡ Γ′ ⊢ M ≡ M′ : A ≡ A′. Then I think it makes sense to write equality reflection as

    $$\frac{\Gamma \vdash P \Leftarrow Eq_A(M,M')}{\Gamma \equiv \Gamma' \vdash M \equiv M' \,:\, A \equiv A'}$$

    where the presuppositions Γ ≡ Γ′ ctx and Γ ≡ Γ′ ⊢ A ≡ A′ type mean that the choices of Γ and A (as opposed to Γ′ and A′) in the premise are unimportant.

    However, despite having suggested totally heterogeneous judgmental equality myself, at the moment I’m not really happy with it. For one thing, it would make the rules of our type theory look significantly less familiar (and significantly more verbose) to the intended audience, which I think is an important consideration. Also I don’t like any of the transitivity options.

    • CommentRowNumber205.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2018

    Hmm, the thing to be careful about here is: what is the evidence that x ∈ V for an object V of 𝔽? If well-scoped terms include a clause:

    • x is a term with free variables V if x ∈ V

    then it needs to be the case that the proof of x ∈ V isn’t itself counted as part of the data of the term (which it would be, in the naive inductive family reading of that), or else the variable is really both the name and the de Bruijn index (or other proof that x ∈ V).

    What exactly are you worried about here? Are you saying that if the “data” of a “variable-use term” somehow “includes” the de Bruijn index as well as the name, then we aren’t “really” using named variables as opposed to de Bruijn indices?

    I don’t really find this worrying. As soon as we have ordered contexts (which we do as soon as we get to the rules for dependent type theory, unless we go Andromeda-style), every variable name uniquely determines a de Bruijn index anyway. It doesn’t seem to me to matter much whether that de Bruijn index is “part of the data” or simply “automatically computable from the data” – in the former case, a user could omit to write it anyway since it can be automatically inferred, so in practice there doesn’t seem much difference.

    Also, I think whether the de Bruijn index is really “part of the data” depends on what you mean by “part of the data” and how you present set-membership. In a set-theoretic metatheory, of course, a “proof of x ∈ V” isn’t any kind of “data” at all. In a type-theoretic metatheory, if V : Var → Prop and fresh extensions are defined by W(y) = (y = x) ∨ V(y), then the term witnessing V(x) will indeed syntactically include a sequence of inr’s that counts out the de Bruijn index, but it will inhabit a proposition so it’s not clear to me whether that should really be considered “data”. And there are other ways to present it even in type theory, such as V : Var → bool with W(y) = if (y =? x) then true else V(y), for which the term witnessing V(x) (i.e. V(x) = true) will be just refl_true.
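
    To make the contrast concrete, here is a minimal Coq sketch of the two presentations (all names here are hypothetical):

    ```coq
    (* Scope membership, presented two ways; Var and eq_var are assumed. *)
    Parameter Var : Type.
    Parameter eq_var : Var -> Var -> bool.

    (* Prop-valued scopes: the witness that an older variable is in scope
       is a chain of or_intror's, in effect counting out a de Bruijn index. *)
    Definition scopeP := Var -> Prop.
    Definition extP (x : Var) (V : scopeP) : scopeP :=
      fun y => y = x \/ V y.

    (* bool-valued scopes: the witness that x is in scope is a proof of
       W x = true, which is always just eq_refl (the refl_true above). *)
    Definition scopeB := Var -> bool.
    Definition extB (x : Var) (V : scopeB) : scopeB :=
      fun y => if eq_var y x then true else V y.
    ```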

    • CommentRowNumber206.
    • CommentAuthoratmacen
    • CommentTimeNov 13th 2018

    Re #204:

    $$\frac{\Gamma \vdash P \Leftarrow Eq_A(M,M')}{\Gamma \equiv \Gamma' \vdash M \equiv M' \,:\, A \equiv A'}$$

    I think the M′ is used in the wrong scope. You have to rename it. But otherwise yes, that makes sense, and I like it better.

    But needing the renaming seems to indicate that we’re actually relying on the fact that it’s possible to work homogeneously. The heterogeneous style seems to make everything except congruence harder, and there was nothing really wrong with homogeneous congruence. So I completely agree that totally heterogeneous equality is a bad choice. (I know, I keep saying that.)

    • CommentRowNumber207.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2018

    The heterogeneous style seems to make everything except congruence harder, and there was nothing really wrong with homogeneous congruence.

    Right. And the fact that we need a renaming here I think underscores the point: the only thing “wrong” with homogeneous congruence was that we needed a renaming.

    • CommentRowNumber208.
    • CommentAuthoratmacen
    • CommentTimeNov 13th 2018

    Re #203, so I take that as a “yes”.

    Oh, because the operations on terms are not part of the abstraction?

    …But you can define operations like “is equal to a” which, for some particular categories (namely, non-skeletal ones) will fail to respect isomorphism.

    Right. Because the operations on objects are functions outside the category.

    I think I see now that we can use the style of rule I had in mind for synthesis judgments, and that it may or may not result in a relation that respects α, but in any case it should work.

    • CommentRowNumber209.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2018

    Okay, to summarize what I think we’ve decided, and what I’m thinking about and deciding between right now. I still don’t want to use de Bruijn indices only, so the two options I have in mind are working with a general CoV or with well-scoped Barendregt-named syntax. The main difference between the two in practice is that in the former case, referring to a variable other than the innermost-bound one requires an explicit weakening: λx.λy.x[y] and λx.λy.λz.x[y][z], the number of brackets of course being the de Bruijn index so that we are in a sense formally using both names and indices. This difference seems unlikely to impact the proof of initiality much, since I expect rarely will we be working with more than one particular variable at a time: only a few operators actually bind more than one variable in the same subterm (e.g. natrec binds both the predecessor and the result of the recursive call in its inductive step, and similarly the induction principle for positive Σ-types binds both projections at once).
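
    For concreteness, a minimal Coq sketch of the explicit-weakening structure (hypothetical, and eliding variable names in favor of bare scope sizes, so only the bracket-counting is visible):

    ```coq
    (* Terms over n variables, where the variable rule only reaches the
       innermost binder and weakening M[y] is an explicit constructor. *)
    Inductive tm : nat -> Type :=
    | vz  : forall n, tm (S n)           (* the innermost-bound variable *)
    | wk  : forall n, tm n -> tm (S n)   (* explicit weakening, i.e. M[y] *)
    | lam : forall n, tm (S n) -> tm n.  (* a binder (annotations elided) *)

    (* lambda x. lambda y. x[y]: the single wk plays the role of [y]. *)
    Example ex : tm 0 := lam 0 (lam 1 (wk 1 (vz 0))).
    ```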

    With either of these choices, we have the same definition of heterogeneous α-equivalence, weakening/renaming, and necessarily-capture-avoiding substitution. Some of the typing and equality judgments involve renaming in the premises, specifically because the “inputs” of their conclusions must be fully general with respect to the variables they bind, but in the premises some of these variables must be identified. It’s possible this renaming might need to involve a universal quantification to make the inductions work (although my inclination would be to try it without that first and see if we run into trouble). This should make these judgments α-invariant in these arguments by a straightforward induction.

    The type synthesis judgment could make a specific choice about what variables to bind in the “output” argument of its conclusion, in which case it would not be α-invariant in that argument. This may not be a problem, although it would mean that the judgment would mean something different if written down in HoTT with a univalent CoV where there is an implicit transport operation along α-equivalence. Or type synthesis could “nondeterministically” choose an arbitrary variable (/ fresh extension) to bind in its output, making it α-invariant in that argument as well by the same kind of induction. (This “nondeterminism” doesn’t seem too problematic to me; it reminds me of Conor’s rule “post” which says that if a term synthesizes a type, it also synthesizes any reduct of that type. In both cases, a synthesized type is not literally syntactically unique, but is unique up to some stricter notion of sameness than judgmental equality.)

    I think I prefer a homogeneous equality judgment, which will have to do some renaming in its premises as well. It treats the terms or types being compared as inputs, and is α-invariant in them for the same reasons. The type argument to term equality is kind of “un-moded” and can have its bound variables chosen arbitrarily in the conclusion, because term equality has an explicit conversion rule enabling us to replace the type by any judgmentally equal type, hence a fortiori any α-equivalent one. But we could also allow it to choose its bound variables non-deterministically, for consistency.

    The main thing I think is missing is some syntactic characterization of rules that are automatically α-invariant, so that we wouldn’t have to prove by hand that all of the judgments are such. Anything else?

    • CommentRowNumber210.
    • CommentAuthoratmacen
    • CommentTimeNov 13th 2018

    The main thing I think is missing is some syntactic characterization of rules that are automatically α-invariant, so that we wouldn’t have to prove by hand that all of the judgments are such. Anything else?

    I think that characterization of rules should be a type whose elements are descriptions of “hygienic” rules. This same notion of rule should not only let us prove α-invariance generically, but also the admissible structural rules. I think this notion of rule is still pretty permissive, and that α-invariance and the structural rules both have to do with the responsible use of variables.

    It will probably be a similar thing to Conor’s patterns/terms for inputs in the conclusion/premises, and terms/patterns for the outputs. I think that’s mainly to get α-invariance, which sounds harder than structural rules would’ve been with de Bruijn indexes.

    • CommentRowNumber211.
    • CommentAuthorMike Shulman
    • CommentTimeNov 13th 2018

    I’ve been thinking about it in a semantic way. The mutually defined judgments are an inductive family indexed over something like (Ctx ×_V Ty) + (Ctx ×_V Ty ×_V Tm) + …, with one summand for each judgment. Each of these sets Ctx, Ty, Tm is actually a groupoid, and the inductive type can be defined as an indexed W-type in Cat, automatically obtaining a notion of (iso)morphism between derivations indexed over (iso)morphisms between their conclusions. (The forgetful functor from Cat to Set should preserve this W-type, since it preserves both limits and colimits, having both adjoints, and the Π-type in the indexed W-type is over a map with discrete fibers; so this really does consist of isomorphisms between the derivations we already had.) Now if the dependent polynomial that defines the indexed W-type preserves isofibrations (recall that a functor F is an isofibration if every isomorphism F(e) ≅ b in the codomain lifts to an isomorphism e ≅ e′ in the domain), then its initial algebra will also be an isofibration, which means that derivations can be transported across α-equivalences. This preservation of isofibrations should follow roughly from knowing that the conclusions of every rule are isofibrations, i.e. can be transported across α-equivalences. So for instance a rule of the form Γ ⊢ λx.M ⇐ Π(x:A).B is not an isofibration, because an arbitrary α-equivalence could break the identification between the two copies of the variable x, whereas a modified version Γ ⊢ λx.M ⇐ Π(y:A).B would be.

    • CommentRowNumber212.
    • CommentAuthorDavidRoberts
    • CommentTimeNov 14th 2018

    Nice!

    • CommentRowNumber213.
    • CommentAuthoratmacen
    • CommentTimeNov 14th 2018

    Re #211, OK, interpreting all that liberally, I think it’s pretty much compatible with what I had in mind. What the pattern/term business would be doing is additionally providing a syntactic check for whether <something> is an isofibration.

    I have some clarification requests:

    The mutually defined judgments are an inductive family indexed over something like (Ctx ×_V Ty) + (Ctx ×_V Ty ×_V Tm) + …, with one summand for each judgment.

    You mean one summand for each judgment form, right? So for example, that first one is for Γ ⊢ A type?

    Those fibered products are saying that the different subjects of the judgment form use the same scoping context?

    So the idea of those types being groupoids is that they’re all fibered over the CoV, their objects are various syntactic structures (for example the objects of Ctx are raw (well-scoped) contexts), and their morphisms are proofs of α equivalence over some bijection of the variables? And the functor to the CoV just keeps the bijection? And the rest of the proof is unique?

    Now if the dependent polynomial that defines the indexed W-type preserves isofibrations, then its initial algebra will also be an isofibration, which means that derivations can be transported across α-equivalences.

    What is it exactly that you want to know is an isofibration? A functor from a groupoid to the CoV? Or from a groupoid of derivations to a groupoid of judgments? (For example ValidTy → (Ctx ×_V Ty).) The latter would be involving α equivalence for derivations??

    This preservation of isofibrations should follow roughly from knowing that the conclusions of every rule are isofibrations, i.e. can be transported across α-equivalences. So for instance a rule of the form Γ ⊢ λx.M ⇐ Π(x:A).B is not an isofibration, because an arbitrary α-equivalence could break the identification between the two copies of the variable x, whereas a modified version Γ ⊢ λx.M ⇐ Π(y:A).B would be.

    What is it actually saying to say that Γ ⊢ λx.M ⇐ Π(y:A).B is an isofibration? The functor is from a subobject of Ctx ×_V Tm ×_V Ty and extracts the values of the metavariables Γ, x, y, M, A, and B? (Sounds like a job for pattern matching.)

    How is the output mode handled? Do you do something unclever about the type in a term equality?

    • CommentRowNumber214.
    • CommentAuthorMike Shulman
    • CommentTimeNov 14th 2018

    The answer to nearly all those questions is “yes”. The thing I want to be an isofibration is the forgetful functor from a groupoid of derivations to a groupoid of judgments, and yes you could call the morphisms in the former “α-equivalences of derivations”.

    What is it actually saying to say that Γ ⊢ λx.M ⇐ Π(y:A).B is an isofibration? The functor is from a subobject of Ctx ×_V Tm ×_V Ty and extracts the values of the metavariables Γ, x, y, M, A, and B?

    No, the opposite: the functor is from the groupoid whose objects are dependent sextuples (Γ, x, y, M, A, B) and it is to Ctx ×_V Tm ×_V Ty, and it sends such a sextuple to the triple (Γ, λx.M, Π(y:A).B). This functor should be an inclusion, so saying that it is an isofibration is saying that the subgroupoid of judgments of this form is a replete subcategory.

    I’m not sure there is any sensible way to treat output modes specially in general. If we decide that output-moded arguments should also respect α-equivalence (by nondeterministically choosing bound variables), then I think no special treatment should be required.

    • CommentRowNumber215.
    • CommentAuthorMike Shulman
    • CommentTimeNov 14th 2018

    I’m not sure exactly what you had in mind for patterns, but one thought I had for how to formalize the “allowable conclusions” is to introduce a new judgment form for each operator that essentially pattern-matches against that operator, and require that all rules be rewritable with a totally general conclusion by adding such pattern-matching judgments to the premises. So for instance there would be 5- and 4-place judgments “M islambda x A B M” and “C ispi x A B” with rules

    $$\frac{}{(\lambda (x:A.B).M) \, islambda \, y \, A \, B[x\leftrightarrow y] \, M[x\leftrightarrow y]} \qquad \frac{}{(\Pi(x:A).B) \, ispi \, y \, A \, B[x\leftrightarrow y]}$$

    and the typing judgment for λ would be allowable because it can be rewritten as

    $$\frac{N \, islambda \, x \, A \, B \, M \qquad C \, ispi \, x \, A \, B \qquad \Gamma, x:A \vdash M \Leftarrow B}{\Gamma \vdash N \Rightarrow C}$$

    So all the renaming is folded into the pattern-matching judgments, which we can then prove once and for all to be α-invariant (this should be doable just once at the abstract level of a set of operators with signatures).
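
    In Coq notation, a minimal sketch of how this might look (all names hypothetical, with well-scopedness elided, swap_ty/swap_tm standing for [x↔y], and tyeq for judgmental type equality):

    ```coq
    (* Assumed raw syntax. *)
    Parameter var ty tm ctx : Type.
    Parameter lam : var -> ty -> ty -> tm -> tm.  (* lambda (x:A.B). M *)
    Parameter pi : var -> ty -> ty -> ty.         (* Pi (x:A). B *)
    Parameter swap_ty : var -> var -> ty -> ty.   (* renaming [x <-> y] *)
    Parameter swap_tm : var -> var -> tm -> tm.
    Parameter snoc : ctx -> var -> ty -> ctx.     (* Gamma, x:A *)
    Parameter tyeq : ctx -> ty -> ty -> Prop.     (* judgmental type equality *)

    (* The pattern-matching judgments: their single rules do the renaming. *)
    Inductive islambda : tm -> var -> ty -> ty -> tm -> Prop :=
    | islambda_intro : forall (x y : var) (A B : ty) (M : tm),
        islambda (lam x A B M) y A (swap_ty x y B) (swap_tm x y M).

    Inductive ispi : ty -> var -> ty -> ty -> Prop :=
    | ispi_intro : forall (x y : var) (A B : ty),
        ispi (pi x A B) y A (swap_ty x y B).

    (* The lambda rule, rewritten with a totally general conclusion. *)
    Inductive synth : ctx -> tm -> ty -> Prop :=
    | synth_lam : forall (Gamma : ctx) (N : tm) (C : ty)
        (x : var) (A B : ty) (M : tm),
        islambda N x A B M ->
        ispi C x A B ->
        check (snoc Gamma x A) M B ->
        synth Gamma N C
    with check : ctx -> tm -> ty -> Prop :=
    | check_switch : forall (Gamma : ctx) (M : tm) (A B : ty),
        synth Gamma M A -> tyeq Gamma A B -> check Gamma M B.
    ```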

    • CommentRowNumber216.
    • CommentAuthoratmacen
    • CommentTimeNov 14th 2018

    Re #214, I need to think some more about that isofibration I got backwards.

    I suspect you’re right that your approach based on groupoids of derivations will not handle output positions, or “don’t care” positions specially. More accurately, it doesn’t seem to handle thinking of judgment forms as relations where only some positions are guaranteed to respect α.

    I’m not sure whether that’s good or bad though. I think my tricks are interesting, and I was hoping to find out if they work. But certainly they aren’t needed. Especially if we think about things the way you seem to like.

    Re #215, this is not the kind of pattern matching I had in mind. Your pattern matching incorporates renaming. So it’s some kind of “nominal pattern matching”. It does seem like it handles α invariance once and for all, if you use it, and have rule conclusions use only metavariables.

    Some thoughts:

    In #211, you used a hypothetical lambda rule that’s not from our system. I figured that was just because it made a better example. Is this lambda rule (in #215) actually the rule you’re proposing? Algorithmically speaking, it’s odd to use pattern matching for the output position (C), although I imagine this fits your semantic picture. Also, if I understand your pattern matching correctly, x is not actually the fresh extension used in N or C, it’s “nondeterministically chosen”.

    Neither of these seem like problems, they’re just at odds with the discussion we’ve been having, thinking about rules in algorithmic terms. If not algorithmically-motivated, why don’t you just explicitly compose with α equivalence?

    $$\frac{N \sim_{id} (\lambda(x:A.B).M) \qquad C \sim_{id} (\Pi(x:A).B) \qquad \Gamma, x:A \vdash M \Leftarrow B}{\Gamma \vdash N \Rightarrow C}$$

    This being the “standard trick” to get Martin-Löf inductive families from Dybjer inductive families using some appropriate equality.
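
    For readers who haven’t seen that trick, here it is in miniature with hypothetical names: a family whose conclusions have specific indices becomes one whose conclusions are fully general, at the cost of equality premises. Replacing eq by α equivalence gives the rule above.

    ```coq
    (* Indices in the conclusions are specific terms (O and S (S n)). *)
    Inductive even : nat -> Prop :=
    | even_O  : even O
    | even_SS : forall n, even n -> even (S (S n)).

    (* The same relation with fully general conclusions: the specific
       index is pushed into an equality premise. *)
    Inductive even' : nat -> Prop :=
    | even'_O  : forall m, m = O -> even' m
    | even'_SS : forall m n, even' n -> m = S (S n) -> even' m.
    ```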

    The pattern matching I had in mind would not do any renaming. It would just be ordinary first-order, linear matching, as found in your favorite functional programming language. My motivation was that the hygienic rule descriptions should be able to “compile down” to the kinds of clever, algorithmically-motivated rules we discussed. I think we could still prove the appropriate α invariance properties generically, including the twist for the output mode. But this would probably be more complicated.

    Something to keep in mind is how easy it is to work with this generic presentation of rules for the rest of the proofs by induction on typing derivations. Since my plan was for the generic rules to “compile down” to something more like what you’d write by hand, I was hoping it wouldn’t get in the way of later proofs. But with a more “interpretive” treatment of the generic definition, your more high-level approach might not be a problem. You’d have some fancier induction principle that hides the mess, somehow. Maybe look at nominal Isabelle for ideas.

    One thing I like about your approach is that it seems closer to an explanation of why you can get away with the informal style of name-based rules, which doesn’t fuss so much over metavariables for binding sites.

    • CommentRowNumber217.
    • CommentAuthorMike Shulman
    • CommentTimeNov 14th 2018

    In #211, you used a hypothetical lambda rule that’s not from our system. I figured that was just because it made a better example.

    Yes, I was following on the example from #172.

    I agree there is a mismatch between viewing rules algorithmically and semantically; I don’t quite know how to make sense of the algorithmic ones semantically. Your rule that explicitly composes with α-equivalence does indeed seem to be a simpler version of what I had.

    • CommentRowNumber218.
    • CommentAuthorMike Shulman
    • CommentTimeNov 15th 2018

    A more syntactic way of thinking about this would be with a sort of “well-scoped extrinsic logical framework”. Consider a dependent type theory containing as axioms

    • a type vctx (the “contexts of variables”, e.g. the objects of 𝔽),
    • a type family V : vctx ⊢ fresh(V) type, the type of fresh extensions,
    • an “extend” or “codomain” operation V : vctx, x : fresh(V) ⊢ ext(V, x) : vctx, and
    • a “type of variables” V : vctx ⊢ vars(V) type.

    (Although it seems unintuitive, I don’t think we even need any connection between ext and vars; but I could be wrong.) Then the raw terms and types can be defined in this DTT as vctx-indexed families V : vctx ⊢ tm(V) type and V : vctx ⊢ ty(V) type inductively generated by

    • V : vctx, x : vars(V) ⊢ var(x) : tm(V)

    and another constructor for each operator, e.g.

    • V : vctx, x : fresh(V), A : ty(V), B : ty(ext(V, x)) ⊢ Π(x, A, B) : ty(V)
    • V : vctx, x : fresh(V), A : ty(V), B : ty(ext(V, x)), M : tm(ext(V, x)) ⊢ λ(x, A, B, M) : tm(V).

    In a logical framework using HOAS, the type families can’t be actually inductive inside the LF, but here they can be. This DTT has de Bruijn models in Set, and named models in Set, but also named models in Gpd in which α-equivalence is tracked automatically.
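
    For what it’s worth, the raw-syntax part of this signature transcribes directly into Coq (a sketch of one set-level reading, not the LF itself):

    ```coq
    (* The base signature, kept abstract. *)
    Parameter vctx : Type.
    Parameter fresh : vctx -> Type.
    Parameter ext : forall V : vctx, fresh V -> vctx.
    Parameter vars : vctx -> Type.

    (* Raw types and terms as mutually inductive vctx-indexed families. *)
    Inductive ty : vctx -> Type :=
    | Pi : forall (V : vctx) (x : fresh V),
        ty V -> ty (ext V x) -> ty V
    with tm : vctx -> Type :=
    | var : forall V : vctx, vars V -> tm V
    | lam : forall (V : vctx) (x : fresh V) (A : ty V) (B : ty (ext V x)),
        tm (ext V x) -> tm V.
    ```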

    Then the judgments and their rules can be defined as further inductive families in this DTT, e.g. V : vctx, A : ty(V) ⊢ validty(V, A) type and V : vctx, A : ty(V), M : tm(V) ⊢ ofty(V, A, M) type with constructors like

    • V : vctx, x : fresh(V), A : ty(V), B : ty(ext(V, x)), M : tm(ext(V, x)) ⊢ lambdaofpi(V, x, A, B, M) : ofty(V, Π(x, A, B), λ(x, A, B, M))

    In this version, the α-invariance is not automatic (i.e. the dependent polynomial functor in Gpd doesn’t preserve isofibrations) because the indices in the output type of the constructors are not general; I think this is what you’re calling a “Dybjer inductive family”? And we could convert it to a “Martin-Löf inductive family” by including identity-types at the LF level (these being interpreted as α-equivalence in the Gpd models).
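
    Continuing the same sketch, the judgment families transcribe as below; the point about non-general indices is visible in the conclusion of lambdaofpi, whose indices are the specific terms Pi V x A B and lam V x A B M rather than fresh variables:

    ```coq
    (* Constructors elided except the one from this comment; a real rule
       would of course also carry typing premises. *)
    Inductive validty : forall V : vctx, ty V -> Type := .

    Inductive ofty : forall V : vctx, ty V -> tm V -> Type :=
    | lambdaofpi : forall (V : vctx) (x : fresh V)
        (A : ty V) (B : ty (ext V x)) (M : tm (ext V x)),
        ofty V (Pi V x A B) (lam V x A B M).
    ```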

    One reason I mention this is that it gives a way of distinguishing the “functional” judgments with output modes, as being actual (partial) functions defined at the LF level, using the recursion principle of the inductive type families tm and ty, rather than mere inductively defined relations. So for instance type synthesis could be a function

    • V : vctx, M : tm(V) ⊢ tyof(M) : Par(ty(V))

    living in the monad of partial elements. However, I suppose such a function would have to be defined inductive-recursively along with the inductive checking and equality judgments (since the latter, in particular, is not algorithmic in general), which requires rather a lot of this LF.

    Moreover, actually interpreting such a dependently typed LF into Set or Gpd is, of course, an initiality-theorem problem, so invoking it in our description of raw syntax is a rather chicken-and-egg thing to do. So this is probably not actually helpful in any formal way, but I wanted to mention it because it helped me a bit conceptually. Maybe someone else can make something more useful of it.

    • CommentRowNumber219.
    • CommentAuthorMike Shulman
    • CommentTimeNov 15th 2018

    Hmm, actually there’s a lot missing there… I didn’t say anything about how such a function would be defined on variables, and so clearly we do need some relation between fresh and ext. Getting a bit overenthusiastic…

    • CommentRowNumber220.
    • CommentAuthorKarol Szumiło
    • CommentTimeNov 15th 2018

    And, perhaps I should also say, the terms/ABTs form the free fat closed cartesian multicategory having the sorts and operators as generating objects and morphisms, respectively.

    Does this actually work as written? If I read your definitions correctly, the objects of the multicategory you construct are the sorts of the signature, but the free closed cartesian multicategory should have new internal hom objects. In particular, any operator that actually binds something should have hom objects in its domain. So it seems to me that we obtain a cartesian multicategory that is not closed, but is instead equipped with operations that implement “composition with binding operators” but they are not represented by composition with actual morphisms.

    Does that make sense? If so, what is the universal property that we are really after? It seems that the free closed cartesian multicategory would have a whole lot of new objects resulting from iterating internal homs that are mostly irrelevant to the syntax that we are describing. Maybe we are looking at the free cartesian multicategory equipped with some weaker structure.

    • CommentRowNumber221.
    • CommentAuthorMike Shulman
    • CommentTimeNov 15th 2018

    You’re right that I was a little too glib. I think what I meant to say is something like that the terms are the full sub-multicategory of that free fat closed cartesian multicategory determined by the generating objects (the sorts). The proof of this is, I think, roughly the adequacy theorem for a logical framework encoding of syntax.

    • CommentRowNumber222.
    • CommentAuthorMike Shulman
    • CommentTimeNov 15th 2018

    Okay, brace yourselves – I’m considering changing my mind once again. (No hobgoblins of little minds here…)

    After exploring lots of options, it seems that no matter what tack we take, directly proving invariance under α-equivalence is going to be either technically intricate, tediously annoying, or both. I’m gradually coming to feel that this difficulty outweighs the advantages I saw in using named or CoV syntax. Moreover, while CoV and related ideas are worth pursuing somewhere, the goal of this project should not be to revolutionize the world of syntax, but to take a geodesic route to an initiality theorem that is reasonably modular, though not striving for excessive generality.

    As anyone who’s worked with me before knows, I have a tendency to get carried away with exciting new ideas. This is maybe not a very good attribute in the coordinator of a project like this, whose goal shouldn’t be developing really new technology but only applying the existing technology, with a few tweaks to make it more modern, comprehensible, and modular. And as I’ve said before, those goals suggest choosing presentations of syntax and semantics that are as close as possible to each other, leaving the translations to other presentations to happen purely in one realm or the other.

    All of that now inclines me to recant my opposition to de Bruijn indices. As Dan said in #163,

    if the idea is to make the two sides of the theorem as close as possible, then it seems like using de Bruijn for this part, and relying on a prior translation from a named representation to de Bruijn, would be acceptable.

    My only response to this was that I didn’t want to actually use de Bruijn syntax when writing out the proof (and I didn’t want, and still don’t, to write down named syntax and pretend that it is de Bruijn syntax — if the syntax is actually de Bruijn, we should be honest and write it that way). But after thinking about whether it would be practical to use CoV for the proof, I observed (#209)

    This difference seems unlikely to impact the proof of initiality much, since I expect rarely will we be working with more than one particular variable at a time: only a few operators actually bind more than one variable in the same subterm

    Insofar as this is true, it seems equally to apply to de Bruijn indices. In fact, at the moment it’s not clear to me that in the course of the initiality proof we will ever have to actually write down a particular term containing any variable usage as a subterm: in the clause for the “variable” rule, the variable usage is the term itself, while in all other clauses the subterms are metavariables. So the indices may end up being barely visible at all.

    I know that Matt and Dan have already said they would like to use de Bruijn indices, so I don’t have to convince them that this is a good idea. But writing out the reasoning was helpful to get it clear in my own mind, and may help any category theorist lurkers to feel less betrayed if we do go this route — and give them an opportunity to make further arguments in favor of named/CoV syntax that I’ve missed. I’m still turning things over in my own mind, and probably will be for another week or two; but I wanted to share the direction my thoughts are now taking.

    • CommentRowNumber223.
    • CommentAuthoratmacen
    • CommentTimeNov 15th 2018

    (Warning: Haven’t read #222 yet, but posting anyway.)

    Re #217:

    I agree there is a mismatch between viewing rules algorithmically and semantically; I don’t quite know how to make sense of the algorithmic ones semantically.

    (I’m not really sure why you call this semantics. It’s just reasoning about α equivalence with fancy category theory, right?)

    I think you figured out the key idea later on, in #218:

    One reason I mention this is that it gives a way of distinguishing the “functional” judgments with output modes, as being actual (partial) functions defined at the LF level, using the recursion principle of the inductive type families tm and ty, rather than mere inductively defined relations.

    Never mind LF or recursion; as long as your “semantics” is thinking of the judgment form as a partial function, you will avoid the (conjecturally) unnecessary α respect for output positions. It doesn’t matter whether you then “implement” this with recursion or inductive families.

    This is what I was trying to say back when I came up with it in #182:

    For type synthesizing rules, there seems to be an interesting possibility: Since the type is an output, the rule gets to pick the name of a binder. It doesn’t need to respect α equivalence in the usual sense. The idea is that we convert a function that respects α into a relation, rather than simulating converting a function on equivalence classes into a relation on equivalence classes by some general machinery like nominal logic or quotients. I think this will be easier, if it works.

    Your approach from #211 seems more like “general machinery”.

    Re #218:

    I can’t figure out what you’re getting at with all that code. But from #219, it sounds like it doesn’t work yet.

    By the way, if we include explicit α equivalence all over the place, like in #216, I think we could prove a sort of quotient induction principle: Doing induction, you wouldn’t worry about each particular equivalence on each constructor, you’d just prove overall that your motive respects α. This is probably the right induction principle for that approach, not something from nominal logic. Got confused. This is not the “usual” sort of quotient induction from QITs, because the derivations themselves aren’t quotiented. Actually, it doesn’t matter whether the derivations are quotiented, since we only need a non-dependent elimination form.

    I think this is what you’re calling a “Dybjer inductive family”? And we could convert it to a “Martin-Löf inductive family” by including identity-types at the LF level (these being interpreted as α-equivalence in the Gpd models).

    Dybjer families are inductive families with only “Protestant” indexes. Martin-Löf families have “Catholic” indexes. That religious terminology is apparently some weird joke. I don’t get it though; ask Dan. Protestant indexes have to be completely general in the conclusion, but can vary in premises. I think Protestant indexes are also called pseudoparameters. In Coq, the syntax is the same as for parameters.

    Dybjer families alone cannot define identity types. LF signatures correspond to inductive-inductive families with Catholic indexes though, so I’m not sure what you mean about including identity types in LF. You can just declare identity.

    Pseudoparameters are ostensibly perfect for input positions, if the rules really follow the mode discipline. For the pattern matching, you use actual large eliminations. But the large elimination somehow ends up being a pain in the butt, in Coq. (For example, you have to prove the induction principle yourself. And simpl gets all eager, unless you tell it not to. And the default implicit arguments it chooses are too conservative.)
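
    (For the record, the standard Coq example of a pseudoparameter is the accessibility predicate: x is written in parameter position and is completely general in each conclusion, but varies in the premise.)

    ```coq
    Section Pseudoparameter.
      Variable A : Type.
      Variable R : A -> A -> Prop.

      (* x looks like a parameter, but the premise instantiates it at y;
         this mirrors the standard library's Acc. *)
      Inductive Acc' (x : A) : Prop :=
      | Acc'_intro : (forall y : A, R y x -> Acc' y) -> Acc' x.
    End Pseudoparameter.
    ```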

    • CommentRowNumber224.
    • CommentAuthoratmacen
    • CommentTimeNov 20th 2018

    By the way, if we include explicit α equivalence all over the place, like in #216, I think we could prove a sort of quotient induction principle: Doing induction, you wouldn’t worry about each particular equivalence on each constructor, you’d just prove overall that your motive respects α. This is probably the right induction principle for that approach…

    Come to think of it, this proposal is basically just spelling out what it means to work with relations with quotiented indexes. Since informal math seems to have quotients, it seems better to work with syntax quotiented by α\alpha equivalence than to imitate it in the completely unimaginative way of this proposal.

    • CommentRowNumber225.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 6th 2018
    • (edited Dec 6th 2018)

    Should judgement forms be part of the syntax?

    So ⇒, ⇐, ≡_Ty, ≡_Tm, Type would all be operators. Their sort could be “judgement”? The sorts of their arguments are easy to work out. That way contexts are simply the bound variables associated with the operators in the binding tree?

    Or if that doesn’t work, have a sort of “contexts”, then an operator : which would take a variable and a type and form a “context”. An operator , for concatenating contexts together. Finally, the turnstile could take in our previous judgement and a context and make the (not so basic) kind of judgement?

    Just some ideas.

    • CommentRowNumber226.
    • CommentAuthoratmacen
    • CommentTimeDec 6th 2018

    I think that would be very confusing, given the rest of the setup, where we’re not using judgments-as-types to handle syntax and typing.

    (I think it would be very smart to use a logical framework, and judgments-as-types, to handle initiality. But Mike seems against it for this project. I’m not sure Mike has considered all the options though. In addition to fully-dependent judgments-as-types based on LF, there’s also judgments-as-propositions based on various stripped down predicate logics. For extrinsic judgments-as-types, you only use the judgments as propositions because nothing depends on derivations; only expressions. Isabelle and λProlog work like that, using HOL. I think Delphin uses a variant of FOL.)

    • CommentRowNumber227.
    • CommentAuthorMike Shulman
    • CommentTimeDec 6th 2018

    I’m very much in favor of logical frameworks in general; I think that may be the right way to go about proving a general initiality theorem. I’ve thought seriously about doing this with both dependent LF-like approaches (which would presumably depend on an initiality theorem proven in some other way for the dependent LF theory itself) and FOL versions (essentially abstracting the partial-interpretation approach, probably starting by interpreting the raw syntax using an initiality theorem for simply typed HOAS as I mentioned here and then building some kind of logic on top of that). Both of these are on my list of projects to do someday if no one else does them first (and if anyone is interested in working on them together, I’d be open to that too).

    The main reason I don’t want to use a logical framework for this project is that this project is not supposed to be doing something new. The goal here is to clear up confusion about the existing Streicher-style methods for initiality proofs (and streamline them a little). Along the way we are generating and bringing up lots of good ideas for how to improve things in the future, and that’s great, but we need to rein in our enthusiasm and stick to the point for now. (I now include my own temporary obsession with “categories of variables”, and resistance to de Bruijn indices, in that comment. It only lasted for so long because since I’m the organizer, no one else had the authority to tell me to shut up about it; instead you all had to wait for me to give up on it on my own.)

    • CommentRowNumber228.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 6th 2018
    • (edited Dec 6th 2018)

    From what I am reading, logical frameworks require a dependently typed calculus as a meta language, which means that for initiality of dependent type theory it would be silly to use one as a meta language anyway.

    I would agree with Mike for this project. But in general, I see a disparity between type theorists on what exactly a judgement is. Is it in the theory or metatheory. I don’t think there is a correct answer, or at least I haven’t found a convincing argument for either other than ease of use.

    • CommentRowNumber229.
    • CommentAuthoratmacen
    • CommentTimeDec 6th 2018

    From what I am reading, logical frameworks require a dependently typed calculus as a meta language…

    I tried to explain in my big parenthetical remark in #226 that this is not the case for extrinsic-style presentations of deductive systems.

    But in general, I see a disparity between type theorists on what exactly a judgement is. Is it in the theory or metatheory. I don’t think there is a correct answer, or at least I haven’t found a convincing argument for either other than ease of use.

    It seems to me that there’s agreement that the primary role of judgments is always to be something external relative to types. To a large extent, type constructors end up reflecting external structure internally, but not everything is reflected. For example, unless you have equality reflection, there’s a difference between the equality judgment form and an equality-like type constructor. There’s almost always a difference between the typing judgment form and any type constructor. If not, you’re in a very special case of type theory known as material set theory. ;) Universes reflect the “is a type” judgment form into a special case of the typing judgment, but unless you like inconsistency, the reflection is incomplete. Function types give you internal Homs. Blah blah blah…

    Type theorists evidently disagree about what additional properties judgments should have, beyond the little that’s needed for them to serve their primary role of providing a setting to specify type constructors.

    • CommentRowNumber230.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 20th 2018

    What exactly is meant by constant here? What are rules for constants?

    • CommentRowNumber231.
    • CommentAuthoratmacen
    • CommentTimeDec 21st 2018

    The constants and their rules are a parameter of the whole theorem. The idea is that you assume some dependent functions and equations about them, except they’re operators, not functions, and the equations are judgmental equality. I think.

    See also: Initiality Project - Overview#axioms

    • CommentRowNumber232.
    • CommentAuthorAli Caglayan
    • CommentTimeMar 30th 2019
    • (edited Mar 30th 2019)

    Do we really need telescopes? In that Harper’s treatment of judgements should make them behave correctly, no?

    • CommentRowNumber233.
    • CommentAuthoratmacen
    • CommentTimeMar 30th 2019
    • (edited Mar 30th 2019)

    Broadly speaking, to formalize type theory you don’t need telescopes. If you decide to carry out the development using intrinsically-well-scoped terms though, it’s natural to separate possibly-open contexts into closed contexts, and telescopes, which are basically contexts that are well-scoped in some context.

    Edit: More intuitively, telescopes are like contexts that are well-typed (not just well-scoped) in some context. If you only make the scoping intrinsic though, then the difference in scoping is technically the only difference between contexts and telescopes. If scoping is not handled intrinsically either, then contexts and telescopes are technically the same. (E.g. lists of (variable, type expression) pairs.)
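
    A minimal Coq sketch of the intrinsically well-scoped version (de Bruijn-style, with a hypothetical family ty n of type expressions over n variables):

    ```coq
    Parameter ty : nat -> Type.

    (* Closed contexts: the k-th entry may mention the k earlier variables. *)
    Inductive ctx : nat -> Type :=
    | cnil  : ctx 0
    | ccons : forall n, ctx n -> ty n -> ctx (S n).

    (* Telescopes: context fragments well-scoped over n ambient variables. *)
    Inductive tele (n : nat) : nat -> Type :=
    | tnil  : tele n 0
    | tcons : forall k, tele n k -> ty (n + k) -> tele n (S k).
    ```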