    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeOct 26th 2018

    A skeleton

    v1, current

    • CommentRowNumber2.
    • CommentAuthoratmacen
    • CommentTimeOct 27th 2018

    Reorganized and revised to improve the flow (as I perceive it) of the Bidirectionality section.

    diff, v4, current

    • CommentRowNumber3.
    • CommentAuthoratmacen
    • CommentTimeOct 27th 2018
    I liked this approach to bidirectionality when I first saw it on the overview page. But now I'm thinking it seems pretty ad-hoc to have bidirectional typing rules, while all the corresponding congruence rules are unidirectional/non-directional. Is there some reason why it's a good idea for one and not the other? Other than that it's not clear how to direct the other equality rules? And what about PER-style rules? Wouldn't it be smarter if we could avoid the typing rules, because the congruence rules already guarantee totality/whatever of the partial interpretation functions on well-typed terms? And is it really so hard to appeal to uniqueness of interpretations throughout the proof? Full of uncertainty, here.
    • CommentRowNumber4.
    • CommentAuthoratmacen
    • CommentTimeOct 27th 2018
    Also, the includes aren't working right. :(
    • CommentRowNumber5.
    • CommentAuthorMike Shulman
    • CommentTimeOct 27th 2018

    I thought I explained why it was a good idea for one and not the other. I think it’s a good idea for the typing rules because the information flow matches what happens in the semantics. But it’s not possible to orient all the equality rules in an arbitrary type theory, so it seems the most natural to me to have just one equality judgment. (The difference semantically is that we’re looking (for now) at semantics in 1-categorical structures where equality is a mere property.) I’m unclear what kind of system could have some equality rules directed but not others, do you have something specific in mind?
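
    (To make the “information flow” point concrete, here is a hedged sketch, not necessarily the exact rules on the page: for the fully annotated application terms used in this project, a synthesis rule might read, omitting type-validity premises,

    $$\frac{\Gamma \vdash M \Leftarrow \Pi(x{:}A).B \qquad \Gamma \vdash N \Leftarrow A}{\Gamma \vdash App^{(x:A)B}(M,N) \Rightarrow B[N/x]}$$

    where $\Leftarrow$ marks the type as an input (“checking”) and $\Rightarrow$ as an output (“synthesis”), mirroring a partial interpretation function that consumes a semantic type and one that produces it.)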

    Discussion about the includes is here.

    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeOct 28th 2018

    Make context validity an “input”.

    diff, v5, current

    • CommentRowNumber7.
    • CommentAuthorMike Shulman
    • CommentTimeOct 28th 2018

    I think the variables should be distinct already in a raw context.

    diff, v5, current

    • CommentRowNumber8.
    • CommentAuthoratmacen
    • CommentTimeOct 28th 2018

    I think it’s a good idea for the typing rules because the information flow matches what happens in the semantics.

    I think I misunderstood you. You mean in the CwF itself? Not just the interpretation? I don’t understand how the CwF itself has information flow.

    But it’s not possible to orient all the equality rules in an arbitrary type theory, so it seems the most natural to me to have just one equality judgment. (The difference semantically is that we’re looking (for now) at semantics in 1-categorical structures where equality is a mere property.)

    I don’t understand this. How do you know? I didn’t think this had much to do with the nature of the model itself.

    Come to think of it, I don’t see why we can’t just give all equality rules, besides conversion, checking premises and a synthesizing conclusion, like with typing. I haven’t thought it through, but since our current use of bidirectionality doesn’t really have anything to do with algorithms, I don’t see any warning signs.

    For this proof, I assumed the question was what information flow we should use, to make things easiest. So about PER-style rules, I’m thinking that getting rid of the typing judgments altogether makes things easier, whether we can figure out bidirectional equality judgments or not.

    Bas Spitters mentioned a paper about semantics of extensional type theory using PER-style rules. (This same paper was also mentioned on the HoTT list at some point.) That’s what I’m talking about.

    I’m unclear what kind of system could have some equality rules directed but not others, do you have something specific in mind?

    I might know a trick. I’m not sure whether it counts. But I didn’t mean to recommend directing only some of the rules. I meant that because I didn’t know how to direct all the rules, maybe bidirectionality wasn’t such a hot idea to begin with. But now you should think about the way of directing all equality rules that I proposed above.

    • CommentRowNumber9.
    • CommentAuthorMike Shulman
    • CommentTimeOct 28th 2018

    I mean the interpretation, not the CwF.

    I don’t really understand your new proposal. Are you suggesting that we have two equality judgments $\Gamma \vdash S \equiv T \Leftarrow A$ and $\Gamma \vdash S \equiv T \Rightarrow A$, and that the equality rules look like for instance

    $$\frac{\Gamma \vdash R \equiv S \Leftarrow A \qquad \Gamma \vdash S \equiv T \Leftarrow A}{\Gamma \vdash R \equiv T \Rightarrow A}$$

    ? That doesn’t seem to work as written, since if $A$ is an output on the bottom then where do you get it from in order to send it as an input to the premises? I suppose you could choose one of the terms to synthesize a type for in order to compute $A$:

    $$\frac{\Gamma \vdash R \Rightarrow A \qquad \Gamma \vdash R \equiv S \Leftarrow A \qquad \Gamma \vdash S \equiv T \Leftarrow A}{\Gamma \vdash R \equiv T \Rightarrow A}$$

    but that seems like needless work when equality only gets “called” by the type-checking mode-switching judgment in “checking mode” where we already know the types. Also there is of course no way at all in transitivity to “compute” $S$; it has to be guessed. So I suppose $A$ could similarly be guessed. But I don’t see any advantage to this; it seems like making everything more verbose to no purpose. The difference is that when defining the partial interpretation, there is a difference between a function that outputs a type and a function that inputs a type, but in calculating an equality there is no difference whether the type is an “input” or an “output”. Put differently, the “mode-switching rule for equality”

    $$\frac{\Gamma \vdash S \equiv T \Rightarrow A \qquad \Gamma \vdash A \equiv B \, type}{\Gamma \vdash S \equiv T \Leftarrow B}$$

    is semantically trivial: the equalizer of two maps into $Tm$ is the same regardless of whether we know in advance that the two maps agree in $Ty$.
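
    (In symbols: writing $⟦S⟧, ⟦T⟧ : ⟦\Gamma⟧ \rightharpoonup Tm$ for the partial interpretations, the judgment asserts, roughly, that

    $$⟦S⟧(\gamma) = ⟦T⟧(\gamma) \quad \text{whenever both are defined at } \gamma \in ⟦\Gamma⟧,$$

    a condition in which the type plays no role as either “input” or “output”.)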

    I don’t like the idea of using PER-style rules, if for no other reason than that they won’t be very comprehensible to mathematicians. I want to keep the input type theory as close as possible to the style used in the HoTT Book, for instance. (Which is certainly a disadvantage of any bidirectionality, but directing just the typing rules is so pleasingly parallel to the partial interpretation clauses, and so little different from the undirected style, that I still think it’s worth it. I don’t see why an inability to direct the equality rules would argue against directing the type-checking rules.)

    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    A semi-formal comparison between our bidirectional theory and a corresponding unidirectional one.

    diff, v9, current

    • CommentRowNumber11.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    I mean the interpretation, not the CwF.

    So then I don’t understand what it means to say that the current rules seem closest to the interpretation. We haven’t done the interpretation yet. The way we should do it will depend on the rules.

    I don’t really understand your new proposal. Are you suggesting that we have two equality judgments $\Gamma \vdash S \equiv T \Leftarrow A$ and $\Gamma \vdash S \equiv T \Rightarrow A$

    Yes, two equality judgments for elements. My proposal might be flawed though. The transitivity rule I was thinking of was:

    $$\frac{\Gamma \vdash A \, type \qquad \Gamma \vdash R \equiv S \Leftarrow A \qquad \Gamma \vdash S \equiv T \Leftarrow A}{\Gamma \vdash R \equiv T \Rightarrow A}$$

    $A$ doesn’t need to come from anywhere besides the derivation itself. We know the judgments aren’t algorithmic. The business about needing to guess values of metavariables is a way to explain why not, but we knew it. The directions still correspond to flow of validity. That’s why I need the $\Gamma \vdash A \, type$.

    But this transitivity rule may be too strong. I would need to wrap my head around what it’s really saying, in our situation.

    So I suppose $A$ could similarly be guessed. But I don’t see any advantage to this; it seems like making everything more verbose to no purpose.

    Huh? What’s verbose? You mean having an extra judgment form? Or the $\Gamma \vdash A \, type$?

    Put differently, the “mode-switching rule for equality” […] is semantically trivial: the equalizer of two maps into $Tm$ is the same regardless of whether we know in advance that the two maps agree in $Ty$.

    In the CwF, yes. For the totality proof, for example, the rule may not be quite so trivial, depending on the induction motive. In the PER way of thinking, that mode-switching rule is a generalization of the non-equality one.

    I don’t like the idea of using PER-style rules, if for no other reason than that they won’t be very comprehensible to mathematicians.

    Oh, that’s too bad. In that case, maybe I should forget about it after all. My reason to believe that the equality judgments ought to be directed was that you want two partial interpretation functions, and the congruence rules are generalizations of the typing rules, in PER style.

    But wouldn’t it be nice to cut the number of cases almost in half?

    Which is certainly a disadvantage of any bidirectionality, but directing just the typing rules is so pleasingly parallel to the partial interpretation clauses, and so little different from the undirected style, that I still think it’s worth it.

    This too, I don’t understand. You chose to have two partial interpretation functions (for elements), corresponding to bidirectionality. You could’ve just had one. And then you wouldn’t be able to say that bidirectional judgments are pleasingly parallel.

    There must be some other reason for both of them, right? But you are not acknowledging it at all, in this thread, for some reason.

    I don’t see why an inability to direct the equality rules would argue against directing the type-checking rules.

    It’s not like directing only the typing rules is broken. It’s just that I figured that the minor difficulties we’re trying to avoid by using bidirectional typing would just come up anyway with unidirectional equality.

    • CommentRowNumber12.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018
    • (edited Oct 29th 2018)

    We haven’t completed the interpretation yet, but we have a proposal for its general outline, from Peter’s quoted email in the original nForum discussion, which I elaborated and tweaked in Initiality Project - Overview and have now further refined at Initiality Project - Partial Interpretation and Initiality Project - Totality. In particular, Peter’s proposal to take the “environment” of variables as an input to the partial interpretation function, rather than an output, is suggested not by an a priori desire to analogously make context validity an input to the typing judgments, but rather by a desire to make the recursion in the partial interpretation function a structural recursion over raw terms, rather than a less straightforward induction on an auxiliary “height” function as done by Streicher. I’m saying that this independently motivated choice about the interpretation then suggests that it would be better to make choices about the syntax that correspond to it as closely as possible.
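
    (As a toy illustration of the structural-recursion point, with made-up syntax and a made-up semantic domain rather than the project’s actual definitions: when the environment is an input, the partial interpretation goes by recursion on the raw term alone.

    ```haskell
    -- Toy raw syntax with de Bruijn indices; 'Sem' is a stand-in domain, not a CwF.
    data Term = Var Int | App Term Term | Lam Term

    newtype Sem = Fun (Sem -> Maybe Sem)

    type Env = [Sem]   -- the *input* environment interpreting the variables in scope

    interp :: Term -> Env -> Maybe Sem
    interp (Var i) env
      | 0 <= i && i < length env = Just (env !! i)
      | otherwise                = Nothing            -- partial: unbound variable
    interp (App f a) env = do
      Fun sf <- interp f env                          -- recursion on a smaller subterm
      sa     <- interp a env
      sf sa                                           -- partial: may be undefined
    interp (Lam b) env =
      Just (Fun (\x -> interp b (x : env)))           -- extend the input environment
    ```

    Every recursive call is on a structurally smaller raw term, which is exactly what makes an auxiliary height measure unnecessary.)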

    Similarly, while having two partial interpretation functions is indeed technically a choice, my claim is that if you look closely at what’s happening in the interpretation (even of a unidirectional theory), the two functions are already there implicitly. I wrote about this back here. I’m saying that since these two “stages” both have to be there in the interpretation anyway, defining the interpretation will be easier to understand if we separate them into two different judgments on the syntactic side too. Among other things, that should reduce the redundancy in the inductive clauses, since otherwise we would essentially have to inline the definition of the “checking” interpretation for every premise of every rule that in a bidirectional theory would be a checked judgment.

    In contrast, having two judgments for equality seems like it would unnecessarily duplicate work in proving the interpretation, since on the semantic side both are interpreted by the same thing.

    Finally, I’m not convinced that PER rules would reduce the number of cases, because semantically, defining objects is different from proving that they are equal.

    • CommentRowNumber13.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    Oh, hang on, I need to think some more about PER judgments. I was assuming that in that case the partial interpretation functions would also have to be PER-ish, but maybe that’s not right: maybe we still have the same three partial interpretation functions I wrote down before, but then there are fewer things to prove by mutual induction in the totality theorem: the statement corresponding to a PER judgment $\Gamma \vdash A \equiv B$ (for instance) would be that the interpretations of $A$ and of $B$ are both defined and equal on the interpretation of $\Gamma$. Maybe that would noticeably reduce the number of cases, in which case it may be worth considering (and therefore also having to parallelly orient or non-orient both the typing and equality judgments). Let me think some more.

    • CommentRowNumber14.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    Right, unless I’m way off track, the partial interpretation functions don’t look at derivations at all.

    Also, based on your edits to the page, I think we’re thinking completely different things about what we’re supposed to be getting out of bidirectionality. It’s not nice because undirected conversion makes typing derivations non-unique. We’re using the partial interpretation functions. The judgments should be understood as merely propositional. Non-unique derivations don’t matter at all.

    • CommentRowNumber15.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018
    Something less consequential to worry about when you're ready: You say the undirected element equality has the type as an input. But I thought the usual rules treat it as an output. This slipped past me until recently, unfortunately. Unless you already thought it through, something may need to change about this.
    • CommentRowNumber16.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018
    From #12, it looks like we agree after all on what's good about bidirectionality for the interpretation functions. I figure the benefit of bidirectional judgments is not exactly the same, but analogous, for proofs by induction on them. But I'm still not sure if bidirectional equality works.
    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    Perhaps that whole first section on bidirectionality should be scrapped and replaced with the real motivation from the semantic/interpretation side as in #12.

    Re #15, why do you say the usual rules treat it as an output?

    • CommentRowNumber18.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018
    • (edited Oct 29th 2018)

    Low priority: Going back and forth between bidirectional and unidirectional derivations is a little trickier than that, but what’s described has the right intuition. Added a FIXME.

    Also explained that the mode-switch case is using an inversion lemma.

    diff, v12, current

    • CommentRowNumber19.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    Never mind that “inversion lemma” part. That’s actually just the definition of the checking judgment.

    diff, v12, current

    • CommentRowNumber20.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    Regarding the fixme, it depends on how we formulate reflexivity, doesn’t it? If the reflexivity rule is

    $$\frac{}{\Gamma \vdash T \equiv T : A}$$

    then we don’t need anything to derive it. This rule makes sense if $S \Leftarrow A$ and $T \Leftarrow A$ are presuppositions of the equality judgment $S \equiv T : A$, meaning I guess that the equality judgment is treated as checking. And if equality is bidirectional, then it should be the equality checking rule that appears in the mode-switching rule. And this reflexivity rule is also valid semantically, i.e. the partial interpretation of $T$ is always equal to itself on any domain.

    Conversely, I can see that it makes sense to regard a judgment that is only derivable by one rule as being “defined” by that rule, but isn’t it still technically an inversion lemma to conclude that any derivation of that judgment must be obtained from that rule?

    • CommentRowNumber21.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    I guess someone should fix the Bidirectionality section eventually. But I’m more interested to find out the final rules.

    For these fully annotated terms, maybe the unidirectional rules can actually be interpreted as going in either direction. (Haven’t checked.) But for the usual annotations for unidirectional intensional type theory, it seems nuts not to consider the type validity an output of the equality rules, like with the typing rules.

    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    Why?

    • CommentRowNumber23.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    Regarding the fixme, it depends on how we formulate reflexivity, doesn’t it?

    Aren’t you thinking of the wrong equality judgment form? I’m pretty sure

    $$\frac{}{\Gamma \vdash A \equiv A \, type}$$

    doesn’t fly.

    • CommentRowNumber24.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    Why not? $\Gamma \vdash A \, type$ would be an analogous presupposition for it.

    • CommentRowNumber25.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    But for the usual annotations for unidirectional intensional type theory, it seems nuts not to consider the type validity an output of the equality rules, like with the typing rules.

    Why?

    Good question. Maybe it isn’t.

    • CommentRowNumber26.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    Wow, OK. I guess we could try $\Gamma \vdash A \equiv A \, type$, if we don’t switch to PER-style. Interesting. But we do need to hold on to some way to say a type expression makes sense.

    • CommentRowNumber27.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    but isn’t it still technically an inversion lemma to conclude that any derivation of that judgment must be obtained from that rule?

    No, I personally wouldn’t call it one. You can change it back if you like. In Coq, you’d just destruct the derivation and get what you want. No lemma.

    • CommentRowNumber28.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    Right, of course in PER style there would be no general reflexivity rule at all; instead the formation and congruence rules would combine to derive reflexivity for any valid type expression. But in non-PER style, we have the separate judgment $\Gamma \vdash A \, type$, so reflexivity doesn’t need to play that role.

    I see the point about inversion, I guess. It’s called an inversion lemma when after destructing the derivation you still have to rule out spurious cases by applying injectivity of constructors?

    • CommentRowNumber29.
    • CommentAuthoratmacen
    • CommentTimeOct 29th 2018

    …But in non-PER style…reflexivity doesn’t need to play that role.

    Yes, I understand now. I think. I had just never thought of that before.

    It’s called an inversion lemma when…

    I don’t know any official rules, but basically yeah. I would expect it to rule out cases based on the form of indexes.

    • CommentRowNumber30.
    • CommentAuthorMike Shulman
    • CommentTimeOct 29th 2018

    Right now I can’t decide whether PER-style rules would actually save us work overall.

    On the one hand, PER-style rules would eliminate all the congruence rules of equality and their associated inductive clauses in the totality theorem. But this is not actually that big of a savings, because those inductive clauses barely deserve to be called work: each of them is simply the observation that judgmental equality is interpreted by semantic equality, and all the semantic operations respect equality because it’s, well, equality. Maybe the whole initiality theorem is just bookkeeping, but it seems like these rules are really really just bookkeeping. I think (though I’m open to being proven wrong) in a published mathematical paper anyone ought to be content with proving these by “similarly”, and in a formalization they should all be blasted by a simple tactic.

    On the other hand, PER-style rules do technically also involve us in additional work. Consider for instance $\beta$-reduction: $App^{(x:A)B}(\lambda(x{:}A.B).M,N) \equiv M[N/x] : B[N/x]$. If the part of the totality theorem corresponding to a PER equality judgment is that both sides are defined and equal on the interpretation of $\Gamma$, then in the clause associated to $\beta$-reduction we have to prove, not only that the interpretations of $App^{(x:A)B}(\lambda(x{:}A.B).M,N)$ and $M[N/x]$ are equal, but that each of them is defined. In the latter case this is an instance of the substitution lemma, while in the former case it should be just putting together the inductive clauses for $\lambda$ and $App$; but technically, we can’t just say “by the clauses for $\lambda$ and $App$” unless we factor each of those clauses out as a lemma to prove before beginning the entire totality theorem induction. Am I right?
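
    Spelled out, with $\downarrow$ meaning “is defined”, the $\beta$ clause would have to establish, for each $\gamma$ in $⟦\Gamma⟧$, all three of

    $$⟦App^{(x:A)B}(\lambda(x{:}A.B).M,N)⟧(\gamma)\downarrow \qquad ⟦M[N/x]⟧(\gamma)\downarrow \qquad ⟦App^{(x:A)B}(\lambda(x{:}A.B).M,N)⟧(\gamma) = ⟦M[N/x]⟧(\gamma).$$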

    So, it’s not clear to me that PER-style would actually save us noticeable effort, even if it works. And given that we’re not sure whether it works, and that it would look unfamiliar to a substantial amount of the audience, I think I’m back to being inclined against it.

    • CommentRowNumber31.
    • CommentAuthoratmacen
    • CommentTimeOct 30th 2018

    PER-style rules would eliminate all the congruence rules of equality and their associated inductive clauses in the totality theorem.

    Just to check: PER-style rules would keep all the congruence rules and get rid of the associated typing rules. That means all the typing rules, and also the type formation rules. Those judgments are gone. (Well actually, they’re now defined as reflexivity instances of equality judgments.)
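
    For instance (a hedged illustration, not a rule from the page): in PER style the $\Pi$-formation rule would be absorbed into its congruence rule

    $$\frac{\Gamma \vdash A \equiv A' \, type \qquad \Gamma, x{:}A \vdash B \equiv B' \, type}{\Gamma \vdash \Pi(x{:}A).B \equiv \Pi(x{:}A').B' \, type}$$

    with $\Gamma \vdash \Pi(x{:}A).B \, type$ recovered as the diagonal instance $\Gamma \vdash \Pi(x{:}A).B \equiv \Pi(x{:}A).B \, type$.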

    And this removes clauses from all inductions on derivations. I figured there’d be at least two of those, with totality being the first.

    Maybe the whole initiality theorem is just bookkeeping, but it seems like these rules are really really just bookkeeping.

    My guess is that all the rules will be mostly bookkeeping. Unfolding the definitions of things. The size of the proof will be very well correlated with the complexity of the formal judgments’ definition. (Number of rules, but also number of term formers in rules.) So maybe “similarly”, or an actual tactic would blow everything away. I dunno. That’s an argument to do whatever’s most intuitive.

    Consider for instance $\beta$-reduction…

    We need the substitution lemma anyway, somewhere, for the type of application. I’d be amazed if using it there too isn’t really easy. You’re right of course that we can’t formally say “by the clauses” if they’re inlined into the induction. But to prove that they’re equal, it seems we need to appeal to cases of the partial interpretation anyway.

    Now seems like a good time to point out that in my interpreter, the partial interpretation was in two steps: an inductive (!) interpretation relation, then a uniqueness proof. (Well, also an inversion lemma. Pick your poison.) So if you already have the ingredients, proving totality is the one step of applying an interpretation rule. So I don’t think we’d need another set of lemmas to help prove totality of nested term formers in rules.

    By the way, many of my totality clauses didn’t need uniqueness of interpretations. That got used for (implicit) direction switches. If we can stick to bidirectional judgments, we can probably be even simpler. But Peter’s idea to use a partial object monad is cool too. So maybe that doesn’t matter. But I have no experience with what direction switches make you do in the later proof steps.

    I think I’m back to being inclined against it.

    OK.

    • CommentRowNumber32.
    • CommentAuthoratmacen
    • CommentTimeOct 30th 2018
    Regarding the approach of an inductively-defined interpretation relation, I guess that style makes output modes easier to handle, while the recursive definition makes input modes easier to handle. But you end up showing that you essentially have both.
    • CommentRowNumber33.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    Yes yes, technically PER-style would eliminate the typing rules and keep the congruence rules, but what I meant was that all the work that was happening on the typing rules would then get folded into the congruence rules, so it’s really the extra work of the congruence rules that would get eliminated.

    My guess is that all the rules will be mostly bookkeeping. Unfolding the definitions of things.

    Yes, that’s what I meant: all the rules will be mostly bookkeeping, but the congruence rules will be literally nothing but bookkeeping.

    It’s possible that one could write a sufficiently smart tactic to blow everything away in a formalization. But if “similarly” were acceptable to deal with most cases in a paper proof, there would be no need for this project. (Some people may think that it should be acceptable, but it’s a fact that it’s not acceptable to everyone, and the point of this project is to get a paper proof written that should be acceptable to all mathematicians.)

    • CommentRowNumber34.
    • CommentAuthoratmacen
    • CommentTimeOct 30th 2018

    …so it’s really the extra work of the congruence rules that would get eliminated.

    That still seems like a weird way to think about it. In my estimate, there’s no noticeable difference in triviality between totality cases that actually show totality and congruence cases that avoid it, once we get the definitions and lemma statements right. But this is silly; we should not waste time arguing about how easy the easy parts are.

    But if “similarly” were acceptable to deal with most cases in a paper proof, there would be no need for this project.

    You know, I was thinking that. But you brought “similarly” up, and I didn’t want to sound like a wise guy. :)

    • CommentRowNumber35.
    • CommentAuthoratmacen
    • CommentTimeOct 30th 2018

    OK, so I would like to confirm the pending decisions that go into writing the rest of the rules.

    You (Mike) want to stick to non-PER rules, and I’m not arguing.

    I think we should stick to unidirectional element equality for now. I’m guessing that when it comes to using it, it’s going to feel slightly clunky. Maybe you guys will be able to see a way to do it bidirectionally at that point. I expect that if you can get rules working for the totality proof, they’ll also work for later inductions. So if you think of bidirectional rules that work for totality, I recommend you switch to them.

    I think you still want the unidirectional equality judgments to take all validity as inputs. The question is: are you going to try to take advantage of that in the rules, like with that

    $$\frac{}{\Gamma \vdash A \equiv A \, type}$$

    rule?

    When proving totality, as for the induction motive for element equality, it will refer to the term checking interpretation, and assume it’s defined?

    • CommentRowNumber36.
    • CommentAuthorkyod
    • CommentTimeOct 30th 2018

    I wanted to try my hand at writing down the equality rules but realised that I had overlooked quite a few details about the design choices. Let me try to reformulate the part about the flow for conversion rules to see if I understood something.

    If I look at the mode-switching rule

    $$\frac{\Gamma \vdash T \Rightarrow A \qquad \Gamma \vdash A \equiv B \, type}{\Gamma \vdash T \Leftarrow B}$$

    then starting from the conclusion $\Gamma \vdash T \Leftarrow B$ I have as hypotheses that: 1) $\Gamma$ is well-formed (as an input), 2) $B$ is a well-formed type in $\Gamma$, and 3) $T$ is a (syntactically?) well-formed term in $\Gamma$.

    From the premise $\Gamma \vdash T \Rightarrow A$, 1) & 2), I should be able to conclude either an “error” that should be forwarded to the conclusion or that 4) $A$ is a well-formed type in $\Gamma$.

    Now, it means that $\Gamma \vdash A \equiv B$ can actually take 1), 2) & 4) as input and output a “truth” value forwarded to the conclusion.

    Does that make sense? Is that how we intend to write the equality rules?

    However I don’t see how to provide a reasonable interpretation for transitivity if written as follows:

    $$\frac{\Gamma \vdash A \equiv B \qquad \Gamma \vdash B \equiv C}{\Gamma \vdash A \equiv C}$$

    Going bottom-up, we have that $\Gamma$, $A$ and $C$ are well-formed but we need to ensure that $B$ is well-formed as well. It would work if we take the more verbose

    $$\frac{\Gamma \vdash B \, type \qquad \Gamma \vdash A \equiv B \qquad \Gamma \vdash B \equiv C}{\Gamma \vdash A \equiv C}$$

    Another option is of course to have a skewed/arbitrarily oriented flow for conversion: $\Gamma \vdash A \equiv B$ could take as input that $\Gamma$ is well-formed and $A$ is a well-formed type in $\Gamma$, and output that $B$ is a well-formed type in $\Gamma$.

    • CommentRowNumber37.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    @kyod thanks for starting to work on this! But I think we need to be careful about the phrase “well-formed”. Are raw terms “well-formed” or do they only become so when a judgment about them is derivable?

    I would say the presuppositions of $\Gamma \vdash T \Leftarrow B$ are:

    1. $\Gamma$ is valid, i.e. each type is derivably valid in the context of the variables to its left, “$\Gamma|_{\lt k} \vdash A_k \, type$.”
    2. $B$ is valid in context $\Gamma$, i.e. $\Gamma \vdash B \, type$ is derivable.
    3. $T$ is a raw term. (This is not really a presupposition; it just states what kind of thing $T$ is.)

    The only presupposition of $\Gamma \vdash T \Rightarrow A$ is that $\Gamma$ is valid; $T$ is a raw term, and $A$ such that $\Gamma \vdash A \, type$ is an output. Thus, in the mode-switching judgment, (1) of the conclusion yields the presupposition of the first premise, at which point we have both $\Gamma \vdash A \, type$ and $\Gamma \vdash B \, type$, which should be the presuppositions of the equality judgment $\Gamma \vdash A \equiv B \, type$.

    Thus, I agree with you that in transitivity, $\Gamma \vdash B \, type$ should be a premise. I think this is actually necessary semantically as well: the premise $\Gamma \vdash A \equiv B \, type$ ensures that the interpretations of $A$ and $B$, which are partial functions, agree when their domains of definition are intersected with each other and with the interpretation of $\Gamma$, and similarly for $\Gamma \vdash B \equiv C \, type$. But if for instance the interpretation of $B$ has empty domain, this doesn’t ensure that the interpretations of $A$ and $C$ agree on the intersections of their domains with the interpretation of $\Gamma$; we need the additional assumption that the interpretation of $B$ is defined on the whole interpretation of $\Gamma$.
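
    (Concretely: if some $\gamma$ in $⟦\Gamma⟧$ lay in the domains of $⟦A⟧$ and $⟦C⟧$ but not of $⟦B⟧$, both premises would hold vacuously at $\gamma$ while imposing no relation between $⟦A⟧(\gamma)$ and $⟦C⟧(\gamma)$; the premise $\Gamma \vdash B \, type$ rules this out by making $⟦B⟧$ defined on all of $⟦\Gamma⟧$.)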

    • CommentRowNumber38.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    @Matt: Yes, I think we should try being as consistent as possible about using the presuppositions, including that premise-free reflexivity rule.

    When proving totality, as for the induction motive for element equality, it will refer to the term checking interpretation, and assume it’s defined?

    I don’t understand this question, can you explain it further?

    • CommentRowNumber39.
    • CommentAuthorkyod
    • CommentTimeOct 30th 2018

    Added equality rules for types and terms.

    In the case of equality for terms there is also a design choice whether $\Gamma \vdash T \equiv S : A$ assumes that $\Gamma \vdash T \Leftarrow A$ (and similarly for $S$), or not. The current rules do not assume it.

    diff, v13, current

    • CommentRowNumber40.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    Never mind; I have fallen behind in reading your page revisions; the question I should’ve asked is already answered.

    I have a different question: Would you also write definitions in the internal extensional type theory of the presheaf category, when this avoids categorical definitions, please? That would make it faster for me to understand what’s going on. It might also be a helpful Rosetta stone for me and others, to get better at translating back and forth.

    • CommentRowNumber41.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    I think this is actually necessary semantically as well: the premise $\Gamma \vdash A \equiv B \, type$ ensures that the interpretations of $A$ and $B$, which are partial functions, agree when their domains of definition are intersected with each other and with the interpretation of $\Gamma$, and similarly for $\Gamma \vdash B \equiv C \, type$.

    I don’t understand this. From point 4 of the Totality page, it sounds like if ⟦A⟧ is defined on an environment in ⟦Γ⟧, then ⟦B⟧ has to be as well. (And vice versa.) So I don’t see why that premise is needed, for totality. (But it still might be a good idea to have it.)

    • CommentRowNumber42.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    Re #39, I do think the term equality rules should assume that, in parallel to how the type equality rules assume $\Gamma \vdash A \, type$. I changed them.

    • CommentRowNumber43.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    Removed FIXME about admissibility/derivability, since the premise-free reflexivity should make them derivable.

    diff, v15, current

    • CommentRowNumber44.
    • CommentAuthorkyod
    • CommentTimeOct 31st 2018

    Added weakening, contraction, permutation and substitution as admissible rules.

    I doubt strengthening holds because of the transitivity case in equality (and it definitely wouldn’t hold in the presence of equality reflection).

    • CommentRowNumber45.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    Added the free variable condition to the statement of the exchange admissible rule.

    diff, v17, current

    • CommentRowNumber46.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    Oh right. And then you don’t need the $\Gamma \vdash B \, type$.

    diff, v17, current

    • CommentRowNumber47.
    • CommentAuthordlicata335
    • CommentTimeNov 1st 2018

    I think the admissible rules should be stated with a bunch of well-formedness premises. Take weakening, for example. The normal flow of information for the rule (if I understand correctly from the discussion that you’re using a convention of presupposing context well-formedness) would be: the conclusion presupposes G,x:A,G’ ok, so we get to assume that. But then the premise requires G,G’ ok, which isn’t necessarily a consequence of the presupposition of the conclusion. So you have to write the theorem as

    If G ok and G |- G’ ok and G |- A type and G,G’ |- J, then G,x:A,G’ |- J

    or maybe

    If G,x:A ok and G |- G’ ok and G,G’ |- J, then G,x:A,G’ |- J (if you’re OK with invoking some inversion on context formation, to conclude G ok from G,x:A ok)

    but not

    If G,x:A,G’ ok and G,G’ |- J then G,x:A,G’ |- J (because the presupposition of G,G’ ok doesn’t follow from G,x:A,G’ ok, because x:A might be relevant to why G’ is well-formed, even if x doesn’t occur in G’ – unless you have a strengthening lemma, but I agree with your comment above that that’s either hard or impossible to prove, and should be avoided).

    Also, depending on how formal you want to be, technically G’ in these rules is a telescope (a context relative to another context, rather than a context outright), and you need to define or derive telescope formation rules for it (e.g. you could define telescope formation G |- G’ tele as G,G’ ok, but then that has a different presupposition flow from the rest of the judgements…).

    • CommentRowNumber48.
    • CommentAuthorkyod
    • CommentTimeNov 1st 2018

    Following @dlicata’s advice, I introduced the derived notion of a valid telescope $\Gamma \vdash \Gamma'$ and added a premise to weakening (even though I’m coming from a proof theory/PL background, the bidirectional interpretation is quite non-standard to me).

    @atmacen the $\Gamma \vdash B$ is actually useful in some extensions (equality reflection…): you could have $\Gamma, x:A \vdash B \, type$ and $x \notin FV(B)$ but not be able to derive $\Gamma \vdash B$.

    I introduced the notion of telescope, but to my understanding the only difference with a context is that 1) it is usually open, and 2) you add elements to contexts on the right and to telescopes on the left. I am not sure if in the current context it actually makes sense.

    diff, v18, current

    • CommentRowNumber49.
    • CommentAuthorKarol Szumiło
    • CommentTimeNov 1st 2018

    What is the difference between a premise and a presupposition? When we carry out inductive arguments over judgement derivations, presumably we need to verify both premises and presuppositions, so why is it a useful distinction?

    (Sorry for asking dumb questions again. I don’t think that I have anything to contribute at this stage, but since Mike expressed desire to make the project accessible to mathematicians, just think of me as a test case of a mathematician who would really like to understand this, but has no experience with nuts and bolts of type theory.)

    • CommentRowNumber50.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Thanks, Dan and Kenji. I was worried we needed more assumptions about contexts for the admissible rules. And yes, I was wrong to switch exchange to use only a free variable condition. Sorry about that.

    I think we should take Dan’s advice and write the admissible rules in semi-English, since they’re theorem statements.

    2) you add elements to contexts on the right and to telescopes on the left.

    That doesn’t sound right. I think you need to add to the right end of a telescope too. Going to add a $\Gamma \vdash \Delta \, ok$ judgment.

    • CommentRowNumber51.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Inductive definitions of valid contexts and telescopes. In a new section.

    diff, v19, current

    • CommentRowNumber52.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Added admissible rule subsections for inversion lemmas and a sanity check. The previously-listed rules are now in a subsection for structural rules.

    diff, v20, current

    • CommentRowNumber53.
    • CommentAuthordlicata335
    • CommentTimeNov 1st 2018
    • (edited Nov 1st 2018)

    I’m not sure if it makes sense to treat context formation as a special case of telescope formation: the presupposition of $\Gamma \vdash \Delta \, tele$ should be that $\Gamma \, ctx$, so then the rule that $\cdot \vdash \cdot \, tele$ presupposes itself.

    (Also, exchange, contraction and substitution need analogous premises to the ones @kyod added for weakening, if anyone has time to do that.)

    • CommentRowNumber54.
    • CommentAuthordlicata335
    • CommentTimeNov 1st 2018
    • (edited Nov 1st 2018)

    Re presuppositions: there’s some discussion in #68 of the discussion of plans but here’s a stab at a simple example. Suppose you’re inductively defining a predicate Foo(n : Nat) by some inference rules (or think of an inductive family / inductive datatype in Coq or Agda if you’ve seen those). Now suppose that for some reason you want n to always be non-zero. You could either

    (1) Put sufficient premises on every inference rule such that Foo(n) implies n != 0. This might not mean annotating every inference rule with a premise; for example, “Foo(1) holds” and “if Foo(n) holds then Foo(2 * n) holds” would have this property, because it’s true in the base case, and inductively if it’s true for n then it’s true for 2n.
    This is what I would call adding “premises” to the definition.

    (2) Instead define a predicate Foo({n : Nat | n != 0}), i.e. in order to even say Foo(n), you already need to know that n is non-zero. This is what I’d call a presupposition of a judgement.

    (1) vs (2) changes “who” checks that n is non-zero. In (1), I think of it as checking that n is non-zero as you construct the derivation of Foo. In (2), I think of it as whoever asks you to construct a derivation of Foo(n) having the obligation to already have checked that n is non-zero.
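
    Rendered as inference rules, option (1) in the example above is

    $$\frac{}{Foo(1)} \qquad\qquad \frac{Foo(n)}{Foo(2 \cdot n)}$$

    with non-zero-ness an invariant the rules maintain, while in option (2) the constraint lives in the type of the subject, so there is nothing extra for the rules themselves to check.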

    • CommentRowNumber55.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Dan, for semantics, I think you’re right about needing contexts strictly before telescopes. But for judgments on raw syntax, I’m pretty sure I get away with it. I don’t object to someone changing that, though. I was following Kenji, who had also defined valid contexts as valid telescopes in an empty (raw) context.

    The proof of $\cdot \vdash \cdot \, ok$ is immediate from the rules, and this then gives $\cdot \, ok$. I don’t think there’s a problem with the bootstrapping, syntactically. (But maybe clever syntactic bootstrapping is considered harmful.)

    • CommentRowNumber56.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Could we save ourselves duplication of work and avoid dealing with telescopes by proving a general context substitution lemma, covering weakening, exchange, and one-variable substitution all at once?

    $$\frac{\Delta \vdash \sigma : \Delta' \qquad \Delta' \, ok \qquad \Delta' \vdash A \, type \qquad \Delta' \vdash T : A}{\Delta \vdash T[\sigma] : A[\sigma]}$$
    • CommentRowNumber57.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    The link in #54 above should be this (unfortunately there is no nice syntax for linking between nForum discussions, you have to use the full URL).

    • CommentRowNumber58.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    What is the difference between a premise and a presupposition?

    Dan already described the difference in #54, although he omitted option (3), which is the one I’m currently proposing to actually use here:

    (3) Allow the rules to apply to Foo(n) even when n might be 0, but then just ignore what happens when n=0 and only apply Foo to nonzero numbers. To make this make sense, you have to ensure that if the subject of Foo in the conclusion of a rule is nonzero, then so are all the subjects of Foo in the premises (otherwise talking about Foo(n) for nonzero n might involve you in talking about Foo(0) unexpectedly).

    When we carry out inductive arguments over judgement derivations, presumably we need to verify both premises and presuppositions, so why is it a useful distinction?

    Actually, we don’t need to verify presuppositions! At least, not as part of the induction. That’s the whole point. The premises are the only actual “inputs” to the inductive constructors, so we only need to include inductive clauses to deal with them. We might afterwards have to verify the presuppositions, but not as part of the induction.

    • CommentRowNumber59.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Huh, my use of “presupposition” is different from Dan’s. I consider it a presupposition whenever you only want to worry about Foo(n) when n is nonzero. “Foo(n) presupposes that n is nonzero.” Dan’s (1) and Mike’s (3) correspond to what I’ve been calling validity output and validity input, respectively. Along with Dan’s (2), these are three different ways of implementing what I call “presuppositions”.

    To try and explain the difference between Dan’s (2) and Mike’s (3): You are allowed to work with derivations of judgments with (3), even if you don’t already know the validity. You just can’t do anything really useful with them until you know some kind of validity. Dan’s (2) is to make the presupposition part of the type of the judgment. This is probably what you want for semantics, but is perhaps not so useful for syntax.

    A subtle thing about formal derivations with validity inputs (that is, about (3)) is that when you do induction on them, you tend to need, not formal validity, but some corresponding “semantic” validity, tailored for the target (“motive”) of the induction. Having formal validity too doesn’t hurt, of course. Intuitively, formal derivations with validity inputs are raw proofs, whose meaning assumes validity, although they themselves don’t.

    I keep saying “validity”, because those are the presuppositions for our judgments. But it could be anything you consider a presupposition. (In my sense.)

    • CommentRowNumber60.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018
    Re #56, I don't know. You also asked about context morphisms on the raw syntax thread; I haven't forgotten. I just haven't thought enough about it yet.
    • CommentRowNumber61.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Regarding telescopes, I don’t think we literally need them, even if we stick to small substitutions. Telescopes let you express weaker premises of the substitution principles, which makes the rules stronger. But you could probably just assume stronger premises using only context validity, and still be able to use the principles when you need to.

    Another, weirder option would be to use higher-order admissible rules, with premises like $(\Gamma \, ok) \to (\Gamma, \Delta \, ok)$. I’ve never tried that, but it seems plausible.

    • CommentRowNumber62.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    I definitely would not call validity a “presupposition” in (1). The prefix “pre” means “before”, whereas in (1) nothing about nonzeroness of n has to be known or said before the judgment; instead in that case I would say “Foo(n) implies that n is nonzero”.

    • CommentRowNumber63.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    The reason I use “presupposition” that way is because the only thing that truly corresponds to presupposition is (2). In linguistics, a presupposition is something that must be true for the statement to be meaningful. This seems to clearly correspond to a subset constraint on the type of something in the statement. But until everyone is using type theory with true subset types (not sigmas with truncated second components), it’s also useful to imitate presuppositions with (1) and (3). You have previously referred to (3) as a “precondition”, which I’m basically OK with. But not all preconditions are imitating presuppositions, and true presuppositions are certainly not preconditions.

    • CommentRowNumber64.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Since in (3) the intuitive meaning of the judgment $\Gamma \vdash A \, type$ is “if $\Gamma$ is valid, then $A$ is a valid type in context $\Gamma$”, maybe (3) should be called something like a “hypothesis”?

    • CommentRowNumber65.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    I think that would clash pretty badly with actual hypotheses when reasoning about derivations.

    • CommentRowNumber66.
    • CommentAuthordlicata335
    • CommentTimeNov 1st 2018

    More about presuppositions later, but re #56: if you try to define renaming and substitution all at once, the induction doesn’t work (or gets very complicated) because in the process of substitution, you need to do weakening (when you push the substitution under binders). So the usual strategy is to define “total renamings” (variable-for-variable substitution between the whole contexts, which includes weakening, exchange, contraction), and then substitution after that. This paper does things that way, if I recall correctly.
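
    (For concreteness, here is the standard de Bruijn rendering of that strategy, as a hedged sketch over toy raw terms rather than the project’s actual syntax:

    ```haskell
    -- Toy raw terms with de Bruijn indices.
    data Tm = V Int | Ap Tm Tm | Lm Tm

    -- Total renamings: variable-for-variable maps between whole contexts,
    -- covering weakening, exchange, and contraction at once.
    rename :: (Int -> Int) -> Tm -> Tm
    rename r (V i)    = V (r i)
    rename r (Ap f a) = Ap (rename r f) (rename r a)
    rename r (Lm b)   = Lm (rename (lift r) b)
      where lift r' 0 = 0                 -- the new bound variable stays put
            lift r' n = r' (n - 1) + 1    -- the others are shifted through r'

    -- Substitution, defined *after* renaming: pushing a substitution under a
    -- binder weakens the substituted terms, i.e. it calls 'rename'.
    subst :: (Int -> Tm) -> Tm -> Tm
    subst s (V i)    = s i
    subst s (Ap f a) = Ap (subst s f) (subst s a)
    subst s (Lm b)   = Lm (subst (lift s) b)
      where lift s' 0 = V 0
            lift s' n = rename (+ 1) (s' (n - 1))   -- weakening via 'rename'
    ```

    The binder case of subst is where defining the two in the other order, or all at once, gets the induction tangled.)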

    The other option that works is to prove (single-variable) weakening first (easy because it doesn’t change the terms), and then prove single-variable substitution for something in the middle of the context. In this style, I think you do formally need telescopes, because to talk about the “middle” of the context, you want to say $\Gamma, x{:}A, \Delta$, where $\Gamma, x{:}A \vdash \Delta \, tele$. I don’t see it as a matter of the strength of the assumptions—it’s that to state the theorem I need to talk about a $\Delta$.

    • CommentRowNumber67.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    In this style, I think you do formally need telescopes, because to talk about the “middle” of the context, you want to say $\Gamma, x{:}A, \Delta$, where $\Gamma, x{:}A \vdash \Delta \, tele$. I don’t see it as a matter of the strength of the assumptions—it’s that to state the theorem I need to talk about a $\Delta$.

    To clarify the point about assumption strength, I will give an example. The current weakening rule on the page is (with “tele” punctuation added):

    $$\frac{\Gamma \vdash \Gamma' \, tele \qquad \Gamma, \Gamma' \vdash J}{\Gamma, x:A, \Gamma' \vdash J}$$

    My point was that I think you could use:

    $$\frac{\Gamma, \Gamma' \, ctx \qquad \Gamma, \Gamma' \vdash J}{\Gamma, x:A, \Gamma' \vdash J}$$

    The assumption seems stronger than needed, but because context validity is a presupposition, we actually expect to know it whenever we need to use the rule. The point would be to avoid ever talking about validity of telescopes. Raw telescopes are raw contexts, and are thus handled already.

    I don’t have an opinion about any of this; I think substitutions are always gross.

    • CommentRowNumber68.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    A ramble about context morphisms vs. small substitutions:

    I’m very used to doing small substitutions with de Bruijn indexes. I’ve lost track of how many formalizations I’ve done that way. When I did the formalized interpreter, I wanted to keep doing that, but it turned out that I needed to think in terms of context morphisms for semantic contexts.

    So I made a library implementing all the context morphisms corresponding to my favored small substitutions. I was trying to make the semantics work like the syntax. I would say that this part of the development went well. (And it involved semantic telescopes!)

    It seems like Mike wants to go the other way, and make the syntax work like the semantics, by implementing context morphisms for raw expressions and derivations. I think you could do this, but the reason we need substitutions in the first place is that the derivation rules use them. Most of them are pretty small. The biggest one is for the variable rule, where you need to weaken a type from the context all the way out of it. (That one was a pain in the butt for me.)

    So I don’t know how nicely things go, trying to use context morphisms everywhere. Wouldn’t it be funny if we end up defining small substitutions in terms of context morphisms in terms of small substitutions?

    • CommentRowNumber69.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    I don’t completely understand what you and Dan are talking about, but I’m confused by #67, because a valid telescope is not a valid context in its own right. The presupposition that $\Gamma, x{:}A, \Gamma' \, ok$ does not imply that $\Gamma' \, ok$ (or that $\Gamma \vdash \Gamma' \, tele$), because $\Gamma'$ will probably involve free variables from $\Gamma, x{:}A$.

    • CommentRowNumber70.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Let me ask this question: is it possible to formulate dependent type theory, including the usual admissible rules it satisfies, without having formal judgments for context or telescope validity? I always thought that it was, and that when such judgments were introduced it was a choice made for a particular reason; but after this discussion I’m not sure any more.

    • CommentRowNumber71.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018
    • (edited Nov 1st 2018)

    Re #70: No, I’m pretty sure you need at least context validity for the substitution principles.

    Maybe I made things all confusing by defining valid telescopes before valid contexts. Usually you don’t do that, and there are basically only two versions of context validity, corresponding to whether or not it’s an input to type validity. (Edit: No; actually whether it’s an output. Not sure if it’s ever neither.)

    • CommentRowNumber72.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    So the usual strategy is to define “total renamings” (variable-for-variable substitution between the whole contexts, which includes weakening, exchange, contraction), and then substitution after that.

    Ah, so a syntactic version of what is written right now (v1) at Initiality Project - Substitution. This is what I’d incline towards doing in the syntax too, then; I’d like to avoid messing around with telescopes if we can.

    In category theory, I’ve found that doing things in an unbiased way often makes things simpler, and at least sometimes that seems to have advantages in syntax too (e.g. the comparison between Theorems 2.3.5 and 2.3.8 in my categorical logic notes).

    • CommentRowNumber73.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Re #69: The “tele” version works because we receive from the conclusion that $\Gamma, x{:}A, \Gamma' \, ctx$, which means that $\Gamma \, ctx$. (At least syntactically. Does this break the semantics?) Combining that with the $\Gamma \vdash \Gamma' \, tele$ premise, we get $\Gamma, \Gamma' \, ctx$.

    • CommentRowNumber74.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Is it possible that with approach (1) you don’t need a context validity judgment? If $\Gamma \vdash t:A$ implies $\Gamma\,ok$, then we can state weakening for instance as

    $$\frac{\Gamma \vdash A\,type \qquad \Gamma,\Delta \vdash b:B}{\Gamma,x:A,\Delta \vdash b:B}$$

    and the validity of $\Gamma,\Delta$ together with $\Gamma \vdash A\,type$ should imply the validity of $\Gamma,x:A,\Delta$.

    • CommentRowNumber75.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Oh, sorry, I misread “$\Gamma,\Gamma' \, ctx$” as “$\Gamma\,ctx$ and $\Gamma'\,ctx$” when it should have been $(\Gamma,\Gamma')\,ctx$. My bad.

    • CommentRowNumber76.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    That weakening rule looks right. With approach (1), context validity is usually defined mutually with the typing derivations, as we discussed before. But it came up that Barendregt got clever and avoided them completely. I didn’t think that counted.

    I’m not sure why you’re worried about context validity all of a sudden.

    • CommentRowNumber77.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Well, as I said in #70, I always thought that having a judgment for context validity wasn’t actually necessary, so I want to know whether I’ve been misled all this time! And practically speaking, I think the fewer judgments we have, the simpler the induction should be.

    • CommentRowNumber78.
    • CommentAuthordlicata335
    • CommentTimeNov 1st 2018

    Re 67: I think I understand and mostly agree. The two things I’d say are:

    (1) Personally, I always prefer having more rules with clear invariants (even if this means having some “obvious” duplication) to doing subtle things, because (especially in an informal proof) if you have good, consistent invariants then it’s more likely that you won’t screw things up in a way that you don’t notice. I don’t really agree that raw telescopes are raw contexts (unless raw is super-raw and doesn’t even keep track of free variables) — raw contexts don’t have free variables, whereas raw telescopes are relative to a context of free variables. So I think of the $\Gamma'$ as a different kind of thing than $\Gamma$, and in that case there should be a formation judgement for that kind of thing.

    (2) The difference between the top rule and the bottom rule is that the bottom rule re-checks $\Gamma$, whereas with the obvious inductive definition of $\Gamma \vdash \Gamma'\,tele$ you don’t, which matches the convention we’re using elsewhere.

    • CommentRowNumber79.
    • CommentAuthordlicata335
    • CommentTimeNov 1st 2018
    • (edited Nov 1st 2018)

    Re #77 and previous: My understanding is that in (1), a rule like

    $$\frac{\Gamma\,ok}{\Gamma \vdash triv : unit}$$

    needs a context formation premise, so it has to be mutual with the other judgements. Or, if you want to push it further and say that $\Gamma \vdash A\,type$ should imply $\Gamma\,ok$, then

    $$\frac{\Gamma\,ok}{\Gamma \vdash unit\,type}$$

    needs it. How do you get away without this?

    • CommentRowNumber80.
    • CommentAuthordlicata335
    • CommentTimeNov 1st 2018

    BTW, just as a general comment, I definitely think the Peter strategy (I think Guillaume was also doing things this way when he talked about initiality in Pittsburgh), and formulating the type theory to match it, is the right thing to try first. Proofs of syntactic properties are generally easier if you follow the natural flow of information through the rules.

    • CommentRowNumber81.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Re: #79 That makes sense; I guess I’ve been wrong all this time. But it seems like (3) avoids a context validity judgment in the primitive rules, although we need to define it and incorporate it before getting to the admissible ones?

    • CommentRowNumber82.
    • CommentAuthorMike Shulman
    • CommentTimeNov 1st 2018

    Re #80, just to be clear, by this you mean (2) or (3)? Do you still prefer (2)?

    • CommentRowNumber83.
    • CommentAuthordlicata335
    • CommentTimeNov 2nd 2018
    • (edited Nov 2nd 2018)

    Re #59: Terminology aside, I mostly agree with this, especially the point about (3) needing semantic validity assumptions for most theorems (not sure about the “is perhaps not so useful for syntax” :-) ). Terminology-wise, I would say “invariant” for any of (1), (2), (3) (“it’s an invariant of the judgement that when G |- a : A, G ok”), and I liked “postcondition” for (1); and maybe “intrinsic presupposition” for (2), since it’s part of the (meta)-type; and “extrinsic presupposition” for (3), because it’s external. I think “presupposition” has been used in confusing ways in the type theory literature, which perhaps includes (1).

    But I’m not sure whether we’re all on completely the same page about (3) at a technical level. For concreteness, let’s say that Foo(n) is supposed to mean that n <= 7, but to carry the presupposition that n is nonzero. So the “inference rules” (assuming a <= function valued in bool) are

    ```
    data Foo : Nat -> Set where
      foo : (n : Nat) -> ((n <= 7) == true) -> Foo n
    ```

    So Foo(0) is true, but for whatever reason we want it to be illegal. (I’m thinking of this as a proxy for $x : A \vdash x : A$ when $A$ is ill-formed.)

    Then defining $RealFoo(n) := (n \neq 0) \times Foo(n)$ and using that as the assumption of theorems involving Foo seems like the right way to work with (3) in the style of (2) (the motivation being avoiding induction-induction):

    If $RealFoo(n)$ then blah

    i.e.

    If $n \neq 0$ and $Foo(n)$ then blah

    because then you assume the presupposition before assuming $Foo(n)$.
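    For concreteness, here is how this might look in Lean (a hypothetical sketch; all names are invented, and I use the propositional ≤ rather than the boolean one above):

    ```lean
    -- Foo n is derivable for any n ≤ 7, including the "illegal" n = 0:
    inductive Foo : Nat → Prop where
      | foo : (n : Nat) → n ≤ 7 → Foo n

    -- RealFoo bundles the presupposition with the judgement:
    def RealFoo (n : Nat) : Prop := n ≠ 0 ∧ Foo n

    -- A theorem "If RealFoo(n) then blah" assumes the presupposition
    -- before (and alongside) Foo n:
    example (n : Nat) (h : RealFoo n) : n ≤ 7 := by
      cases h.2 with
      | foo _ le => exact le
    ```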

    On the other hand, in 71 in the other thread, I think Mike’s suggestion is to work with $n \neq 0 \to RealFoo(n)$ (unless we were inconsistent about which one is the primed turnstile, in which case it’s $n \neq 0 \to Foo(n)$, but these are equivalent). But $(n \neq 0 \to Foo(n))$ doesn’t seem like a good assumption for a theorem, because it’s trivially true whenever the presupposition is violated (in this case n = 0), even if $Foo(n)$ weren’t true, so the theorem would have to handle that case.

    Maybe the difference is that you were thinking of $n \neq 0 \to RealFoo(n)$ as a conclusion of a theorem? In which case I agree that a typical theorem would likely assume $n \neq 0$, or something else that implies it.

    I still need to think some more about the proposed local totality theorems though.

    • CommentRowNumber84.
    • CommentAuthorMike Shulman
    • CommentTimeNov 2nd 2018

    I agree with your definition of $RealFoo$. What I meant was that although in fact $Foo$ is the inductive property and $RealFoo(n) \coloneqq (n\neq 0) \times Foo(n)$, we think of $RealFoo$ as the property we care about and think of $Foo(n)$ as meaning $(n\neq 0) \to RealFoo(n)$. I’m not suggesting we actually work with $(n\neq 0) \to RealFoo(n)$, it’s just how we think about the meaning of $Foo(n)$. (Of course it’s not exactly the same as $Foo(n)$, because $Foo(0)$ isn’t necessarily true, but the point is that we should only consider ourselves allowed to conclude something from $Foo(n)$ if we also know that $n\neq 0$.)

    I think of the local totality theorems as matching this intuition.

    • CommentRowNumber85.
    • CommentAuthorMike Shulman
    • CommentTimeNov 2nd 2018

    In case it is helpful, let me give an analogy to the local totality theorems using Foo. Suppose we want to define a function $f$ from natural numbers to, say, lists of integers, and prove that whenever $RealFoo(n)$ holds then $Bar(f(n))$ holds. With the definition $RealFoo(n) \coloneqq (n\neq 0) \times Foo(n)$, how can we do that? If we think of $Foo(n)$ as meaning $(n\neq 0) \to RealFoo(n)$, so that $(n\neq 0) \times Foo(n)$ is like $(n\neq 0) \times ((n\neq 0) \to RealFoo(n))$, that suggests we find some other property in the world of lists corresponding to $n\neq 0$ as a “stage” on the way to $Bar(f(n))$. For instance, we could prove that (a) whenever $n\neq 0$ then $f(n)$ has even length, and (b) whenever $Foo(n)$, if $f(n)$ has even length then $Bar(f(n))$. Then putting these together, we get that if $(n\neq 0) \times Foo(n)$ then $(2 \mid len(f(n))) \times ((2 \mid len(f(n))) \to Bar(f(n)))$, hence $Bar(f(n))$.

    Here n0n\neq 0 is syntactic context validity, Foo(n)Foo(n) is the (3)-style judgments, ff is the partial interpretation, n|len(f(n))n|len(f(n)) is semantic context validity, and Bar(f(n))Bar(f(n)) is definedness of the partial interpretation. So the local totality theorem has the form “given a derivable syntactic judgment, insofar as the semantic context is valid, the interpretation of the term is defined.” It’s not a perfect analogy by any means, but I think it conveys the important intuition I have.

    • CommentRowNumber86.
    • CommentAuthorkyod
    • CommentTimeNov 2nd 2018
    • (edited Nov 2nd 2018)

    Re #79: In my understanding of the style in which we are currently setting up the rules (the (3)-style?), the rules for unit would be

    $$\frac{}{\Gamma \vdash unit\,type}\qquad(\text{UnitTypeWF})$$

    and

    $$\frac{}{\Gamma \vdash triv \Leftarrow unit}\qquad(\text{TrivIntro})$$

    Locally to the rule there is no assumption that $\Gamma$ is valid, in the sense that we can derive this for an arbitrary context (not even syntactically well scoped).

    However, given a valid context $\Gamma$ (that is, a raw context $\Gamma = (x_1:A_1, \ldots, x_n:A_n)$ with pairwise distinct variables together with a list of derivations $x_1:A_1, \ldots, x_k:A_k \vdash A_{k+1}\,type$ – in particular $\Gamma$ is well-scoped – which I abbreviate $\vdash \Gamma\,ok$), we can now derive both the pair $(\vdash \Gamma\,ok;\; \Gamma \vdash unit\,type)$ and the triple $(\vdash \Gamma\,ok;\; \Gamma \vdash unit\,type;\; \Gamma \vdash triv \Rightarrow unit)$, which are the actual objects of interest (in the sense that there should be a bijective correspondence between these pairs/triples and the derivations obtained in the (1)-style).

    TBH, it is the first time I’m seeing such a presentation, and I’m not sure whether it will go through, but it has two really nice aspects to it:

    1. it seems that it breaks the mutually recursive loop, allowing a simpler, stratified presentation of the judgements (of course we still have to set up the judgements with context validity in mind)
    2. it seems to factor out, at the root of the derivation tree (in the pair/triple), redundant premises located at the leaves in the (1)-style

    Re #53, #67: I now have doubts about the necessity of the $\Gamma \vdash \Gamma'$ premise in weakening: if I am given a derivation $\mathcal{D}$ of $\Gamma, \Gamma' \vdash \mathcal{J}$, I can build by structural induction on $\mathcal{D}$ (the induction happening mutually for the 5 kinds of judgments) a new derivation $\mathcal{D}'$ of $\Gamma, x:A, \Gamma' \vdash \mathcal{J}$ for $x \notin \Gamma, \Gamma'$ and any raw term $A$, even if $\Gamma, \Gamma'$ is valid but $\Gamma, x:A, \Gamma'$ is not, and vice versa. The reason is that we don’t expect the context to be valid in order to derive a judgement (even though at the end of the day, we are only interested in the derivable judgements whose context is valid).

    So the rule

    $$\frac{\Gamma, \Gamma' \vdash \mathcal{J}}{\Gamma, x:A, \Gamma' \vdash \mathcal{J}}$$

    should be admissible (where $x$ is distinct from any variable in $\Gamma, \Gamma'$, so that $\Gamma, x:A, \Gamma'$ is indeed a raw context). However, if we want a corresponding weakening rule on pairs $(\vdash \Gamma, \Gamma'\,ok;\; \Gamma, \Gamma' \vdash B\,type)$ for instance, we would need the more familiar

    $$\frac{\Gamma \vdash A\,type \qquad (\vdash \Gamma, \Gamma'\,ok;\; \Gamma, \Gamma' \vdash B\,type)}{(\vdash \Gamma, x:A, \Gamma'\,ok;\; \Gamma, x:A, \Gamma' \vdash B\,type)}$$

    I’m quite puzzled by this (in particular the fact that, in the weakening rule, there is no relation between $\Gamma, \Gamma'$ being a valid context and $\Gamma, x:A, \Gamma'$ being valid), and wondering whether I made a wrong step somewhere. Does that seem right/wrong?

    • CommentRowNumber87.
    • CommentAuthorMike Shulman
    • CommentTimeNov 2nd 2018

    I think #79 was about style (1). I agree (#81) that style (3) does seem to avoid the need for a mutually defined context validity judgment, breaking this particular mutually recursive loop (although there are other such loops: the type judgment, term judgment, and equality judgments still have to be mutually defined).

    • CommentRowNumber88.
    • CommentAuthorkyod
    • CommentTimeNov 2nd 2018
    • (edited Nov 2nd 2018)

    Re #87 Right, there are other loops, and here is the current situation (with only $\Pi$-types):

    1. $\Gamma \vdash A\,type$: depends on nothing
    2. $\Gamma \vdash T \Leftarrow A$: depends on (3), (4)
    3. $\Gamma \vdash T \Rightarrow A$: depends on (1), (2)
    4. $\Gamma \vdash A \equiv B$: depends on (1)
    5. $\Gamma \vdash T \equiv S : A$: depends on (1), (2)
    6. $\vdash \Gamma\,ok$ (and more generally $\Gamma \vdash \Gamma'\,ok$): depends on (1)

    Of course, adding universes introduces other (necessary, I think) dependencies:

    • (1) depends on (3)
    • (4) depends on (5).

    So I guess, as long as we don’t have any construction binding/representing arbitrary contexts, we do not need to mutually define the context validity judgement.
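    (As a minimal illustration of the stratification, in hypothetical Haskell with invented names: if none of the mutually defined checkers for (1)-(5) mentions context validity, then (6) can be layered on top afterwards by plain recursion on the raw context.)

    ```haskell
    -- okCtx is parametrized over an already-defined judgement (1),
    -- so it sits outside the mutual loop; contexts are innermost first.
    okCtx :: ([ty] -> ty -> Bool)  -- Γ ⊢ A type, assumed given
          -> [ty] -> Bool
    okCtx isType []      = True
    okCtx isType (a : g) = okCtx isType g && isType g a
    ```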

    • CommentRowNumber89.
    • CommentAuthorMike Shulman
    • CommentTimeNov 2nd 2018

    Note that markdown automatically numbers all numbered lists starting from 1; so perhaps your references may not be displaying the way you intended?

    I also don’t quite know what you mean by “depends on”.

    • CommentRowNumber90.
    • CommentAuthorMike Shulman
    • CommentTimeNov 2nd 2018

    Oh, does “depends on” mean “occurs as the conclusion of rules whose premises contain instances of”? That makes sense if I increment all your number references.

    The additional dependencies that you mention “from universes” I think actually happen whenever you have any way to produce nontrivially dependent types, which includes universes but also, for instance, identity types.

    • CommentRowNumber91.
    • CommentAuthorkyod
    • CommentTimeNov 2nd 2018

    @Mike Thanks, yes, the list was supposed to start at 0 (I edited it), and “depends on” means “is used as a premise”. And indeed any type former actually depending on terms will introduce the last two dependencies.

    • CommentRowNumber92.
    • CommentAuthoratmacen
    • CommentTimeNov 2nd 2018
    • (edited Nov 2nd 2018)

    Shucks, Kenji’s absolutely right that our admissible structural rules don’t need context or telescope validity! I just checked my code. Sorry, Mike, I remembered wrong; you were right!

    Here are my admissible rules:

    $$\frac{\Gamma,\Gamma' \vdash T\,type}{\Gamma,\Delta,(sh\,{|\Delta|}\,\Gamma') \vdash (sh\,{|\Delta|}\,{|\Gamma'|}\,T)\,type} \qquad \frac{\Gamma,\Gamma' \vdash t : T}{\Gamma,\Delta,(sh\,{|\Delta|}\,\Gamma') \vdash (sh\,{|\Delta|}\,{|\Gamma'|}\,t) : (sh\,{|\Delta|}\,{|\Gamma'|}\,T)}$$

    $$\frac{\Gamma \vdash a : A \qquad \Gamma,A,\Gamma' \vdash T\,type}{\Gamma,\Gamma'[a] \vdash T[a/{|\Gamma'|}]\,type} \qquad \frac{\Gamma \vdash a : A \qquad \Gamma,A,\Gamma' \vdash t : T}{\Gamma,\Gamma'[a] \vdash t[a/{|\Gamma'|}] : T[a/{|\Gamma'|}]}$$

    This is de Bruijn indexes. Here $(sh\,n\,l\,t)$ shifts the variables of $t$ that are not under $l$ up by $n$.

    Edit: I abuse the shift and substitute notation for contexts too. For contexts, it’s as if $l$ were $0$.
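    For reference, here is a minimal Haskell sketch of the raw operations behind these rules (my own names; not the actual development):

    ```haskell
    data Tm = Var Int | Pi Tm Tm | App Tm Tm
      deriving Show

    -- sh n l t: add n to every variable of t that is not under l,
    -- i.e. whose de Bruijn index is >= l.
    sh :: Int -> Int -> Tm -> Tm
    sh n l (Var i)   = Var (if i >= l then i + n else i)
    sh n l (Pi a b)  = Pi (sh n l a) (sh n (l + 1) b)  -- codomain binds one
    sh n l (App f a) = App (sh n l f) (sh n l a)

    -- subst a k t, i.e. t[a/k]: replace variable k of t by a (a term
    -- over the context k entries further out), lowering the variables
    -- above k by one.
    subst :: Tm -> Int -> Tm -> Tm
    subst a k (Var i)
      | i == k    = sh k 0 a       -- weaken a past the k entries of Γ'
      | i > k     = Var (i - 1)
      | otherwise = Var i
    subst a k (Pi s t)  = Pi (subst a k s) (subst a (k + 1) t)
    subst a k (App f s) = App (subst a k f) (subst a k s)
    ```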

    • CommentRowNumber93.
    • CommentAuthorMike Shulman
    • CommentTimeNov 2nd 2018

    I don’t think I was right about this – I’m as surprised as you are! I guess the point is that although we need to state the primitive rules in such a way as to maintain the “presupposition invariant”, whereby the validity of the conclusion context can be used to deduce (perhaps in order) validity of the premise contexts, we don’t need to force ourselves to state the admissible rules in a similar way?

    • CommentRowNumber94.
    • CommentAuthoratmacen
    • CommentTimeNov 2nd 2018

    > I don’t think I was right about this – I’m as surprised as you are!

    Huh?

    Anyway, the sanity check still needs context validity. So maybe you weren’t entirely right? But my correction was incorrect, since I was thinking the structural rules needed it.

    > we don’t need to force ourselves to state the admissible rules in a similar way?

    Well, admissible rules are theorems. You can state them however you like if you can prove them. It’s evidently quite subtle what you need, if you don’t think about the actual proof. For the structural rules, you don’t need context validity, for roughly the same reason the raw structural operations don’t need contexts, plus some luck. But you definitely don’t always get this lucky.

    • CommentRowNumber95.
    • CommentAuthoratmacen
    • CommentTimeNov 2nd 2018

    Re #78:

    > I don’t really agree that raw telescopes are raw contexts (unless raw is super-raw and doesn’t even keep track of free variables) — raw contexts don’t have free variables, whereas raw telescopes are relative to a context of free variables.

    I see. But yes, currently, our raw syntax really is that raw.

    There’s an oddity to it though, which is that raw contexts are constrained to have pairwise-distinct variable names. I don’t see any reason for that; the raw terms and types don’t track free variables in their (meta)type, so the development of raw syntax doesn’t itself rely on the constraint. And the context validity judgment enforces it redundantly.

    I like raw syntax. The typing rules cook it all the way through. Anything less is undercooked. If it needs more cooking, it might as well be raw.

    > The difference between the top rule and the bottom rule is that the bottom rule re-checks $\Gamma$, whereas with the obvious inductive definition of $\Gamma \vdash \Gamma'\,tele$ you don’t, which matches the convention we’re using elsewhere.

    Right. And that’s what I meant when I said that $\Gamma,\Gamma'\,ctx$ is stronger than needed. But think of programming: you have a subroutine that wants to use something you should already have, so you just pass it along. No need to take it apart on one end and put it back together on the other just because you can.

    Also, if we followed your approach of assuming the presupposition along with the judgment, we’d simply assume $\Gamma,\Gamma'\,ctx$.

    But it turns out you don’t need either. There’s no cleverness in avoiding it; you just don’t need it.

    • CommentRowNumber96.
    • CommentAuthoratmacen
    • CommentTimeNov 2nd 2018

    Re #83:
    I like “invariant” as a general term for (1), (2), or (3).

    I don’t think what you’re calling “extrinsic presupposition” is the same as (3), which I think should be called “precondition” for symmetry with (1) being called “postcondition”. With (3), you don’t do anything to try to bundle things up with their invariant(s). In your Foo(n) example, you’d just use Foo. You’d also never use $\{n \mid n \neq 0\}$. I think “intrinsic presuppositions” and “extrinsic presuppositions” are both variants of (2). You had me convinced already though that (2) and (3) can be very similar.

    As far as extrinsic presuppositions being a way to avoid induction-induction: first of all, Foo is a bad example for this, because the intrinsic-presupposition version of Foo still wouldn’t be inductive-inductive. Second, when you’re actually inductive-inductive, getting the induction-induction principle from the extrinsic-presupposition style is quite involved. Without UIP, I think it’s unsolved.

    • CommentRowNumber97.
    • CommentAuthoratmacen
    • CommentTimeNov 2nd 2018

    I think a unit type would go like this:

    $$\frac{}{\Gamma \vdash unit\,type} \qquad \frac{}{\Gamma \vdash triv \Rightarrow unit}$$

    The typing rule should be synthesizing, and I think Dan’s “unittype” was a TeXnical difficulty.

    • CommentRowNumber98.
    • CommentAuthoratmacen
    • CommentTimeNov 2nd 2018

    It looks like the exchange rule doesn’t need $\Gamma \vdash B\,type$. Here’s a proof, using precondition-free weakening and substitution, and an “obvious” alpha renaming, which is actually the tricky part. Do not trust this!

    $\Gamma,x:A,y:B,\Gamma' \vdash J$ (1 = hypothesis)
    $\Gamma,y':B,x:A,y:B,\Gamma' \vdash J$ (where $y'$ is fresh) (2 = weaken 1)
    $\Gamma,y':B,x:A \vdash y' \Rightarrow B$ (3 = variable rule)
    $\Gamma,y':B,x:A,\Gamma'[y'/y] \vdash J[y'/y]$ (4 = substitute 3 into 2)
    $\Gamma,y:B,x:A,\Gamma'[y'/y][y/y'] \vdash J[y'/y][y/y']$ (5 = alpha rename 4)
    $\Gamma,y:B,x:A,\Gamma' \vdash J$ (same as 5, since $y'$ is fresh)

    The reason why the alpha rename doesn’t affect $A$ is because:

    • $y' \notin FV(A)$, because it’s fresh
    • $y \notin FV(A)$, because ???

    So the problem is that the context could be a bunch of nonsense, and $A$ could actually be just $y$, why not? So I think the correct additional premise is $\Gamma \vdash A\,type$, which ought to imply that the free variables of $A$ are entries of $\Gamma$. And we know that $y$ isn’t an entry in $\Gamma$ because $\Gamma,x:A,y:B,\Gamma'$ is a raw context, and it’d have duplicate entries if $\Gamma$ had $y$. So maybe that constraint was a good idea?

    But you see (drumroll): we could also just have $y \notin FV(A)$ as the premise! Who’d’ve thought it?

    • CommentRowNumber99.
    • CommentAuthorMike Shulman
    • CommentTimeNov 2nd 2018

    I originally had allowed raw contexts to contain duplicate variables. But then I wasn’t sure what to do about duplicate variables when defining the partial interpretation of contexts. I suppose we could just declare that if there are duplicate variables then the interpretation is empty. But that really makes it feel stupid to allow duplicate variables in the first place, if we’re throwing away that possibility “by hand” as soon as we start doing anything. And actually I think the same thing is happening with ill-scoped contexts, only less explicitly: as soon as there is a type in the context containing a free variable (that doesn’t occur to the left of it), the partial interpretation of that type has empty domain, hence the partial interpretation of any context containing it is empty.

    This inclines me towards Dan’s point of view that a raw context should at least be well-scoped. This is a simple property that can be defined on raw syntax without any inductive-inductive problems, and since my motivation for using raw things is “pragmatic” rather than philosophical I don’t have a problem imposing it.
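    A minimal sketch of that well-scopedness check, in hypothetical Haskell (the syntax type is invented, with just enough structure for scoping):

    ```haskell
    import qualified Data.Set as Set

    data Expr = V String | Ap Expr Expr | Bind String Expr Expr

    -- free variables of a raw expression
    fv :: Expr -> Set.Set String
    fv (V x)        = Set.singleton x
    fv (Ap a b)     = Set.union (fv a) (fv b)
    fv (Bind x a b) = Set.union (fv a) (Set.delete x (fv b))

    -- a raw context (outermost entry first) is well-scoped when its
    -- variables are pairwise distinct and each type mentions only
    -- variables declared strictly earlier
    wellScoped :: [(String, Expr)] -> Bool
    wellScoped = go Set.empty
      where
        go _ [] = True
        go seen ((x, a) : rest) =
          not (x `Set.member` seen)
            && fv a `Set.isSubsetOf` seen
            && go (Set.insert x seen) rest
    ```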

    • CommentRowNumber100.
    • CommentAuthordlicata335
    • CommentTimeNov 3rd 2018
    • (edited Nov 3rd 2018)

    Should the substitution principle be stated with $M \Rightarrow A$ or $M \Leftarrow A$? It seems like $\Leftarrow$ to me, because that’s the “most general” judgement (and the equivalent of $:$ in the non-bidirectional system). If $M$ requires a definitional equality type coercion to have type $A$, I should still be able to substitute it for a variable of type $A$.