    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeOct 26th 2018

    Copied over the formation/introduction/elimination rules, added beta and eta

    v1, current

    • CommentRowNumber2.
    • CommentAuthorMike Shulman
    • CommentTimeOct 26th 2018

    What is the best way in nLab syntax to space out a bunch of rules like this vertically a little bit, so they don’t run together and are more readable?

    • CommentRowNumber3.
    • CommentAuthorTodd_Trimble
    • CommentTimeOct 26th 2018
    • (edited Oct 26th 2018)

    Provided spacing. (All I did is put \, between double dollar signs, to create an empty line.)

    diff, v2, current

    • CommentRowNumber4.
    • CommentAuthorMike Shulman
    • CommentTimeOct 26th 2018

    That looks good, thanks. I had tried creating an ordinary blank paragraph in between, but that made the space way too big; a blank displayed equation seems to work much better.
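
    For illustration, Todd’s fix amounts to inserting an empty display between consecutive displayed rules in the page source, roughly like this (the two rules shown are placeholders, not the page’s exact rules):

    $$\frac{\Gamma \vdash A \; type \qquad \Gamma, x:A \vdash B \; type}{\Gamma \vdash \Pi(x:A).B \; type}$$

    $$\,$$

    $$\frac{\Gamma \vdash M : \Pi(x:A).B \qquad \Gamma \vdash N : A}{\Gamma \vdash App(M,N) : B[N/x]}$$

    The middle display contains only \, and so renders as a nearly empty line, which is what provides the extra vertical space.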

    • CommentRowNumber5.
    • CommentAuthoratmacen
    • CommentTimeOct 27th 2018
    Technically, that's function extensionality, not eta. They are interchangeable in extensional type theory.
    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeOct 27th 2018

    No, that is eta. Function extensionality is a statement involving the identity type, which is not mentioned here. Eta-conversion is “function extensionality for judgmental equality”, so yes they are equivalent in extensional type theory, but in intensional type theory this one is eta.
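
    To make the contrast concrete, here is a sketch of the two principles in approximate notation, not quoted from the page (writing $Id$ for the identity type and $funext$ for the extensionality constant, names not taken from the page): eta is a judgmental equality about any term of $\Pi$-type, while function extensionality produces a term of an identity type.

    $$\frac{\Gamma \vdash M : \Pi(x:A).B}{\Gamma \vdash M \equiv \lambda(x:A.B).App(M,x) : \Pi(x:A).B} \qquad (\eta)$$

    $$\frac{\Gamma \vdash f : \Pi(x:A).B \qquad \Gamma \vdash g : \Pi(x:A).B \qquad \Gamma \vdash p : \Pi(x:A).Id_B(App(f,x), App(g,x))}{\Gamma \vdash funext(p) : Id_{\Pi(x:A).B}(f,g)} \qquad (funext)$$

    In extensional type theory, equality reflection collapses the identity type into judgmental equality, which is why the two become interchangeable there.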

    • CommentRowNumber7.
    • CommentAuthoratmacen
    • CommentTimeOct 28th 2018
    Oh, you're right. I should've known from reading those bidirectional equality algorithms, but it didn't register.
    • CommentRowNumber8.
    • CommentAuthorkyod
    • CommentTimeOct 31st 2018

    Added congruence rules for $\Pi$, $\lambda$, $App$.

    In the two rules for $\lambda$ and $App$, we cannot remove the premises of type equality, in the first case because $\Pi$ need not be an injective type former if we were to add equality reflection.

    An alternative (less biased) presentation would be to replace the types in the conclusions ($\Pi(x:A).B$ and $B[N/x]$) by some dangling type $C$ and rely on the input validity assumption, for instance $\Gamma \vdash \lambda(x:A.B).M : C$, to deduce that $\Gamma \vdash C \equiv \Pi(x:A).B$.

    diff, v4, current
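
    For readers following along, here is roughly the shape of the two presentations being contrasted, using the $\lambda$-congruence rule as the example (an illustrative sketch, not a quotation of the page’s rules). The “biased” form fixes the type in the conclusion:

    $$\frac{\Gamma \vdash A \equiv A' \qquad \Gamma, x:A \vdash B \equiv B' \qquad \Gamma, x:A \vdash M \equiv M' : B}{\Gamma \vdash \lambda(x:A.B).M \equiv \lambda(x:A'.B').M' : \Pi(x:A).B}$$

    The less-biased form leaves a dangling type $C$ in the conclusion:

    $$\frac{\Gamma \vdash A \equiv A' \qquad \Gamma, x:A \vdash B \equiv B' \qquad \Gamma, x:A \vdash M \equiv M' : B}{\Gamma \vdash \lambda(x:A.B).M \equiv \lambda(x:A'.B').M' : C}$$

    relying on validity of the judgment being an input, so that $\Gamma \vdash C \equiv \Pi(x:A).B$ can be deduced rather than imposed.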

    • CommentRowNumber9.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018
    Should these sub-pages really have a category marker at the end? It shows up when it's included.
    • CommentRowNumber10.
    • CommentAuthorAli Caglayan
    • CommentTimeOct 31st 2018

    @atmacen yes, that was my mistake. Mike said elsewhere we should get rid of category markers for subpages. It would be nice if subpage inclusion got rid of them automatically but sadly this isn’t the case.

    • CommentRowNumber11.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    Yes, all congruence rules have to check equality of all subterms, including the annotations. I think the “biased” way is probably better.

    Although actually, it occurs to me to wonder: since all of our fully-annotated terms synthesize their types, is there really any need for the type annotation on term equality? Could we just write $\Gamma \vdash \lambda(x:A.B).M \equiv \lambda(x:A'.B').M'$?

    • CommentRowNumber12.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    I’ve just started worrying about congruence rules myself.

    We have been saying that validity is an input to equality judgments. But this is different from what Mike proposes to do for totality, where, for the induction motive for equality, totality is not assumed or established. He uses equality of partial objects. (By partial objects, I mean subsingleton subsets, which are nullary partial functions. Equality of partial functions is pointwise equality of partial objects.)

    What seems to syntactically correspond to equality of partial objects is that validity is neither an input nor output for equality. It’s not presupposed at all, since the intended semantics of equality is Kleene equality. This seems awfully over-fit to Streicher’s method.

    I get the feeling that we’re getting too clever about the rules. Mike just proposed getting rid of the type annotation on element equality judgments. In other words, switching to untyped equality. At the very least, this seems to require a different form of $\Pi$ eta. I don’t know if it works at all for $1$ uniqueness.

    • CommentRowNumber13.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    No, it’s not Kleene equality. Kleene equality is “if either is defined, so is the other, and they’re equal.” My proposal for totality interprets equality as “if both are defined, then they’re equal.”
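
    In symbols, for partial objects $a$ and $b$ (writing $a{\downarrow}$ for “$a$ is defined”), the contrast is roughly:

    $$\text{Kleene equality:} \quad (a{\downarrow} \vee b{\downarrow}) \;\Rightarrow\; (a{\downarrow} \wedge b{\downarrow} \wedge a = b)$$

    $$\text{the reading proposed for totality:} \quad (a{\downarrow} \wedge b{\downarrow}) \;\Rightarrow\; a = b$$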

    • CommentRowNumber14.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    You may be right that untyped equality won’t work with eta, though, even with fully annotated terms…

    • CommentRowNumber15.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018
    • (edited Oct 31st 2018)

    Re #13:
    I see the wording on Totality changed. Was this the intended meaning all along, and I just misunderstood it? (That would answer my question here.)

    Anyway, this doesn’t affect my larger point that what we’re doing semantically doesn’t correspond to our working intuition about presuppositions and modes.

    Edit: I’m wrong. Now I’m thinking this does fix the correspondence.

    • CommentRowNumber16.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    Yes, that was the intended meaning all along. Sorry for the confusion! And sorry for not notifying the change, I thought it was just a clarification but now I can see how the original version could have been interpreted differently. I do believe that this makes the correspondence have the same directions of information flow, indeed that’s why I chose it.

    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    By the way, this phrasing of totality is not actually what Streicher did. Instead it goes along with Peter Lumsdaine’s suggestion to reorient Streicher’s proof by making validity an input semantically.

    • CommentRowNumber18.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    I am relieved to hear #16. Re #17, I didn’t know Peter changed equality like that. In his note, he mostly talks about the partial interpretation, not the rules he used or the statement of totality.

    • CommentRowNumber19.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    I wasn’t saying that Peter actually suggested that change to equality, only that I feel that it’s a natural change that is consistent with, and suggested by, what he did propose explicitly. Maybe he will drop by and tell us whether it’s what he had in mind. (-:

    • CommentRowNumber20.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018
    • (edited Nov 1st 2018)

    I hope so. From what I can tell, he’s one of the few people who’s already been thinking about the details of this proof. He could tell us what we’re still doing wrong. ^_^

    BTW, have you heard from Dan about his suggestion to subset the indexes of the judgment forms?

    Edit: Dan’s later comment

    • CommentRowNumber21.
    • CommentAuthorAli Caglayan
    • CommentTimeNov 29th 2018

    modifying to see why include is breaking

    diff, v7, current

    • CommentRowNumber22.
    • CommentAuthorAli Caglayan
    • CommentTimeNov 29th 2018
    • (edited Nov 29th 2018)

    I think Todd’s spacing was giving the includes a headache. I have used sections instead; please modify, but I don’t think using LaTeX for spacing is a good idea. Perhaps name the rules where I have written “rule”?

    • CommentRowNumber23.
    • CommentAuthorMike Shulman
    • CommentTimeNov 29th 2018

    I tried putting them all together in the same displaymath using gathered and blank lines in between. This seems to work with the include!
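
    For readers wanting to replicate this, a rough sketch of that kind of source (with $\mathcal{J}_1, \ldots, \mathcal{J}_5$ standing in for the actual judgments; exactly how the blank space between rows is produced on the page may differ):

    $$\begin{gathered}
    \frac{\mathcal{J}_1 \qquad \mathcal{J}_2}{\mathcal{J}_3}
    \\[2ex]
    \frac{\mathcal{J}_4}{\mathcal{J}_5}
    \end{gathered}$$

    Keeping all the rules inside a single display avoids the stray paragraphs that were apparently confusing the include mechanism.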

    • CommentRowNumber24.
    • CommentAuthorAli Caglayan
    • CommentTimeMar 29th 2019
    • (edited Mar 29th 2019)
    The fifth rule down is essentially function extensionality for judgemental equality, right?
    • CommentRowNumber25.
    • CommentAuthoratmacen
    • CommentTimeMar 29th 2019

    Yes, see #’s 5-7. Deep understanding of these rules has been evicted from my brain since then. I don’t remember why that rule doesn’t have premises for $A$ and $B$.

    I do remember there were a lot of minor outstanding issues with the technical details on these initiality pages. I figured they should be corrected after decisions were made about the rigorous handling of substitutions. But then that didn’t happen.

    • CommentRowNumber26.
    • CommentAuthorAli Caglayan
    • CommentTimeMar 29th 2019

    So we have:

    1. $\Pi$-formation
    2. $\Pi$-application
    3. $\eta$
    4. $\beta$
    5. $\eta$-$\equiv$
    6. $App$-$\equiv$
    7. ?
    8. $\beta$-$\equiv$
    • CommentRowNumber27.
    • CommentAuthorMike Shulman
    • CommentTimeMar 30th 2019
    1. $\Pi$-formation
    2. $\Pi$-elimination (application)
    3. $\Pi$-introduction ($\lambda$-abstraction)
    4. $\beta$-reduction
    5. $\eta$-conversion
    6. $\Pi$-congruence
    7. $\lambda$-congruence
    8. $App$-congruence

    Re-starting this project is still on my to-do list. Unfortunately a lot of other stuff intervened this semester, both personal and professional.
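
    For orientation, here is what items 4 and 8 of this list conventionally look like in a fully annotated syntax (a textbook-style sketch; the page’s actual rules differ in details such as which premises are included, as discussed in #30–32 below):

    $$\frac{\Gamma, x:A \vdash M : B \qquad \Gamma \vdash N : A}{\Gamma \vdash App(\lambda(x:A.B).M, N) \equiv M[N/x] : B[N/x]} \qquad (\beta\text{-reduction})$$

    $$\frac{\Gamma \vdash A \equiv A' \qquad \Gamma, x:A \vdash B \equiv B' \qquad \Gamma \vdash M \equiv M' : \Pi(x:A).B \qquad \Gamma \vdash N \equiv N' : A}{\Gamma \vdash App(M,N) \equiv App(M',N') : B[N/x]} \qquad (App\text{-congruence})$$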

    • CommentRowNumber28.
    • CommentAuthorAli Caglayan
    • CommentTimeMar 30th 2019

    @Mike the reason I am asking is that I am trying to write a formal exposition of dependent type theory; many of the ideas used here, like Bob Harper’s syntax and bidirectional type checking, are really cool and I want to show them off. This is currently the only place I know of where they are written down; the initiality part is not so critical for me (yet?).

    • CommentRowNumber29.
    • CommentAuthoratmacen
    • CommentTimeMar 30th 2019

    Re #28:

    By “Bob Harper’s syntax”, you mean abstract binding trees? I thought this comes from Harper’s PFPL.

    There are multiple versions of bidirectional type checking, and the rules here are not one of them, since they aren’t an algorithm. The two vanilla styles of bidirectional type checking that I know of are Coquand’s algorithm (and variants), and canonical form type systems, like canonical LF.

    The rules here are bidirectional in that the rules (mostly?) follow an input/output mode discipline, and the judgment forms for the two directions have different modes. In algorithmic rules, the modes would constrain the flow of data in the algorithm’s execution. Here, they’re intended to guide the flow of semantic types in the interpretation function.
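
    For readers unfamiliar with the mode discipline, here is a generic bidirectional fragment in standard textbook style (not the rules on the page): $\Rightarrow$ marks synthesis, where the type is an output, and $\Leftarrow$ marks checking, where the type is an input.

    $$\frac{\Gamma \vdash M \Rightarrow \Pi(x:A).B \qquad \Gamma \vdash N \Leftarrow A}{\Gamma \vdash App(M,N) \Rightarrow B[N/x]}$$

    $$\frac{\Gamma, x:A \vdash M \Leftarrow B}{\Gamma \vdash \lambda x.M \Leftarrow \Pi(x:A).B}$$

    $$\frac{\Gamma \vdash M \Rightarrow A \qquad \Gamma \vdash A \equiv B}{\Gamma \vdash M \Leftarrow B}$$

    Read algorithmically, the modes dictate what the checker computes; read as here, they dictate how semantic types flow through the partial interpretation function.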

    • CommentRowNumber30.
    • CommentAuthorAli Caglayan
    • CommentTimeMar 30th 2019

    Re #29:

    Yes, ABTs are quite useful, since in most of the literature binding and scope are only defined later on. Defining them at the level of syntax makes more sense to me.

    I agree that what we have doesn’t quite give an algorithm, but it shouldn’t be too difficult to adapt Coquand’s work to this no?

    And on another note, I am not quite sure if the $\beta$-rule is correct. Or is this the usual $\beta$ merged with its congruence rule?

    • CommentRowNumber31.
    • CommentAuthoratmacen
    • CommentTimeMar 30th 2019

    I agree that what we have doesn’t quite give an algorithm, but it shouldn’t be too difficult to adapt Coquand’s work to this no?

    Well a type checking algorithm would/could be given as a set of derivation rules with a certain style. These rules deliberately have a different style. You can’t adapt Coquand’s algorithm to handle equality reflection. (Well, you sort of can, if Andromeda counts.) The rules here are anticipating being extended with equality reflection.

    This gets me wondering though whether it might be a good idea to directly interpret AML programs, or something along those lines. Any thoughts, Mike? You’d want the interpretation to only really depend on the term obtained from the AML program, but the rules would all be algorithmic in the precise sense of running AML.

    • CommentRowNumber32.
    • CommentAuthoratmacen
    • CommentTimeMar 30th 2019

    And on another note, I am not quite sure if the $\beta$-rule is correct. Or is this the usual $\beta$ merged with its congruence rule?

    Throwing in some congruence should be harmless. I think it's more unusual that it's missing premises for $M$ and $N$. But that ought to work since everything is input mode.
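
    To illustrate the point being made (an illustrative guess at the shape of such a rule, not a quotation of the page’s rule), a $\beta$-rule with some congruence folded in but without premises for $M$ and $N$ might look like:

    $$\frac{\Gamma \vdash A \equiv A' \qquad \Gamma, x:A \vdash B \equiv B'}{\Gamma \vdash App(\lambda(x:A'.B').M, N) \equiv M[N/x] : B[N/x]}$$

    Because validity of the conclusion is an input to the equality judgment, the facts $\Gamma, x:A \vdash M : B$ and $\Gamma \vdash N : A$ are presupposed rather than re-derived as premises.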