    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeOct 26th 2018

    Created page with some basic definitions.

    v1, current

    • CommentRowNumber2.
    • CommentAuthorTodd_Trimble
    • CommentTimeOct 26th 2018

    Holy cow, Mike!! I quickly looked at all your recent pages – thanks so much!!

    I’ll probably be needing to take a harder look at the Plan and see how much I “get”. I will resist the temptation to say more now, before having that look.

    • CommentRowNumber3.
    • CommentAuthorKarol Szumiło
    • CommentTimeOct 26th 2018

    Here is a very basic question: application has the form $App^{x:A.B}(M,N)$. I’m guessing that $A$ and $B$ are supposed to be the types of $M$ and $N$. But why do we need to mention the variable $x$ in such a term?

    • CommentRowNumber4.
    • CommentAuthormaxsnew
    • CommentTimeOct 26th 2018

    The type annotation $x:A.B$ here is to note that the type of $M$ is $\Pi_{x:A} B$ (in a well-typed term at least), so the $x$ is needed because it is bound in $B$. For instance, $M$ could be $\lambda x{:}A.\,refl(x)$, which has type $\Pi_{x:A}\, x = x$.
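    For readers following the thread, here is the usual dependently typed application rule that this annotation feeds into (a sketch in standard notation; the page’s precise formulation may differ), together with the example instance:

\[
\frac{\Gamma \vdash M : \Pi_{x:A} B \qquad \Gamma \vdash N : A}{\Gamma \vdash App^{x:A.B}(M,N) \,:\, B[N/x]}
\]

    With $B \coloneqq (x = x)$ and $M \coloneqq \lambda x{:}A.\,refl(x)$ (eliding the codomain annotation discussed in the next comment), the conclusion reads $App^{x:A.(x=x)}(M,N) : N = N$, which only makes sense because the annotation records how $B$ depends on $x$.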

    • CommentRowNumber5.
    • CommentAuthorMike Shulman
    • CommentTimeOct 26th 2018

    Right. Although actually $M$ would have to be $\lambda(x:A.x=x).\,refl(x)$, since as was pointed out on the other thread the codomains of abstractions also have to be annotated in general.

    • CommentRowNumber6.
    • CommentAuthorJohn Dougherty
    • CommentTimeOct 26th 2018

    Added a first attempt at alpha-equivalence and capture-avoiding substitution.

    diff, v3, current

    • CommentRowNumber7.
    • CommentAuthorKarol Szumiło
    • CommentTimeOct 27th 2018

    The type annotation $x:A.B$ here is to note that the type of $M$ is $\Pi_{x:A} B$ (in a well-typed term at least), so the $x$ is needed because it is bound in $B$.

    Oh, of course. That makes sense. When we mod out by $\alpha$-equivalence, will there be a notational distinction between a raw term and its equivalence class?

    • CommentRowNumber8.
    • CommentAuthoratmacen
    • CommentTimeOct 27th 2018

    No substitutions for constants. (I think.) Only for variables.

    diff, v4, current

    • CommentRowNumber9.
    • CommentAuthoratmacen
    • CommentTimeOct 27th 2018
    I see three problems with substitutions and alpha equivalence, which I might get around to fixing:

    You need multiple simultaneous substitutions. With variable names, I don't think this can always be mimicked one at a time, even throwing in alpha conversions.

    The alpha equivalence as currently defined is not a congruence. You can only rename a variable bound in the outermost term constructor.

    Most insidiously, the renaming currently used in alpha equivalence doesn't avoid capture. For example (eliding annotations), you could convert (λx.λy.Ap(x,y)) to (λx.λx.Ap(x,x)). You can only rename to a variable that isn't already free. The other possible bad renaming, to (λy.λy.Ap(y,y)), was already avoided because substitution is partial. (A small code sketch of this capture failure follows below.)
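    A minimal Haskell sketch of that capture failure, using a hypothetical toy term type rather than the page’s actual definitions: a renaming that ignores binders turns the bound y into x and thereby captures the occurrence of x bound by the outer λ.

```haskell
-- Toy untyped terms (hypothetical; annotations elided as in the example above).
data Tm = Var String | Lam String Tm | Ap Tm Tm
  deriving (Eq, Show)

-- Naive renaming: replaces x by y everywhere, paying no attention to binders.
-- This is the kind of renaming that does NOT avoid capture.
renameNaive :: String -> String -> Tm -> Tm
renameNaive x y (Var z)   = Var (if z == x then y else z)
renameNaive x y (Lam z b) = Lam (if z == x then y else z) (renameNaive x y b)
renameNaive x y (Ap m n)  = Ap (renameNaive x y m) (renameNaive x y n)

-- Renaming y to x in  λx. λy. Ap(x, y)  yields  λx. λx. Ap(x, x),
-- capturing the occurrence of x that was bound by the outer lambda:
badExample :: Tm
badExample = renameNaive "y" "x" (Lam "x" (Lam "y" (Ap (Var "x") (Var "y"))))
-- ==> Lam "x" (Lam "x" (Ap (Var "x") (Var "x")))
```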
    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeOct 27th 2018

    Thanks for working on this, everyone! I wonder whether we could do this at a higher level of abstraction, so that we wouldn’t need to add new clauses to the definition of substitution and $\alpha$-equivalence every time we add new type formers. For instance, I was just looking at Bob Harper’s Practical Foundations for Programming Languages, and he defines a general notion of “abstract binding tree” that seems to support substitution and $\alpha$-equivalence for “any term system one could give”, although I don’t completely understand it.
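    For anyone who wants something concrete to poke at, here is a rough Haskell sketch of the generic idea (one possible reading, not Harper’s exact definitions): every node is an operator whose arguments each bind a list of variables, so a single substitution traversal covers every type former.

```haskell
-- Rough sketch of abstract-binding-tree-style syntax: adding a new type
-- former means adding a new operator name, not new clauses to substitution.
type Var = String
type Op  = String   -- e.g. "Pi", "lam", "App", or a constant from a signature

data Abt
  = V Var                   -- a variable
  | Node Op [([Var], Abt)]  -- an operator applied to (binders, subtree) arguments
  deriving (Eq, Show)

-- One generic substitution clause for all operators.  (Deliberately naive:
-- a real version would also freshen the binders xs to avoid capture.)
subst :: Abt -> Var -> Abt -> Abt
subst t x (V y)       = if y == x then t else V y
subst t x (Node f as) = Node f [ (xs, if x `elem` xs then b else subst t x b)
                               | (xs, b) <- as ]
```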

    • CommentRowNumber11.
    • CommentAuthoratmacen
    • CommentTimeOct 27th 2018

    which I might get around to fixing

    Oh, maybe I shouldn’t. I don’t want to steal John Dougherty’s job.

    • CommentRowNumber12.
    • CommentAuthorJohn Dougherty
    • CommentTimeOct 27th 2018

    Please do fix them, if you like! I put my name next to this task on the Initiality Project page because I was volunteering; I didn’t mean to claim this page as mine or anything.

    Also, thanks for pointing out these issues. I think I might have a look at Harper’s discussion of abstract binding trees before trying to fix these problems (if you don’t fix them first). It’s true that it would be a pain to redefine all this with each new type former.

    • CommentRowNumber13.
    • CommentAuthorMichael_Bachtold
    • CommentTimeOct 28th 2018
    • (edited Oct 28th 2018)

    An elementary question: are $\Pi$, $\lambda$, $\mathrm{App}$, and the colon $:$ considered to be constants? (I’m assuming parentheses and the other punctuation marks like , and . are not constants of the raw syntax.) If none of the above are constants, then what would be an example of a constant in the type theory under consideration?

    • CommentRowNumber14.
    • CommentAuthorMike Shulman
    • CommentTimeOct 28th 2018

    No, those are not constants. $\Pi$, $\lambda$, and $App$ are “constructors” of the raw syntax, i.e. labels that can be put on a node of an abstract syntax tree. The colon is like the other punctuation marks: part of how we notate abstract syntax trees and judgments in concrete syntax, but with no independent meaning. In the language of n-theories, $\Pi, \lambda, App$ are part of the “doctrine”, while the constants are what specify a particular “theory” in that doctrine. For instance, if we wanted to study categories inside dependent type theory, we might consider a theory (or “signature”) containing a type constant $O$, another type constant $x:O, y:O \vdash A(x,y)\; type$, a composition constant $x:O, y:O, z:O, f:A(x,y), g:A(y,z) \vdash comp(x,y,z,f,g) : A(x,z)$, and so on. We can also use constants to implement axioms, such as function extensionality or univalence.
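    To make the doctrine/theory split concrete, here is a hypothetical Haskell-style sketch (all names invented for illustration) in which $\Pi$, $\lambda$, and $App$ are fixed constructors of the raw syntax while the constants come from a separate signature:

```haskell
-- Hypothetical sketch: Pi, lambda and App are hard-wired constructors (the
-- "doctrine"), while the constants are just names drawn from a signature
-- that specifies a particular "theory" (e.g. O, A, comp, or an axiom
-- constant such as funext).
type Var     = String
type TyConst = String
type TmConst = String

data RawTy
  = TyCst TyConst [RawTm]            -- e.g. O, or A(x, y)
  | Pi Var RawTy RawTy               -- Pi x:A. B
  deriving Show

data RawTm
  = TmVar Var
  | TmCst TmConst [RawTm]            -- e.g. comp(x, y, z, f, g)
  | Lam Var RawTy RawTy RawTm        -- lambda(x:A.B). M
  | App Var RawTy RawTy RawTm RawTm  -- App^{x:A.B}(M, N)
  deriving Show
```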

    • CommentRowNumber15.

    I see, thanks for clarifying.

    • CommentRowNumber16.
    • CommentAuthoratmacen
    • CommentTimeOct 28th 2018
    I've thought about Harper's abstract binding trees before. I could try to answer questions.
    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeOct 28th 2018

    We should include some examples somewhere. I wonder whether they should be interspersed, or collected all in one place?

    • CommentRowNumber18.
    • CommentAuthorraghu
    • CommentTimeOct 30th 2018

    I wonder if it is better to say something else, such as “preterm”, instead of “raw term”. It would enable us to avoid the non-subsective adjective “raw” in the phrase “raw term”: a raw term is not a term. I like the remark of Gerard Debreu [Theory of value, 1959, chapter 1, page 27, note 1], explaining why he prefers to say “correspondence” instead of “multi-valued function”:

    A correspondence (N. Bourbaki’s term) has often been called a multi-valued function. Locutions such as these, where the object named by a noun and an adjective is not in the class of objects named by the noun alone, have been avoided here.

    The usage “preterm” has precedents, as in Sørensen and Urzyczyn [Lectures on the Curry-Howard isomorphism, Section 1.2].

    • CommentRowNumber19.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    I think one can argue that a raw term is a term. It’s just not necessarily a well-typed term.

    • CommentRowNumber20.
    • CommentAuthorraghu
    • CommentTimeOct 30th 2018

    a raw term is a term.

    Can you please explain? According to the last paragraph of the page, a term is an $\alpha$-equivalence class of raw terms. I don’t understand how a raw term is an $\alpha$-equivalence class of raw terms.

    • CommentRowNumber21.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    Hmm, I didn’t write that paragraph. I’m not sure I would agree with it. To me “raw” indicates possibly-ill-typed-ness, not the presence of variable names, and I would rather not introduce a clumsy word like “preterm” merely to be pedantic about $\alpha$-equivalence.

    But in any case I would rather not argue about terminology at this point in the project, when there are so many better arguments to have. (-: Are you actually confused by the mathematics, or just not liking the terminology?

    • CommentRowNumber22.
    • CommentAuthorraghu
    • CommentTimeOct 30th 2018
    • (edited Oct 30th 2018)

    Hmm, I didn’t write that paragraph. I’m not sure I would agree with it. To me “raw” indicates possibly-ill-typed-ness, not the presence of variable names

    I will wait for someone to rewrite the page according to what you are saying.

    I would rather not introduce a clumsy word like “preterm” merely to be pedantic about $\alpha$-equivalence.

    But in any case I would rather not argue about terminology at this point in the project, when there are so many better arguments to have. (-: Are you actually confused by the mathematics, or just not liking the terminology?

    I don’t mean to be contrary; I am only trying to say that the overview of the project seems to start from the basics of type theory, and I think there was some mention of explaining the subject to mathematicians who don’t know anything about it. If one is explaining the basics of something to a novice, perhaps it is better to be precise even if it is pedantic. I understand it is a matter of personal taste, but I find the Bourbaki style of exposition very satisfying; I wonder if it is considered pedantic.

    In any case, I feel uneasy when people ask me to identify things, especially at an early stage of learning a subject. A germ is a function. An element of $L^2(\mathbb{R})$ is a function. Sure, but it takes me a while to get used to such identifications.

    In the present context, I am afraid that we will slip into identifying a raw term with an $\alpha$-equivalence class of raw terms, as in several books that I have tried to read, and hand-wave verifications of invariance of various operations under $\alpha$-equivalence.

    As you can see, my confusion is due to a mixture of ignorance and a difficulty with the accounts of type theory that I have tried to learn from.

    Reading what I have written, I feel that I am not very clear. Anyway, it will do for now.

    Thank you for doing so much work for this project, and for making it open to everyone.

    • CommentRowNumber23.
    • CommentAuthoratmacen
    • CommentTimeOct 30th 2018

    It looks like there’s just been a misunderstanding about what’s “raw” about “raw terms”. There are three types involved:

    • What John called $RawTm$, which have bound variable names
    • What John called $Tm$, the quotient, which prevents you from looking at bound variable names
    • Those $Tm$ that are valid in some sense defined by the judgments

    I would also say that raw terms are terms—the “raw” is just for emphasis—but I usually use de Bruijn indexes, so I don’t usually need to worry about whatevers that remember variable names.

    Proposal: “nominal terms”, “terms”, and “valid terms”

    Speaking of arguing about terminology, I’m actually not so happy about calling that presheaf $Tm$. Those guys generally aren’t syntax. Is there a problem with $El$?

    • CommentRowNumber24.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    I don’t think we need a special name for $\alpha$-equivalence classes of terms. We can just say that we consider terms up to $\alpha$-equivalence. Just like we don’t have a special word for “set up to isomorphism” when regarding sets “structurally” as objects of the category $Set$.

    And I think $Tm$ and $Ty$ are pretty standard for the presheaves in a CwF. They’re “semantic terms”. (-:

    • CommentRowNumber25.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    Separate raw terms and raw types.

    diff, v6, current

    • CommentRowNumber26.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    Correct definition of $\alpha$-equivalence: the variable $x$ in $App^{(x:A).B}(M,P)$ is bound in $B$, not in $M$ and $P$; and the new bound variable must not already occur freely in its scope-to-be.

    diff, v6, current

    • CommentRowNumber27.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    Correct definition of capture-avoiding substitution: when substituting for a bound variable, we still have to carry out the substitution in the parts of the term that the binder doesn’t scope over; and when substituting into $App$ we do have to do it in all the subterms (not sure why that was treated differently).

    diff, v6, current

    • CommentRowNumber28.
    • CommentAuthoratmacen
    • CommentTimeOct 30th 2018

    Just like we don’t have a special word for “set up to isomorphism” when regarding sets “structurally” as objects of the category $Set$.

    I didn’t realize that was analogous, since a “higher quotient” is normally not available to formalize working up to isomorphism. And when it is, I didn’t think $Set$ was defined as a higher quotient.

    • CommentRowNumber29.
    • CommentAuthorMike Shulman
    • CommentTimeOct 30th 2018

    The point is that we do just fine in practice without a higher quotient, and so we can similarly do without “lower quotients”.

    When higher quotients are available, as in HoTT, the category $Set$ whose objects are h-sets doesn’t usually need to be quotiented (i.e. Rezk-completed) because univalence makes it already Rezk-complete. But if we defined it based on some stricter notion of set, for instance ZF-like sets built inside HoTT, then we would need to Rezk-complete it to obtain something that behaves like the usual category of sets.

    • CommentRowNumber30.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018
    • (edited Oct 31st 2018)

    Argh! The example substitution was all wrong. Why can’t everybody just use de Bruijn indices????

    diff, v7, current

    • CommentRowNumber31.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018
    So are we switching to fresh-renaming-style nominal terms? (Like with Harper's ABTs?) Either way, it remains to make alpha equivalence a congruence.
    • CommentRowNumber32.
    • CommentAuthormaxsnew
    • CommentTimeOct 31st 2018

    I think we should get rid of the 3 cases with side condition $y = x$, and just make the substitution operation undefined in those cases. Once we move to $\alpha$-equivalence classes, we can always assume that $y \neq x$, since it can be renamed to something different from $x$. As it stands we have to prove that these cases do the same thing on $\alpha$-equivalent terms to show we respect the quotient, so the extra cases are just unnecessary work. You can always prove after the fact that those cases are correct and well-defined for $\alpha$-equivalence classes of raw terms. This is the usual way of doing things; it’s called the “Barendregt convention”: any bound variable is always different from any free variable. If that sounds reasonable to others I’ll delete the extra cases.

    Then I would separately define the notion of renaming a single variable, which is currently used in the definition of $\alpha$-equivalence, to avoid entangling the definition of $\alpha$-equivalence with our general substitution function.
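    A small Haskell sketch of this proposal (hypothetical toy syntax, not the page’s definitions): substitution is a partial function that simply gives up at a binder that coincides with the substituted variable or would capture a free variable of $N$; on $\alpha$-equivalence classes one then argues that a suitable representative always exists.

```haskell
import qualified Data.Set as Set

-- Toy untyped terms (annotations elided).
data Tm = Var String | Lam String Tm | Ap Tm Tm
  deriving (Eq, Show)

fv :: Tm -> Set.Set String
fv (Var x)   = Set.singleton x
fv (Lam x b) = Set.delete x (fv b)
fv (Ap m p)  = fv m `Set.union` fv p

-- Partial capture-avoiding substitution: subst n x m computes M[N/x],
-- and is undefined (Nothing) exactly when a binder equals x or would
-- capture a free variable of n.  Those are the cases the Barendregt
-- convention lets us avoid by renaming the representative first.
subst :: Tm -> String -> Tm -> Maybe Tm
subst n x (Var y)
  | y == x    = Just n
  | otherwise = Just (Var y)
subst n x (Lam y b)
  | y == x || y `Set.member` fv n = Nothing      -- give up; rename first
  | otherwise                     = Lam y <$> subst n x b
subst n x (Ap m p) = Ap <$> subst n x m <*> subst n x p
```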

    • CommentRowNumber33.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    Re #32 that sounds reasonable to me. I don’t understand the jargon in #31, can you explain?

    • CommentRowNumber34.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018

    Full side condition on $w$ in the substitution example.

    diff, v8, current

    • CommentRowNumber35.
    • CommentAuthoratmacen
    • CommentTimeOct 31st 2018
    • (edited Oct 31st 2018)

    Re #33: We are currently defining “FV” sets. This is leaves-to-root propagation of variable information, analogous to Andromeda’s context joining. The approach usually favored for formalization of nominal expressions mostly/entirely avoids “FV” and keeps track of reserved variables on the way to the leaves, analogous to the usual context propagation. Variables not currently reserved can be safely considered fresh for the current subexpression, so you don’t need “FV” in side conditions. (This requires knowing the FV of the expression at the outset, to reserve them, but you usually start from closed expressions. You know. Same argument as for root-to-leaves context propagation.) Once you have a working notion of freshness, you can rename bound variables to fresh variables (edit: and then reserve them) as you enter binders. This is what I meant by “fresh-renaming-style”. All of this root-to-leaves stuff is used by Harper’s ABTs (and multiple other developments).

    As for alpha equivalence not being a congruence, that was just a reminder of the same bug report from #9.
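    Here is a rough Haskell sketch of the root-to-leaves style described in #35, with invented helper names (one possible reading, not a definitive formulation): a set of reserved names is threaded down the tree, and each binder is renamed to a fresh name on entry, so no $FV$ side conditions are needed inside the traversal.

```haskell
import qualified Data.Set as Set

-- Toy untyped terms (annotations elided).
data Tm = Var String | Lam String Tm | Ap Tm Tm
  deriving (Eq, Show)

-- Pick a name not in the reserved set (hypothetical freshening scheme).
fresh :: Set.Set String -> String -> String
fresh used x = head [ x' | x' <- x : [x ++ show i | i <- [(1 :: Int) ..]]
                         , x' `Set.notMember` used ]

-- Total substitution m[n/x], threading the reserved set downward and
-- freshening every binder on the way in.  The caller is expected to
-- reserve x together with the free variables of n and of m at the outset
-- (trivial when the terms involved are closed).
substR :: Set.Set String -> Tm -> String -> Tm -> Tm
substR _    n x (Var y)   = if y == x then n else Var y
substR used n x (Lam y b) =
  let y'    = fresh used y
      used' = Set.insert y' used
      b'    = substR used' (Var y') y b   -- rename y to the fresh y' in the body
  in  Lam y' (substR used' n x b')
substR used n x (Ap m p)  = Ap (substR used n x m) (substR used n x p)
```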

    • CommentRowNumber36.
    • CommentAuthorMike Shulman
    • CommentTimeOct 31st 2018

    Okay, I think I sort of see. I don’t really have an opinion, if that way is considered better then I’m fine with it.

    #9 also mentioned simultaneous substitutions; do we need those defined separately? I’m thinking right now that I want to collapse the “weakening/exchange/contraction” and “substitution” lemmas at Initiality Project - Substitution into one lemma about simultaneous substitution (i.e. along a “context morphism”), to avoid doing the same work twice.
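    A hedged sketch of what such a combined statement could look like (the project pages may phrase it differently): writing a “context morphism” $\theta : \Gamma \to \Delta$, for $\Delta = (x_1{:}A_1, \ldots, x_n{:}A_n)$, as a list of terms $\Gamma \vdash M_i : A_i[M_1/x_1, \ldots, M_{i-1}/x_{i-1}]$, the single lemma would be

\[
\Delta \vdash N : B \quad\text{and}\quad \theta : \Gamma \to \Delta \quad\Longrightarrow\quad \Gamma \vdash N[\theta] : B[\theta],
\]

    with weakening, exchange, and contraction recovered by taking $\theta$ to consist only of variables.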

    • CommentRowNumber37.
    • CommentAuthormaxsnew
    • CommentTimeOct 31st 2018

    Define re-naming, redefine alpha-equivalence and remove redundant cases of substitution.

    diff, v9, current

    • CommentRowNumber38.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    I’m worried about the “$x = y$ or $x = z$” cases in the new definition of renaming. In case $x = z$ and $x \neq y$, it doesn’t seem right to say the body doesn’t need to be renamed. It should give up. I think the old definition of renaming as a special case of partial substitution did this right, and I don’t see why we need renaming to be separate.

    Thanks for finally (mostly?) fixing alpha equivalence, Max.

    • CommentRowNumber39.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Fixes (I think) for alpha equivalence.

    diff, v10, current

    • CommentRowNumber40.
    • CommentAuthormaxsnew
    • CommentTimeNov 1st 2018

    Oof yeah not sure what I was thinking with that side condition.

    • CommentRowNumber41.
    • CommentAuthoratmacen
    • CommentTimeNov 1st 2018

    Add alpha equivalence case for constants.

    diff, v10, current

    • CommentRowNumber42.
    • CommentAuthormaxsnew
    • CommentTimeNov 1st 2018

    Correct definition of alpha-equivalence, I think :).

    We define substitution as a partial function, then in the definition of substitution, we make sure that “fresh” variables are different from all free and bound variables in the term, so that it is clear that our use of substitution is well defined.

    diff, v11, current

    • CommentRowNumber43.
    • CommentAuthoratmacen
    • CommentTimeNov 4th 2018
    • (edited Nov 4th 2018)

    For substitution, “$y \notin FV(N)$” → “$x \notin FV(N)$”. $x$ is the bound variable that could potentially capture.

    diff, v12, current

    • CommentRowNumber44.
    • CommentAuthorColin Tan
    • CommentTimeNov 8th 2018

    Tried to rewrite the definition of raw terms and raw types so that the elements of $RawTm$ and $RawTy$ are well-founded rooted trees themselves and not a linear representation $C(M_1,\ldots,M_n)$ that refers to the actual tree.

    diff, v13, current

    • CommentRowNumber45.
    • CommentAuthorColin Tan
    • CommentTimeNov 8th 2018
    I am trying to understand the abstract syntax representation of raw terms and raw types. It is written "for instance, C(D(x), y) represents a tree with four nodes: a root with two children, one of which itself has two children."

    Let's start from D(x). Is D(x) the tree:
    D -> x?
    So that C(D(x), y) is the tree
    y <- C -> D -> x?
    If so, then shouldn't the node D have only one child?

    Am I right to say that "C(D(x), y) represents a tree with four nodes, whose root has two children, each of which is the root of the tree representing D(x) and y respectively, so that the former child has *exactly one* child?"
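    For what it’s worth, here is one way to picture that tree in code (a hypothetical rose-tree rendering, not the page’s formal definition):

```haskell
-- Hypothetical rose-tree rendering of the linear notation C(D(x), y):
-- four nodes in total, namely the root C with two children, of which the
-- first (D) has exactly one child x, and the second is the leaf y.
data Tree = Node String [Tree]
  deriving Show

cDxY :: Tree
cDxY = Node "C" [Node "D" [Node "x" []], Node "y" []]
```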
    • CommentRowNumber46.
    • CommentAuthorGuest
    • CommentTimeNov 8th 2018
    Is every variable secretly just a 0-ary term constant?

    Suppose for the moment that our variables are x_1, x_2, ... Assuming we are only doing Pi-types, could we then regard TyCst = {Pi} and TmCst = {lambda, App, x_1, x_2, ...} so that a more formal definition of raw terms and raw types as rooted trees labelled by constants would go like this:

    For each i, the tree with a single node labelled by x_i is a raw term, denoted also by x_i;
    Given raw terms A and B and i = 1, 2, ..., the tree whose root (labelled by Pi) has three children, which are the roots of x_i, A and B, is a raw type, denoted by ;
    Given raw terms A, B, M and i = 1, 2, ..., the tree whose root (labelled by lambda) has four children, which are the roots of x_i, A, B and M, is a raw term, denoted by lambda(x_i:A.B).M;
    Given raw terms A, B, M, N and i = 1, 2, ..., the tree whose root (labelled by App) has five children, which are the roots of x_i, A, B, M and N, is a raw term, denoted by App^{x:A.B}(M, N);

    It seems to me that in these abstract syntax trees, the children of a given node have to be (linearly) ordered.
    • CommentRowNumber47.
    • CommentAuthoratmacen
    • CommentTimeNov 8th 2018

    Is every variable secretly just a 0-ary term constant?

    No, and in fact, $\Pi$, $\lambda$, and $App$ are not constants either. A raw term node is either a variable, an instantiated term constant, or an instantiated primitive term operator. A raw type node is either an instantiated type constant or an instantiated primitive type operator.

    The definition currently “hard-codes” the primitive operators. This may change in the future. The definition is parameterized by the sets of variables, term constants, and type constants.

    One reason why it’s not a good idea to treat variables as constants is that substitution handles variables and constants differently. In general, variables and constants are understood differently.

    so that a more formal definition of raw terms and raw types as rooted trees labelled by constants would go like this…

    It’s also not a good idea to treat the binding sites as subtrees, since they’re never handled that way.

    It seems to me that in these abstract syntax trees, the children of a given node have to be (linearly) ordered.

    Since we’re adding operators as we go, there’s technically nothing preventing us from adding an operator so that the node’s subtrees are not in any particular order. But I don’t expect that to happen.

    • CommentRowNumber48.
    • CommentAuthorMike Shulman
    • CommentTimeNov 8th 2018

    shouldn’t the node D have only one child?

    Yes, that must be a typo; I’ve fixed it.

    I don’t see what change you made in #44 though?

    • CommentRowNumber49.
    • CommentAuthorMike Shulman
    • CommentTimeNov 9th 2018

    A new proposal: ABTs with strong scoping and local Barendregt convention

    diff, v15, current

    • CommentRowNumber50.
    • CommentAuthorMike Shulman
    • CommentTimeNov 9th 2018

    Unsurprisingly, I got the variable numbers in the de Bruijn example wrong…

    diff, v15, current

    • CommentRowNumber51.
    • CommentAuthordlicata335
    • CommentTimeNov 10th 2018
    • (edited Nov 10th 2018)

    In the definition of $X^V$, should the later freshenings be with respect to the previous $x_i$ (what’s written now) or the freshened versions of the previous $x_i$? (e.g. $x_2^{V \cup \{fr(V,x_1)\}}$)?

    • CommentRowNumber52.
    • CommentAuthoratmacen
    • CommentTimeNov 11th 2018
    • (edited Nov 11th 2018)

    Dan’s alternative sounds right to me, unless the English description is confusing me.

    $(\cdot)^{[V]} \coloneqq \cdot$
    $(x :: X)^{[V]} \coloneqq \mathrm{let}\; x' \coloneqq x^{[V]}\; \mathrm{in}\; x' :: X^{[V \cup \{x'\}]}$

    • CommentRowNumber53.
    • CommentAuthoratmacen
    • CommentTimeNov 11th 2018

    I’m guessing the scoping relation of an operator should be decidable.

    diff, v17, current

    • CommentRowNumber54.
    • CommentAuthoratmacen
    • CommentTimeNov 11th 2018
    • (edited Nov 11th 2018)

    Changed the metavariable names in the operator case of the definition of $Raw$ to match the definition of generalized arity. (And thus fixed a mismatch where it was already using that naming.)

    diff, v18, current

    • CommentRowNumber55.
    • CommentAuthorMike Shulman
    • CommentTimeNov 11th 2018

    Thanks for catching those typos.

    • CommentRowNumber56.
    • CommentAuthoratmacen
    • CommentTimeNov 12th 2018
    OK, it looks right to me. Sooner or later we’re probably going to need to prove a bunch of equations about renaming and substitution.
    • CommentRowNumber57.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 5th 2018
    • (edited Dec 5th 2018)

    The inductive definition of abt looks awkward to me. The family is indexed by $V \in P_{fin}(Var)$ and $s$, so it doesn’t really make sense for the constructors to say “for each $V \in P_{fin}(Var)$”.

    Obviously we will later make this rigorous, but I changed this now because I was confused by this reading it.

    diff, v20, current

    • CommentRowNumber58.
    • CommentAuthorMike Shulman
    • CommentTimeDec 5th 2018

    I don’t see anything wrong with it. When defining an inductive family, each constructor has to specify what the index of the element being constructed is. In general it doesn’t have to be a variable, e.g. the constructors for the $\mathbb{N}$-indexed family $Vec(A)$ of vectors (length-indexed lists) are

    1. There is a vector $nil : Vec_0(A)$.
    2. For any $n$, $v:Vec_n(A)$, and $a:A$, there is a vector $cons(a,v):Vec_{n+1}(A)$.

    These cannot be rewritten by putting the “for any $n$” out at the beginning. It’s true that the definition of ABT is special in that every constructor applies to an arbitrary index $V$ (sometimes this is called a “uniform index”, I think? Or maybe a “non-uniform parameter”?), but it’s certainly not wrong to write it the way I did, and it makes more sense to me that way because it matches the more general syntax. So I reverted it. What exactly is it that you find confusing?
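    If it helps, here is a small Haskell (GADT) transcription of the Vec example, only as an illustration of how each constructor fixes the index of its result; it is not part of the page:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- An inductive family indexed by a natural number: each constructor states
-- which index it lands in, and the index of a recursive argument (n) can
-- differ from the index of the result (S n), just as Raw(V ∪ X_i, s_i)
-- feeds into Raw(V, s) in the ABT definition.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a                       -- nil : Vec_0(A)
  Cons :: a -> Vec n a -> Vec ('S n) a   -- cons(a, v) : Vec_{n+1}(A)
```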

    • CommentRowNumber59.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 5th 2018

    We generate the set when we vary the parameters of the constructor. We don’t vary $V$, so it seems silly to put it as a parameter of the constructor. I suppose it doesn’t make too much difference, but it seems clearer the way I write it.

    We have an inductive definition indexed by $V$ and $s$, rather than an inductive definition that also parameterises over $V$.

    • CommentRowNumber60.
    • CommentAuthorMike Shulman
    • CommentTimeDec 6th 2018

    I am not sure what you mean by “we don’t vary $V$”. The second constructor has inputs that are indexed by different $V$’s (namely $V \cup X_i$) than the $V$ that indexes its output. This means that we are not defining a family of inductive types individually, one for each $V$ (that would be when $V$ is a “parameter” – perhaps you are using the words “index” and “parameter” backwards from the standard?); instead we are defining the whole collection of types indexed by different $V$’s all at once. In particular, each set $Raw(V,s)$ does not have its own induction principle; only the family of all these sets together has a mutual induction principle. This is the same thing that happens for the index $n$ of $Vec_n(A)$, but different from what happens for the parameter $A$ of $Vec_n(A)$, where all the recursive inputs to each constructor involve the same value of $A$ as the output; the latter is the only case when I think it makes sense to say that “we don’t vary $A$”.

    • CommentRowNumber61.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 6th 2018

    I have confused myself. I think I have cleared it up now. Ignore what I have said before, I think it is complete nonsense. However I still think Harper’s notion of ABT and the one written here are quite difficult to understand. I will try to play around with it a bit more.

    • CommentRowNumber62.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 6th 2018

    Also, it is mentioned in a few places that we don’t want type variables. Would it do any harm to leave them in? I think we would only need to modify the variable constructor of abts to also include a sort (which will almost always be tm).

    • CommentRowNumber63.
    • CommentAuthorMike Shulman
    • CommentTimeDec 6th 2018

    Ordinary dependent type theory, which is what we are trying to interpret, does not include type variables. Moreover, there is no way to interpret them in a CwF (although they can be interpreted in the “semantic logical framework” of the presheaf category).

    • CommentRowNumber64.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 20th 2018
    • (edited Dec 20th 2018)

    Why do we only define substitution on terms? Why not any sort?

    Also, on notational convention: $M[N/x]$ or $M[x/N]$? The latter has the advantage that it is close to $M[x:=N]$; I’m not sure what the former is good for.

    • CommentRowNumber65.
    • CommentAuthoratmacen
    • CommentTimeDec 20th 2018

    Well it looks like we have substitution of terms for variables in expressions of any sort, which is what we want, since we have only term variables.

    I read $M[N/x]$ as “$M$ with $N$ for $x$”, and $M[x \coloneqq N]$ as “$M$ with $x$ set to $N$”. I don’t personally like $M[x/N]$, but of course it doesn’t really matter a whole lot.

    • CommentRowNumber66.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 20th 2018

    I suppose as long as we are consistent and it is obvious at first glance, the order doesn’t matter. I seem to remember Barendregt preferring $M[x:=N]$ and Harper/the HoTT book preferring $M[N/x]$. I couldn’t find anything about $M[x/N]$; maybe I just made it up? I suppose I am confusing the two.

    • CommentRowNumber67.
    • CommentAuthorMike Shulman
    • CommentTimeDec 20th 2018

    I’ve seen both $M[x/N]$ and $M[N/x]$ used by different people, unfortunately. Some people also write $[N/x]M$. I like $M[N/x]$ because it looks like “applying” a “function” $M$ to the “argument” $N$, with a side note that $x$ is the variable that we consider $M$ to be a “function” of. As a mnemonic, I can also think of $N/x$ as $N$ coming down from the top to squash and replace all the $x$’s.

    • CommentRowNumber68.
    • CommentAuthorUrs
    • CommentTimeDec 30th 2022

    I gather this and related pages have been abandoned; but since they are still sitting here one might as well address their lack of links and references.

    Here I went ahead and hyperlinked some of the technical terms, such as de Bruijn indices and abstract syntax tree.

    diff, v23, current