Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeSep 22nd 2018

    Created page, mainly to record a bunch of references that I am trying to collect. Additional suggested references would be welcome!

    v1, current

    • CommentRowNumber2.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 23rd 2018

    You’d think type refinement might crop up here, and I see a gesture in that direction at least at slide 45 of one of Noam’s talks.

    I see also bidirectional typing is mentioned in the abstract of Noam’s A theory of linear typings as flows on 3-valent graphs. I guess there’s interesting combinatorics here.

    • CommentRowNumber3.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 23rd 2018

    Corrected the Pierce-Turner paper length from 4 to 44 pages, and included link to pdf.

    diff, v2, current

    • CommentRowNumber4.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 23rd 2018

    Maybe there’s something in

    • William Lovas, Frank Pfenning, A Bidirectional Refinement Type System for LF, (doi)
    • CommentRowNumber5.
    • CommentAuthorMike Shulman
    • CommentTimeSep 24th 2018

    I feel like they’re a bit orthogonal. Refinement typing is, as I understand it, about assigning new, more informative, types to terms that already had types. Assigning such types, as well as assigning the less informative types in the first place, could both be done bidirectionally or not. (E.g. it looks like in the Lovas-Pfenning paper, both are bidirectional.)

  1. Hi Mike, thanks for starting this article!

    I feel like they’re a bit orthogonal. Refinement typing is, as I understand it, about assigning new, more informative, types to terms that already had types. Assigning such types, as well as assigning the less informative types in the first place, could both be done bidirectionally or not. (E.g. it looks like in the Lovas-Pfenning paper, both are bidirectional.)

    You’re right that they can be considered independently, but there are also several connections. On the one hand, bidirectional typing may be naturally seen as a refinement of ordinary typing. Indeed, in the current version of the article you wrote that “these observations correspond to bidirectional refinements of the usual typing rules for abstractions and applications”! I know you didn’t mean that in a technical sense, but in fact it does constitute an example of type refinement, which was considered (in the closely related form of “normal natural deduction”) already in Pfenning’s original paper on Refinement Types for Logical Frameworks.

    The other connection is that type checking itself (whether bidirectional or not) is an extrinsic notion, taking a term (perhaps with some annotations) and a type and deciding whether the term has that type. In bidirectional systems, the synthesis judgment is often designed to compute the principal type of a term in a given context, which is another extrinsic notion (if $\Gamma \vdash t \Rightarrow A$, then $A$ is the most precise type that can be assigned to $t$ in the context $\Gamma$).
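    The synthesis/checking split can be sketched in a few lines. Below is a minimal bidirectional typechecker for the simply typed lambda calculus, written as a generic illustration (the term and type encodings are invented here, not taken from any of the papers cited): synthesis produces a type as output, checking takes one as input, and the mode switch falls back on synthesis plus a comparison.

```python
# Minimal bidirectional typechecker for the simply typed lambda calculus.
# Terms: ("var", name), ("lam", name, body), ("app", f, a), ("ann", term, type)
# Types: "b" (a base type) or ("->", A, B)

def synth(ctx, t):
    """Synthesize (infer) a type for t in context ctx, or raise TypeError."""
    tag = t[0]
    if tag == "var":
        return ctx[t[1]]
    if tag == "app":                       # application synthesizes:
        fty = synth(ctx, t[1])             # synthesize the function's type...
        if not (isinstance(fty, tuple) and fty[0] == "->"):
            raise TypeError("applying a non-function")
        check(ctx, t[2], fty[1])           # ...then check the argument
        return fty[2]
    if tag == "ann":                       # annotations let checking feed synthesis
        check(ctx, t[1], t[2])
        return t[2]
    raise TypeError("cannot synthesize a type for " + tag)

def check(ctx, t, ty):
    """Check t against the type ty in context ctx."""
    if t[0] == "lam":                      # abstraction is checked, not synthesized
        if not (isinstance(ty, tuple) and ty[0] == "->"):
            raise TypeError("lambda must have an arrow type")
        return check({**ctx, t[1]: ty[1]}, t[2], ty[2])
    # mode switch: synthesize and compare (here by equality; with subtyping, <=)
    if synth(ctx, t) != ty:
        raise TypeError("type mismatch")
```

    Note that abstractions only check, applications synthesize, and annotations mediate between the two modes, as in the usual bidirectional presentation.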

  2. added a reference to Watkins et al

    diff, v3, current

  3. …plus a few more references on intersection and union types.

    diff, v3, current

    • CommentRowNumber9.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 25th 2018

    Great to have you back, Noam. Your final paragraph of #6 expresses well the hunch I had in #2.

    I thought you might be at SYCO 1, but you must have been away. I’m struggling with lenses at the moment, which seem to have a whiff of type refinement to them. But then functors are everywhere.

    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeSep 25th 2018

    Thanks Noam. I don’t quite understand what the last paragraph of #6 has to do with refinement? The word “refinement” doesn’t appear in it.

    The other paragraph confuses me a little because I would say that it’s not the types that are being refined but the judgments. But I suppose you could consider this another kind of “typing-judgment refinement”?

    • CommentRowNumber11.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeSep 25th 2018
    • (edited Sep 25th 2018)

    I don’t quite understand what the last paragraph of #6 has to do with refinement? The word “refinement” doesn’t appear in it.

    As I’ve talked about before, I believe that extrinsic typing in general is best analyzed in terms of type refinement: an extrinsic type should always refine some intrinsic type, whether or not we make the latter explicit. In categorical terms, we speak of a type refinement system as a category $\mathcal{D}$ of extrinsic types and typing derivations living over a category $\mathcal{T}$ of intrinsic types and terms. In the limiting case, $\mathcal{T}$ is a one-object category (or one-object multicategory = operad), but even in that case it is often useful to explicitly consider the “refinement functor” $\mathcal{D} \to \mathcal{T}$ (e.g., as mentioned at fibration of multicategories, a representable multicategory can be considered equivalently as a fibration over the terminal multicategory).
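    The definitions in the previous paragraph are elementary enough to spell out executably. Below is a toy rendering of a refinement system as a functor $U : \mathcal{D} \to \mathcal{T}$ (all names here are invented for illustration, not taken from the cited papers): derivations record the term they lie over, and a typing judgment is established by a derivation lying over it.

```python
# Toy "type refinement system = functor U : D -> T": morphisms of T are terms,
# objects of D are refinements, and a morphism of D is a typing derivation
# lying over the term U(alpha).

# T: one object "*" and two terms (morphisms * -> *)
T_morphisms = {"id", "succ"}

# D: refinements of "*" and derivations, each recording its underlying term
D_morphisms = {
    # derivation name: (domain refinement, codomain refinement, underlying term)
    "id_nat":   ("nat",  "nat",  "id"),
    "succ_nat": ("nat",  "nat",  "succ"),
    "succ_eo":  ("even", "odd",  "succ"),
    "succ_oe":  ("odd",  "even", "succ"),
}

def U(alpha):
    """The refinement functor on morphisms: a derivation's underlying term."""
    return D_morphisms[alpha][2]

def derives(alpha, S, f, R):
    """Does derivation alpha establish the typing judgment S =[f]=> R?"""
    dom, cod, term = D_morphisms[alpha]
    return (dom, cod, term) == (S, f, R) == (dom, f, R) or (dom, cod, term) == (S, R, f)
```

    The point of the toy is only that the same underlying term (here “succ”) admits several distinct derivations over it, so the functor is not injective on morphisms.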

    The other paragraph confuses me a little because I would say that it’s not the types that are being refined but the judgments. But I suppose you could consider this another kind of “typing-judgment refinement”?

    In Pfenning’s example it is actually the types (or to be precise, families) which are being refined: one type “pf” is refined into two types “nf” and “elim”. Of course he is taking a judgments-as-types approach to representing deductive systems in a logical framework.

    • CommentRowNumber12.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 25th 2018
    • (edited Sep 25th 2018)

    Type refinement is about lifting arrows from the coarse system to the more refined system, so perhaps it could be renamed as you suggest.

    We might have $\lambda x.x$ as an arrow in some coarse system and wonder how it can be lifted. The most general presentation of the lift is the ‘principal type’, so in this case $X \to X$ (for any $X$), rather than any more specific candidate such as $Bool \to Bool$. (See slide 14 of Noam’s talk to us at Kent.)

    So I guess from the principal type, you can do type checking against a given type.

  4. David wrote:

    I thought you might be at SYCO 1, but you must have been away.

    Yes, I was very disappointed to miss it! But I had already planned a trip to the US before the dates were announced.

    I’m struggling with lenses at the moment, which seem to have a whiff of type refinement to them.

    Unfortunately I don’t know much about lenses…but I should read up about them sometime, thanks for the pointer.

  5. David #12:

    Beware that there are two different notions of “principal type” in the type-theoretic literature: both a “local” (contextual) notion of principal type of a term relative to a given context (what I was referring to in #6), and a “global” notion of principal type of a term that includes types for its free variables (sometimes called a “principal type scheme”, and what I was referring to on slide 14 of the Kent talk). I discussed a bit more about this distinction in Section 2.2.4 of these notes on type refinement. In categorical terms, I view the local notion of principal type as something akin to the pushforward of a typing context $\Gamma$ along a term $t$, and the global notion as something like an initial object in an appropriate category of typings for $t$. (This is something I would like to make more precise, though.)

    So I guess from the principal type, you can do type checking against a given type.

    Yes, for both notions of “principal type”. For the contextual notion of principal type used in bidirectional typing, to check that $\Gamma \vdash t : B$, we synthesize a principal type $\Gamma \vdash t \Rightarrow A$ and check that $A \le B$. For the global notion of principal type scheme, we synthesize both a type $A$ and a typing context $\Delta$ for the free variables of $t$, both of which can involve free type variables. Then to check that $\Gamma \vdash t : B$, we have to decide whether there exists a substitution $\sigma$ for the type variables such that $\Gamma \le \sigma\Delta$ and $\sigma A \le B$ (or $\Gamma = \sigma\Delta$ and $\sigma A = B$ in systems without subtyping).

  6. …and to make things even trickier, the addition of polymorphism to a type system can sometimes turn the global notion into the local notion: that’s the way I interpret “let-polymorphism” in the style of Milner’s original paper on A Theory of Type Polymorphism in Programming. In that system, the closed term $\lambda x.x$ may be given the polymorphic type $\forall X.\, X \to X$, which is a subtype of any other type that may be assigned to $\lambda x.x$ in the empty context.
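    A standard way to compute the global notion, a principal type scheme, is by first-order unification. The sketch below is generic Algorithm-W-style code, not tied to any paper in this thread (and the occurs check is omitted for brevity); it infers the scheme $X \to X$ for $\lambda x.x$.

```python
# Principal type schemes by first-order unification, in the "global" sense:
# infer a most general type for a term, introducing fresh type variables.
# Types: ("var", n) or ("->", A, B). Terms as in the STLC examples above.
import itertools

fresh = itertools.count()

def walk(t, s):
    """Resolve t through the substitution s (a dict from type vars to types)."""
    while t[0] == "var" and t in s:
        t = s[t]
    if t[0] == "->":
        return ("->", walk(t[1], s), walk(t[2], s))
    return t

def unify(a, b, s):
    """Extend s so that a and b become equal, or fail. (No occurs check.)"""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if a[0] == "var":
        s[a] = b; return s
    if b[0] == "var":
        s[b] = a; return s
    if a[0] == "->" and b[0] == "->":
        return unify(a[2], b[2], unify(a[1], b[1], s))
    raise TypeError("cannot unify")

def infer(ctx, t, s):
    """Infer a most general type for t, threading the substitution s."""
    if t[0] == "var":
        return walk(ctx[t[1]], s)
    if t[0] == "lam":
        a = ("var", next(fresh))           # fresh type variable for the binder
        b = infer({**ctx, t[1]: a}, t[2], s)
        return walk(("->", a, b), s)
    if t[0] == "app":
        f, x = infer(ctx, t[1], s), infer(ctx, t[2], s)
        r = ("var", next(fresh))
        unify(f, ("->", x, r), s)          # the function must accept x
        return walk(r, s)
```

    Running it on the identity term yields an arrow between two occurrences of the same fresh variable, i.e. the principal scheme rather than any particular instance like $Bool \to Bool$.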

    • CommentRowNumber16.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 25th 2018

    Thanks, Noam.

    As for lenses, one way to get an idea is to read this brief account of how they’ve been invented 9 times, starting off with Gödel’s Dialectica interpretation.

    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeSep 25th 2018

    I certainly agree that it’s sometimes useful to consider categories as fibrations over 1. But I’m a little confused about how this relates to a view of extrinsic typing as type refinement. I thought type refinement was about assigning more refined types to the same terms. But then if we are to view extrinsic typing as a refinement of a “trivial” intrinsic typing, it would seem that the latter would have to have one object but all the (possibly ill-typed) terms as morphisms. Which may be fine from a syntactic point of view, but I think in most semantic models of the refined system there is no base category in which all the possibly-ill-typed terms can be interpreted as anything at all.

  7. I think it would help to have a more concrete type system/semantics to talk about. But I would suggest that you first have a look at the example from Reynolds’ The Meaning of Types: from Intrinsic to Extrinsic Semantics that Paul-André and I formalized in section 6 of our POPL paper. In this example, the base category $\mathcal{T}$ is indeed freely generated over a single type $P$ of untyped syntactic “phrases”. Then one can (as you suggest) define a semantic functor $\mathcal{D} \to Dom$ that completely ignores $\mathcal{T}$ – what Reynolds called an “intrinsic semantics”. But in order to even state the coherence theorem for this semantics one needs to reference the forgetful functor $\mathcal{D} \to \mathcal{T}$, and moreover, Reynolds observed that there is an elegant proof of the coherence theorem that works by first defining another “untyped semantics” $\mathcal{T} \to Dom$ (which relies on $Dom$’s ability to model recursive types), and constructing a logical relation between the two interpretations.

    • CommentRowNumber19.
    • CommentAuthorMike Shulman
    • CommentTimeSep 25th 2018

    Sure, for semantics in Dom I expect you can model untyped syntax already, but that’s a pretty special case; most categorical semantics don’t have general recursion.

    • CommentRowNumber20.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 25th 2018

    Perhaps Mike’s own type theories would help understanding. Aren’t those modal type theories of the LSR program (as explained in the HoTTEST talk) great examples, where the mode theory of the base is refined to the full theory up top?

  8. Mike #19:

    Sure, for semantics in Dom I expect you can model untyped syntax already, but that’s a pretty special case; most categorical semantics don’t have general recursion.

    I’m not sure what conclusion you are drawing. In Reynolds’ example there are two different semantics, corresponding to two functors $[[-]]_D : \mathcal{D} \to Dom$ and $[[-]]_T : \mathcal{T} \to Dom$. The interpretation $[[-]]_D$ does not rely on any special properties of $Dom$ other than cartesian closure. The interpretation $[[-]]_T$ does rely on special properties of $Dom$ (namely, the existence of recursive types), but this is what leads to his elegant proof of the coherence of $[[-]]_D$ relative to the original type system. So there is no barrier to taking the target of $[[-]]_D$ to be another ccc (say, $Set$), but you would have to devise another proof of coherence. (I suspect that Reynolds’ proof could easily be adapted by taking the logical relation to go between sets and domains, although I haven’t worked that out.)

    David #20:

    Aren’t those modal type theories of the LSR program (as explained in the HoTTEST talk) great examples, where the mode theory of the base is refined to the full theory up top?

    That is indeed the way Paul-André and I like to analyze the example of reasoning in substructural logics, with proofs corresponding to morphisms in $\mathcal{D}$ that live over “context effects” in $\mathcal{T}$.

    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeSep 25th 2018

    My point is that thinking about $\mathcal{D}$ as fibered over $\mathcal{T}$ isn’t useful for the majority of semantics, which seems to be exactly what you’re saying: for semantics of $\mathcal{D}$ in $Dom$, you can use $\mathcal{T}$ to prove coherence, but for general semantics you have to do something else.

    • CommentRowNumber23.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeSep 25th 2018
    • (edited Sep 25th 2018)

    My point is that thinking about $\mathcal{D}$ as fibered over $\mathcal{T}$ isn’t useful for the majority of semantics, which seems to be exactly what you’re saying

    No, that’s not what I’m saying, and I certainly think that it is useful. I explained how Reynolds’ proof uses the semantics $[[-]]_T : \mathcal{T} \to Dom$ in order to prove the coherence of his semantics $[[-]]_D : \mathcal{D} \to Dom$, and suggested that it could probably be adapted to proving coherence of an alternative “intrinsic” semantics $[[-]]_D$ of typing derivations valued in some other ccc such as $Set$. Also, I made the point that knowledge of the forgetful functor $\mathcal{D} \to \mathcal{T}$ is implicitly required even to make the statement of the coherence theorem.

    Sorry if I am not being clear, but again I suggest you have a look at Reynolds’ paper and/or our analysis in the POPL paper.

    • CommentRowNumber24.
    • CommentAuthorMike Shulman
    • CommentTimeSep 25th 2018

    I looked at the papers, but I’m still confused. I think I don’t even understand what you mean by “coherent”. I see the statement is “the interpretation of a judgment doesn’t depend on its derivation”, but in that case I don’t see how you can have a functor $\mathcal{D} \to Dom$ that isn’t coherent; doesn’t the construction of a syntactic category already mod out by the existence of different derivations of the same term?

    • CommentRowNumber25.
    • CommentAuthorNoam_Zeilberger
    • CommentTimeSep 25th 2018
    • (edited Sep 25th 2018)

    doesn’t the construction of a syntactic category already mod out by the existence of different derivations of the same term?

    Do you mean different derivations of the same typing judgment, rather than of the same term? In any case, the construction of $\mathcal{D}$ doesn’t mod out by either. In general, we do not ask that a type refinement system $\mathcal{D} \to \mathcal{T}$ be a faithful functor, since we do not want to rule out the possibility of having essentially different typing derivations of the same judgment.

    You could certainly construct the quotient category $\mathcal{D}/{\sim}$ with the same objects as $\mathcal{D}$ and where two morphisms of $\mathcal{D}$ are identified just in case they have the same domain and codomain and map to the same morphism in $\mathcal{T}$ (i.e., they are derivations of the same typing judgment). But then proving that a functor $\mathcal{D} \to \mathcal{X}$ lifts to a functor $\mathcal{D}/{\sim} \to \mathcal{X}$ would amount to the same work as proving coherence.

  9. To quote Reynolds: “In fact, as we will see in Section 5, our intrinsic semantics is coherent. However, this fact depends critically on the details of the language design, especially of the semantics of implicit conversions and “overloaded” operators with more than one signature.”

    In other words, the coherence theorem is accomplishing something non-trivial, and I would suggest that this thing is best expressed in categorical language (here: for any two morphisms $\alpha_1, \alpha_2$ in $\mathcal{D}$ with the same domain and codomain, if $U(\alpha_1) = U(\alpha_2)$ then $[[\alpha_1]] = [[\alpha_2]]$), rather than as a precondition for the use of that language.
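    Phrased executably for a toy refinement system (the data below is invented purely for illustration), the coherence condition says exactly this: derivations with the same endpoints and the same underlying term must receive the same interpretation, while derivations over different terms are unconstrained.

```python
# Coherence for a toy refinement system: if two derivations share endpoints
# and underlying term (i.e. they derive the same typing judgment), their
# interpretations must agree.

derivations = {
    # name: (domain, codomain, underlying term U(alpha), interpretation)
    "d1": ("A", "B", "t", "f"),
    "d2": ("A", "B", "t", "f"),   # another derivation of the same judgment
    "d3": ("A", "B", "u", "g"),   # same endpoints, different term: unconstrained
}

def coherent(ds):
    """Check: same (dom, cod, U(alpha)) implies same interpretation."""
    seen = {}
    for dom, cod, term, sem in ds.values():
        key = (dom, cod, term)          # the typing judgment alpha derives
        if key in seen and seen[key] != sem:
            return False                # two derivations disagree: incoherent
        seen[key] = sem
    return True
```

    Equivalently, `coherent` checks whether the interpretation factors through the quotient by "same judgment", which is the lifting property described above.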

    • CommentRowNumber27.
    • CommentAuthorMike Shulman
    • CommentTimeSep 25th 2018

    It’s not even clear to me that you get a category if you don’t quotient by anything, let alone a category with all the structure you expect to have. All the usual constructions of the syntactic category that I’ve seen take the morphisms to be typing judgments.

  10. Mike, I think you must be reading more into the construction than what we explain in the paper. The whole point is to have two categories $\mathcal{D}$ and $\mathcal{T}$ related by a functor $U : \mathcal{D} \to \mathcal{T}$, and thereby distinguish between “terms” (= morphisms of $\mathcal{T}$), “typing judgments” (= morphisms of $\mathcal{T}$ paired with objects of $\mathcal{D}$ refining their domain and codomain), and “typing derivations” (= morphisms of $\mathcal{D}$), rather than taking the usual syntactic category of a type theory, which collapses the three notions. This proof-theoretic reading of a functor as a type refinement system can be applied whenever you have a functor $U : \mathcal{D} \to \mathcal{T}$ (even if it is not always so evocative), because it is just a collection of elementary definitions.

    In Section 6 of the POPL paper, we explain how to construct a particular “syntactic type refinement system” $U : \mathcal{D} \to \mathcal{T}$ corresponding to Reynolds’ type system by first constructing $\mathcal{T}$ as a freely generated category, then constructing $\mathcal{D}$ as another freely generated category (with a priori no relation to $\mathcal{T}$), and finally defining $U$ to express the desired typing laws. Alternatively, this could be seen more directly as the construction of a freely generated displayed category $\mathcal{T} \to Prof$ over the freely generated category $\mathcal{T}$. I don’t think there is anything suspect here, it is just perhaps a different way of looking at type systems categorically than most people are used to.

    • CommentRowNumber29.
    • CommentAuthorMike Shulman
    • CommentTimeSep 26th 2018

    Well, you can always get a category by postulating that you have a category, but I prefer honest toil. I don’t think it’s reasonable to use words like “term” and “typing judgment” if what you’re really talking about is just a category, rather than actual syntax. Even your section 6, if I’m reading it correctly, doesn’t actually start with Reynolds’ syntax, but generates some category (thereby postulating that you have a category) subject to some rules that look kind of like the rules of Reynolds’ syntax.

  11. As explained at the beginning of the section, we give an encoding of his language + type system in the style of LF, using higher-order abstract syntax. An alternative that is closer to Reynolds’ original syntax (and which you might find more palatable) is to construct $\mathcal{D}$ and $\mathcal{T}$ as multicategories.

    In any case, part of the “honest toil” in this work was the conceptual one of realizing that many extrinsic typing concepts (such as subtyping and subsumption) are explained by passing from categories to functors, and that moreover this is in harmony with a fibrational view of substructural proof theories such as separation logic (which, as David said above, is very much in the spirit of the LSR program). Perhaps it would focus the discussion if you said what kind of (alternative?) analysis you would give for extrinsic typing?

    • CommentRowNumber31.
    • CommentAuthorMike Shulman
    • CommentTimeSep 26th 2018

    As explained at the beginning of the section, we give an encoding of his language + type system in the style of LF, using higher-order abstract syntax.

    That’s indeed what you say at the beginning of the section, but the actual definition later on says that $T$ is a cartesian category freely generated by an object and some morphisms, and that $U : D \to T$ is the “cartesian logical refinement system” — which you’ve defined to be a certain kind of category and functor — freely generated by some other stuff. I don’t think it’s old-fashioned to insist that syntax (even higher-order abstract syntax) is not the same as semantics.

    I don’t mean to belittle the conceptual insight of using functors to represent aspects of extrinsic typing. But I think there’s more to be done in using this insight to actually prove theorems about syntax.

  12. I don’t think it’s old-fashioned to insist that syntax (even higher-order abstract syntax) is not the same as semantics.

    It does seem to me somewhat old-fashioned to insist that freely generated categories cannot count as syntax. Would you say that a Lawvere theory is semantics rather than syntax? In this case, I think that the difference between Reynolds’ type system and our encoding is sufficiently small for the latter to count as “syntax”, and that our categorical formalization really captures the essence of his proof.

    That said, I think I do understand your broader point, that you would like to have better tools/theorems for proving results about concrete type systems without having to think too hard about questions of adequacy. The lack of such tools is a criticism I’ve heard before about this work with Paul-André, and something that I agree should be addressed. Personally, though, I don’t think it is helpful to phrase this criticism in terms of ambiguous and loaded expressions like “syntax vs semantics”. Instead it is better to appeal to well-defined mathematical distinctions, like “category vs multicategory”, “tree vs graph”, etc. In computer science, we are used to the idea that a program can have many different representations, e.g.:

    • as a string of bits;
    • as a tree representing its hierarchical structure;
    • as a graph representing its control structure;
    • as the result of running another program;

    and so on. I wouldn’t say that any of these particular representations has the unique privilege of being counted as “syntax” (or as “semantics”), but we can nonetheless make meaningful distinctions between them and apply them to different purposes.

    • CommentRowNumber33.
    • CommentAuthorMike Shulman
    • CommentTimeSep 27th 2018

    I agree that there is a continuum between syntax and semantics, and there’s a sense in which a free category is syntax. But it’s not the syntax that one usually writes down as a type theory, because it has a primitive cut/substitution rule and is quotiented by an explicit associativity axiom. Relating this kind of “syntax” to the more usual type-theoretic syntax with implicit substitutions involves something like cut-elimination / normalization / initiality theorems. You can call your free category syntax, but then the onus is on you to relate it to what other people call syntax. Maybe it’s obvious to you, but it’s not obvious to me that the primitive cut/substitution rule and associativity quotient in your free category give you something even essentially analogous to the derivation trees in a deductive system.

  13. Relating this kind of “syntax” to the more usual type-theoretic syntax with implicit substitutions involves something like cut-elimination / normalization / initiality theorems.

    Agreed 100%. Though I would add (and perhaps you agree) that it should be possible to express such theorems directly in categorical language, rather than seeing them as a “wormhole” between strangely separated universes of type theory and category theory.

    Maybe it’s obvious to you, but it’s not obvious to me that the primitive cut/substitution rule and associativity quotient in your free category give you something even essentially analogous to the derivation trees in a deductive system.

    Yes I think it is clear in this case, but agree that it would be nice to have a theorem in the literature to refer to. For example, in this case the appropriate theorem is that equivalence classes of morphisms $A \to B$ in a ccc generated freely over some collection of operations $\Sigma$ are in one-to-one correspondence with $\beta\eta$-normal simply-typed lambda terms $x : A \vdash_\Sigma t : B$ generated freely over the signature $\Sigma$ (hence, “higher-order abstract syntax”).

    • CommentRowNumber35.
    • CommentAuthoratmacen
    • CommentTimeSep 12th 2019

    Joshua Dunfield and Neel Krishnaswami wrote a survey article about bidirectional type checking, and Neel linked to a draft/preprint from a blog post. I’m reading it, and it seems very good and thorough. I figure this page should link to it, right?

    • CommentRowNumber36.
    • CommentAuthorDavid_Corfield
    • CommentTimeSep 13th 2019

    Yes, of course.

    • CommentRowNumber37.
    • CommentAuthoratmacen
    • CommentTimeSep 13th 2019

    Added Dunfield and Krishnaswami’s survey article to the general references.

    diff, v4, current

    • CommentRowNumber38.
    • CommentAuthorjonsterling
    • CommentTimeSep 16th 2019
    I think the time has come for someone to explain what these much-acclaimed systems with implicit substitutions are intended to be used for. To start with, I will promote in this thread a common perspective among people who study syntax for a living, but I welcome other viewpoints that defend the notion of implicit substitutions, or to be much more precise, defend the importance of working at a level of abstraction where it is possible to see the difference between implicit and explicit substitutions.

To me, the role of concrete syntax (here I mean either strings or trees and stuff arranged into derivation trees, etc.) is basically to study mathematically the algorithms and structures which are employed in computerized proof assistants (or compilers, etc.). Btw I am using “concrete” as a synonym for “raw”.

Concrete syntax to me is almost inseparable from the needs of computerized implementation, for a very important reason: the concrete syntax of a type theory is _not_ what is employed when informal but rigorous mathematics is done on a piece of paper or a blackboard. It is nearly impossible to work rigorously with concrete syntax (because it is so sensitive), but it is very easy to work rigorously with type-theoretic ideas (it is just mathematics!). So this cannot possibly be the reason that we are studying concrete or raw syntax.

In fact, informal but rigorous type theory reaches the real objects that the type theorist is referring to directly, and it is really a category error to think that when I write a type-theoretic object on the blackboard, somewhere there is a well-defined transformation of that “expression” from a sequence of characters into an abstract object. It is no different from ordinary math, where in contrast it would not be taken seriously to argue that the expressions we write or speak are passing through a translation involving the concrete syntax of the ZFC formalism.

    I recall during a seminar once, Steve Awodey made a wonderful remark to the room about how it makes no sense to write a substitution “[M/x]N” on the blackboard when N and M are mathematical objects defined by a type theorist who is conveying some construction in type theory. Steve argued correctly that this was a category error; of course this is an instance of the fact that a “variable” in the sense of syntax is not at all the same as a “variable quantity” that one might manipulate when doing mathematics.

The state of affairs is totally different in a computer, however: a computer does not have access to the mathematical objects we are grasping. A computer, unlike a human being, must be communicated with using precise languages which ultimately boil down to sequences of bits — a necessary _formalism_ which is totally orthogonal to the question of mathematical rigor, which does not require formalism when employed by human beings. So to me, studying concrete or raw syntax is something that we do in order to bridge the gap between human beings (who naturally possess a mathematical faculty) and computers, which do not.

All this is to say that it is not enough to me that apparently “some people” are accustomed to thinking of type theory as sequences of symbols with implicit substitutions (don’t forget capture-avoiding!) — even if this were in fact the dominant view, it would not convince me. In particular, I am not buying the idea that it is somehow an obligation to prove that my fancy “syntax” (defined abstractly by a universal property) is the same as some traditional formulation. In fact, if it were ever to happen that a type theory which I defined by a universal property turned out not to be equivalent to someone’s favorite traditional presentation, I can say with complete confidence that it is the latter which has a mistake in it! (That has actually happened plenty of times!) This is because universal properties are definitive, and the role of concrete constructions is to _present_ them in ways that are locally expedient for doing computations.

    Fortunately I can say that, in contrast to the point of view which pervades the field of formal logic, nearly all type theorists (sometimes very secretly!!) treat these concrete things as a dispensable and replaceable formal representation of some real object (such as the Free XYZ) — we generate and throw away concrete formalism and notions of substitution just as quickly and freely as we consume and emit water, but our actual objects of study (the type theories rather than their presentations, usually defined by universal properties) change much more slowly over time and are more stable along the march of objective mathematical investigation.


    So when it comes to concrete syntax and deciding what aspects of it are important, I think mostly about what is needed in computerized implementations. Here the answers are very clear: implementations of type theory do _not_ need untyped substitutions to exist.

    I must pause here to insist that when discussing implicit vs explicit substitution, the only difference of substance is whether the substitutions are well defined on raw terms prior to typing, or if they can only be applied via the substitution rule. If you restrict to the implicit substitutions which arise from an application of the admissible substitution rule, it is trivial to see that there can be no difference here with the explicit case. If one requires other substitutions to exist which are not captured by the substitution rule, then there is of course a very real difference which we can discuss.

    In other words, whether the substitution rule is a schema which is _implemented_ by raw substitutions, or the typing rule for a substitution operation which is a generator, is always invisible; by definition, nothing can depend on it.
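    To make the invisibility point concrete, here is a minimal sketch in the “implicit” style — raw de Bruijn terms with a meta-level substitution function. (This is an illustrative toy, not a quotation of any system discussed in the thread; all names are mine.) In the “explicit” style, `subst` would instead be a term former whose typing rule is a generator; so long as that former is only introduced by instances of the admissible substitution rule, the two styles compute the same results.

    ```python
    from dataclasses import dataclass

    # Raw de Bruijn terms: the "implicit" style keeps substitution at the meta level.
    @dataclass(frozen=True)
    class Var:
        ix: int

    @dataclass(frozen=True)
    class Lam:
        body: object

    @dataclass(frozen=True)
    class App:
        fn: object
        arg: object

    def shift(d, c, t):
        """Add d to every index >= c: the reindexing needed when moving under a binder."""
        if isinstance(t, Var):
            return Var(t.ix + d) if t.ix >= c else t
        if isinstance(t, Lam):
            return Lam(shift(d, c + 1, t.body))
        return App(shift(d, c, t.fn), shift(d, c, t.arg))

    def subst(j, u, t):
        """Replace variable j by u in t; u is reindexed as we pass binders."""
        if isinstance(t, Var):
            return u if t.ix == j else t
        if isinstance(t, Lam):
            return Lam(subst(j + 1, shift(1, 0, u), t.body))
        return App(subst(j, u, t.fn), subst(j, u, t.arg))

    # Beta-reducing (lam x. x x) applied to the free variable with index 5:
    print(subst(0, Var(5), App(Var(0), Var(0))))
    ```
    
    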

    Sometimes I think that we type theorists over the years may have given a really misleading impression by “compiling down” our work into formal rules and symbol pushing (which most of us had been doing only as a matter of expedient communication rather than because we were really studying strings or trees or derivations).

    I am being a bit provocative and polemical because I hope that I can spark a debate, and that people who believe that implicit substitution vs explicit substitution is a sticking point in the study of type theories can defend their positions. I don’t mean any offense, and hope that my comments are taken in the spirit of promoting healthy debate within a friendly crowd.
    • CommentRowNumber39.
    • CommentAuthorMike Shulman
    • CommentTimeSep 17th 2019

    Thanks for opening this conversation, Jon. (Although I am a bit puzzled by the choice of thread – what does it have to do with bidirectional typechecking?)

    Maybe we can start out by being a little more clear about what’s being contrasted. At some points it seems to me that you’re conflating what I see as three different axes:

    1. intrinsically typed / algebraic / QIIT versus extrinsically typed / raw terms with typing judgments
    2. explicit substitutions versus implicit substitutions
    3. de Bruijn indices versus named variables

    You didn’t say anything about the third one explicitly, but when you wrote “implicit substitutions (don’t forget capture avoiding!)” the most obvious interpretation to me is as a reference to it, since I usually think of “variable capture” as a concept pertaining only to the use of named variables. One does have to do reindexing when substituting with de Bruijn indices, but I don’t think I’ve heard that described as “capture avoidance” before – although that could just be my ignorance.
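    For readers less familiar with the named-variable side of this distinction, here is a short generic sketch (hypothetical helper names, not tied to any system in the thread) of what “capture avoidance” means there: substituting u for x under a binder λy must rename y whenever y occurs free in u, or the free y in u would be wrongly bound.

    ```python
    # Named-variable terms as tuples: ('var', x), ('lam', x, body), ('app', f, a).

    def fv(t):
        """Free variables of t."""
        if t[0] == 'var':
            return {t[1]}
        if t[0] == 'lam':
            return fv(t[2]) - {t[1]}
        return fv(t[1]) | fv(t[2])

    def fresh(avoid, x):
        """A variant of x not in avoid (primes appended until fresh)."""
        while x in avoid:
            x += "'"
        return x

    def subst(x, u, t):
        """Capture-avoiding substitution of u for x in t."""
        if t[0] == 'var':
            return u if t[1] == x else t
        if t[0] == 'app':
            return ('app', subst(x, u, t[1]), subst(x, u, t[2]))
        y, body = t[1], t[2]
        if y == x:
            return t                      # x is shadowed here: nothing to substitute
        if y in fv(u):                    # naive substitution would capture y
            y2 = fresh(fv(u) | fv(body), y)
            body, y = subst(y, ('var', y2), body), y2
        return ('lam', y, subst(x, u, body))

    # [y/x](lam y. x y): the binder must be renamed to avoid capturing the new y.
    print(subst('x', ('var', 'y'), ('lam', 'y', ('app', ('var', 'x'), ('var', 'y')))))
    ```
    
    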

    My understanding is that in extrinsically typed syntax, one can pair either kind of substitution with either kind of variable, so the second and third distinctions are orthogonal. Whereas intrinsically typed syntax seems to require explicit substitutions and de Bruijn indices. So when you say “working at a level of abstraction where it is possible to see the difference between implicit and explicit substitutions” I would phrase it as instead “working with definitions where it is possible to use implicit substitutions and named variables.” (I wouldn’t say that intrinsically typed and extrinsically typed syntax have different amounts of abstraction.)

    Overall, it seems to me that you are arguing for what I would call intrinsically typed syntax, not for explicit substitutions. One can have a syntax with raw terms and typing judgments and explicit substitutions in the raw terms, but that’s not what you want. Right?

    • CommentRowNumber40.
    • CommentAuthorMike Shulman
    • CommentTimeSep 17th 2019

    One other small remark to begin with is that it seems a bit contradictory to me to say that

    it would not be taken seriously to argue that the expressions we write or speak are passing through a translation involving the concrete syntax of the ZFC formalism

    but then acknowledge that the argument you’re advancing is

    in contrast to the point of view which pervades the field of formal logic.

    Perhaps I am misunderstanding what you mean by one or both of these two phrases?

    • CommentRowNumber41.
    • CommentAuthorjonsterling
    • CommentTimeSep 17th 2019
    Hi Mike, I'll study your response soon, just wanted to mention that I had opened this conversation in this thread in response to the earlier discussion with Noam --- but we could continue it elsewhere if you feel it would be more productive :)

    And yes, just before I get into your comments more deeply, I would like to clarify that I am not arguing for explicit over implicit substitutions, but for working at a level of abstraction where the difference between them is not visible; that said, the most natural way to connect that very abstract thing to the concrete is through explicit substitutions, because these can be described by a generalized algebraic theory (namely, the generalized algebraic theory of context comprehension).
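    For readers following along, here is one common way the comprehension structure alluded to above is axiomatized (a category-with-families-style sketch; the notation is illustrative, not a quotation of any specific paper). For a type $A$ in context $\Gamma$, comprehension provides:

    ```latex
    % Context comprehension: an extended context Gamma.A with projection
    % and generic variable
    \mathsf{p}_A \colon \Gamma.A \to \Gamma,
    \qquad
    \mathsf{q}_A \in \mathrm{Tm}(\Gamma.A,\; A[\mathsf{p}_A])
    % such that every sigma : Delta -> Gamma and a in Tm(Delta, A[sigma])
    % extend uniquely to
    \langle \sigma, a \rangle \colon \Delta \to \Gamma.A
    % subject to the equations
    \mathsf{p}_A \circ \langle \sigma, a \rangle = \sigma,
    \qquad
    \mathsf{q}_A[\langle \sigma, a \rangle] = a,
    \qquad
    \langle \mathsf{p}_A \circ \tau,\; \mathsf{q}_A[\tau] \rangle = \tau .
    ```

    In this presentation the familiar explicit-substitution combinators — weakening $\mathsf{p}_A$, the zero variable $\mathsf{q}_A$, and the lifting $\langle \sigma \circ \mathsf{p},\, \mathsf{q} \rangle$ used under binders — are exactly operations of the generalized algebraic theory, with no raw terms in sight.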
    • CommentRowNumber42.
    • CommentAuthoratmacen
    • CommentTimeSep 17th 2019

    Re #38:

    I agree that type theory is not really about syntax, but I don’t agree that type theory is about directly working with mathematical objects. It cannot be, because then it would not be different from math itself.

    I believe type theory is about the presentation of mathematical objects. Math doesn’t need to be presented directly by syntax. Computational type theory presents math using computations (roughly: behavioral equivalence classes of programs), and material set theory presents math using material sets. I consider these approaches to be legitimate styles of type theory, and I lament that they’re often not treated as such. (If the presentation is not syntax, there’s the separate technical issue of accessing the presentation via syntax. That is not (necessarily) type theory.)

    Syntax is the least common denominator style of presentation. Everyone understands the gist of syntax, whether they realize it or not. Type theory is not limited to syntax, but it sure works with it. Syntactic type systems are a way to develop type theoretic ideas by example.

    So when it comes to concrete syntax and deciding what aspects of it are important, I think mostly about what is needed in computerized implementations. Here the answers are very clear: implementations of type theory do _not_ need untyped substitutions to exist

    That’s true. But it’s mostly irrelevant to implementation, because the notion of typed substitution that’s needed is the one you’re supposed to be implementing, not one that you already have available. The practical question is whether to implement the substitution of the new type system in terms of untyped operations or operations taken from some other type system. It’s quite important that untyped implementations exist, because otherwise there’d be no reason to believe that type theory can be implemented using almost any programming language.

    Possible advantages of typed implementations are worth considering, but the more exotic features of the (metalanguage!) type system they require, the less broadly applicable they are, without including an implementation of parts of the metalanguage. Eventually, there needs to be a reduction to something ubiquitous.

    If one requires other substitutions to exist which are not captured by the substitution rule, then there is of course a very real difference which we can discuss.

    OK, I’m curious. Tell us about it.

    I am being a bit provocative and polemical because I hope that I can spark a debate, and that people who believe that implicit substitution vs explicit substitution is a sticking point in the study of type theories can defend their positions.

    I admit to preferring implicit substitutions, although they’re the worse-performing implementation, and are more remote from category theory. They’re just so convenient. But I consider it an implementation detail. So no debate from me about that.

    • CommentRowNumber43.
    • CommentAuthoratmacen
    • CommentTimeSep 17th 2019

    Re #39:

    Whereas intrinsically typed syntax seems to require explicit substitutions and de Bruijn indices.

    I don’t think that’s true. I think that’s just the combination that’s been worked out because the Agda camp likes it.

    • CommentRowNumber44.
    • CommentAuthorMike Shulman
    • CommentTimeSep 17th 2019

    I guess with a “quotient inductive-recursive definition” you could write down something that could arguably be called intrinsically typed syntax with implicit substitutions? I’m generally rather leery of induction-recursion (and especially quotient/higher induction-recursion) because I’ve never seen a convincing explanation of its categorical semantics in general. In particular, one selling point of intrinsic syntax seems to be its essentially-algebraic character, and I’m unclear on whether that could be applied to the inductive-recursive case. But yes, I can see that it probably at least exists.

    • CommentRowNumber45.
    • CommentAuthoratmacen
    • CommentTimeSep 17th 2019
    I see. I wasn’t thinking of also quotienting. But yeah, induction-recursion.