I guess with a “quotient inductive-recursive definition” you could write down something that could arguably be called intrinsically typed syntax with implicit substitutions? I’m generally rather leery of induction-recursion (and especially quotient/higher induction-recursion) because I’ve never seen a convincing explanation of its categorical semantics in general. In particular, one selling point of intrinsic syntax seems to be its essentially-algebraic character, and I’m unclear on whether that could be applied to the inductive-recursive case. But yes, I can see that it probably at least exists.
Re #39:
Whereas intrinsically typed syntax seems to require explicit substitutions and de Bruijn indices.
I don’t think that’s true. I think that’s just the combination that’s been worked out because the Agda camp likes it.
Re #38:
I agree that type theory is not really about syntax, but I don’t agree that type theory is about directly working with mathematical objects. It cannot be, because then it would not be different from math itself.
I believe type theory is about the presentation of mathematical objects. Math doesn’t need to be presented directly by syntax. Computational type theory presents math using computations (roughly: behavioral equivalence classes of programs), and material set theory presents math using material sets. I consider these approaches to be legitimate styles of type theory, and I lament that they’re often not treated as such. (If the presentation is not syntax, there’s the separate technical issue of accessing the presentation via syntax. That is not (necessarily) type theory.)
Syntax is the least common denominator style of presentation. Everyone understands the gist of syntax, whether they realize it or not. Type theory is not limited to syntax, but it sure works with it. Syntactic type systems are a way to develop type theoretic ideas by example.
So when it comes to concrete syntax and deciding what aspects of it are important, I think mostly about what is needed in computerized implementations. Here the answers are very clear: implementations of type theory do _not_ need untyped substitutions to exist.
That’s true. But it’s mostly irrelevant to implementation, because the notion of typed substitution that’s needed is the one you’re supposed to be implementing, not one that you already have available. The practical question is whether to implement the substitution of the new type system in terms of untyped operations or operations taken from some other type system. It’s quite important that untyped implementations exist, because otherwise there’d be no reason to believe that type theory can be implemented using almost any programming language.
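To make the “untyped implementation” concrete, here is a minimal sketch (my own illustrative code, not anything from the thread) of capture-free substitution on untyped de Bruijn terms. This is exactly the kind of operation that can be written in almost any programming language, without any typed metatheory:

```haskell
-- Untyped lambda terms with de Bruijn indices.
data Term = Var Int | App Term Term | Lam Term
  deriving (Eq, Show)

-- shift d c t: add d to every index in t that is >= the cutoff c.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (App f a) = App (shift d c f) (shift d c a)
shift d c (Lam b)   = Lam (shift d (c + 1) b)

-- subst j s t: replace index j with s in t, reindexing s under binders.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (App f a) = App (subst j s f) (subst j s a)
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)

-- beta b a: one root beta step, (\. b) a --> b[0 := a].
beta :: Term -> Term -> Term
beta b a = shift (-1) 0 (subst 0 (shift 1 0 a) b)
```

The reindexing in the `Lam` cases is the de Bruijn analogue of capture avoidance for named variables.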
Possible advantages of typed implementations are worth considering, but the more exotic the features of the (metalanguage!) type system they require, the less broadly applicable they are unless they include an implementation of parts of the metalanguage. Eventually, there needs to be a reduction to something ubiquitous.
If one requires other substitutions to exist which are not captured by the substitution rule, then there is of course a very real difference which we can discuss.
OK, I’m curious. Tell us about it.
I am being a bit provocative and polemical because I hope that I can spark a debate, and that people who believe that implicit substitution vs explicit substitution is a sticking point in the study of type theories can defend their positions.
I admit to preferring implicit substitutions, although they’re the worse-performing implementation, and are more remote from category theory. They’re just so convenient. But I consider it an implementation detail. So no debate from me about that.
One other small remark to begin with is that it seems a bit contradictory to me to say that
it would not be taken seriously to argue that the expressions we write or speak are passing through a translation involving the concrete syntax of the ZFC formalism
but then acknowledge that the argument you’re advancing is
in contrast to the point of view which pervades the field of formal logic.
Perhaps I am misunderstanding what you mean by one or both of these two phrases?
Thanks for opening this conversation, Jon. (Although I am a bit puzzled by the choice of thread – what does it have to do with bidirectional typechecking?)
Maybe we can start out by being a little more clear about what’s being contrasted. At some points it seems to me that you’re conflating what I see as three different axes:
1. intrinsically typed vs. extrinsically typed syntax,
2. implicit vs. explicit substitutions,
3. named variables vs. de Bruijn indices.
You didn’t say anything about the third one explicitly, but when you wrote “implicit substitutions (don’t forget capture avoiding!)” the most obvious interpretation to me is as a reference to it, since I usually think of “variable capture” as a concept pertaining only to the use of named variables. One does have to do reindexing when substituting with de Bruijn indices, but I don’t think I’ve heard that described as “capture avoidance” before – although that could just be my ignorance.
My understanding is that in extrinsically typed syntax, one can pair either kind of substitution with either kind of variable, so the second and third distinctions are orthogonal. Whereas intrinsically typed syntax seems to require explicit substitutions and de Bruijn indices. So when you say “working at a level of abstraction where it is possible to see the difference between implicit and explicit substitutions” I would phrase it as instead “working with definitions where it is possible to use implicit substitutions and named variables.” (I wouldn’t say that intrinsically typed and extrinsically typed syntax have different amounts of abstraction.)
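For concreteness, here is a minimal sketch of intrinsically typed syntax (illustrative only; an Agda-style development would look different), rendered with Haskell GADTs: variables are typed de Bruijn indices into a context, and only well-typed terms are representable at all.

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeOperators, KindSignatures #-}
import Data.Kind (Type)

-- Object-language types.
data Ty = Base | Arr Ty Ty

-- Ix ctx a: a variable of type a in context ctx (a typed de Bruijn index).
data Ix :: [Ty] -> Ty -> Type where
  Z :: Ix (a ': ctx) a
  S :: Ix ctx a -> Ix (b ': ctx) a

-- Tm ctx a: a well-typed term of type a in context ctx.
data Tm :: [Ty] -> Ty -> Type where
  V   :: Ix ctx a -> Tm ctx a
  App :: Tm ctx (Arr a b) -> Tm ctx a -> Tm ctx b
  Lam :: Tm (a ': ctx) b -> Tm ctx (Arr a b)

-- The identity function at any type: |- \x. x : a -> a.
identity :: Tm ctx (Arr a a)
identity = Lam (V Z)

-- Number of nodes in a term (just to have something observable).
size :: Tm ctx a -> Int
size (V _)     = 1
size (App f a) = size f + size a
size (Lam b)   = 1 + size b
```

Ill-typed terms such as an application of a `Base`-typed term are rejected by the metalanguage typechecker, which is the selling point of this style.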
Overall, it seems to me that you are arguing for what I would call intrinsically typed syntax, not for explicit substitutions. One can have a syntax with raw terms and typing judgments and explicit substitutions in the raw terms, but that’s not what you want. Right?
Added Dunfield and Krishnaswami’s survey article to the general references.
Yes, of course.
Joshua Dunfield and Neel Krishnaswami wrote a survey article about bidirectional type checking, and Neel linked to a draft/preprint from a blog post. I’m reading it, and it seems very good and thorough. I figure this page should link to it, right?
Relating this kind of “syntax” to the more usual type-theoretic syntax with implicit substitutions involves something like cut-elimination / normalization / initiality theorems.
Agreed 100%. Though I would add (and perhaps you agree) that it should be possible to express such theorems directly in categorical language, rather than seeing them as a “wormhole” between strangely separated universes of type theory and category theory.
Maybe it’s obvious to you, but it’s not obvious to me that the primitive cut/substitution rule and associativity quotient in your free category give you something even essentially analogous to the derivation trees in a deductive system.
Yes, I think it is clear in this case, but I agree that it would be nice to have a theorem in the literature to refer to. For example, in this case the appropriate theorem is that equivalence classes of morphisms in a ccc generated freely over some collection of operations are in one-to-one correspondence with βη-normal simply-typed lambda terms generated freely over the signature (hence, “higher-order abstract syntax”).
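To illustrate what “higher-order abstract syntax” means operationally, here is a sketch under my own naming (not any paper’s encoding): object-language binders are represented by metalanguage functions, so substitution is just metalanguage application and capture avoidance is inherited from the metalanguage.

```haskell
-- HOAS terms: Lam carries a metalanguage function.
-- Var is only used when quoting terms back to first-order form.
data Term
  = Var Int
  | App Term Term
  | Lam (Term -> Term)

-- Weak-head reduction: substitution is function application.
whnf :: Term -> Term
whnf (App f a) = case whnf f of
  Lam body -> whnf (body a)
  f'       -> App f' a
whnf t = t

-- Quote a (closed) term to a string, generating fresh variable names.
quote :: Int -> Term -> String
quote _ (Var k)   = "x" ++ show k
quote n (App f a) = "(" ++ quote n f ++ " " ++ quote n a ++ ")"
quote n (Lam f)   = "(\\x" ++ show n ++ ". " ++ quote (n + 1) (f (Var n)) ++ ")"
```

The `quote` function is one instance of the normalization/readback step discussed above: it relates the “semantic” function space back to concrete first-order syntax.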
I agree that there is a continuum between syntax and semantics, and there’s a sense in which a free category is syntax. But it’s not the syntax that one usually writes down as a type theory, because it has a primitive cut/substitution rule and is quotiented by an explicit associativity axiom. Relating this kind of “syntax” to the more usual type-theoretic syntax with implicit substitutions involves something like cut-elimination / normalization / initiality theorems. You can call your free category syntax, but then the onus is on you to relate it to what other people call syntax. Maybe it’s obvious to you, but it’s not obvious to me that the primitive cut/substitution rule and associativity quotient in your free category give you something even essentially analogous to the derivation trees in a deductive system.
I don’t think it’s old-fashioned to insist that syntax (even higher-order abstract syntax) is not the same as semantics.
It does seem to me somewhat old-fashioned to insist that freely generated categories cannot count as syntax. Would you say that a Lawvere theory is semantics rather than syntax? In this case, I think that the difference between Reynolds’ type system and our encoding is sufficiently small for the latter to count as “syntax”, and that our categorical formalization really captures the essence of his proof.
That said, I think I do understand your broader point: that you would like to have better tools/theorems for proving results about concrete type systems without having to think too hard about questions of adequacy. The lack of such tools is a criticism I’ve heard before about this work with Paul-André, and something that I agree should be addressed. Personally, though, I don’t think it is helpful to phrase this criticism in terms of ambiguous and loaded expressions like “syntax vs semantics”. Instead it is better to appeal to well-defined mathematical distinctions, like “category vs multicategory”, “tree vs graph”, etc. In computer science, we are used to the idea that a program can have many different representations (as a string of characters, as an abstract syntax tree, as a term graph with sharing, and so on). I wouldn’t say that any of these particular representations has the unique privilege of being counted as “syntax” (or as “semantics”), but we can nonetheless make meaningful distinctions between them and apply them to different purposes.
As explained at the beginning of the section, we give an encoding of his language + type system in the style of LF, using higher-order abstract syntax.
That’s indeed what you say at the beginning of the section, but the actual definition later on says that the category of terms is a cartesian category freely generated by an object and some morphisms, and that the system itself is the “cartesian logical refinement system” — which you’ve defined to be a certain kind of category and functor — freely generated by some other stuff. I don’t think it’s old-fashioned to insist that syntax (even higher-order abstract syntax) is not the same as semantics.
I don’t mean to belittle the conceptual insight of using functors to represent aspects of extrinsic typing. But I think there’s more to be done in using this insight to actually prove theorems about syntax.
As explained at the beginning of the section, we give an encoding of his language + type system in the style of LF, using higher-order abstract syntax. An alternative that is closer to Reynolds’ original syntax (and which you might find more palatable) is to construct the term and derivation categories as multicategories.
In any case, part of the “honest toil” in this work was the conceptual one of realizing that many extrinsic typing concepts (such as subtyping and subsumption) are explained by passing from categories to functors, and that moreover this is in harmony with a fibrational view of substructural proof theories such as separation logic (which, as David said above, is very much in the spirit of the LSR program). Perhaps it would focus the discussion if you said what kind of (alternative?) analysis you would give for extrinsic typing?
Well, you can always get a category by postulating that you have a category, but I prefer honest toil. I don’t think it’s reasonable to use words like “term” and “typing judgment” if what you’re really talking about is just a category, rather than actual syntax. Even your section 6, if I’m reading it correctly, doesn’t actually start with Reynolds’ syntax, but generates some category (thereby postulating that you have a category) subject to some rules that look kind of like the rules of Reynolds’ syntax.
Mike, I think you must be reading more into the construction than what we explain in the paper. The whole point is to have two categories related by a functor, and thereby distinguish between “terms” (= morphisms of the codomain category), “typing judgments” (= morphisms of the codomain paired with objects of the domain category refining their domain and codomain), and “typing derivations” (= morphisms of the domain category), rather than taking the usual syntactic category of a type theory, which collapses the three notions. This proof-theoretic reading of a functor as a type refinement system can be applied whenever you have a functor (even if it is not always so evocative), because it is just a collection of elementary definitions.
In Section 6 of the POPL paper, we explain how to construct a particular “syntactic type refinement system” corresponding to Reynolds’ type system by first constructing the category of terms as a freely generated category, then constructing the category of typing derivations as another freely generated category (with a priori no relation to the first), and finally defining the functor between them to express the desired typing laws. Alternatively, this could be seen more directly as the construction of a freely generated displayed category over the freely generated category of terms. I don’t think there is anything suspect here; it is just perhaps a different way of looking at type systems categorically than most people are used to.
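As a toy illustration of this functorial reading (my own sketch, not the paper’s construction): raw terms and typing derivations can be given as separate datatypes, with an erasure map playing the role of the functor, and a subsumption rule showing how one judgment can admit distinct derivations.

```haskell
-- Raw (extrinsic) terms: the codomain of the "functor".
data RawTm = RVar String | RApp RawTm RawTm | RLam String RawTm
  deriving (Eq, Show)

data Ty = TyBase | TyTop | TyArr Ty Ty
  deriving (Eq, Show)

-- Typing derivations: the domain of the "functor". The subsumption
-- rule DSub retypes a derivation, so the same judgment can admit
-- structurally different derivations.
data Deriv
  = DVar String Ty
  | DApp Deriv Deriv
  | DLam String Ty Deriv
  | DSub Deriv Ty

-- The action on morphisms: forget typing, keep the underlying term.
erase :: Deriv -> RawTm
erase (DVar x _)   = RVar x
erase (DApp f a)   = RApp (erase f) (erase a)
erase (DLam x _ b) = RLam x (erase b)
erase (DSub d _)   = erase d   -- subsumption is invisible in the term
```

Here `erase` is deliberately not injective on derivations of the same judgment, matching the point below that a type refinement system need not be a faithful functor.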
It’s not even clear to me that you get a category if you don’t quotient by anything, let alone a category with all the structure you expect to have. All the usual constructions of the syntactic category that I’ve seen take the morphisms to be typing judgments.
To quote Reynolds: “In fact, as we will see in Section 5, our intrinsic semantics is coherent. However, this fact depends critically on the details of the language design, especially of the semantics of implicit conversions and “overloaded” operators with more than one signature.”
In other words, the coherence theorem is accomplishing something non-trivial, and I would suggest that this thing is best expressed in categorical language (here: for any two morphisms in the derivation category with the same domain and codomain, if they are sent to the same morphism of terms then their interpretations are equal), rather than as a precondition for the use of that language.
doesn’t the construction of a syntactic category already mod out by the existence of different derivations of the same term?
Do you mean different derivations of the same typing judgment, rather than of the same term? In any case, the construction of the derivation category doesn’t mod out by either. In general, we do not ask that a type refinement system be a faithful functor, since we do not want to rule out the possibility of having essentially different typing derivations of the same judgment.
You could certainly construct the quotient category with the same objects as the derivation category, where two morphisms are identified just in case they have the same domain and codomain and map to the same morphism of terms (i.e., they are derivations of the same typing judgment). But then proving that a functor on derivations lifts to a functor on this quotient would amount to the same work as proving coherence.
I looked at the papers, but I’m still confused. I think I don’t even understand what you mean by “coherent”. I see the statement is “the interpretation of a judgment doesn’t depend on its derivation”, but in that case I don’t see how you can have a functor that isn’t coherent; doesn’t the construction of a syntactic category already mod out by the existence of different derivations of the same term?
My point is that thinking about derivations as fibered over terms isn’t useful for the majority of semantics, which seems to be exactly what you’re saying
No, that’s not what I’m saying, and I certainly think that it is useful. I explained how Reynolds’ proof uses the extrinsic semantics in order to prove the coherence of his intrinsic semantics, and suggested that it could probably be adapted to proving coherence of an alternative “intrinsic” semantics of typing derivations valued in some other ccc. Also, I made the point that knowledge of the forgetful functor from derivations to terms is implicitly required even to make the statement of the coherence theorem.
Sorry if I am not being clear, but again I suggest you have a look at Reynolds’ paper and/or our analysis in the POPL paper.