Started bib task (called this Initiality Project - References because bib is difficult to spell).
Could somebody spell out the theorem that you are trying to prove?
I know the informal idea, what I would like to see is what precise formulation you are trying to prove.
We haven’t added such a statement to any of the pages yet because the definitions we are using haven’t been finalized yet. But it should be something like “The CwF with $\Pi$-structure (or whatever) constructed in Theorem X, which is built out of equivalence classes of syntactic terms in Type Theory Y, is an initial object of the category of CwF’s with $\Pi$-structure”.
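A schematic way to write that intended statement (the notation here is merely illustrative, since the definitions are still being settled):

```latex
% Illustrative notation only: CwF_Pi denotes the category of CwFs with
% Pi-structure, Syn_Y the term-model CwF built from equivalence classes of
% syntactic terms of type theory Y.
\[
  \forall\, \mathcal{C} \in \mathbf{CwF}_{\Pi}\quad \exists!\;
  \llbracket - \rrbracket \colon \mathbf{Syn}_{Y} \longrightarrow \mathcal{C}
  \qquad \text{(i.e.\ $\mathbf{Syn}_{Y}$ is initial in $\mathbf{CwF}_{\Pi}$).}
\]
```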
Right, but for which definition of “syntax”? One may take “syntax” to mean “contextual category”, but maybe that’s not what you want to do.
@Urs it seems to be Harper’s definition of ’abstract syntax tree’ and variants thereof. Unless I am mistaken.
Thanks. Would you agree that before setting out to prove anything, the first thing to do is to write down a really clear and precise version of the statement to be proven?
Is there any precedent where Harper’s abstract syntax trees have been shown to be equivalent to any semantic structure, type theoretic or not? Something I could look at to get a feeling for which kind of statement you are after. Thanks.
@Urs have you seen the length of the ’Raw Syntax’ thread? That is one of the things being pinned down there.
In classical logic this is done by defining an alphabet and making words and sentences from these. These aren’t very easy to work with. What I think Mike is proposing, is this inductive syntax as in Harper. The details haven’t been fully agreed upon yet however.
David, certainly I noticed the flood of activity here in numerous threads with “Initiality Project” in the title. Now that this flood has died out, I come back to see what the mountain’s labour has brought forth.
I just want to get an idea of what the project is really about. As you may know, various type theorists out there claim that initiality is a non-issue, others claim that it is a minor issue that has been solved, such as in
yet others claim that it is a major issue that still has essentially not been solved.
This variety of opinions makes it clear that it all depends on what exactly one actually means. I’d love to see a clarifying discussion on this point, but it seems hard to come by.
I’d suggest any Initiality Project should first and foremost give a critical discussion of what is lacking with existing claims of initiality, as in Castellan14 above, then lay out how that lack is to be overcome, and then state precisely which improved version of the theorem is in need of proof.
But I do gather now that it’s meant to be about the relation of some very low-level (“raw”) concept of syntax to its semantics.
I would like to know: Is this effort a priori so much about initiality, or is it rather first of all about translating from raw assembler code to a more high-level syntax, in the vein of this quote here?
If it’s (just or foremost) the latter, it would seem to me that the question is not so much a natively type-theoretic one, but a general computer-science-type issue that must have appeared and must have been dealt with elsewhere among people looking into formalization of mathematics?
Lemma 3 and Lemma 4 in that report are proofs by induction, as expected, but for Lemma 2 only one case is detailed, and the statement of the proof of Lemma 4 is, in toto, “By induction”. This is somewhat antithetical to the aim of actually detailing the whole proof. Even the definition of the term model says things like
The CwFLFs constructions on $\mathbb{T}$ are defined without any surprise, for instance […] and so on. Type conversions make it a good definition.
It is easy easy [sic] to check that $\mathbb{T}$ is a CwFLF.
And again, the proof of Lemma 5
The proof is by induction on derivations. We detail only the lambda-abstraction case
In section 3.2
A proof of normalization for a similar calculus can be found in [ACD07]. We believe that this proof could easily be extended to our calculus.
Lemma 2 in section 3.3
By induction on the equality derivation. All cases are straightforward but the reflexivity case. This is where normalization is needed. The proof, omitted can be found in [Str91] (for a different calculus but the idea is the same).
and so on. This typifies the situation. The author claims “we will give the proof” and then just points to places where similar proofs are given for similar systems.
I think the flood has died out because we are catching our breath. I’m hoping soon to come back and reorganize things so that we can really get started as a group.
I actually don’t agree that we need to start by writing out a fully precise definition of everything. Part of the problem is to write out the definitions, and that can be parallelized too: some people can be working on writing out all the cases of the definitions while other people are already working on writing out cases of the proof. As I’ve said before, after we are all done, we will work on the exposition and explain how it is related to existing literature, etc. as in any publication. Right now we are not writing a paper, we are using the nLab only as a “lab book”.
Is this effort a priori so much about initiality, or is it rather first of all about translating from raw assembler code to a more high-level syntax, in the vein of this quote here?
I don’t think there is a dichotomy there: I would say that initiality means the translation described in that quote. Although I would not describe the translation discussed in that quote as “raw assembler code” and “high-level syntax” – both actual syntax and its category-theoretic stand-ins are at the same “level”, but just expressed differently.
David, thanks for being concrete about Castellan. Now I’d like to know: Is the general setup of Castellan’s article what the Initiality Project is also meant to stick to, with the goal just being to spell out and fill in those missing induction steps? May I read Castellan as a blueprint for the Initiality Project?
Elsewhere type theorists said that the problem with Initiality is not with any given type theory, but in having a proof that works homogeneously over classes of type theories. Is this an issue with Castellan? Is this an/the issue that the Initiality Project is concerned with?
Yet elsewhere people seem to settle for rather different definitions of “syntax” in the first place; I seem to remember Thorsten Altenkirch considering “internal syntax” formalized right there in Agda. Given that apparently the proof of Initiality is reduced to boring but tedious checking of certain induction steps, maybe it would be a really good idea to take this to its logical conclusion and try to set it up Agda-friendly to start with?
The “blueprint” for the current project is Streicher’s book Semantics of type theory: Correctness, completeness and independence results, with some substantial modification as suggested by Peter Lumsdaine and more minor modifications as developed here.
Streicher’s proof is generally considered complete for the type theory he worked with (no inductive cases are “left to the reader”), but is awkward and non-modular in places. (I have not read Castellan in detail, but I have been told that he has similar issues.) The current goal is to write a proof in modern notation, starting with a very basic type theory, but written in a modular way so that the proof can be extended straightforwardly as the type theory is made more complex. The main goal is to make the proof organized and readable, to improve public understanding of what an initiality proof looks like. Proving an actually new theorem is not a main goal (in particular, the type theory we are starting with is a strict subset of Streicher’s), although we may eventually get there.
There is disagreement over whether “the problem with initiality” is “with any given type theory”. Personally, I think it’s reasonable to be dissatisfied with a claim that “initiality is easy to prove for any given type theory” when a complete proof of initiality has only ever been written out for one “given type theory”, and hardly anyone has ever read that proof. We should have significantly more experience than that with “this sort of proof” before declaring that “all proofs of this sort are easy”, even if it eventually turns out to be true. (So far, our experience in the project has justified my feeling: there is a lot of “there” there in an initiality theorem that doesn’t merit glossing over.) The goal of this project is to start gaining that experience.
Ideally, we would also eventually have a general initiality theorem that could be proven once and then applied to all examples without having to be re-proven every time. That is not a goal of this project, but I think that our work on this project may inform us as to what shape such a theorem might eventually take. (I am also aware that various people are already working on such theorems, but to date no results have appeared. Castellan’s work also only relates to one particular type theory.)
Finally, regarding your last question, I like to think of initiality theorems in a way analogous to Tom’s take on the homotopy hypothesis here (scroll down to “What is the Homotopy Hypothesis?”). Initiality theorems are about connecting syntax to semantics; but the line between syntax and semantics is not bright and clear. Rather there is a sort of continuum, with very syntactic things on one side, moving through “semantic-like syntax” through “syntax-like semantics” and ending up at very semantic things on the other side. As with the homotopy hypothesis, you can make the initiality theorem a triviality by defining both sides to mean the thing in the middle: define semantics to consist of categories with families, define syntax to be the initial category with families, and you’re done. This is the sort of thing that Thorsten is advocating, and as with “define an $\infty$-groupoid to be a Kan complex”, I don’t think it’s valueless. But I also think it’s interesting and important to actually connect the two remote ends of the continuum. In particular, the remote semantic end is what we are given “in nature”, while the remote syntactic end is what we actually write down “on paper”, and these are the things that really need to be connected in order to use type theory as an internal language for (higher) categories.
The comparison with the homotopy hypothesis might not be the way to fill us with confidence. That page points out that the hypothesis still hasn’t been proved for some definitions, e.g., Trimble $\infty$-categories, and that there are choices as to which form equivalence should take. People are still questioning its status.
But then perhaps we should take heart that for some definitions and some versions of equivalence the hypothesis has been established.
Thanks. Castellan’s pdf note was advertised to me as being a streamlined version of Streicher’s.
there is a sort of continuum
I know, that’s why I keep asking where in this continuum the Initiality Project is meant to work, and suggesting that this needs to be stated clearly.
But I understand now that you are after the extreme side of the sort of continuum. It still would seem worthwhile to break this up, if possible, into intermediate stages of syntax. I suppose one step of the initiality theorem must be something conceptual, insightful, enjoyable, whereas the last meters to the raw syntax seem to be more a tedious engineering issue in computer science.
To me it seems that a better analogy than with the homotopy hypothesis business is with the string diagram notation, as suggested at the end of the quote here.
In that analogy, I hope you will eventually state a string-diagram-style version of the Initiality Theorem and disentangle that from statement of the engineering problem that string-diagram-style reasoning is equivalent to some manipulation of abstract syntax trees. Then to prove these two things separately.
David: I’m not sure of your point. An analogy to explain the logical meaning of two statements does not have anything to do with whether the two statements have been proven. The statements “there are an infinite number of primes” and “there are an infinite number of twin primes” have a similar logical structure but, of course, very different statuses of proof. But in any case my point was not to “fill you with confidence” as to whether an initiality theorem has been proven already – we are trying to prove it right now!
Urs: There’s no reason that two analogies can’t be equally good, especially if they explain different aspects of a problem. I don’t know what you mean by “string-diagram-style version”. The theorems that rigorously establish string diagrams are generally of the same form as an initiality theorem for type theory: they say that string diagrams modulo deformation-equivalence can be assembled into an initial object in some category of structured categories. I don’t usually think of string diagram theorems as relating to abstract syntax trees at all, while type theory is essentially by definition already about abstract syntax trees. What exactly are the two things that you want to see proven separately, and why?
I was thinking more of levels of agreement as to what the homotopy hypothesis involves and what more needs to be done. And if there is disagreement, how precisely the point of the disagreement can be described. And when one party claims to have achieved something, whether all parties agree.
It would be impressive enough to reach a stage for each hypothesis where it is agreed by all that there is a continuum, and what is to count as establishing an equivalence between two points on the continuum, even if there is disagreement as to the value of connecting one end to the other. Have we reached this point in either case?
I’ve never heard of any disagreement over what counts as establishing an equivalence between two points on the homotopy hypothesis continuum, or about the fact that whether or not it’s been proven depends on your definition of “$\infty$-groupoid”, or about what exactly has been achieved. The fact that a non-expert is asking a question about it on MO doesn’t mean there is any disagreement among experts about facts (though there may be still about values). I don’t know that everyone would naturally describe it using Tom’s continuum, but I’ve never heard anyone disagree with such a formulation either.
I think reaching a comparable point for initiality theorems would be a useful achievement, which is why I like to talk about the syntax-semantics continuum, and I hope the current project will progress towards that goal as well.
Thanks. Reassuring about the homotopy hypothesis, though perhaps it was the mode of expression on the page that got me wondering:
Apart from having different models of ∞-groupoids that lend themselves more or less to a comparison with topological spaces, there is also the issue as to how to conceive of the notion of equivalence between ∞-groupoids…The reason this is important to specify is that there are other notions of equivalence on categorical structures which model homotopy types in other ways.
It’s easy to read this as though this choice makes a difference to what one can establish.
Regarding initiality, it would be interesting to hear what kinds of views are out there concerning “value” as you outline at the end of #20.
Are you perhaps confusing the choice of the notion of equivalence between particular $\infty$-groupoids with the question of how to state an equivalence between notions of $\infty$-groupoid?
The latter is perhaps most naturally phrased by saying that the two notions are the objects of two $(\infty,1)$-categories, and these two $(\infty,1)$-categories are equivalent. In practice we usually prove this by assembling the two notions into Quillen model categories that are Quillen equivalent. But in order to do this for a particular notion, we need to choose what the “weak equivalences” between two $\infty$-groupoids of that sort should be, and that choice is what that paragraph is talking about: choosing different notions of weak equivalence on the same underlying 1-category of objects can yield inequivalent model categories that present inequivalent $(\infty,1)$-categories.
Ah OK. But then isn’t the way it’s explained on the page rather odd? It sounds like you could establish the equivalence between notions before settling on the variety of equivalence for each notion.
I don’t hear that myself in the phrasing, but please feel free to clarify it!
Re #23:
while type theory is essentially by definition already about abstract syntax trees.
No it isn’t. I think you’ve even said so yourself. It’s about functions, ordered pairs, etc.
I think this is part of the problem of understanding the initiality problem. It’s not really about connecting certain categories to type theory, it’s about connecting certain categories to certain formal systems, which happen to be developed by type theorists.
You can’t establish a connection between category theory and type theory without agreeing what type theory is. But in any case, it’s not primarily about manipulation of syntax trees. (That’s just what we end up doing in practice, because we’re good at it, I suppose.)
From #20:
Rather there is a sort of continuum, with very syntactic things on one side, moving through “semantic-like syntax” through “syntax-like semantics” and ending up at very semantic things on the other side.
The continuum has a relatively large jump, though, between extrinsic syntax on one side and intrinsic semantics on the other. “Extrinsic” and “intrinsic” here refer to extrinsic/intrinsic typing. With extrinsic typing, types from some type system are assigned to objects that don’t already have types from that system. With intrinsic typing, objects from a system come with a type from that system, and cannot be handled except as permitted by that type.
The two broad approaches to bridging the gap between extrinsic syntax and intrinsic semantics are intrinsic syntax, which Thorsten Altenkirch favors, and extrinsic semantics, as in Streicher’s method. Intrinsic syntax minimizes the use of extrinsic semantics. Streicher’s method also seems pretty minimal. But note that the partial objects in Streicher’s method are extrinsically assigned dependent types by the totality proof. Intrinsically, they’re almost unityped.
Thanks, atmacen (do I know your real name?), relieved to hear somebody speak that way, that’s useful. If you could take 10 minutes to just lay out in a few paragraphs exactly what you just indicated, just with a tad more background/explanation to make it a tad more self-contained to the non-type-theorist, I’d be very grateful. Am hoping to finally see more clearly.
I’m sure intrinsic and extrinsic views of typing could use some more explanation. In our discussions I never heard about the distinction being applied to semantics.
David, my impression is that the intrinsic/extrinsic distinction applies primarily to semantics. That’s also the impression I get from that nLab page. Intrinsic syntax is the relatively recent idea that you can specify an object language without ever involving ill-typed expressions. The metalanguage type system rules out things that typing rules would traditionally rule out, when starting from raw expressions. In other words, raw expressions then typing rules is extrinsic syntax. (It’s extrinsic syntax whether the object language was designed to have interesting extrinsic semantics or not.)
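A toy illustration of “raw expressions then typing rules” (a hypothetical mini-language, not the project’s actual syntax): in the extrinsic style, any expression tree can be built, and a separate checker either assigns it a type or rejects it.

```python
# Extrinsic syntax sketch: raw expression trees exist independently of any
# typing; a separate function `check` assigns a type or rejects the tree.
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Lit:       # integer literal
    n: int

@dataclass
class BoolLit:   # boolean literal
    b: bool

@dataclass
class Add:       # addition of two subexpressions
    left: "Expr"
    right: "Expr"

@dataclass
class If:        # conditional: branches must agree on a type
    cond: "Expr"
    then: "Expr"
    els: "Expr"

Expr = Union[Lit, BoolLit, Add, If]

def check(e: Expr) -> Optional[str]:
    """Return 'int' or 'bool' if e is well-typed, else None."""
    if isinstance(e, Lit):
        return "int"
    if isinstance(e, BoolLit):
        return "bool"
    if isinstance(e, Add):
        if check(e.left) == "int" and check(e.right) == "int":
            return "int"
        return None
    if isinstance(e, If):
        if check(e.cond) != "bool":
            return None
        t1, t2 = check(e.then), check(e.els)
        return t1 if t1 is not None and t1 == t2 else None
    return None
```

In the intrinsic style, by contrast, the datatype of terms is itself indexed by object-language types (as with Agda inductive families or Haskell GADTs), so an ill-typed tree like `Add(Lit(1), BoolLit(True))` is not even expressible; here it is expressible but rejected by `check`.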
Thanks for the link to that interesting-looking discussion. I will read it.
Thanks for the clarification. I often struggle to remember where syntax ends and semantics begins.
We have some very stub-like pages in this vicinity: syntax, syntax-semantics duality, semantics, semantics of a programming language.
No it isn’t. I think you’ve even said so yourself. It’s about functions, ordered pairs, etc.
I think you are playing with words or willfully misunderstanding me. The subject matter of type theory is functions, ordered pairs, etc., but the formal system of type theory involves syntax trees, and the latter is of course what a formal theorem like an initiality theorem would be concerned with. When I said that, I was trying to understand what Urs said:
I hope you will eventually state a string-diagram-style version of the Initiality Theorem and disentangle that from statement of the engineering problem that string-diagram-style reasoning is equivalent to some manipulation of abstract syntax trees. Then to prove these two things separately.
and I still don’t.
Regarding extrinsic semantics vs intrinsic syntax, I have yet to see intrinsic syntax actually used to bridge the gap; generally it seems that people simply declare that intrinsic syntax is what they were interested in anyway without bothering to connect it to extrinsic syntax. It’s unclear to me that bridging the gap between extrinsic syntax and intrinsic syntax will be simpler than going directly from extrinsic syntax to intrinsic semantics.
I don’t really see Streicher’s method as “extrinsic semantics” either. The objects of the partial interpretation are not untyped or unityped; what they are (like the raw syntax, actually) is simply typed. I would say that it erases the type dependency, at the cost of partiality, and then restores it with the totality proof.
Do we essentially have an equivalence of categories:
$\mathrm{Type\ theories} \simeq \mathbf{CwF}$, where the morphism to CwF is the partial interpretation and the morphism from CwF is the “internal language”?
Hence an initial CwF will give us an initial type theory?
That depends on how you define the “category of type theories”. Some people like to define that category so as to create such an equivalence. My own feeling is that creating such an equivalence in that way is not very interesting because it’s not very far from simply defining a “morphism of type theories” to be a morphism between their corresponding CwFs, making the equivalence “true by definition” and hence not very contentful; I prefer to define a “morphism of type theories” in a more natural syntactic way, yielding an adjunction rather than an equivalence. But these are matters of preference.
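For concreteness, the two options in the preceding comment could be drawn as follows (all notation here is hypothetical):

```latex
% Hypothetical notation: TT a category of type theories with syntactically
% defined morphisms, Cl the term-model construction, Lang the internal language.
\[
  \mathrm{Cl} \colon \mathbf{TT} \;\rightleftarrows\; \mathbf{CwF} \;\colon \mathrm{Lang},
  \qquad \mathrm{Cl} \dashv \mathrm{Lang},
\]
% as opposed to defining a morphism of type theories to be a morphism of the
% corresponding CwFs, which makes Cl (half of) an equivalence by fiat.
```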
In any case, proving any of these statements involves essentially the same work as proving the initiality theorem.
Re #36, sorry. I guess what I was reacting to was that it sounded kind of like Urs Schreiber expects/hopes the initiality theorem would involve isolating some high-level subject matter of type theory, that isn’t category theory, connecting those, and finally connecting syntax to high-level type theory. But it doesn’t necessarily do that, and doing that would be better called something else. And type theorists probably wouldn’t agree what “doing that” should actually be doing.
I like Max’s take: initiality is a soundness and completeness theorem for a certain kind of interpretation of a certain kind of formal system.
Re #37:
I have yet to see intrinsic syntax actually used to bridge the gap; generally it seems that people simply declare that intrinsic syntax is what they were interested in anyway without bothering to connect it to extrinsic syntax.
From this thread on the HoTT list, that sounds like a slight oversimplification. Intrinsic syntax can be used as the internal representation of syntax, and token strings can be parsed and type-checked to intrinsic syntax trees. To additionally connect intrinsic syntax to extrinsic syntax trees would be computationally unnecessary. It sounds like in Thorsten’s vision, extrinsic syntax trees would be “legacy”.
You are bridging a gap that not everyone needs bridged. There’s no need to prove something about a representation you don’t use. Maybe you already realized all that, but I suppose it deserves to be reviewed, for others. This is probably why Thorsten-style intrinsic syntax hasn’t been used to bridge the gap you want to cross.
It’s unclear to me that bridging the gap between extrinsic syntax and intrinsic syntax will be simpler than going directly from extrinsic syntax to intrinsic semantics.
I guess that’s true.
I don’t really see Streicher’s method as “extrinsic semantics” either. The objects of the partial interpretation are not untyped or unityped; what they are (like the raw syntax, actually) is simply typed.
Well we probably will not be able to completely agree on what typing is like in informal math. :) In any case, it looks like we at least agree that Streicher’s method can be seen as assigning dependent refinement types to less-refined partial objects. (One may wonder why the partial objects should be typed at all if refinement is called for in any case.)
What about essentially algebraic theories? That has the gap between extrinsic syntax and intrinsic semantics bridged already, right? The syntax is not the style we want, but still: There is the option of making the unproven parts of initiality just CS-style slicing and dicing of syntax, right?
I’m tired of this conversation; can we stop it? The whole point of the initiality project was to stop arguing about whether the initiality theorem is important or trivial, hard or easy, etc., and just prove the blasted theorem. (-:
Sorry, the end of the semester is kind of a rough time to be starting new projects. Realistically I’ll probably be able to get back to this in January.
The latest edit looks like (mathematically sophisticated!) spam, I’ll delete soon unless someone objects.