There is a new preprint by Kraus & von Raumer.
Here is the abstract:
The study of equality types is central to homotopy type theory. Characterizing these types is often tricky, and various strategies, such as the encode-decode method, have been developed. We prove a theorem about equality types of coequalizers and pushouts, reminiscent of an induction principle and without any restrictions on the truncation levels. This result makes it possible to reason directly about certain equality types and to streamline existing proofs by eliminating the necessity of auxiliary constructions. To demonstrate this, we give a very short argument for the calculation of the fundamental group of the circle (Licata and Shulman ’13), and for the fact that pushouts preserve embeddings. Further, our development suggests a higher version of the Seifert-van Kampen theorem, and the set-truncation operator maps it to the standard Seifert-van Kampen theorem (due to Favonia and Shulman ’16). We provide a formalization of the main technical results in the proof assistant Lean.
From what I have skimmed:
There are some interesting techniques used in this paper, for example using ‘wild categories’ (untruncated categories) to approximate (∞,1)-categories.
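Roughly, and this is only my sketch of one common formulation (not necessarily verbatim the paper’s definition), a wild category consists of the usual categorical data with the laws stated as plain, untruncated equalities and no coherence conditions imposed on them. In Coq-style notation:

    (* One possible formulation of a "wild category": the 1-categorical data
       and laws, stated with ordinary equality and no higher coherence. *)
    Record WildCat := {
      ob : Type;
      hom : ob -> ob -> Type;
      id : forall a, hom a a;
      comp : forall a b c, hom b c -> hom a b -> hom a c;
      id_left : forall a b (f : hom a b), comp a b b (id b) f = f;
      id_right : forall a b (f : hom a b), comp a a b f (id a) = f;
      assoc : forall a b c d (f : hom a b) (g : hom b c) (h : hom c d),
          comp a b d (comp b c d h g) f = comp a c d h (comp a b c g f)
    }.

Nothing forces the hom types to be sets, and nothing asks for coherence between the equalities, which is why such a structure only approximates an (∞,1)-category.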
There is also some interesting discussion of a higher van Kampen at the end.
A nice paper. The basic idea, as I understand it, is that the flattening lemma (i.e. descent for coequalizers) is an equivalence of categories, not just an equivalence of groupoids, and thus the initial object of the category of “pointed type families over the coequalizer” (namely, the based identity type) corresponds to an initial object of the other category of “descent data”.
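Concretely, the “initial pointed family” part is just path induction. Here is a minimal Coq-style sketch (my own, stated for an arbitrary pointed type rather than for a coequalizer) of the map out of the based path family into any other pointed family:

    (* The based path family (fun y => x0 = y), pointed by eq_refl, maps into
       every pointed family (P, p0) over (X, x0) by transport, i.e. by path
       induction. *)
    Definition from_path_family {X : Type} {x0 : X}
      (P : X -> Type) (p0 : P x0) : forall y : X, x0 = y -> P y :=
      fun y e => match e in (_ = z) return P z with
                 | eq_refl => p0
                 end.

Initiality is this map together with its uniqueness (again path induction); the observation above is that this initiality can then be read off on the descent-data side via the flattening equivalence.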
Regarding “wild categories”, I notice that my impredicative construction of HITs could also be phrased in terms of initial objects in wild categories.
I am a little puzzled by their remark (p18) that “indexed higher inductive types…. are not available in most HoTT-based theorem provers.” As far as I know, indexed HITs can be represented in Coq or Agda without any trouble, using the standard private-inductive-types hack. From their remarks on p21 I gather they are thinking about newer proof assistants such as Lean, cubical Agda, and redtt that provide primitive HITs, but it seems to me like a stretch to call those experimental systems “most HoTT-based theorem provers”. (-:
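For reference, the hack looks roughly like this for the circle (my sketch in plain Coq, with transport and apD defined inline; the Coq HoTT library’s Circle.v uses essentially the same pattern):

    Definition transport {A : Type} (P : A -> Type) {x y : A}
      (p : x = y) (u : P x) : P y :=
      match p in (_ = z) return P z with eq_refl => u end.

    Definition apD {A : Type} {P : A -> Type} (f : forall a, P a)
      {x y : A} (p : x = y) : transport P p (f x) = f y :=
      match p as q in (_ = z) return transport P q (f x) = f z with
      | eq_refl => eq_refl
      end.

    Module Export Circle.
      (* "Private" prevents pattern matching on S1 outside the module, so the
         extra path constructor below cannot be contradicted by the user. *)
      Private Inductive S1 : Type := base : S1.
      Axiom loop : base = base.

      (* The dummy argument makes S1_ind depend on the loop case l. *)
      Definition S1_ind (P : S1 -> Type) (b : P base)
        (l : transport P loop b = b) : forall x : S1, P x :=
        fun x => match x return (transport P loop b = b) -> P x with
                 | base => fun _ => b
                 end l.

      Axiom S1_ind_beta_loop :
        forall (P : S1 -> Type) (b : P base) (l : transport P loop b = b),
          apD (S1_ind P b l) loop = l.
    End Circle.

Indexed HITs are handled the same way: make the inductive family private and postulate the path constructors together with their computation rules.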
Perhaps it could be fixed by “most HoTT-based theorem provers that we’ve tried” ;-)
Thanks for your interest!
The basic idea, as I understand it, is that the flattening lemma (i.e. descent for coequalizers) is an equivalence of categories, not just an equivalence of groupoids, and thus the initial object of the category of “pointed type families over the coequalizer” (namely, the based identity type) corresponds to an initial object of the other category of “descent data”.
Thanks, this description makes a lot of sense.
Regarding “wild categories”, I notice that my impredicative construction of HITs could also be phrased in terms of initial objects in wild categories.
This also came to my mind when reading your post. Maybe it’s not so surprising. I’d say the strategy of “expressing higher-categorical arguments while only referring to finitely much structure” is a main component of HoTT. Usually, it’s just presented/viewed differently. (Btw, I found the construction in your post amazing.)
I am a little puzzled by their remark (p18) that “indexed higher inductive types…. are not available in most HoTT-based theorem provers.” As far as I know, indexed HITs can be represented in Coq or Agda without any trouble, using the standard private-inductive-types hack. From their remarks on p21 I gather they are thinking about newer proof assistants such as Lean, cubical Agda, and redtt that provide primitive HITs, but it seems to me like a stretch to call those experimental systems “most HoTT-based theorem provers”. (-:
Probably the formulations “available” and “HoTT-based theorem prover” are not so good. Anyway, let me explain what I [*] meant here. I would not speak of Agda and Coq as “HoTT-based”. They are older than HoTT, so how could they be based on it? I was really only thinking of cubicaltt, cubical Agda, yacctt, RedPRL, redtt; so, yes, by “HoTT-based theorem provers” I meant exactly what you refer to as “those experimental systems”. In the sentence you cite, I was trying to express that, although these systems support HITs, they don’t support indexed HITs (but I’ve learnt that cubical Agda now does). Yes, we can get higher inductive families with the standard hack. I wouldn’t call this “supported” (or “available”) though; I think it just means that we feel confident enough to postulate the induction principle in concrete cases. This hack has been extremely helpful, but it’s great that we are now getting to a point where we have something better.
[*] I take full responsibility for this badly written comment. :)
Hi Nicolai, thanks for stopping by!
I’d say the strategy of “expressing higher-categorical arguments while only referring to finitely much structure” is a main component of HoTT.
Yes, I agree.
Btw, I found the construction in your post amazing.
So did I. I’m still not really sure I believe it. (-:
I would not speak of Agda and Coq as “HoTT-based”. They are older than HoTT, so how could they be based on it?
I see what you mean, in that things like HITs are not built explicitly into their cores. But even though Agda and Coq existed pre-HoTT, they have since been modified with the express intent of supporting HoTT better: --indices-matter and --without-K were added to be consistent with HoTT, and private inductive types were explicitly added to Coq for the purpose of supporting HITs. So I think it makes sense to talk about “HoTT-Coq” in the same breath as cubical Agda or redtt, whether or not you want to call it “HoTT-based”.
Also I don’t think I share your disdain for postulating things that we feel confident about. Isn’t it more flexible and powerful to permit the user to postulate things that they feel confident about, giving them room to experiment even if it also gives them “sufficient rope to hang themselves” when their confidence is misplaced, than to hardcode a particular collection of rules that the implementer of the software happens to feel comfortable about? Consider for instance Andromeda, which takes it to the next level by allowing the user to postulate arbitrary type formers satisfying arbitrary judgmental equalities, including in particular HITs. Would you say that HITs are not “supported” or “available” in Andromeda? If so, then nothing is “supported” in Andromeda aside from Π-types and strict equality types, since those are the only type formers that are built into the core. But I think that’s a misuse of terminology; the design of the system is such that it “supports” everything the user wants to implement.
But even though Agda and Coq existed pre-HoTT, they have since been modified with the express intent of supporting HoTT better
Ok, fair enough. “Being HoTT-based” is not a [mere] proposition and the way in which I have used this expression on p18 isn’t very accurate (and the word “available” is even worse).
Isn’t it more flexible and powerful to permit the user to postulate things that they feel confident about, giving them room to experiment even if it also gives them “sufficient rope to hang themselves” when their confidence is misplaced, than to hardcode a particular collection of rules that the implementer of the software happens to feel comfortable about?
It’s more flexible and powerful, no doubt about that. I think the rest depends on how one views Coqda & co. If one sees them as systems for experiments or as proof assistants, then I agree with you. If one wants a verifier which checks proofs for correctness, then I find postulates unsatisfactory. Helpful, but not ideal. (This feels a bit related to a recent discussion on the Agda mailing list: Do you use postulates extensively?)
a particular collection of rules that the implementer of the software happens to feel comfortable about
Ideally, what is implemented reflects what the community agrees on. Of course, the experimental proof assistants should not dictate which concrete type theory people work in. But, if people agree that postulates following a certain scheme are ok, then I find it reasonable to include this in the language itself such that the language is able to verify whether the scheme has been used correctly. (cubicaltt et al. do much better than this.)
Consider for instance Andromeda
I think Andromeda is “one meta-level up”. Maybe I indeed wouldn’t have said that it supports HITs, but rather that it supports a more general mechanism which gives HITs. However, I can see why you would consider this a misuse of terminology, and I agree.
I don’t see what’s unsatisfactory about verifying a proof involving postulates. What gets verified is a conditional statement: assuming these postulates, this result follows. That’s no different from any other verification in which we assume the axioms or rules of whatever system we’re working in.
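As a toy illustration (the axiom and the lemma below are made up, not from any of the developments under discussion): Coq happily checks such a conditional statement, and Print Assumptions then lists exactly which postulates a given result depends on.

    (* A postulate and a small lemma that uses it. *)
    Axiom funext : forall (A : Type) (P : A -> Type) (f g : forall x : A, P x),
        (forall x : A, f x = g x) -> f = g.

    Lemma zero_funs_eq (A : Type) (f g : A -> nat) :
      (forall x, f x = 0) -> (forall x, g x = 0) -> f = g.
    Proof.
      intros Hf Hg.
      apply (funext A (fun _ => nat) f g).
      intro x. rewrite Hf, Hg. reflexivity.
    Qed.

    (* Print Assumptions zero_funs_eq.
       reports that the proof depends on the axiom funext and nothing else. *)

So what is certified is precisely the conditional statement: assuming funext, the lemma holds.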
I agree that Andromeda is a meta-level up; I view it as potentially a dependently-typed analogue of Isabelle, a framework in which a “meta-user” can implement many different systems that an “ordinary user” can then use. However, like “being HoTT-based”, this is not a bright dividing line, and I think the private-inductive-types implementation of HITs in Coq/Agda can be viewed as a similar “meta-user” use of the proof assistant which the “ordinary user” doesn’t have to peek under the hood of.
I don’t see what’s unsatisfactory about verifying a proof involving postulates. What gets verified is a conditional statement: assuming these postulates, this result follows.
In one case, it’s: “In the theory which proof assistant X implements, plus some postulates which can be found in lines k, l, m, n, result R holds.” In the other case, it’s: “In the theory which proof assistant Y implements, result R holds.” If I know X and Y well, then I (as a “reader”) prefer the second. That’s all I meant.
I think the private-inductive-types implementation of HITs in Coq/Agda can be viewed as a similar “meta-user” use of the proof assistant which the “ordinary user” doesn’t have to peek under the hood of.
I agree if, by “meta-user”, you mean someone who implements HITs in a library and, by “ordinary user”, you mean someone who uses this library (for which they don’t need to know the implementation details). But this is not how it works in practice, because we can’t have a library of all HITs that an ordinary user might possibly want to use. The ordinary user has to peek under the hood.
Of course, the value of a proof depends on our familiarity with, and trust in, the rules and axioms it uses. I’m just saying that in principle that’s an orthogonal question to whether the rules and axioms are implemented in the “core” of a computer program or as “postulates” at user-level. I think we understand and trust basic Book-style HITs pretty well theoretically, and implementing them at core-level doesn’t by itself create any additional trust.
It’s true that the ordinary user of Coq/Agda who wants to define an entirely new HIT has to know a bit about how the private types hack works. But as we now know, lots of HITs can be implemented in terms of just pushouts/coequalizers, and using those in Coq/Agda isn’t much different from using them in Lean. Also it’s pretty easy to implement a new private-types HIT; there are clear instructions and there’s only a little boilerplate. Much simpler than, say, modifying the underlying type theory of Isabelle/HOL. Again, “peeking under the hood” isn’t a bright line.
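For instance (a schematic sketch only, with the coequalizer interface merely assumed as section variables rather than taken from any particular library), the circle is the coequalizer of the two identity maps on the unit type:

    Section CircleFromCoeq.
      (* Assumed interface of a coequalizer HIT: a type former Coeq with a
         point constructor coeq and a path constructor cglue. *)
      Variable Coeq : forall (A B : Type), (A -> B) -> (A -> B) -> Type.
      Variable coeq : forall (A B : Type) (f g : A -> B), B -> Coeq A B f g.
      Variable cglue : forall (A B : Type) (f g : A -> B) (a : A),
          coeq A B f g (f a) = coeq A B f g (g a).

      (* The circle: coequalize the two identity maps on unit;
         the loop is the glue path at tt. *)
      Definition Circle : Type := Coeq unit unit (fun x => x) (fun x => x).
      Definition cbase : Circle := coeq unit unit (fun x => x) (fun x => x) tt.
      Definition cloop : cbase = cbase :=
        cglue unit unit (fun x => x) (fun x => x) tt.
    End CircleFromCoeq.

The induction principle and its computation rules would of course have to be carried over as well, but the shape of the definition is the same whichever system provides the coequalizer.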
Yes, I agree.