created univalence axiom
added to univalence a section In simplicial sets, making things a bit explicit.
The References section at univalence axiom attributes it jointly to Awodey and Voevodsky. I always thought it was Voevodsky’s alone; where is the attribution to Awodey from? Awodey and his collaborators and students did independently come at the idea of homotopy type theory, and especially identity types as path spaces, but I thought univalence was Voevodsky’s invention.
Right, I have fixed it.
Not that it’s very important, but what’s the right date to give in “Univalence was proposed by Voevodsky around 20xx.” ?
Is it 2005, or is this also just when he envisioned homotopy type theory without univalence?
I am working on a more general section In categorical semantics.
I am still fiddling with some details. (But I notice that the only reference (pages 5 and 6) is, too ;-)
For instance: Voevodsky and these authors speak about the morphism . But we need a morphism instead. I understand that by 2-out-of-3 the latter is a weak equivalence if the former is, but I still need to think about how to define the latter in the first place.
Oh, I get it.
Let’s see, what is path induction really in the model categorical semantics? It is lifting in
using that the left vertical morphism can be chosen to be an acyclic cofibration and that is fibrant. Right?
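Schematically (the notation here is mine, since no symbols are fixed above): writing $r \colon A \to P(A)$ for the reflexivity map into the path object, which may be chosen to be an acyclic cofibration, and $p \colon C \to P(A)$ for a fibration with fibrant total space, path induction is the diagonal filler in

```latex
% Path induction as a lifting problem (schematic; notation is illustrative):
%   r : A -> P(A)   reflexivity into the path object, an acyclic cofibration
%   p : C -> P(A)   a fibration (the motive, with fibrant total space)
\begin{array}{ccc}
A & \xrightarrow{\;\;c\;\;} & C \\
{\scriptstyle r}\big\downarrow & \nearrow & \big\downarrow{\scriptstyle p} \\
P(A) & = & P(A)
\end{array}
```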
[edit: I have added a bit more along these lines.]
what’s the right date to give
I don’t know.
Yes, your definition of the univalence morphism is right. I changed the definition of the map to an equivalent one which I find more intuitive (I think it explains better why this is really just the identity map), but if you prefer the other one we can discuss it.
Although, I think it requires an extra hypothesis on , such as all objects (or certain objects) being cofibrant, in order to identify this statement with the univalence axiom holding in the internal type theory. The point is that a given map being a weak equivalence in the model category has to be identified with the inhabitation of the corresponding type in the type theory. I’ve worked this out to a certain degree, so I should put it on the lab somewhere.
Thanks, Mike.
such as all objects (or certain objects) being cofibrant,
It occurred to me today during lunch that I should have added some cofibrancy condition also to be able to deduce that is indeed fibrant. But it is not until many hours later now that I have the leisure to come back to this.
I feel like I should say that these days I only have a few stolen moments here and there to think about these things, such as yesterday on a night train from Hamburg, which is why I keep leaving stubby entries.
I am guessing the following is true.
The univalent fibrations in (not necessarily universal) are, up to equivalence, precisely the canonical associated infinity-bundles which are associated to -principal -bundles, namely the fibrations
where is the two-sided bar-construction.
I know that the universal fibration is , where the coproduct runs over all small homotopy types. The above statement (“I am guessing”) immediately implies univalence for this.
The guess is obviously “morally right”, unless I am mixed up. It seems one only needs to fiddle a bit to show that the canonical morphisms discussed at univalence do exactly what one expects them to do.
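If I may fill in the usual notation (this is the standard statement from the classifying-space literature, not a quote from the entry): the base of the universal small fibration in $\infty\mathrm{Grpd}$ is the coproduct of classifying spaces of automorphism $\infty$-groups,

```latex
% Base of the universal small fibration in infinity-groupoids;
% the coproduct runs over equivalence classes [F] of small homotopy types:
\mathrm{Obj}_\kappa \;\simeq\; \coprod_{[F]} B\,\mathrm{Aut}(F)
```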
Yes, that seems right. It’s the “up to equivalence” that is the kicker — we need a good point-set model which classifies all small fibrations via strict pullback. This is what Voevodsky’s trick with the universal Kan fibration accomplishes.
Yes, I understand. But once you have the strict classifier, the statement of univalence should not depend on the model that much, since it mentions weak equivalence, and not isomorphism.
Take the -component of some strict model for the small object classifier. It should be equivalent, as an -bundle to, say, , or some other convenient model of the weak small object classifier. Showing univalence for the latter should imply it for the former, I’d think.
Ieke Moerdijk presented a proof of univalence along these lines two weeks ago in Leiden, by passing through a minimal Kan model for the universal -associated bundle.
I was thinking that with Wendt’s arguments for classification of associated bundles in -stacks one could make progress towards the parameterized version. But looking at Peter May’s argument from 1975 that he keeps referring to, I get the feeling now that there is still a bit of a gap to be filled here.
Let me turn this around:
we can ask “is it univalent?” about the -compact object classifier in any -topos, quite independently of the question of how to find a strict model for it in some presentation.
That is: are object classifiers in general -toposes associated -bundles?
Do we have an argument for this in the context of -topos theory (independently of the needs of homotopy type theory)??
Yes, I think that’s easy to see. By the Yoneda lemma, is an equivalence if it induces an equivalence on all hom--groupoids , which is a fiberwise map over . For a given map , classifying two objects and of , lifts to form the space , while lifts to form the space . These two are naturally equivalent by the obvious map, by definition of what it means to be an object classifier; hence is also an equivalence.
Thanks, Mike. That’s of course an evident argument, which I should have noticed.
So to prove univalence for a given fibration, it almost seems sufficient to prove that the fibration indeed presents the -small object classifier in the -topos theoretic sense.
In I know how to prove that the -small object classifier is . From Stasheff (1963) and then May (1975) we know that this is the base for the coproduct of universal -fibrations. (The pdf that went over the HoTT list a few hours back rederives this, up to and including its equation (3) on the second-to-last page.)
So to see that it’s univalent, by the above it suffices to observe that the relevant morphism from the path object to indeed presents the abstractly defined morphism, which you point out is an equivalence by definition of object classifier.
By this kind of argument I seem to deduce that to prove univalence in a general local model structure on simplicial presheaves, the work is in producing a model for the universal fibration. Once one has it, it must be univalent since the thing that it models trivially is.
(I may still be mixed up. Hope you don’t mind chatting about it a bit more, nevertheless.)
Yes, the work is in producing a model, but it has to not just be a model of the universal fibration (we know, of course, that some model of that exists, even if we don’t have an explicit construction on the model category level) but we also need good closure properties of the class of morphisms that it strictly classifies, as I said in #11.
Okay, thanks. Sorry for going in circles.
But one last question for the moment, which I should have asked long before:
what is the/a good detailed reference for Voevodsky’s proof of his univalent model?
The only references I know are Voevodsky’s notes on type systems and more recently these notes taken by Chris Kapulkin on a talk by Ieke Moerdijk presenting a shorter proof of the univalence (which could equivalently be considered a proof that the particular object in question does in fact represent the object classifier).
A nice readable exposition of the proof of univalence in simplicial sets has just appeared on the arXiv.
And, at last, a construction of a few new models of univalence.
Great! I have added the reference to univalence.
(for other readers: related discussion is on the HoTT site here.)
here a comment/question just on terminology:
in the entry univalence it says “path reduction” for the non-dependent or “simple” version of the elimination rule. Elsewhere we had agreed that this should be called recursion, with induction being the dependent / non-simple version.
Unless I am mixed up: is it too late to change terminology? I would enjoy seeing the induction/recursion distinction be used systematically, as it is kind of nice, conceptually.
Well, recursion is a special case of induction, so I don’t see what’s wrong with using “induction” to refer to cases that don’t happen to be dependent.
They are both special cases of each other. So there is nothing wrong with it either way, as it is equivalent.
But I thought it would still be nice to make the distinction in words anyway, because even though they are equivalent, when written out one is verbatim the generalization of traditional induction, the other verbatim that of traditional recursion.
It’s not so important, though.
I thought you said that “recursion” meant the non-dependent elimination rule. The dependent one is not a special case of that, is it?
Hm, I had indeed thought that that’s what the article by Awodey-Gambino-Sojakova said on p. 9, but now I remember that it is only true if one adds some -rule to the simple elimination.
Okay, I understand.
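To make the recursion/induction contrast concrete, here is a sketch in Lean 4 syntax (the names `pathInd` and `pathRec` are mine, and Lean is not a HoTT system, but the eliminators have the same shape):

```lean
-- Dependent elimination ("path induction" / J): the motive C may depend
-- on the path itself.
theorem pathInd {A : Type} {a : A} (C : (x : A) → a = x → Prop)
    (c : C a rfl) : ∀ (x : A) (p : a = x), C x p := by
  intro x p
  cases p
  exact c

-- The non-dependent ("recursion") rule: the motive ignores the path.
-- It is the special case of pathInd with a constant-in-the-path motive.
theorem pathRec {A : Type} {a : A} (C : A → Prop) (c : C a) :
    ∀ (x : A), a = x → C x :=
  fun x p => pathInd (fun y _ => C y) c x p
```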
I have only right now become aware of the recent preprint
and so I added it to the References-section at univalence.
added to univalence a pointer to the IAS video recording of somebody’s lecture The computational interpretation of HoTT.
NOTICE: the web-site has that video mixed up with the page for On the setoid model of type theory. (the titles are interchanged, or else the titles are right, but the links to the videos are interchanged).
You’re right. (-:
Took me a while: I got really annoyed when after 20 minutes the speaker was still talking about setoids without even having mentioned univalence… before I noticed that I am watching a talk on setoids ;-) (Well, I was listening with only half an ear while working on something else.)
When I then switched to the correct talk on univalence, I found the following kind of curious, given our earlier discussion here: the speaker (that’s Dan Licata, I suppose? Somebody should really fix these websites…) begins with saying something like “I don’t have to review natural deduction, do I?” only to be flooded with questions from the audience, for the whole first 15 minutes, before he can even mention the topic of univalence.
While I have your attention (if I still do after these ramblings): do you have an opinion on the program by Harper&Licata on canonicity for univalence? I am asking because I gather it’s still somewhat inconclusive, so it would be helpful to hear what an inside perspective would be.
Also, after watching the talk I realized a mistake that I had once introduced in the entry canonicity: I had claimed there that Harper-Licata speak about 2-type theory (directed type theory), but instead they just talk about h-level 3 HoTT. (right?) I have fixed that now.
I’m still figuring out my opinion. I agree that eventually, we will need some sort of better computational understanding of univalence. It’s not clear (and Dan Licata, who is indeed the speaker in that talk, agrees with this) that this understanding will look like their presentation of (undirected) 2-dimensional type theory in which operations on equalities are given as basic operations, rather than derived from the elimination rule for identity types. After hearing Dan’s talk, I’m a bit more sympathetic towards that approach than I was — but it’s certainly not clear how to extend it to infinite dimensions. Dan has said he thinks an alternative approach would be an algorithm that can “compute” with propositional equalities in some way, taking a term and “evaluating” it, not by definitional beta-reductions, but by transformations connected to it by paths, until it reaches a term of canonical form.
Perhaps I should mention that Thierry Coquand and some other people here have a related project: to build something like Voevodsky’s simplicial set model of type theory starting from a constructive type-theoretic foundation. Essentially, to build a model of intensional type theory (with univalence) starting from extensional type theory, or even better starting from type theory that doesn’t even have identity types (“un-tensional” type theory?). If this succeeds, then arguably it would automatically result in a computational understanding of univalence, since “everything in type theory is computational”. I don’t fully understand to what extent this would answer the same question that Harper&Licata are after, but it’s certainly closely related.
added a section univalence – Canonicity with a basic pointer to Mike’s result and to the Coquand-Huber approach.
(I wrote this three times, losing previous versions twice with the nLab and my computer crashing in turn. The version now is not as good as a previous one, but I am out of good spirits now.)
External editors! It's All Text helps make it convenient with Firefox.
corrected qualifications in the first paragraph at univalence – Canonicity (thanks to discussion with Andrej Bauer and Mike Shulman and Bob Harper)
The link doesn’t work for
Simon Huber (with Thierry Coquand), Towards a computational justification of the Axiom of Univalence , talk at TYPES 2011 (pdf)
Is this a similar talk?
The latest details are being discussed here.
(Can you see this?)
Yes, thanks. Hopefully this is building towards making Martin-Löf happy with a computational justification.
I was also wondering whether the suggested replacement for that dead link is adequate.
Sorry, which dead link? (I guess I wasn’t paying attention.)
The link to the talk mentioned at univalence axiom
Simon Huber (with Thierry Coquand), Towards a computational justification of the Axiom of Univalence , talk at TYPES 2011
Now, I see I gave a wrong link to a talk by Coquand in #35, which if I can get it right is here. Presumably it’s the same material.
Is this a similar talk?
Bad link - missing “http” and a trailing invisible Unicode LRM
David - where were you copying and pasting those LRMs from?
I really don’t know where the LRM comes from. I just copy out of my browser and it seems to appear sometimes.
Those LRMs come from copying the green URLs in Google’s result list!
One can fix it by removing what looks like “pdf” at the end with the backspace key and then re-typing “pdf” (and prefixing the URL with its “http://”).
Maybe it didn’t always use to be this way with Google. I do this fixing a lot lately.
I have added the Humboldt-Kolleg talk link to the entry here
@Urs 36 - is that a private discussion? I can’t see it.
Yes, it’s marked private. If I find a second (or somebody else who can see it) we should just make a note of some of the actual technical statements given there in the nLab entry.
Perhaps Andrej or Mike can at least cut and paste/summarise some of the discussion to somewhere more readable?
With this firefox add-on you can right-click on a google search result and copy the URL.
I’ll see if I can find time to summarize the points.
Okay, I tried to explain a bit more at univalence.
Thanks!!
in univalence these links are broken:
Thanks, Mike.
@spitters, I think the issue is with the universes rather than the univalence – we don’t know how to get strict universes in type theory out of object classifiers in general. Is that what you mean? We should probably clarify the sentence.
You can make links to the nLab from here by typing [[type-theoretic model category]], just as on the nLab itself. I don’t see why or what to merge though.
What?
I tried to clarify this reference. I also added a remark that univalence essentially occurs in the 1996 Hofmann-Streicher paper under the name “universe extensionality” (although “type extensionality” would be a better analogue of “function extensionality”).
univalence contains the assertion: “The univalence axiom says that [equality and equivalence] are the same. Morally, this says that Type behaves like an object classifier.” I would like to make this less cryptic, but first I have to decode it; I would be grateful if someone would explain the intuition behind this.
[I’ve understood the proof that the universe is an object classifier in the HoTT book, and I see that it employs univalence. But I don’t see the conceptual relation.]
For to be an object classifier, the pullback map should be fully faithful. The morphisms in are equalities of types, while those in are equivalences of types. So univalence, which says that equality and equivalence of types are the same, says that this map is an equivalence on morphisms, i.e. fully faithful. Is that the sort of thing you’re looking for?
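In symbols, the standard formulation (not a quote from the entry): the map in question is the one defined by path induction, sending `refl` to the identity equivalence,

```latex
% The canonical comparison map, defined by sending refl to the identity:
\mathrm{idtoeqv} \;\colon\; (A =_{\mathcal{U}} B) \longrightarrow (A \simeq B)
% The univalence axiom: idtoeqv is itself an equivalence, for all A, B : U.
```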
Can someone specify what the often-discussed phrase “make univalence compute” means?
Does it mean to find a realization (in programming language) of the term
that is inverse to the function running in the other direction? I.e., code up that idea, an idea that is maybe simpler to realize in normal/less formalized mathematics of -groupoids?
Roughly, it means to find a type theory satisfying a normalization/canonicity theorem and in which univalence is true. This doesn’t involve just realizing the single term “univalence” as a program, but designing the entire type theory so that all terms can be evaluated.
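As a toy analogy (a hypothetical mini-language in Python, far simpler than any type theory; all names are mine): canonicity for booleans says that every closed term of boolean type evaluates to one of the canonical forms `True` or `False`. “Making univalence compute” asks for the analogous normalization property in a theory that also validates univalence.

```python
# Toy evaluator for a tiny lambda-calculus with booleans, illustrating
# canonicity: every closed boolean term evaluates to True or False.
# Terms are tuples: ("true",), ("false",), ("var", x),
# ("if", cond, then, else), ("lam", x, body), ("app", f, arg).

def eval_term(t, env=None):
    env = env or {}
    tag = t[0]
    if tag == "var":
        return env[t[1]]
    if tag == "true":
        return True
    if tag == "false":
        return False
    if tag == "if":
        return eval_term(t[2], env) if eval_term(t[1], env) else eval_term(t[3], env)
    if tag == "lam":
        # A closure capturing the current environment.
        return lambda v, x=t[1], body=t[2]: eval_term(body, {**env, x: v})
    if tag == "app":
        return eval_term(t[1], env)(eval_term(t[2], env))
    raise ValueError(f"unknown term tag: {tag}")

# (lambda x. if x then false else true) true  -- negation applied to true
term = ("app",
        ("lam", "x", ("if", ("var", "x"), ("false",), ("true",))),
        ("true",))
print(eval_term(term))  # False
```

Adding an axiom as an opaque constant breaks this property: evaluation gets stuck on it, which is exactly the problem with postulating univalence in Book HoTT.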
I added to univalence axiom some mention of the weaker statements that are known to be sufficient to imply the full UA.
The impression I always got as a non-expert was that univalence is basically designed to be a weaker form of the axiom of extensionality, and as soon as you require this, you’re going to have things become uncomputable. Do people think there are real prospects for UA to hold in a completely computable/evaluable type theory?
The approach called cubical type theory claims to achieve this, though I never quite know what precisely the claim is regarding “book-HoTT”:
Cyril Cohen, Thierry Coquand, Simon Huber, Anders Mörtberg, Cubical Type Theory: a constructive interpretation of the univalence axiom (arXiv:1611.02108)
Of course Mike will know more.
univalence is basically designed to be a weaker form of the axiom of extensionality
What does that mean? Usually when people say “the axiom of extensionality” they’re referring to membership-based set theory, which is a totally different context from type theory in which the univalence axiom makes sense. They’re totally incomparable.
as soon as you require this, you’re going to have things become uncomputable
Why?
Do people think there are real prospects for UA to hold in a completely computable/evaluable type theory?
Yep; cubical type theory appears to basically solve the problem. What’s written at univalence axiom is a bit out of date. Homotopy canonicity for UA as an axiom in Book HoTT is still open, but cubical type theory is a different theory that enjoys both canonicity and univalence.
Sorry, Mike, whatever axiom you take in addition to the usual Intensional Type Theory to get Extensional Type Theory. “The extensional axiom”. It looks remarkably similar to the univalence axiom.
but cubical type theory is a different theory
It would be good to know how it relates to book-HoTT. Do you know?
To get extensional type theory one adds axiom K or axiom UIP. They look very different from univalence to me; in addition to the fact that they contradict it, they seem to be saying totally different kinds of things. Can you explain what relationship you see?
I believe that Book HoTT can be interpreted into cubical type theory. I’m still a little fuzzy on the details, but I think the situation is that the “natural” identity types of cubical type theory only satisfy the Id-elimination rule of Book HoTT (and MLTT) with a typal computation rule, but you can define a different family of identity types that satisfy it strictly.
@Mike “Any two terms of the same identity type are propositionally equal” vs “Any two terms of the same identity type are homotopic” sound pretty similar to me, just that the homotopic one sounds like a weakening, so you have homotopy-extensionality instead of propositional extensionality. I would have thought that univalence is trivially satisfied, not traduced by extensionality.
If it is computable, which you indicate that it is, I find it surprising and encouraging!
the “natural” identity types of cubical type theory only satisfy the Id-elimination rule of Book HoTT (and MLTT) with a typal computation rule, but you can define a different family of identity types that satisfy it strictly.
Interesting. And you are saying that making univalence compute does not actually depend on the identity types having this extra weakening?
It should be exciting to translate book-HoTT proofs, like that of the generalized Blakers-Massey-theorem, to cubical HoTT and see them become programs that run.
I’m having trouble relating “Any two terms of the same identity type are homotopic” to anything resembling univalence. What is the connection there?
Also, there’s nothing intrinsically uncomputable about extensional type theory either. Axiom K can be made to compute by defining it in terms of Agda-style generalized pattern matching, while observational type theory is a sort of 0-truncated version of homotopy/cubical type theory.
making univalence compute does not actually depend on the identity types having this extra weakening?
Well, the Id-elimination rules in cubical type theory are not primitive, but defined in terms of the cubical Kan compositions. If you define them in the straightforward way for the natural identity types, it’s just a fact that what you get only satisfies a typal computation rule. But I think I’ve been told that if you are clever you can define a type that’s equivalent (hence, by univalence, typally equal) but not judgmentally equal to the natural identity types, and define an Id-elimination operation for it that does happen to satisfy the judgmental computation rule.
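For reference, the “natural” identity types mentioned here are the cubical path types: maps out of the formal interval with judgmentally fixed endpoints (schematic notation on my part; see the CCHM paper for the precise rules):

```latex
% Cubical path types: maps out of the interval I whose endpoints are
% judgmental (definitional) equalities, not propositional ones:
\mathsf{Path}\;A\;a\;b \;\coloneqq\;
  \{\, p \colon \mathbb{I} \to A \;\mid\; p\,0 \equiv a,\;\; p\,1 \equiv b \,\}
```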
It should be exciting to translate book-HoTT proofs, like that of the generalized Blakers-Massey-theorem, to cubical HoTT and see them become programs that run.
Yes, there was a group at Snowbird working on this. I think they made some good progress.
For a long time I was resistant to cubical type theory on philosophical and semantic grounds, but I’m gradually coming around.
Thanks, Mike. That’s very interesting.
Assuming that book-HoTT does have an interpretation into cubical-HoTT, would this imply univalent models for book-HoTT in categories of cubical objects in a given sheaf topos?
Yes, I think so.
About the judgemental J-rule; see section 9.1 of the cubical paper. A confirmation of what Mike is saying.
Re 75. No, we don’t know yet whether this holds in general sheaf toposes. We know how to do the cubical model internal to some presheaf toposes. We have precise conditions in our paper on guarded cubical type theory. Orton and Pitts provide axioms in the internal language of a topos sufficient for constructing the cubical model, but the main examples are presheaf models.
I’m confused; I thought the original whole point of the cubical model was to be constructive, and if it is constructive then why can’t you do it internally in any topos? (I think Orton and Pitts are doing something different, talking about presheaf topos that could replace the topos of cubical sets; here we’re talking about starting from an arbitrary topos and taking cubical diagrams in that topos.)
Sorry, you are right. What OP and we were doing is slightly different.
About the cubical model, it’s constructive in that it can be formalized in Nuprl, in CZF+universe, and possibly in other systems. The problem with toposes is that they do not have universes, and that the Hofmann-Streicher construction starts from a Grothendieck topos in Set. It is expected that this should not be a problem, but I am not sure whether anyone has worked out the details.
Grothendieck toposes all have universes, though, don’t they? #75 was about “sheaf toposes”, i.e. Grothendieck ones.
Grothendieck toposes all have universes
That’s a surprising statement to me. Could you elaborate? They have universes for propositions, but a universe for 0-truncated objects would have to be non-0-truncated, no?
Presumably as long as there are enough universes in the base category of sets? And I take the statement to mean the ordinary kind of Grothendieck universe.
Mike, the construction of the universe in cubical sets depends on the Hofmann-Streicher construction, which depends on a Grothendieck universe in (classical) Set. There does not seem to be a published account of iterating the Hofmann-Streicher construction. I.e. starting from a Grothendieck universe in Set, make a universe in the sheaf topos, and use that to make another Hofmann-Streicher universe for the internal cubical sets. It’s likely that this can indeed be iterated. (In fact, I seem to recall a discussion with Thierry in which he checked that it actually works.)
Oh, I see, universes in that sense. Never mind #81
The Hofmann-Streicher universe is just defined by Yoneda, right? Surely it should be perfectly constructive.
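For the record, the construction in question, as I understand Streicher’s note (notation is mine): for a presheaf topos $[C^{\mathrm{op}}, \mathrm{Set}]$ and a Grothendieck universe of $\kappa$-small sets,

```latex
% Hofmann-Streicher universe in a presheaf topos [C^op, Set]:
% U(c) is the set of kappa-small presheaves on the slice C/c;
% restriction along f : c' -> c is precomposition with C/f : C/c' -> C/c.
U(c) \;=\; \mathrm{ob}\,\big[\, (C/c)^{\mathrm{op}},\; \mathrm{Set}_{\kappa} \,\big]
```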
I believe I was looking at p9 of this note by Thomas Streicher. If there was a trivial internal version for sheaf toposes, I would have expected Thomas to state it. In any case, if you have a complete argument in mind, it might be worth recording it, as it seems to be lacking from the literature at the moment. I might try to do it myself later, but I am on holiday now.
Well, I don’t have a surplus of time right now, and I’m not hugely motivated to put a lot of thought into this because internal cubical objects in sheaf 1-toposes are not a very interesting class of models for me. I suppose there might be some choicey issues regarding whether one wants the universe to classify things by actual pullbacks or only after passage to a cover. But couldn’t one also note that cubical objects in a Grothendieck topos (over a classical base Set) are themselves again a Grothendieck topos over the same Set and apply Streicher’s construction directly?
Mike, re 87,
But couldn’t one also note that cubical objects in a Grothendieck topos (over a classical base Set) are themselves again a Grothendieck topos over the same Set and apply Streicher’s construction directly?
Yes, we used a similar construction in our paper. I just want to point out that the theory of HS-universes is not as polished as one might like it to be.