stub for homotopy type theory
Have added to homotopy type theory a paragraph on machine implementation and some more links in the References-section. Also started a page Coq, but it is no more than stubby for the moment.
I linked the Coq wiki.
uploaded a pdf with the basic HoTT-Coq code documentation, now linked to from homotopy type theory.
Added to the References/Code section at homotopy type theory a pointer to the Proviola-HoTT archive which offers HTML-functionality that displays the proofs in the code (which otherwise is only visible when loading the code into Coq itself).
I have further brushed up and expanded the dictionary, now with more comments on identity types.
I edited the Idea section at homotopy type theory some.
By the way, Urs, I’m really grateful for all the stuff you’ve been writing on the nLab about HoTT! I’ve just been too caught up with figuring stuff out to read it carefully. But I’m going to try to find some time to look it over and make tweaks.
Thanks!
I’ve edited homotopy type theory some more, clarifying the dictionary a bit and adding a section “Advantages” after a discussion on the HoTT mailing list.
discussion on the HoTT mailing list.
Is this a public list? I once tried to find it, but all I ended up finding was a Coq mailing list and the HoTT Google-group. If you don’t mean one of these two, could you tell me if there is a way for me to subscribe to “the HoTT mailing list”?
I mean the google group. You can configure google groups to act like mailing lists.
Ah, okay, thanks. Yes, I am subscribed to the HoTT google list by email, so I know that one. Thanks.
Mike,
a question out of curiosity, concerning the triple of correspondences on slide 4, part I of your Swansea notes:
evidently there is one item missing to complete the picture: what would be the text in red to go in item 1?
What is the homotopy-version of “A programming language”?
I did ask somebody else this question. He wasn’t entirely sure but did make an educated guess that sounded interesting. But I have entirely forgotten now what that answer was, unfortunately.
The reason it’s missing is that I don’t know the answer!
Another question: In the same set of slides, you say at one point “Diagonals in higher categories are not monic!”. How do I square this with my naive, 1-categorical style of thought “But, diagonals followed by projections are identity, and thus diagonals must surely be (split) monic…”?
While many statements from ordinary category theory generalize essentially verbatim to $(\infty,1)$-category theory, the notions of monomorphism and epimorphism are the big exception. Instead of generalizing straightforwardly, each of these notions splits up into a tower of notions: a monomorphism $f : A \to B$ is a (-1)-truncated object in the slice over $B$. Similarly, an $\infty$-truncated object in the slice is just any old morphism. So the $n$-truncated objects in the slice are close to being monomorphisms (“$n$ steps close”!) but are not quite monomorphisms.
What does generalize is the following characterization of monos: in ordinary category theory a morphism $f : A \to B$ is a mono precisely if its diagonal $A \to A \times_B A$ is an isomorphism. Now, isomorphisms are the (-2)-truncated morphisms. So this says that a morphism is (-1)-truncated if its diagonal is (-2)-truncated.
This does generalize to a recursive characterization: a morphism is $n$-truncated precisely if its diagonal is $(n-1)$-truncated.
[edit: for the discussion of the recursion principle see here]
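In summary, the recursive characterization reads:

$$ f \colon A \to B \;\text{is $n$-truncated} \quad\Longleftrightarrow\quad \Delta_f \colon A \to A \times_B A \;\text{is $(n-1)$-truncated}, $$

with base case that the $(-2)$-truncated morphisms are precisely the equivalences; unrolling once at $n = -1$ recovers the characterization of monomorphisms above.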
Ah, I forgot to state the punchline:
so if $X$ is an $n$-truncated object, then $X \to *$ is $n$-truncated, and so the diagonal $X \to X \times X$ is $(n-1)$-truncated.
So the diagonal on an object $X$ is a mono precisely if the object is 0-truncated (is an h-set)!
I have added that remark at n-truncated object of an (infinity,1)-category at Examples-Diagonals
Also to address Sridhar’s last question directly: in a higher category, a “split monic” (a morphism admitting a retraction) is not necessarily a monic!
Ah, I see now. Between sets, a functor admitting a retraction must be full and faithful, but between nontrivial groupoids, this needn’t be the case (e.g., the map from 1 into the delooping of Z). So in a 1-category, “split monics” are monics, but in a higher category, they needn’t be.
(Incidentally, there are separate articles fully faithful (infinity,1)-functor and full and faithful (infinity,1)-functor, both describing, I believe, the same concept…)
Thanks for noticing that! Discuss here.
So how truncated is a split mono in an $(\infty,1)$-category, and why don’t we call that a mono instead?
In general, I don’t see any reason for it to be at all truncated, although I can’t think of a counterexample off the top of my head.
Of course there’s a choice for what we want to call a “mono” in an $(\infty,1)$-category. $(-1)$-truncated morphisms have many of the usual properties of monomorphisms, and are the natural way to interpret propositions as some types in HoTT, so that’s some justification for giving them the unadorned name.
A split monomorphism in an $(n,1)$-category will be $(n-2)$-truncated, so in an $(\infty,1)$-category, that tells us nothing. Is that right?
@Toby: that seems right.
In particular, a split mono in a $(0,1)$-category (a poset) is $(-2)$-truncated (an iso). Good, that makes me feel happy about it.
I’ve added a remark along these lines to split monomorphism.
Cool. Can I proselytise that a $(0,1)$-category is a poset while a proset is a strict $(0,1)$-category?
Sure.
added a pointer to the recent
to the list of references at homotopy type theory.
Added to the list of References at homotopy type theory a pointer to the new
Earlier I had added more pointers at univalence on univalence in the cubical set model by Coquand and Huber.
Added to the references a pointer to Voevodsky’s latest talk (pdf)
Of course I have added
to a bunch of related entries (including homotopy type theory, cubical set, univalence, relation between category theory and type theory)
have added pointer to Shulman 16
Welcome Ernesto! Coherence here is meant in the same sense as coherence in a monoidal category or bicategory: associativity doesn’t hold on the nose, but coherence of the isomorphisms/paths witnessing associativity means that we don’t usually need to worry about applying them explicitly. MacLane’s coherence theorem for monoidal categories (there is a similar theorem for bicategories etc.) says that once that pentagon and triangle commute, then “all diagrams” of associativity and unit isomorphisms also commute. Thus, whenever we have two composites that differ from each other by some reassociations, we don’t need to worry about how we do the reassociation to get from one to the other; there is an essentially unique way to do it.
In general, a good place to ask questions about the HoTT book is the hott-cafe mailing list.
I didn’t mean that the nForum was a bad place to ask questions; I’m sorry if it sounded that way. This particular thread is mainly for discussion about the nLab page homotopy type theory, but the nForum as a whole is for talking about all kinds of “Math, physics, and philosophy” and this sort of question is fine here. I just mentioned hott-cafe because it’s more focused on the particular subject and on exactly this sort of question, so you may find more people there than here who have helpful things to say.
Composition of functions is associative, and therefore composition of linear functions is associative, because the composite of linear functions is a linear function.
Right, but it’s not associative up to definitional equality, because the proof of linearity is carried along as part of the notion of “linear function”. Any two proofs of linearity are equal, because being linear is a proposition, but not necessarily definitionally equal. The situation is the same for functors: the function on objects and the function on morphisms have definitionally associative composition, but the proofs of functoriality don’t. The reason coherence doesn’t arise for linear functions is that vector spaces are sets (0-types), so coherence holds automatically: any diagram of equalities in a vector space commutes.
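As a loose illustration (a toy Python analogy, not actual type theory; all names here are made up): model a “linear map” as a pair of the underlying function together with a proof object. Composition composes the function parts strictly but manufactures a fresh certificate for the proof part, so the two ways of associating a triple composite agree pointwise as functions while their certificates differ as values:

```python
# Toy analogy: a "linear map" is a pair (function, proof object).
# Composing the underlying functions is associative on the nose;
# the bundled proof objects are not.

def compose(g, f):
    """Compose two (func, proof) pairs: the function parts compose as
    functions, the proof parts get bundled into a fresh certificate."""
    return (lambda x: g[0](f[0](x)), ("comp", g[1], f[1]))

f = (lambda x: 2 * x, "f-linear")
g = (lambda x: 5 * x, "g-linear")
h = (lambda x: 3 * x, "h-linear")

left  = compose(compose(h, g), f)   # (h . g) . f
right = compose(h, compose(g, f))   # h . (g . f)

# The carriers agree pointwise (both compute x |-> 30*x) ...
assert all(left[0](x) == right[0](x) for x in range(10))
# ... but the composed proof objects are differently-associated values,
# analogous to witnesses that are propositionally but not
# definitionally equal.
assert left[1] != right[1]
```

Of course Python has no definitional equality; the analogy is only meant to show how the proof data rides along with the functions and associates differently.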
Hey anqurvanillapy – thanks for your attention to blemishes and for repairing them. Just a friendly note to say that small typo corrections really don’t need to be announced at the nForum. You can leave the description box blank when you make such corrections at the nLab.
What is Axiom R? The link to Shulman 15 doesn’t work. It seems to link to some subsection in cohesive homotopy type theory.
Someone typed a slash instead of a #; I’ve fixed it.
That last comment mystified me. It is meant to refer to What I Wish I Knew When Learning HoTT
added pointer to
I have !include-ed the list homotopy theory in homotopy type theory – references into the References-subsection, which seems to be closer to what Anonymous had intended here
Fixed your link to Bob Harper, by making it redirect to Robert Harper
added pointer to:
I have added publication data for:
I came to this while updating the list of references on the universal Kan fibration. It’s curious that the above article was published in 2021 but does not cite standard textbook accounts of the universal Kan fibration from the years before. The first one is due to Lurie 09, Prop. 3.3.2.7; then there is Cisinski 2019, Sec. 5.2.
I have added DOI-links and other niceties to a couple of bibitems.
In particular, I added the proper publication data for
which some previous author had tried but failed with.
added pointer to:
The Wikipedia page “Homotopy type theory” knows that
Advocates claim that HoTT allows […] However, these claims aren’t universally accepted
What are some good citable (or at least linkable) examples of HoTT heretics?
I am aware of a couple of rants by Buzzard (e.g. Where is the fashion? or Is HoTT the way?) but his complaint is focused on doing mathematics (as opposed to practical computer programming, and be it certification of practical programs) and his complaint is mainly sociological (choice of research priorities) rather than technical.
So how about HoTT as a potential tool in actual computing theory: are there any good heretic writings by computer scientists complaining about HoTT? Anyone arguing that higher homotopy data types are never going to solve any industry-relevant problems (contrary to hopes such as expressed e.g. by Ghani et al. here)?
(I am not asking for discussion or debate, just for interesting links to contrarian arguments.)
There is essentially nobody in industry using functional programming languages, never mind dependently typed languages or HoTT.
Maybe tangential to my question, but how about stuff like this (Hedera calls itself “enterprise-level”, if not “industry-level” – I may not know the difference, if any):
Verification of (blockchain-like) hashgraph consensus algorithms:
Background:
Pierre Tholoniat, Vincent Gramoli, Formal Verification of Blockchain Byzantine Fault Tolerance, in Handbook on Blockchain, Optimization and Its Applications 194 (2022) [arXiv:1909.07453, doi:10.1007/978-3-031-07535-3_12]
Leemon Baird, Mance Harmon, and Paul Madsen, Hedera: A Public Hashgraph Network & Governing Council, (last update 2020) [pdf]
With Coq:
Hedera blog: Coq Proof Completed By Carnegie Mellon Professor Confirms Hashgraph Consensus Algorithm Is Asynchronous Byzantine Fault Tolerant (Oct 2018)
Leemon Baird: Formal Methods: A Deep Dive Using the Coq Proof Assistant – Hedera18 (2018) [video: YT]
Karl Crary, Formalizing the Hashgraph gossip protocol, talk at CyLab Partners Conference (2019) [pdf]
Karl Crary, Verifying the Hashgraph Consensus Algorithm [arXiv:2102.01167, pdf]
Musab A. Alturki, Jing Chen, Victor Luchangco, Brandon Moore, Karl Palmskog, Lucas Peña, Grigore Roşu, Towards a Verified Model of the Algorand Consensus Protocol in Coq, Formal Methods. FM 2019 International Workshops. Lecture Notes in Computer Science 12232 (2019) 362-367 [arXiv:1907.05523, doi:10.1007/978-3-030-54994-7_27]
With Agda:
Apologies for distracting a little from your question. What I meant was that in practise only a trivial number of companies are using even Haskell, say, and it is only a trivial part even of those companies’ codebases. Certainly, if one searches, one finds claims that a few large companies do use a little Haskell; but I can assure you that the scale is unbelievably tiny. In the country I work in, I have never seen a job offer mention a dependently typed language, and never seen one require a functional programming language; very occasionally, the latter are mentioned as good to be familiar with.
Of course functional programming ideas have had influence in other types of languages, they are not useless, but they are not used in industry directly and I doubt ever will be, because electronics and low-level programming are fundamentally non-functional. No dependently typed language can do I/O, and that makes them unusable in industry (proof verification is not a part of industry except in an unbelievably tiny percentage; zero as far as I know in the country in which I live for example).
Most approaches to homotopy type theory do not attempt to include a type of all propositions, the type theoretic equivalent of the set of truth values in set theory. This means that one is forced to include an entire universe in the dependent type theory in order to get a power set on a type for certain areas of mathematics, which is overkill for stuff like real analysis or complex analysis where one only needs a type of propositions rather than an entire universe. In practice, if one is doing something like real analysis, one really only needs a type of all propositions to construct the Dedekind real numbers, and afterwards one only really quantifies over the real numbers, tuples of real numbers, sequences of real numbers, and functions of real numbers, none of which requires an entire universe and the complex set of rules which come with having a universe in the type theory. Furthermore, the resulting “power set” constructed from universes would also only be locally small relative to the universe, so doesn’t exactly behave as a true power set in the type theory from the perspective of the categorical semantics.
Certain branches of mathematics like set theory or algebra require quantification over certain structured types, such as sets or rings, which means that a universe is indeed required. But most presentations of homotopy type theory include multiple universes where one universe suffices. If there are multiple universes in the type theory, there is the issue of having to distinguish between, for example, different “locally small power sets” or “locally small Dedekind real numbers” for different universes, and the bureaucratic requirements to keep track of the different universes become a nightmare in practice. It becomes even worse in the HoTT book and in Agda, Lean, and Coq, where the infinite hierarchy of universes corresponds to having an infinite series of inaccessible cardinals in set theory, which is simply overkill for a foundation of mathematics.
Personally, I believe that one should only include the bare minimum of type formers in the theory needed to do whatever mathematics one sets out to do. If one is doing elementary number theory, for example, including James constructions in the theory is not necessary at all. Indeed, while I view higher inductive types as quite necessary for mathematics in dependent type theory/homotopy type theory, since propositional truncations are needed to define predicate logic in the theory and quotient sets are often used, a lot of mathematics could be done without resorting to universes at all, and much of the rest could be done using only one universe at most.
Different guest
Thanks for the chat!
Maybe we are not quite communicating yet, as I am not sure how your comments really pertain to my original question #68 (clearly software verification is a thing?!) nor to my followup reply #70 refuting (I thought :-) the claim in #69.
Incidentally, I found it fascinating to watch the talk by Leemon Baird that I linked to [YT]: a vision of a future of type-theory-savvy crypto-finance CEOs.
$\,$
But allow me to use the occasion to pick up the claim in #71 — that industry is all about low-level electronic hardware-aware programming — and try something on you:
Let’s take this seriously and ask what programming language infrastructure really reflects the notion of computation at the physical hardware (electronics) level (or as close as any language can get to it).
This question is currently being re-asked in quantum computation, a perspective which may help take the necessary step back to look at this ancient question afresh.
I think that real close to the physical hardware level, computation is not described by imperative programming languages either – it’s not described by any “language” as such:
Real close to the realistic physical hardware level, computation is described by the circuit model.
In classical computation this may have been forgotten, but in quantum computation, where all these fundamental questions are resurfacing now, it is clear as day.
And if we are really to talk physics and foundations here, then I go one step further and say that real close to the realistic physical hardware level, computation is given by the reversible circuit model.
If we accept that (? :-) then we need to ask: What is the circuit model of computation, fundamentally?
So let’s see: in the circuit model we have
a generating set of labels of reversible operations (gate names)
for each such a corresponding operation on a state space.
A program is a path of (labels of) gates (namely in the HIT generated by the gate labels)
its execution on any input data is the corresponding lifting of that path through the corresponding fibration.
(For fun and exposition I recently produced some schematic illustration of what I am saying here: pp. 60 here and also pp. 60 here (to be viewed on full screen). Some related references are compiled in the entry Computation – References – As path lifting.)
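To make the sketch above concrete, here is a minimal toy model in Python (all names here are hypothetical, chosen for this illustration): the state space is 2-bit registers, the generating gates are bijections on it, a program is a word in gate labels, and execution “lifts” that word to an action on states:

```python
# Toy reversible circuit model: gates are bijections on a finite
# state space; a program is a word in gate labels; execution lifts
# the word of labels to an action on states.

# State space: 2-bit registers, as tuples of bits.
STATES = [(a, b) for a in (0, 1) for b in (0, 1)]

# Generating set of reversible gates (each a bijection on STATES).
GATES = {
    "NOT0": lambda s: (1 - s[0], s[1]),     # flip first bit
    "NOT1": lambda s: (s[0], 1 - s[1]),     # flip second bit
    "CNOT": lambda s: (s[0], s[1] ^ s[0]),  # flip second bit if first is 1
    "SWAP": lambda s: (s[1], s[0]),         # exchange the two bits
}

def run(program, state):
    """Execute a program (a list of gate labels) on an input state:
    the 'lifting' of the word of labels through the gate action."""
    for label in program:
        state = GATES[label](state)
    return state

def inverse(program):
    """Each gate above happens to be an involution, so the inverse
    program is just the reversed word of labels."""
    return list(reversed(program))

prog = ["NOT0", "CNOT", "SWAP"]
out = run(prog, (0, 1))
# Reversibility: running the inverse program recovers the input.
assert run(inverse(prog), out) == (0, 1)
```

Since each generating gate here is an involution, reversing the word of labels gives the inverse program; in general one would also have to invert each gate.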
Therefore – my argument now goes – if we are really serious about understanding programming at the physical hardware level, and formal verification of that – then what we need is a high-level language in which path lifting is a native operation.
This leads us full circle back to… HoTT. From a completely different direction than usually (or ever before) advertised.
Let’s take this seriously and ask what programming language infrastructure really reflects the notion of computation at the physical hardware (electronics) level (or as close as any language can get to it).
I think for that one has hardware description languages like Verilog, at least in the classical case.
[The same person as in #69 and #71]
I am not sure how your comments really pertain to my original question #68 (clearly software verification is a thing?!) nor to my followup reply #70 refuting (I thought :-) the claim in #69.
In #68, ’practical computer programming’ and ’industry-relevant problems’ were mentioned. What I was asserting was that even the most well-known functional programming languages have the most minuscule presence in the actual programming industry. Proof verification is an extremely tiny proportion of that industry as a whole. These assertions are based on first-hand knowledge of the industry; they cannot be refuted by listing a handful of examples of use. Amongst the millions of lines of code written every day, perhaps a few thousand are written in functional programming languages in industry, perhaps a few hundred in dependently typed languages, and perhaps a dozen in HoTT by some lost soul. These are trivial amounts. In #71 I gave a different metric (job advertisements), quite easily verified, indicating how minimal their use is.
My intention was to contribute to sharpening your main question: one can certainly ask whether computer scientists think HoTT helps solve their problems, but one should divorce this matter from the programming industry as a whole.
that industry is all about low-level electronic hardware-aware programming
This was not what I wrote. The overwhelming majority of the industry uses high-level languages. Even embedded software almost always does not go lower than C.
But my point was that computers are based on 0s and 1s. It is essential to turn one into the other, i.e. to have mutability. In high-level programming languages, it is all well and good to abstract away the details and write ’immutably’, etc., but one cannot get the smallest thing done on any actual computer if one tries to do that at a low level. Therefore it is necessary to have languages like C which reflect that reality more closely, upon which higher-level programming languages build further. Haskell for example is totally reliant on C in its implementation, as are essentially all modern languages (older languages such as Fortran and Forth are not, but are equally fundamentally mutable).
It may be difficult to appreciate if one doesn’t have experience with low-level programming, but mutability is absolutely unavoidable with today’s machines. That is true whether one programs really close to the electronics, say with an FPGA, in assembly, in C, or whatever. To be able to do anything, one has to be able to handle interrupts, i.e. signals, i.e. changes from 0s to 1s. Memory is about changes from 0s to 1s. If every single process were left sitting in memory after it was run, the computer would be useless within a few hours of use.
Of course one can dream as much as one likes about other possible ways of doing things, but binary is the simplest possible way to express change, and Ockham’s Razor almost always applies in the end. Only if something came along which rendered current computers as useful in practise as the abacus in comparison would there likely be a fundamental change. Society as we know it might collapse before that. Who knows.
Urs
Since you found out you have the power to delete comments from the nForum, have you been posting comments to this thread and then deleting them?
Something is making the nForum forget how much I’ve read in threads. Just now this thread appeared at the top of the All Discussions page (indicating it had new content) but when I open it I am taken to a Jun 15th 2022 posting and I have to scroll to the bottom. This has happened several times in this thread and as I recall some other ones you are posting in. But not in any other threads.
Am coming back online only now after a long flight.
have you been posting comments to this thread and then deleting them?
I did delete & repost #73 right after posting the first version, because the first version ended up appearing without my signature.
Afterwards I was trying to see if the technical team can help us fix this issue that the nForum logs out users who spend too long editing a comment.
While I notice every now and then that some threads don’t take me to their latest messages, I had no idea what was causing this. Maybe if we can isolate a minimal test thread showcasing the issue, then we could tell the technical team about it.
I may try to look into this later, currently it’s mighty busy here…
Progress in theoretical quantum computer science might be good but how useful would it be in industry if it turns out that actual quantum computers are limited by decoherence effects and interactions with the environment, or limited by the exponentially large cost to construct a quantum computer?
added pointer to: