added pointer to:

- Martin Hofmann, Thomas Streicher,
*The groupoid model refutes uniqueness of identity proofs*, Proceedings Ninth Annual IEEE Symposium on Logic in Computer Science (1994) [doi:10.1109/LICS.1994.316071]

and similarly to:

- Martin Hofmann,
*The groupoid interpretation of type theory, a personal retrospective*, talk at *HoTT at DMV2015* (2015) [pdf]

below Hofmann & Streicher 1998 I added pointer to:

- Ethan Lewis, Max Bohnet,
*The groupoid model of type theory*, seminar notes (2017) [pdf]

This summer at CQTS, David Myers and Mitchell Riley have run an introductory course on HoTT via cubical Agda. While their lecture notes (all commented Agda code) are being finalized for publication, they are already available in some form at: github.com/CQTS/summer23-homework/tree/master/homework.

added more of the original/precursor articles (here)

added pointer to:

- David Jaz Myers,
*How do you identify one thing with another? – an intro to Homotopy Type Theory*, talk in Prof. Sadok Kallel’s seminar at American University of Sharjah (10 Apr 2023) [slides: pdf]

Progress in theoretical quantum computer science might be good, but how useful would it be in industry if it turns out that actual quantum computers are limited by decoherence effects and interactions with the environment, or by the exponentially large cost of constructing a quantum computer?

Am coming back online only now after a long flight.

have you been posting comments to this thread and then deleting them?

I did delete & repost #73 right after posting the first version, because the first version ended up appearing without my signature.

Afterwards I was trying to see if the technical team can help us fix this issue that the nForum logs out users who spend too long editing a comment.

While I notice every now and then that some threads don’t take me to their latest messages, I had no idea what is causing this. Maybe if we can isolate a minimal test thread showcasing the issue, then we could tell the technical team about it.

I may try to look into this later; currently it’s mighty busy here…

Urs

Since you found out you have the power to delete comments from the nForum, have you been posting comments to this thread and then deleting them?

Something is making the nForum forget how much I’ve read in threads. Just now this thread appeared at the top of the All Discussions page (indicating it had new content), but when I open it I am taken to a Jun 15th 2022 posting and have to scroll to the bottom. This has happened several times in this thread and, as I recall, in some other ones you are posting in. But not in any other threads.

[The same person as in #69 and #71]

I am not sure how your comments really pertain to my original question #68 (clearly software verification is a thing?!) nor to my followup reply #70 refuting (I thought :-) the claim in #69.

In #68, ’practical computer programming’ and ’industry-relevant problems’ were mentioned. What I was asserting was that even the most well-known functional programming languages have the most minuscule presence in the actual programming industry. Proof verification is an extremely tiny proportion of that industry as a whole. These assertions are based on first-hand knowledge of the industry; they cannot be refuted by listing a handful of examples of use. Amongst the millions of lines of code written every day, perhaps a few thousand are written in functional programming languages in industry, perhaps a few hundred in dependently typed languages, and perhaps a dozen in HoTT by some lost soul. These are trivial amounts. In #71 I gave a different metric (job advertisements), quite easily verified, indicating how minimal their use is.

My intention was to contribute to sharpening your main question: one can certainly ask whether computer scientists think HoTT helps solve their problems, but one should divorce this matter from the programming industry as a whole.

that industry is all about low-level electronic hardware-aware programming

This was not what I wrote. The overwhelming majority of the industry uses high-level languages. Even embedded software rarely goes lower than C.

But my point was that computers are based on 0s and 1s. It is essential to turn one into the other, i.e. to have mutability. In high-level programming languages it is all well and good to abstract away the details and write ’immutably’, etc., but one could not get the smallest thing done on any actual computer if one tried to do that at a low level. Therefore it is necessary to have languages like C which reflect that reality more closely, and upon which higher-level programming languages build further. Haskell, for example, is totally reliant on C in its implementation, as are essentially all modern languages (older languages such as Fortran and Forth are not, but are equally fundamentally mutable).

It may be difficult to appreciate if one doesn’t have experience with low-level programming, but mutability is absolutely unavoidable with today’s machines. That is true whether one programs really close to the electronics, say with an FPGA, in assembly, in C, or whatever. To be able to do *anything*, one has to be able to handle interrupts, i.e. signals, i.e. changes from 0s to 1s. Memory *is* about changes from 0s to 1s. If every single process were left sitting in memory after it was run, the computer would become useless within a few hours of use.
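To put a toy version of this in code (a sketch of my own, nothing to do with any real hardware interface): a memory cell is a buffer mutated in place, and an interrupt is nothing but a bit in it flipping:

```python
# Toy sketch (mine): a memory cell as mutable state, an "interrupt"
# as a bit flipping in place.

memory = bytearray(1)        # one byte of mutable "memory", initially 0

def interrupt():
    # A signal arrives: the lowest bit of the cell flips in place.
    memory[0] ^= 0b00000001

interrupt()                  # memory[0] is now 1
interrupt()                  # memory[0] is back to 0: the *same* cell changed twice
```

However much one wraps this in immutable interfaces further up, at the bottom a computation on current machines is a sequence of such in-place flips.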

Of course one can dream as much as one likes about other possible ways of doing things, but binary numbers are the simplest possible way to express change, and Ockham’s Razor almost always applies in the end. Only if something came along which rendered current computers as useful in practice as the abacus would there likely be a fundamental change. Society as we know it might collapse before that. Who knows.

Let’s take this seriously and ask what programming language infrastructure really reflects the notion of computation at the physical hardware (electronics) level (or as close as any language can get to it).

I think for that one has hardware description languages like Verilog, at least in the classical case.

Thanks for the chat!

Maybe we are not quite communicating yet, as I am not sure how your comments really pertain to my original question #68 (clearly software verification is a thing?!) nor to my followup reply #70 refuting (I thought :-) the claim in #69.

Incidentally, I found it fascinating to watch the talk by Leemon Baird that I linked to [YT]: a vision of a future of type-theory-savvy crypto-finance CEOs.


But allow me to use the occasion to pick up the claim in #71 — that industry is all about low-level electronic hardware-aware programming — and try something on you:

Let’s take this seriously and ask what programming language infrastructure really reflects the notion of computation at the physical hardware (electronics) level (or as close as any language can get to it).

This question is currently being re-asked in quantum computation, a perspective which may help take the necessary step back to look at this ancient question afresh.

I think that real close to the physical hardware level, computation is not described by imperative programming languages either – it’s not described by any “language” as such:

Real close to the realistic physical hardware level, computation is described by the *circuit model*.

In classical computation this may have been forgotten, but in quantum computation, where all these fundamental questions are resurfacing now, it is clear as day.

And if we are really to talk physics and foundations here, then I go one step further and say that real close to the realistic physical hardware level, computation is given by the *reversible* circuit model.

If we accept that (? :-) then we need to ask: What is the circuit model of computation, fundamentally?

So let’s see: in the circuit model we have

- a generating set of labels of reversible operations (gate names)

- for each such label a corresponding operation on a state space.

A program is a *path* of (labels of) gates (namely in the HIT generated by the gate labels); its execution on any input data is the corresponding *lifting* of that path through the corresponding fibration.

(For fun and exposition I recently produced some schematic illustration of what I am saying here: pp. 60 here and also pp. 60 here (to be viewed on full screen). Some related references are compiled in the entry *Computation – References – As path lifting*.)
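(For concreteness, here is a toy sketch of this picture in code, with all names and details mine: gate labels, their operations on a two-bit state space, and execution as the lift of a path of labels to a path of states.)

```python
# Toy sketch (names mine): the reversible circuit model as path lifting.

# Reversible gates: invertible operations on the state space of two bits.
def NOT0(s):
    a, b = s
    return (1 - a, b)          # flip the first bit

def CNOT(s):
    a, b = s
    return (a, b ^ a)          # flip the second bit iff the first is 1

GATES = {"NOT0": NOT0, "CNOT": CNOT}   # gate labels -> operations

def execute(path, state):
    """Lift a path of gate labels to the corresponding path of states,
    starting over the given input state."""
    states = [state]
    for label in path:         # a program is a path of (labels of) gates
        state = GATES[label](state)
        states.append(state)
    return states              # the lifted path through the state space

# Executing the program ["NOT0", "CNOT"] on input (0, 0)
# traces out the lifted path (0, 0) -> (1, 0) -> (1, 1).
```

Reversibility shows up in that every program, read backwards with each gate inverted, lifts back to the starting state.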

Therefore – my argument now goes – if we are really serious about understanding programming at the physical hardware level, and formal verification of *that* – then what we need is a high-level language in which *path lifting* is a native operation.

This leads us full circle back to… HoTT. From a completely different direction than usually (or ever before) advertised.

Most approaches to homotopy type theory do not attempt to include a type of all propositions, the type-theoretic equivalent of the set of truth values in set theory. This means that one is forced to include an entire universe in the dependent type theory in order to get a power set on a type for certain areas of mathematics, which is overkill for stuff like real analysis or complex analysis, where one only needs a type of propositions rather than an entire universe. In practice, if one is doing something like real analysis, one really only needs a type of all propositions to construct the Dedekind real numbers; afterwards one only quantifies over the real numbers, tuples of real numbers, sequences of real numbers, and functions of real numbers, none of which requires an entire universe and the complex set of rules which come with having a universe in the type theory. Furthermore, the resulting “power set” constructed from universes is only locally small relative to the universe, so it doesn’t exactly behave as a true power set in the type theory from the perspective of the categorical semantics.
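(For illustration, a sketch in Lean-style notation, definitions mine: with a type `Prop` of all propositions available, the power set of a type is just the function type into `Prop`, and quantification over subsets never needs a universe of types.)

```lean
-- Sketch (names mine): given a type `Prop` of all propositions,
-- the power set of a type is just the function type into `Prop`;
-- no universe of types is needed.
def PowerSet (A : Type) : Type := A → Prop

-- Quantification over subsets stays within this one type,
-- e.g. the containment relation between two subsets:
def SubsetOf {A : Type} (S T : PowerSet A) : Prop :=
  ∀ a, S a → T a
```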

Certain branches of mathematics like set theory or algebra require quantification over certain structured types, such as sets or rings, which means that a universe is indeed required. But most homotopy type theories include multiple universes where one universe suffices. If there are multiple universes in the type theory, there is the issue of having to distinguish between, for example, different “locally small power sets” or “locally small Dedekind real numbers” for different universes, and the bureaucratic requirements of keeping track of the different universes become a nightmare in practice. It becomes even worse in the HoTT book and in Agda, Lean, and Coq, where the infinite hierarchy of universes corresponds to having an infinite series of inaccessible cardinals in set theory, which is simply overkill for a foundation of mathematics.

Personally, I believe that one should only include the bare minimum of type formers needed for whatever mathematics one sets out to do. If one is doing elementary number theory, for example, including James constructions in the theory is not necessary at all. Indeed, while I view higher inductive types as quite necessary for mathematics in dependent type theory/homotopy type theory, since propositional truncations are needed to define predicate logic in the theory and quotient sets are often used, a lot of mathematics could be done without resorting to universes at all, and much of the rest could be done using at most one universe.
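(For illustration, sketch mine: the two constructions singled out here, quotient sets and propositional truncation, already exist in stripped-down primitive form in Lean, independently of any universe machinery.)

```lean
-- Sketch (names mine): quotients and propositional truncation
-- in the primitive form Lean provides.

-- Quotient sets: the naturals modulo 2 as a quotient by a relation.
def NatMod2 : Type := Quot (fun a b : Nat => a % 2 = b % 2)

-- Propositional truncation: `Nonempty A` squashes a type to a
-- proposition, remembering only *that* it is inhabited.
example : Nonempty Nat := ⟨0⟩
```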

Different guest

Apologies for distracting a little from your question. What I meant was that in practice only a trivial number of companies are using even Haskell, say, and it is only a trivial part even of those companies’ codebases. There are certainly one or two things; if one searches, there is a claim that a few large companies do use a little Haskell. But I can assure you that the scale is unbelievably tiny. In the country I work in, I have never seen a job offer mention a dependently typed language, and never seen one require a functional programming language; very occasionally, the latter are mentioned as good to be familiar with.

Of course functional programming ideas have had influence on other kinds of languages; they are not useless. But they are not used in industry directly, and I doubt they ever will be, because electronics and low-level programming are fundamentally non-functional. No dependently typed language can do I/O, and that makes them unusable in industry (proof verification is not a part of industry except in an unbelievably tiny percentage; zero, as far as I know, in the country in which I live, for example).

Maybe tangential to my question, but how about stuff like this (Hedera calls itself “enterprise-level”, if not “industry-level” – I may not know the difference, if any):

Verification of (blockchain-like) hashgraph consensus algorithms:

Background:

- Pierre Tholoniat, Vincent Gramoli,
*Formal Verification of Blockchain Byzantine Fault Tolerance*, in *Handbook on Blockchain*, Optimization and Its Applications **194** (2022) [arXiv:1909.07453, doi:10.1007/978-3-031-07535-3_12]

- Leemon Baird, Mance Harmon, Paul Madsen,
*Hedera: A Public Hashgraph Network & Governing Council* (last update 2020) [pdf]

With Coq:

- Hedera blog,
*Coq Proof Completed By Carnegie Mellon Professor Confirms Hashgraph Consensus Algorithm Is Asynchronous Byzantine Fault Tolerant* (Oct 2018)

- Leemon Baird,
*Formal Methods: A Deep Dive Using the Coq Proof Assistant – Hedera18* (2018) [video: YT]

- Karl Crary,
*Formalizing the Hashgraph gossip protocol*, talk at *CyLab Partners Conference* (2019) [pdf]

- Karl Crary,
*Verifying the Hashgraph Consensus Algorithm* [arXiv:2102.01167, pdf]

- Musab A. Alturki, Jing Chen, Victor Luchangco, Brandon Moore, Karl Palmskog, Lucas Peña, Grigore Roşu,
*Towards a Verified Model of the Algorand Consensus Protocol in Coq*, Formal Methods. FM 2019 International Workshops, Lecture Notes in Computer Science **12232** (2019) 362-367 [arXiv:1907.05523, doi:10.1007/978-3-030-54994-7_27]

With Agda:

- Harold Carr, Christopher Jenkins, Mark Moir, Victor Cacciari Miraldo, Lisandra Silva,
*Towards Formal Verification of HotStuff-based Byzantine Fault Tolerant Consensus in Agda: Extended Version* [arXiv:2203.14711]


There is essentially nobody in industry using functional programming languages, never mind dependently typed languages or HoTT.

The Wikipedia page “Homotopy type theory” knows that

Advocates claim that HoTT allows […] However, these claims aren’t universally accepted

What are good citable (at least linkable) examples of HoTT heretics?

I am aware of a couple of rants by Buzzard (e.g. *Where is the fashion?* or *Is HoTT the way?*) but his complaint is focused on doing mathematics (as opposed to practical computer programming, and be it certification of practical programs) and his complaint is mainly sociological (choice of research priorities) rather than technical.

So how about HoTT as a potential tool in actual computing theory: Are there any good heretic writings by computer scientists complaining about HoTT? Anyone arguing that higher homotopy data types are never going to solve any industry-relevant problems (contrary to hopes such as expressed e.g. by Ghani et al. here)?

(I am not asking for discussion or debate, just for interesting links to contrarian arguments.)

added pointer to:

- Cecilia Flori, Tobias Fritz,
*Homotopy Type Theory: Univalent Foundations of Mathematics*, lecture notes (2014) [pdf, webpage]

added link to book HoTT

Anonymous

I have added DOI-links and other niceties to a couple of bibitems.

In particular, I added the proper publication data for

- Thomas Streicher,
*A model of type theory in simplicial sets: A brief introduction to Voevodsky’s homotopy type theory*, Journal of Applied Logic **12** 1 (2014) 45-49 [doi:10.1016/j.jal.2013.04.001]

which some previous author had tried but failed with.

also added “path types” to the first sentence, as they are different from identity types in cubical type theory

Anonymous

editing first paragraph to indicate that homotopy type theory is not just a Martin-Löf type theory.

Anonymous

I have added publication data for:

- Chris Kapulkin, Peter LeFanu Lumsdaine,
*The Simplicial Model of Univalent Foundations (after Voevodsky)*, Journal of the European Mathematical Society **23** (2021) 2071–2126 [arXiv:1211.2851, doi:10.4171/jems/1050]

I came to this while updating the list of references on the universal Kan fibration. It’s curious that the above article was published in 2021 but does not cite standard textbook accounts of the universal Kan fibration from the years before. The first one is due to Lurie 09, Prop. 3.3.2.7; then there is Cisinski 2019, Sec. 5.2.

added pointer to:

- Dan Licata:
*Homotopy theory in type theory*, 2013 (pdf slides, blog entry 1, blog entry 2)