added pointer to:

- Robert Rand, Jennifer Paykin, Steve Zdancewic,
*QWIRE Practice: Formal Verification of Quantum Circuits in Coq*, EPTCS 266, 2018, pp. 119-132 (arXiv:1803.00699)

added this pointer:

- Klaus Mainzer,
*From Proof Theory to Proof Assistants – Challenges of Responsible Software and AI*, talk at *Arbeitstagung Bern-München, ABM Spring 2019* (pdf)

added this one:

- Patrick Cousot, Radhia Cousot,
*A gentle introduction to formal verification of computer systems by abstract interpretation*, NATO Science for Peace and Security Series - D: Information and Communication Security, Volume 25: Logics and Languages for Reliability and Security (doi:10.3233/978-1-60750-100-8-1)

also this:

- Andrew W. Appel,
*Verification of a Cryptographic Primitive: SHA-256*, ACM Transactions on Programming Languages and Systems, April 2015, Article No. 7 (doi:10.1145/2701415, pdf)

and this one:

- Mihhail Aizatulin,
*Verifying Cryptographic Security Implementations in C Using Automated Model Extraction* (arXiv:2001.00806)

and this one:

- Cédric Fournet, Markulf Kohlweiss, Pierre-Yves Strub,
*Modular code-based cryptographic verification*, CCS ’11: Proceedings of the 18th ACM Conference on Computer and Communications Security, October 2011, pp. 341–350 (doi:10.1145/2046707.2046746)

and this:

- Cédric Fournet, Karthikeyan Bhargavan, Andrew D. Gordon,
*Cryptographic Verification by Typing for a Sample Protocol Implementation*, In: Aldini A., Gorrieri R. (eds) Foundations of Security Analysis and Design VI. FOSAD 2011. Lecture Notes in Computer Science, vol 6858. Springer (2011) (doi:10.1007/978-3-642-23082-0_3)

and this one:

- Andres Erbsen, Jade Philipoom, Jason Gross, Robert Sloan, Adam Chlipala,
*Simple High-Level Code for Cryptographic Arithmetic - With Proofs, Without Compromises*, 2019 IEEE Symposium on Security and Privacy (SP) (doi:10.1109/SP.2019.00005)

and this one:

- Matthias Berg,
*Formal Verification of Cryptographic Security Proofs*, 2013 (pdf, doi:10.22028/D291-26528)

added this pointer:

- Adam Petcher, Greg Morrisett,
*The Foundational Cryptography Framework*, In: R. Focardi, A. Myers (eds.), *Principles of Security and Trust*, POST 2015, Lecture Notes in Computer Science, vol 9036, Springer, Berlin, Heidelberg (doi:10.1007/978-3-662-46666-7_4)

also this one:

- Paventhan Vivekanandan,
*HoTT-Crypt: A Study in Homotopy Type Theory based on Cryptography*, Kalpa Publications in Computing, Volume 9, 2018, pp. 75-90 (doi:10.29007/tvpp, web slides)

added this pointer:

- Paventhan Vivekanandan,
*A Homotopical Approach to Cryptography*, talk at FCS 2018 (pdf, easychair:GLtQ#)

also subdivided the list of references into subsections

added this pointer:

- Aleksandar Nanevski, Anindya Banerjee, Deepak Garg,
*Dependent Type Theory for Verification of Information Flow and Access Control Policies*, ACM Transactions on Programming Languages and Systems, July 2013, Article No. 6 (doi:10.1145/2491522.2491523)

added pointer to:

- Catherine Meadows,
*Program Verification and Security* (doi:10.1007/978-1-4419-5906-5_863), In: Henk C. A. van Tilborg, Sushil Jajodia (eds.), *Encyclopedia of Cryptography and Security*, Springer (2011)

fixed the link for

(now via WaybackMachine)

and added a reference on verification of quantum programming languages such as Quipper:

- Linda Anticoli, Carla Piazza, Leonardo Taglialegne, Paolo Zuliani,
*Towards Quantum Programs Verification: From Quipper Circuits to QPMC*, In: Devitt S., Lanese I. (eds) Reversible Computation. RC 2016. Lecture Notes in Computer Science, vol 9720. Springer, Cham (doi:10.1007/978-3-319-40578-0_16)

The entry still quotes this remarkable claim from the EPSRC research grant *Homotopy Type Theory: Programming and Verification*, which says (in a drastically shortened quote):

The cost of software failure is truly staggering. $[\cdots]$ a fundamental and difficult problem which has remained unresolved for over 40 years. But now it looks like we might have a solution! $[\cdots]$ namely homotopy theory, leading to Homotopy Type Theory (HoTT).

I see that this grant ran out just last month. Can one say anything to update these claims?

The old sense of “prove” mentioned in #5, meaning “to test”, is in such sayings as “the exception proves the rule” and “the proof is in the pudding”, and obviously in phrases such as “proving grounds”. I would have thought the distinction in meaning is very familiar in legal circles (although obviously IANAL).

I cannot remember the details. Statistically based testing had been used and won the day in court, because the company who produced the chip could claim that they had ’proved’ that the chip was fault free. They also claimed that their advertising for the product in using ’proved’ had been misunderstood by the person who was claiming damages and that with common (engineering) usage their advertising was correct, and there had been no intent to deceive. (It may have been a chip for a pocket calculator that was in question. I do not recall, as I said.)

I have a very vague thought that the detected fault was in something that was a simple calculation, something like a ‘tan’ and because of the type of sample calculations that had been used in the testing, it had not been detected.
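
The failure mode described here — sample-based testing passing over a localized numerical fault — is easy to reproduce. The following Python sketch is purely illustrative (the buggy `tan`, the faulty interval, and the sampling plan are all invented for the example, not taken from the actual case): random tests drawn from a range that happens to miss the faulty interval all pass, yet the implementation is wrong.

```python
import math
import random

# Hypothetical buggy tangent: wrong on one narrow input interval --
# the kind of localized fault a finite random sample can miss.
def buggy_tan(x: float) -> float:
    if 1.0000 < x < 1.0001:   # rare, never-sampled corner
        return 0.0            # faulty result
    return math.tan(x)

def statistical_test(trials: int, seed: int = 0) -> bool:
    """'Prove' the implementation by sampling inputs in [0, 1]
    and comparing against a trusted reference (math.tan)."""
    rng = random.Random(seed)
    return all(
        abs(buggy_tan(x) - math.tan(x)) < 1e-12
        for x in (rng.uniform(0.0, 1.0) for _ in range(trials))
    )

print(statistical_test(10_000))                 # prints True: the sampled "proof" passes
print(buggy_tan(1.00005) == math.tan(1.00005))  # prints False: the fault is real
```

The point is not that statistical testing is useless, but that its guarantee is only as good as the input distribution: the sampling plan above never enters the faulty interval, so the test is a ’proof’ only in the engineering sense.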

Tim, it would be nice to know a bit more about what you have in mind. Statistical proof is allowed in court (and there have been some horrible accidents). Most cryptographic proofs are statistical in nature.

But every mathematical proof about a real-life matter depends on the validity of the overall assumptions and model, which may itself be fundamentally flawed or missing essential real-life parameters.

As I remember it when the idea of ’proof’ of software was being debated in a law court, the software engineers used ’prove’ in the engineering sense, i.e. ’test’ as in ’proved steel’. Unfortunately some people had understood ’prove’ in the mathematical sense. The lawyers for the engineers won that battle! (I am not sure when this happened and it may be misremembered, but there are two uses of ’prove’.)

I have added to the Idea-section at *certified programming* the following text, which appears as the description of the UK research grant titled *Homotopy Type Theory: Programming and Verification*:

The cost of software failure is truly staggering. Well known individual cases include the Mars Climate Orbiter failure (£80 million), Ariane Rocket disaster (£350 million), Pentium Chip Division failure (£300 million), and more recently the heartbleed bug (est. £400 million). There are many, many more examples. Even worse, failures such as one in the Patriot Missile System and another in the Therac-25 radiation system have cost lives. More generally, a 2008 study by the US government estimated that faulty software costs the US economy £100 billion annually.

There are many successful approaches to software verification (testing, model checking etc). One approach is to find mathematical proofs that guarantee software correctness. However, the complexity of modern software means that hand-written mathematical proofs can be untrustworthy and this has led to a growing desire for computer-checked proofs of software correctness. Programming languages and interactive proof systems like Coq, Agda, NuPRL and Idris have been developed based on a formal system called Martin-Löf type theory. In these systems, we can not only write programs, but we can also express properties of programs using types, and write programs to express proofs that our programs are correct.
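
To make the “properties of programs as types” point concrete, here is a minimal sketch in Lean 4 (the names `Vec` and `append` are illustrative, not taken from any of the systems cited above): a length-indexed vector type whose `append` carries its own correctness guarantee about lengths in its type.

```lean
-- A type expressing a property: `Vec α n` is a list over α
-- whose length is provably n.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The type of `append` is itself a theorem: the result's length
-- is the sum of the input lengths, checked by the type checker.
def Vec.append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (xs.append ys)
```

The index is written `n + m` rather than `m + n` so that each case type-checks by the definitional unfolding of `Nat` addition, without an explicit rewriting step — a small instance of the computational subtleties the rest of the quote is about.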

In this way, both large mathematical theorems such as the Four Colour Theorem, and large software systems such as the CompCert C compiler have been formally verified. However, in such large projects, the issue of scalability arises: how can we use these systems to build large libraries of verified software in an effective way?

This is related to the problem of reusability and modularity: a component in a software system should be replaceable by another which behaves the same way even though it may be constructed in a completely different way. That is, we need an “extensional equality” which is computationally well behaved (that is, we want to run programs using this equality). Finding such an equality is a fundamental and difficult problem which has remained unresolved for over 40 years.
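
The gap described here can be illustrated in a few lines of Lean 4 (an illustrative sketch, not part of the grant's own work): two functions that behave identically on every input but are not interchangeable by pure computation, so that equating them needs function extensionality — and the computational status of such principles is exactly the issue at stake.

```lean
def f (n : Nat) : Nat := 0 + n
def g (n : Nat) : Nat := n

-- `f` and `g` agree on every input, but `rfl` (pure computation)
-- does not prove `f = g`: `0 + n` is not definitionally `n`.
-- Function extensionality bridges the gap propositionally:
theorem f_eq_g : f = g := funext fun n => Nat.zero_add n
```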

But now it looks like we might have a solution! Fields medallist Vladimir Voevodsky has come up with a completely different take on the problem by thinking of equalities as paths such as those which occur in one of the most abstract branches of mathematics, namely homotopy theory, leading to Homotopy Type Theory (HoTT). In HoTT, two objects are completely interchangeable if they behave the same way. However, most presentations of HoTT involve axioms which lack computational justification and, as a result, we do not have programming languages or verification systems based upon HoTT. The goal of our project is to fix that, thereby develop the first of a new breed of HoTT-based programming languages and verification systems, and develop case studies which demonstrate the power of HoTT to programmers and those interested in formal verification.

A social historian, Donald MacKenzie, wrote on the attempt to certify that a chip met a specification in his 2001 *Mechanizing Proof: Computing, Risk, and Trust*, Cambridge, MA: MIT Press. Apparently the case nearly came to court over whether this had been proved, but unfortunately the company went bust. Otherwise we would have heard lawyers thrashing out the nature of proof. See also MacKenzie, D. (1991), The fangs of the VIPER, *Nature* 352, 467–468.

One shouldn’t get too excited, though; certified programming as done in Coq and Agda hasn’t made many inroads in actual industry. The automation is just not there yet for it to be worth the time and effort.
