I created a stub certified programming.
That was motivated by my having expanded the Idea-section at type theory. I enjoyed writing the words “is used in industry”. There are not many nLab pages where I can write these words.
I am saying this only half-jokingly. Somehow there is something deep going on.
Anyway, in (the maybe unlikely) case that somebody reading this here has lots of information about the use and relevance of certified programming in industry, I’d enjoy seeing more information added to that entry.
One shouldn’t get too excited, though; certified programming as done in Coq and Agda hasn’t made many inroads in actual industry. The automation is just not there yet for it to be worth the time and effort.
The social historian Donald MacKenzie wrote on the attempt to certify that a chip met a specification in his 2001 book Mechanizing Proof: Computing, Risk, and Trust, Cambridge, MA: MIT Press. Apparently the case nearly came to court over whether this had been proved, but unfortunately the company went bust. Otherwise we would have heard lawyers thrashing out the nature of proof. See also MacKenzie, D. (1991), “The fangs of the VIPER”, Nature 352, 467–468.
I have added to the Idea-section at certified programming the following text, which appears as the description of the UK research grant titled Homotopy Type Theory: Programming and Verification:
The cost of software failure is truly staggering. Well known individual cases include the Mars Climate Orbiter failure (£80 million), Ariane Rocket disaster (£350 million), Pentium Chip Division failure (£300 million), and more recently the heartbleed bug (est. £400 million). There are many, many more examples. Even worse, failures such as one in the Patriot Missile System and another in the Therac-25 radiation system have cost lives. More generally, a 2008 study by the US government estimated that faulty software costs the US economy £100 billion annually.
There are many successful approaches to software verification (testing, model checking etc). One approach is to find mathematical proofs that guarantee software correctness. However, the complexity of modern software means that hand-written mathematical proofs can be untrustworthy and this has led to a growing desire for computer-checked proofs of software correctness. Programming languages and interactive proof systems like Coq, Agda, NuPRL and Idris have been developed based on a formal system called Martin-Löf type theory. In these systems, we can not only write programs, but we can also express properties of programs using types, and write programs to express proofs that our programs are correct.
In this way, both large mathematical theorems such as the Four Colour Theorem, and large software systems such as the CompCert C compiler have been formally verified. However, in such large projects, the issue of scalability arises: how can we use these systems to build large libraries of verified software in an effective way?
This is related to the problem of reusability and modularity: a component in a software system should be replaceable by another which behaves the same way even though it may be constructed in a completely different way. That is, we need an “extensional equality” which is computationally well behaved (that is, we want to run programs using this equality). Finding such an equality is a fundamental and difficult problem which has remained unresolved for over 40 years.
But now it looks like we might have a solution! Fields medallist Vladimir Voevodsky has come up with a completely different take on the problem by thinking of equalities as paths such as those which occur in one of the most abstract branches of mathematics, namely homotopy theory, leading to Homotopy Type Theory (HoTT). In HoTT, two objects are completely interchangeable if they behave the same way. However, most presentations of HoTT involve axioms which lack computational justification and, as a result, we do not have programming languages or verification systems based upon HoTT. The goal of our project is to fix that, thereby developing the first of a new breed of HoTT-based programming languages and verification systems, and developing case studies which demonstrate the power of HoTT to programmers and those interested in formal verification.
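To make the quote’s point about “expressing properties of programs using types” concrete, here is a minimal toy example of my own in Lean 4 (not taken from the grant; exact lemma names may vary between Lean versions). The type of the theorem is the correctness property, and the term that the type-checker accepts is the machine-checked proof:

```lean
-- List reversal, written out so the example is self-contained.
def rev : List Nat → List Nat
  | []      => []
  | x :: xs => rev xs ++ [x]

-- The *type* of this theorem states the property (reversal preserves
-- length); the tactic proof constructs a term of that type, which the
-- kernel then checks mechanically.
theorem rev_length (l : List Nat) : (rev l).length = l.length := by
  induction l with
  | nil => rfl
  | cons x xs ih => simp [rev, List.length_append, ih]
```

The same mechanism, with vastly more effort, scales to statements like “this compiler preserves the semantics of its input”, which is what the CompCert verification mentioned above establishes.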
As I remember it when the idea of ’proof’ of software was being debated in a law court, the software engineers used ’prove’ in the engineering sense, i.e. ’test’ as in ’proved steel’. Unfortunately some people had understood ’prove’ in the mathematical sense. The lawyers for the engineers won that battle! (I am not sure when this happened and it may be misremembered, but there are two uses of ’prove’.)
But every mathematical proof about a real-life matter depends on the validity of the overall assumptions and the model, which may itself be fundamentally flawed or missing essential real-life parameters.
Tim, it would be nice to know a bit more about what you have in mind. Statistical proof is allowed in court (and there have been some horrible accidents). Most cryptographic proofs are statistical in nature.
I cannot remember the details. Statistically based testing had been used and won the day in court, because the company who produced the chip could claim that they had ’proved’ that the chip was fault free. They also claimed that their advertising for the product in using ’proved’ had been misunderstood by the person who was claiming damages and that with common (engineering) usage their advertising was correct, and there had been no intent to deceive. (It may have been a chip for a pocket calculator that was in question. I do not recall, as I said.)
I have a very vague thought that the detected fault was in something that was a simple calculation, something like a ‘tan’ and because of the type of sample calculations that had been used in the testing, it had not been detected.
The old sense of “prove” mentioned in #5, meaning “to test”, is in such sayings as “the exception proves the rule” and “the proof is in the pudding”, and obviously in phrases such as “proving grounds”. I would have thought the distinction in meaning is very familiar in legal circles (although obviously IANAL).
The entry still quotes this remarkable claim, from the EPSRC research grant Homotopy Type Theory: Programming and Verification, which says (in a drastically shortened quote):
The cost of software failure is truly staggering. […] a fundamental and difficult problem which has remained unresolved for over 40 years. But now it looks like we might have a solution! […] namely homotopy theory, leading to Homotopy Type Theory (HoTT).
I see that this grant has run out just last month. Can one say anything in update of these claims?
fixed the link for
(now via WaybackMachine)
and added a reference on verification of quantum programming languages such as Quipper:
added pointer to:
added this pointer:
also this one:
added this pointer:
and this one:
and this one:
and this:
and this one:
and this one:
also this:
added this one:
added this pointer:
added pointer to:
added pointer to:
and
added pointer to:
added pointer to:
added (here) pointer to:
added pointer to:
Kesha Hietala, Robert Rand, Shih-Han Hung, Xiaodi Wu, Michael Hicks, A verified optimizer for quantum circuits, Proceedings of the ACM on Programming Languages 5 Issue POPL 37 (2021) 1–29 [doi:10.1145/3434318]
Kesha Hietala, Robert Rand, Shih-Han Hung, Liyi Li, Michael Hicks, Proving Quantum Programs Correct, in 12th International Conference on Interactive Theorem Proving (ITP 2021), Leibniz International Proceedings in Informatics (LIPIcs) 193 (2021) [arXiv:2010.01240]
Kesha Hietala, A verified software toolchain for quantum programming (2022) [pdf, blog]
added pointer to:
added pointer to
and this quote from its abstract (which, however, seems to appear only on the webpage behind the DOI, not in the actual book):
The characteristic difficulty in creating pure quantum software is mainly due to the inaccessibility to intermediate states, which makes debugging practically impossible. However, the use of formal methods, which apply rigorous mathematical models to ensure error-free software, can overcome this barrier and enable the production of reliable quantum algorithms and applications right out of the box.
added pointer to:
with this quote (reiterating a point which may have been first voiced by Rand (2018)):
“The classical parts of a quantum program can be debugged using traditional methods. The quantum parts, however, can not be treated in the same way because of the properties of a QC – such as superposition, entanglement, and no-cloning – which are governed by the laws of quantum mechanics. The purpose of debugging a program is to present the user with human readable, i.e., classical, information about the runtime state of the system. Extracting classical information from a quantum state is done using measurement which is usually a non-unitary operation and results in collapse of the state, and hence an unintended behavior of the program.”
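Since this collapse phenomenon is what blocks naive debugging, here is a minimal numerical sketch of my own in Python (not code from the cited work) of why “peeking” at a quantum state destroys it:

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit in equal superposition: (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)

def measure(state):
    """Projective measurement in the computational basis.

    Returns a classical bit and the post-measurement state.
    The state collapses to a basis vector, so the original
    amplitudes cannot be recovered afterwards.
    """
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

bit, psi_after = measure(psi)
print(bit)        # 0 or 1, each with probability 1/2
print(psi_after)  # a basis state: the superposition is gone
```

This is exactly the “unintended behavior” the quote refers to: the act of extracting classical, human-readable information changes the very program state being inspected.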
added pointer to:
QPMC: A Model Checker for Quantum Programs and Protocols, in: Formal Methods. FM 2015, Lecture Notes in Computer Science 9109, Springer (2015) [doi:10.1007/978-3-319-19249-9_17]

with this quote:
“In practice, however, security analysis of quantum cryptographic protocols is notoriously difficult; for example, the manual proof of BB84 in [15] contains about 50 pages. It is hard to imagine such an analysis being carried out for more sophisticated quantum protocols. Thus, techniques for automated or semi-automated verification of these protocols will be indispensable.”
added pointer to:
with this quote:
“But to check whether a quantum system satisfies a certain property at a time point, one has to perform a quantum-measurement on the system, which can change the state of the system. This makes studies of the long-term behaviours of quantum systems much harder than that of classical system.”
“The state spaces of the classical systems that model-checking algorithms can be applied to are usually finite or countably infinite. However, the state spaces of quantum systems are inherently continuous even when they are finite-dimensional. In order to develop algorithms for model-checking quantum systems, we have to exploit some deep mathematical properties of the systems so that it suffices to examine only a finite number of (or at most countably infinitely many) representative elements, e.g. those in an orthonormal basis, of their state spaces.”
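The second quoted remark, that examining finitely many representative basis elements suffices, comes down to linearity. A toy check of my own in Python (not from the cited paper): two circuits are equal as operators as soon as they agree on an orthonormal basis, even though the states they act on form a continuum.

```python
import numpy as np

# Two one-qubit circuit fragments: H·Z·H versus a single X gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

U = H @ Z @ H   # composite circuit
V = X           # candidate equivalent

# By linearity, the equivalence check over a continuum of states
# reduces to finitely many vector comparisons on a basis:
basis = np.eye(2)
agree = all(np.allclose(U @ e, V @ e) for e in basis.T)
print(agree)    # True: HZH = X, hence they agree on every state
```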
added pointer to:
added pointer to:
added pointer to:
added pointer to:
more references:
Jim Woodcock, Peter Gorm Larsen, Juan Bicarregui, John Fitzgerald, Formal methods: Practice and experience, ACM Computing Surveys 41 19 (2009) 1–36 [doi:10.1145/1592434.1592436]
Matthew Salazar, Consensus, Blockchain and Proof Assistants (2018) [pdf, Salazar-ConsensusProof.pdf:file]
1st Workshop on Formal Methods for Blockchains (October 2019)
Sudhani Verma, Divakar Yadav, Girish Chandra, Introduction of Formal Methods in Blockchain Consensus Mechanism and Its Associated Protocols, IEEE Access 10 (2022) [doi:10.1109/ACCESS.2022.3184799]
Quantstamp blog: Formally Verifying Hedera Hashgraph’s Stablecoin Framework (2020)
Quantstamp blog: Applying lightweight formal methods and SAT solvers to build better blockchain applications (July 2023)
added pointer to
with this quote:
[p 6:] “In quantum computation, the cost of debugging is likely to be quite high. To begin with, observing a quantum system can change its state. A debugger for a quantum program would therefore necessarily give incomplete information about its state when run on actual quantum hardware. The alternative is to use a quantum simulator for debugging. But this is not practical due to the exponential cost of simulating quantum systems. Moreover, it can be expected that the initial quantum computers will be rare and expensive to run and therefore that the cost of runtime errors in quantum code will initially be much higher than in classical computing. This shifts the cost-benefit analysis for quantum programming toward strong compile-time correctness guarantees, as well as formal specification and verification.”
(Previously I used to think that Rand 2018 was the first to make this point so explicitly.)
added pointer to: