I created a stub certified programming.
That’s motivated by my having expanded the Idea-section at type theory. I enjoyed writing the words “is used in industry”. There are not many nLab pages where I can write these words.
I am saying this only half-jokingly. Somehow there is something deep going on.
Anyway, in the (maybe unlikely) case that somebody reading this has lots of information about the use and relevance of certified programming in industry, I’d enjoy seeing more information added to that entry.
One shouldn’t get too excited, though; certified programming as done in Coq and Agda hasn’t made many inroads in actual industry. The automation is just not there yet for it to be worth the time and effort.
The social historian Donald MacKenzie wrote about an attempt to certify that a chip met its specification in his 2001 book Mechanizing Proof: Computing, Risk, and Trust (Cambridge, MA: MIT Press). Apparently the case nearly came to court over whether this had been proved, but unfortunately the company went bust; otherwise we would have heard lawyers thrashing out the nature of proof. See also MacKenzie, D. (1991), “The fangs of the VIPER”, Nature 352, 467–468.
I have added to the Idea-section at certified programming the following text, which appears as the description of the UK research grant titled Homotopy Type Theory: Programming and Verification:
The cost of software failure is truly staggering. Well known individual cases include the Mars Climate Orbiter failure (£80 million), Ariane Rocket disaster (£350 million), Pentium Chip Division failure (£300 million), and more recently the heartbleed bug (est. £400 million). There are many, many more examples. Even worse, failures such as one in the Patriot Missile System and another in the Therac-25 radiation system have cost lives. More generally, a 2008 study by the US government estimated that faulty software costs the US economy £100 billion annually.
There are many successful approaches to software verification (testing, model checking, etc.). One approach is to find mathematical proofs that guarantee software correctness. However, the complexity of modern software means that hand-written mathematical proofs can be untrustworthy, and this has led to a growing desire for computer-checked proofs of software correctness. Programming languages and interactive proof systems like Coq, Agda, NuPRL and Idris have been developed based on a formal system called Martin-Löf type theory. In these systems, we can not only write programs, but we can also express properties of programs using types, and write programs to express proofs that our programs are correct.
In this way, both large mathematical theorems such as the Four Colour Theorem, and large software systems such as the CompCert C compiler have been formally verified. However, in such large projects, the issue of scalability arises: how can we use these systems to build large libraries of verified software in an effective way?
This is related to the problem of reusability and modularity: a component in a software system should be replaceable by another which behaves the same way even though it may be constructed in a completely different way. That is, we need an “extensional equality” which is computationally well behaved (that is, we want to run programs using this equality). Finding such an equality is a fundamental and difficult problem which has remained unresolved for over 40 years.
But now it looks like we might have a solution! Fields medallist Vladimir Voevodsky has come up with a completely different take on the problem by thinking of equalities as paths such as those which occur in one of the most abstract branches of mathematics, namely homotopy theory, leading to Homotopy Type Theory (HoTT). In HoTT, two objects are completely interchangeable if they behave the same way. However, most presentations of HoTT involve axioms which lack computational justification and, as a result, we do not have programming languages or verification systems based upon HoTT. The goal of our project is to fix that, thereby develop the first of a new breed of HoTT-based programming languages and verification systems, and develop case studies which demonstrate the power of HoTT to programmers and those interested in formal verification.
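To make the quoted idea of “expressing properties of programs using types” slightly more concrete, here is a minimal sketch in Lean (a proof assistant in the same Martin-Löf tradition as the Coq/Agda/Idris systems named in the quote, chosen only for illustration); the names Vec, head and zero_add' are purely illustrative, not taken from any of the projects mentioned above:

```lean
-- Properties of programs expressed as types.

-- A length-indexed vector: the length is part of the type, so
-- "this vector is non-empty" is a property stated in the type itself.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- `head` only accepts vectors whose type certifies they are non-empty,
-- so the "head of an empty list" failure is ruled out at compile time.
def head {α : Type} {n : Nat} : Vec α (n + 1) → α
  | .cons a _ => a

-- A program that is simultaneously a proof: 0 + n = n for every n,
-- accepted by the type checker via recursion (i.e. induction) on n.
theorem zero_add' : ∀ n : Nat, 0 + n = n
  | 0     => rfl
  | n + 1 => congrArg Nat.succ (zero_add' n)
```

The last item is what the quote means by writing programs that express proofs: the recursive definition is accepted only because its type, read as a proposition, is thereby proved. The scalability and extensional-equality issues raised in the quote arise when one tries to do this for large systems such as CompCert, where behaviourally equivalent components should be interchangeable.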
As I remember it, when the idea of ‘proof’ of software was being debated in a law court, the software engineers used ‘prove’ in the engineering sense, i.e. ‘test’, as in ‘proved steel’. Unfortunately some people had understood ‘prove’ in the mathematical sense. The lawyers for the engineers won that battle! (I am not sure when this happened and it may be misremembered, but there are two uses of ‘prove’.)
But every mathematical proof of a real-life matter depends on the validity of the overall assumptions and of the model, which may itself be fundamentally flawed or may be missing essential real-life parameters.
Tim, it would be nice to know a bit more about what you have in mind. Statistical proof is allowed in court (and there have been some horrible accidents). Most cryptographic proofs are statistical in nature.
I cannot remember the details. Statistically based testing had been used and won the day in court, because the company that produced the chip could claim that they had ‘proved’ that the chip was fault-free. They also claimed that their advertising for the product, in using ‘proved’, had been misunderstood by the person claiming damages, and that under common (engineering) usage their advertising was correct and there had been no intent to deceive. (It may have been a chip for a pocket calculator that was in question; I do not recall, as I said.)
I have a very vague thought that the detected fault was in something that was a simple calculation, something like a ‘tan’, and that because of the type of sample calculations used in the testing it had not been detected.
The old sense of “prove” mentioned in #5, meaning “to test”, is in such sayings as “the exception proves the rule” and “the proof of the pudding is in the eating”, and obviously in phrases such as “proving grounds”. I would have thought the distinction in meaning is very familiar in legal circles (although obviously IANAL).
The entry still quotes this remarkable claim from the EPSRC research grant Homotopy Type Theory: Programming and Verification, which says (in a drastically shortened quote):
The cost of software failure is truly staggering. […] a fundamental and difficult problem which has remained unresolved for over 40 years. But now it looks like we might have a solution! […] namely homotopy theory, leading to Homotopy Type Theory (HoTT).
I see that this grant ran out just last month. Can one say anything by way of an update on these claims?
fixed the link for
(now via WaybackMachine)
and added a reference on verification of quantum programming languages such as Quipper:
added pointer to:
added this pointer:
also this one:
added this pointer:
and this one:
and this one:
and this:
and this one:
and this one:
also this:
added this one:
added this pointer:
added pointer to: