    • CommentRowNumber1.
    • CommentAuthorUrs
    • CommentTimeJun 3rd 2015

    added references to Lean

    • CommentRowNumber2.
    • CommentAuthorUrs
    • CommentTimeJan 5th 2019

    added pointer to

    diff, v5, current

    • CommentRowNumber3.
    • CommentAuthorUrs
    • CommentTimeJan 5th 2019

    added updated pointers for Lean libraries:

    diff, v5, current

    • CommentRowNumber4.
    • CommentAuthorAli Caglayan
    • CommentTimeJan 6th 2019

    There is also an extension of the Lean library called the Spectral library. When finished, it will probably be merged into the main Lean 3 library.

    • CommentRowNumber5.
    • CommentAuthorUrs
    • CommentTimeJan 7th 2019
    • CommentRowNumber6.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 7th 2019

    Is there somewhere to read about comparison of these theorem provers based on dependent types, relative (dis)advantages, etc.?

    • CommentRowNumber7.
    • CommentAuthorUrs
    • CommentTimeJan 7th 2019

    See for instance the review by Hales referenced in #2 above.

    • CommentRowNumber8.
    • CommentAuthorAli Caglayan
    • CommentTimeJan 7th 2019

    The best way is to ask people who use them.

    Coq has been around for a while and has quite an active development team, although some argue that Coq is stuck in the past, as one might expect of a project more than 20 years old. I find it much easier to interactively prove things in Coq than in the other provers. Coq has many tactics and a well-developed tactic language, which lets you give Coq a hint and have it fill in the rest of the details, much like how mathematics is done. Tactics are statements such as “prove this by induction”, which tell Coq to go and find an induction rule, apply it, and so on.
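
    For instance, a tactic-style proof by induction might look roughly like this; the sketch is written in Lean 3-style syntax rather than Coq (since Lean is the topic of this thread), and the name zero_add' is made up:

    -- "prove this by induction" as an explicit tactic script
    theorem zero_add' (n : ℕ) : 0 + n = n :=
    begin
      induction n with k ih,     -- split into the cases n = 0 and n = succ k
      { refl },                  -- base case: holds by computation
      { rw [nat.add_succ, ih] }  -- inductive step: rewrite with the hypothesis ih
    end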

    Agda is a dependently typed programming language in the style of Haskell functional programming. It also has active development, and the Agda compiler can infer things much as Coq can. Unlike Coq, however, Agda has no tactics, but if you have programmed in Haskell before this won’t be much of a problem. Agda has some slick new programming features that Coq missed out on due to the 20-year gap. I haven’t really proven anything serious in Agda, but I have done some simple exercises; for the most part it feels quite slick, but I can’t honestly say much more.

    Lean is very new but also very powerful. It is closer to Agda than to Coq in syntax, but unlike Agda it has tactics and all sorts of other things. It is very fast compared to the rest: Coq is written in OCaml and Agda in Haskell, but Lean is written in C++. From what I have heard, people are formalising things in Lean that haven’t really been done properly in Coq or Agda, such as algebraic geometry and some differential geometry.

    Overall I would say that Coq is the easiest to learn, simply because it has been around longer and you will find more material on it. I would guess that large Coq proofs such as Feit–Thompson would be much faster had they been written in Agda or Lean (and perhaps less cumbersome to read and write, too).

    But this is ultimately the impression I got from all of them.

    • CommentRowNumber9.
    • CommentAuthorDavid_Corfield
    • CommentTimeJan 7th 2019

    Re #7, from the list of flaws in the second part, especially #5-11, I’m surprised he’s stuck with Lean. Is it that the Lean developers actively made these design choices, or is it just too hard to do otherwise?

    Re #8, thanks for your impressions.

    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeJan 7th 2019

    Hales’ criticisms of Lean are pretty damning. I haven’t used Lean a whole lot, but from the little that I have I find it less pleasant to use than Coq. However, to me the really critical fact is that Lean is too much in flux for anyone to actually use it (see Hales’ criticism #4). If and when it ever settles down will be the time to really evaluate it, as by then it may look very different from how it looks now.

    Personally, I find it quite difficult to cope with the lack of tactics in Agda. Agda’s interactive term-construction mode is perhaps almost as easy to use as writing a proof interactively in Coq with tactics, as you can leave “holes” in arbitrary places and Agda will tell you what the hypotheses and goal are for each hole, and you can then fill them in progressively. However, once you’re all done, what you have is a proof term, which I find to be more brittle and much harder to understand and modify than a well-constructed tactic script: the latter is, I feel, much closer to the mathematician’s informal understanding of a “proof”, whereas the bare term retains no information about how it was put together. Ltac is an abomination, but at least it is an abomination that works.
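
    To make the contrast concrete, here is the same trivial lemma written as a bare proof term and as a tactic script (a Lean 3-style sketch with made-up names): the term records only the result, while the script records the steps of the argument.

    lemma and_swap {p q : Prop} (h : p ∧ q) : q ∧ p :=
    ⟨h.2, h.1⟩                   -- bare term: no trace of how it was found

    lemma and_swap' {p q : Prop} (h : p ∧ q) : q ∧ p :=
    begin
      cases h with hp hq,        -- the script reads like the informal argument
      exact ⟨hq, hp⟩,
    end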

    However, many people swear by Agda and have good arguments for it, so overall I think it’s good that both exist.

    • CommentRowNumber11.
    • CommentAuthorspitters
    • CommentTimeJan 7th 2019

    The Lean users are having a workshop now. I expect there are some updates.

    • CommentRowNumber12.
    • CommentAuthorUrs
    • CommentTimeJan 8th 2019

    added pointer to

    diff, v7, current

    • CommentRowNumber13.
    • CommentAuthorjonsterling
    • CommentTimeJan 9th 2019
    • (edited Jan 9th 2019)
    Hales's critique of Lean is useful, but as someone who is involved in the implementation and use of proof assistants, let me make a few comments about it:

    The first critique ("The kernel is bloated") is not so convincing to me -- but to those who care about this issue, the critique applies equally well to all other type-theoretic proof assistants. The HOL tradition is probably one of the few that can claim truly minimal kernels, but we have simply not found that minimizing the kernel is a priority whose pursuit correlates with improved usability of a proof assistant in practice.

    The second critique (lack of backwards compatibility) is correct, and this is endemic to any proof assistant that aims to push the margins of possibility more than just once over a period of time.

    The third critique is correct (who can learn Lean?) but it applies equally well to almost every proof assistant; of course, there's less literature about using Lean than using Coq or Isabelle, but most people who end up being successful users of proof assistants were taught to do so by someone, and did not just learn it from books. Hales's critique is something that affects us all, and we need to think carefully about how to do better in this area.

    The fourth critique is incorrect and irrelevant; already, some very sophisticated automation has been written using Lean as its own metalanguage, and because the metalanguage is essentially a partial version of type theory, anything that one would code in ML can be coded in Lean using a monad. Personally, I think that using type theory as its own metalanguage may be a limiting point of view, and I'd like to consider other approaches, so I'm sympathetic to Hales's point, but I don't believe what he says is correct or relevant.

    The other criticisms about hierarchies for algebraic structures and the induced projection chains are well-taken, and this problem is not specific to Lean, but is endemic in every proof assistant that I am aware of. It will require us to either have some new ideas, or use some forgotten old ones, to fix it! Fixing it will be crucial in order to make our tools scale up to large formalizations of algebra and category theory without enduring pain.

    ----------------------

    Some comments about Agda in response to this thread:

    Agda has support (in theory) for tactics now, via its reflection mechanism, but it is up to us to design tactics that can approach the usability of (for instance) something like ssreflect in Coq, which is to me one of those abominable hacks that works so beautifully in practice that it is almost impossible to give up.

    Let me just say that Agda's superior support for things like dependent pattern matching (which is not matched by Coq, even when using the Equations package) makes many things that are very gnarly to do in Coq quite easy; induction-recursion is also a huge advantage, something which is planned for Coq but has not been released (and has been promised for a really long time!).

    One finds that in Agda, dependently typed encodings of things can be leveraged to a degree that isn't possible in Coq, where best practice requires a minimization of the use of dependent types to avoid difficult situations (a lesson I learned the hard way more than once).

    Coq is rightfully widely used, but the advantage of a tool that moves so quickly (like Agda, or perhaps Lean) is that we can quickly and experimentally try out ideas that may never make it into Coq. I hope that tools like Agda and Lean can in the future combine the best of modern type-theoretic affordances with very powerful and extensible tactic languages.

    Personally, my serious formalization efforts live in Coq for the time being, but I hope for a different kind of world :)
    • CommentRowNumber14.
    • CommentAuthorMike Shulman
    • CommentTimeJan 9th 2019

    Thanks for the useful comments, Jon!

    The first critique (“The kernel is bloated”) is not so convincing to me – but to those who care about this issue, the critique applies equally well to all other type-theoretic proof assistants.

    I agree that something like putting mutually inductive types in the kernel doesn’t really seem like very problematic bloat. But I also think kernel bloat is something that’s worth considering even for type-theoretic proof assistants. For instance, I think in hindsight it was probably a mistake to put modules into the Coq kernel.

    The second critique (lack of backwards compatibility) is correct, and this is endemic to any proof assistant that aims to push the margins of possibility more than just once over a period of time.

    Yeah; there’s something of a continuum from “stable” proof assistants like Coq to “experimental” ones like cubicaltt and Andromeda. I would say that Agda is pretty close to the stable end, but a little more experimental than Coq. I gather that the new cubical proof assistants aim to be fairly stable as well. My impression used to be that Lean was intended to be a stable proof assistant; there was a lot of talk about implementing it in C++ to get good performance, and adding lots of automation, so that “real-world” programmers could use it and rely on it (in particular, I recall it being said that such real-world applications were necessary for Leo to justify his work on it to his employer Microsoft). But now it seems to be sliding further over into experimental territory, making it less suitable for people who want to use it for a substantial and maintainable development (like, unfortunately, the formalization of spectral sequences in HoTT, which I hope will one day be ported from the already-deprecated Lean 2 to a more stable proof assistant like Coq).

    The third critique is correct (who can learn Lean?) but it applies equally well to almost every proof assistant; of course, there’s less literature about using Lean than using Coq or Isabelle, but most people who end up being successful users of proof assistants were taught to do so by someone, and did not just learn it from books.

    I would say that there being less literature about Lean means that the critique doesn’t apply equally to every proof assistant. I’ve found that there’s actually quite a good deal of useful documentation about Coq. In addition to textbooks like Software Foundations and CPDT, Coq has a reference manual with a pretty good index that aligns pretty closely with the capabilities of the released version. Even if these resources aren’t enough for someone to learn from square one, they are very useful for the intermediate student and continue to be useful for the advanced user.

    Moreover, I read Hales’ critique as referring specifically not (just) to literature but to people who can do the teaching. There are a lot of Coq and Agda users and a lot of workshops and summer schools where you can be taught to use them by someone. Lean doesn’t have the same size user base.

    The fourth critique is incorrect and irrelevant

    I’ll take your word for that.

    You didn’t say anything about the fifth critique (“Typing in Lean is nominal rather than structural”). I found this a very curious critique; if I understand the words correctly, then as a mathematician and type theorist I find nominal typing to be obviously the correct choice, corresponding in particular more closely to what we tend to call (unfortunately) structural mathematics. The weak typing of set theory doesn’t seem to me to be something we should emulate. But maybe I am misunderstanding the words? Are there any proof assistants that use “structural typing”?
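
    (For concreteness, a minimal illustration of the nominal behaviour, in Lean 3-style syntax with made-up names: two structures with literally identical fields are still distinct types.)

    structure meters  := (val : ℕ)
    structure seconds := (val : ℕ)

    def walk : meters := ⟨500⟩
    -- def oops : seconds := walk   -- rejected: meters and seconds are different types,
    --                              -- even though they have identical structure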

    Criticism #8 (“Structure depends on notation”) is something that’s always rankled me about Lean. I literally couldn’t believe it when I first heard that they’d actually copy-and-pasted the library about “additive monoids” into a library about “multiplicative monoids”. It reminded me of some old arguments with Arnold Neumaier on the n-Café.

    • CommentRowNumber15.
    • CommentAuthorRichard Williamson
    • CommentTimeJan 9th 2019
    • (edited Jan 9th 2019)

    Regarding the fifth critique, that’s interesting, Mike! I have the opposite view :-). I think I made pretty much the same point as this on the HoTT list some months ago, and am happy to see someone influential like Hales make it. For me, something like duck typing, or at least something like the way Go does it, is essential if dependently typed languages are ever to be adopted widely; it is just too cumbersome otherwise. All modern languages widely used in industry are moving in the direction of cutting down explicit typing as much as possible; even Java, which is famous for being long-winded and is committed to being backwards compatible, is innovating in this direction.

    I mean I would never in the world write anything that actually does anything in a dependently typed language currently. It is useful for mathematics only because it is more expressive. If some way could be found to incorporate dependent types in a nice way in a language like Python, that would be revolutionary.

    But there are people, I think Max for instance, and I might once have thought myself to be such a person (!) before I had used these languages in earnest in concrete work, who I think genuinely feel that duck-typed languages are fundamentally wrong in some sense! So it is a challenge :-).

    • CommentRowNumber16.
    • CommentAuthorMike Shulman
    • CommentTimeJan 9th 2019

    To add dependent types to Python you’d have to first add types to Python. To me there is a world of difference between Python and Go: Go is a real (statically) typed language, and its implicitly instantiated interfaces can I think be regarded as just a syntactic variation on a simple form of typeclass inference that saves the user some typing (not a trivial consideration, of course). If that’s what Hales wants then I’m not as bothered. (Although it does seem to me that calling this something non-“nominal” is a misnomer, since it still does rely on names to detect instantiation, it just happens to be the name of the method rather than the name of the class.) Reducing the amount of type annotations the user is required to give explicitly is one thing; eliminating the types themselves is something totally different.

    • CommentRowNumber17.
    • CommentAuthorRichard Williamson
    • CommentTimeJan 9th 2019
    • (edited Jan 9th 2019)

    Yes, I think Hales had in mind something like the Go approach. In my experience, there is an enormous amount of typing saved in Go compared to some other languages.

    As you know, but just for the benefit of others, Python is typed, just at runtime rather than compile time. It is conceivable that one can have duck dependent types as much as duck non-dependent types. And, as I wrote on the HoTT list, I even think this fits with the philosophy of category theory: we care about structure, not about exactly what something is. If it walks like an abelian group and quacks like an abelian group, it might just as well be an abelian group!

    • CommentRowNumber18.
    • CommentAuthoratmacen
    • CommentTimeJan 9th 2019

    Re #13:

    The other criticisms about hierarchies for algebraic structures and the induced projection chains are well-taken, and this problem is not specific to Lean, but is endemic in every proof assistant that I am aware of.

    I don’t understand something about Hales’ criticism #10 (meaningless parameterization). He says once you’re using “bundled” structures, you can’t get back a “parameterized” structure. Why not? What’s wrong with this? (Other than syntax; I don’t know Lean syntax.)

    wantsParameterized α (M:magma_with_parameter α) := ...
    
    hasBundled (M:magma_bundled) := wantsParameterized (M.α) {|op := M.op|}
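
    (Roughly, in Lean 3-style syntax and keeping these hypothetical names, rather than anything from the actual library, the sketch would be something like:)

    structure magma_with_parameter (α : Type) := (op : α → α → α)   -- carrier as a parameter
    structure magma_bundled := (α : Type) (op : α → α → α)          -- carrier bundled inside

    def wantsParameterized (α : Type) (M : magma_with_parameter α) : α → α → α := M.op

    -- recovering the parameterized form from a bundled structure
    def hasBundled (M : magma_bundled) : M.α → M.α → M.α :=
    wantsParameterized M.α (magma_with_parameter.mk M.op)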

    And I don’t understand criticism #7 (ugly projection chains) at all. So what if it’s ugly when you turn off syntactic sugar? How often would you do that?

    • CommentRowNumber19.
    • CommentAuthorMike Shulman
    • CommentTimeJan 9th 2019

    Python already has “runtime dependent types” – e.g. if you want a vector, just check at runtime that your list has the right length. “Runtime types” are just syntactic sugar for the programmer checking properties of an untyped input value; real type-checking has to be static to give compile-time guarantees about run-time behavior.
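
    A minimal sketch of what that looks like, with made-up names:

    def as_vector(xs, n):
        """View xs as a 'vector of length n'; the 'type' is just a run-time check."""
        if len(xs) != n:
            raise TypeError(f"expected length {n}, got {len(xs)}")
        return xs

    as_vector([1, 2, 3], 3)    # fine
    # as_vector([1, 2], 3)     # fails at run time, not at compile time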

    • CommentRowNumber20.
    • CommentAuthorMike Shulman
    • CommentTimeJan 9th 2019

    BTW, I also like Go very much, and I agree that they’ve succeeded in eliminating a lot of the annoying boilerplate from other languages while maintaining a useful (statically typed) core. There are still some warts, but in most cases I can understand that eliminating them would have made the language overly complicated for its goal of being a C-like systems language.

  1. Will reply properly later; I was just wondering if you were thinking of Rust rather than Go in the last sentence of #20? At least I don’t really think of Go as a systems language; it has no manual memory handling, for instance.

    • CommentRowNumber22.
    • CommentAuthorjonsterling
    • CommentTimeJan 9th 2019
    I didn't have much to say about the critique of nominal vs structural typing, because I wasn't sure how to write down my opinion in a way that would be clear & precise. I cringe a little bit at this distinction (it reminds me of similar pseudo-distinctions that are popular in amateur programming-languages communities, which are not the same as the ones that scientists in the area study), because I feel it mystifies multiple distinct lines of variation between type theory and set theory.

    There is on the one hand the untypedness of set theory, the fact that you can express in set-theoretic language things which break abstraction and don't make sense; in the context of type theory, Nuprl would be a good example of a system which inherits this characteristic of set theory. It can be a bit tempting, and it is useful if one is trying to think of type theory as a logic for proving things about computational gadgets, but it seems to be mostly a hindrance when trying to formalize mathematics.

    The above has little to do with the actual "structural vs nominal" distinction, which is manifest in the difference between (for instance) whether defining two of the "same" datatype or mathematical structure yields things which are definitionally interchangeable, or not; or whether a structure which is more constrained can be silently transformed into a structure which is less constrained. Type theory can be made to support both of these styles without resorting to untypedness; these different styles have to be employed simultaneously, because in some cases you want something to be abstract, and in others you don't. Part of the problem is that we don't yet have a good scientific understanding of abstraction in a type-theoretic setting -- unlike our understanding of abstraction in the context of programming languages, which is further along. The other point is that using the nominal style makes it easier for an elaborator to resolve things (like, what does the "+" operator mean?), whereas the structural style can make this either difficult or impossible. Anyway, let me just re-emphasize that these questions have nothing to do with typed vs untyped. Type theory can be *more* structural than set theory.

    ---------

    In response to atmacen's comment about the ugly projection chains, I think the issue is that hiding these in the notation is not always enough unless there is some kind of definitional coherence rule which allows two versions of the "same" projection chain to be treated as the same. What often happens in languages (like Coq and Lean) where these gnarly things are hidden behind the pretty printer and the elaborator is that you get some error message like "Foo is not equal to Foo": morally, they are clearly the same, but underneath the notation is hidden something that is actually different. In many cases you can do some proof to see that they cohere, but this is still a defect. It's for this reason that we need to study modules and algebraic hierarchies directly as part of type-theoretic language, and not just blithely elaborate them to records or nested sigma types, and then call it a day. We have a lot of work to do! All of it is plausible, but hard to realize -- and difficult to convince people that it is worth it, as with all aspects of the theory & practice of proof assistants.
    • CommentRowNumber23.
    • CommentAuthorMike Shulman
    • CommentTimeJan 9th 2019

    Maybe “systems” is the wrong word, but I think Go is definitely designed to be a C-like language rather than a Python-like language.

    Thanks for the explanation Jon!

    • CommentRowNumber24.
    • CommentAuthoratmacen
    • CommentTimeJan 9th 2019

    Re #20:

    Python already has “runtime dependent types” – e.g. if you want a vector, just check at runtime that your list has the right length.

    I would say that’s run-time refinement types, not run-time dependent types. I’m not sure what run-time dependent types would even be. For a warm up, what would run-time polymorphism be?

    “Runtime types” are just syntactic sugar for the programmer checking properties of an untyped input value;

    This is not quite right. In a dynamically typed language, there are additional properties of values that you wouldn’t have in an untyped language. For example, in machine code, there’s no such thing as an int or float in memory, just a “word”. It’s meaningless to say that a word is an int (or that it’s a float). Any word can be informally understood as an int or float. A dynamically typed language would introduce a true distinction between int and float in a way that can be tested at run time. A statically typed language would probably introduce the distinction, but you probably wouldn’t be able to test it at run time, by default. (Also, in an intrinsic static type system, the distinction would not be a property.)
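
    For example, in Python the tag travels with the value and can be inspected at run time:

    x, y = 3, 3.0
    assert type(x) is int and type(y) is float   # the tag is part of the value
    assert not isinstance(x, float)              # 3 and 3.0 are genuinely different values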

  2. I agree with Matt’s comments in #24.

    I’m not sure what run-time dependent types would even be.

    Exactly! This is why I think it would be revolutionary to come up with something like this :-). I wish type theorists would engage with this kind of question. I think it would bring us much closer to Voevodsky’s original aims of making formalisation/dependent types practical for the average mathematician/programmer.

    • CommentRowNumber26.
    • CommentAuthorRichard Williamson
    • CommentTimeJan 9th 2019
    • (edited Jan 9th 2019)

    Here is a quick sketch off the top of my head of how the natural numbers as an inductive type, and the corresponding definition of addition, might look in some Python-like syntax.

    class NaturalNumbers:
        def __init__(self):
            0 : self
            s : self -> self
    
        def add(self, m):
            if m == 0:
                return self
            elif m == s(l):
                return s(self.add(l))
    

    I have written the elif like this to make it clear, but I’d expect to be able to drop it, maybe writing it as follows instead.

        def add(self, m):
            if m == 0:
                return self
            inductive s(l):
                return s(self.add(l))
    

    Or maybe even the following.

        def add(self, m):
            if m == 0:
                return self
            return s(self.add(inductive))
    

    Later on, I might define an abelian group class with an ’add’ method as well, and if I write

    def double(n):
        return n.add(n)
    

    I would expect this to work whether I put in a natural number or an element of an abelian group.

    • CommentRowNumber27.
    • CommentAuthoratmacen
    • CommentTimeJan 9th 2019

    If some way could be found to incorporate dependent types in a nice way in a language like Python, that would be revolutionary.

    The way is called refinement typing. If the starting language lacks static types, it’s more specifically called extrinsic typing. Nuprl has an extrinsic, dependent type system on top of an untyped lambda calculus. I don’t think there are any conceptual hurdles before someone could make what you’re asking for. Most dependent type theorists seem to look down their nose at extrinsic typing though.
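
    As a very rough sketch of the extrinsic/refinement idea in today's Python (all names made up): a “type” is just a predicate on untyped values, a dependent family of types is a function returning such predicates, and membership is checked rather than declared.

    def is_nat(x):
        return isinstance(x, int) and x >= 0

    def vec(n):          # the family of refinements "lists of naturals of length n"
        return lambda xs: isinstance(xs, list) and len(xs) == n and all(is_nat(x) for x in xs)

    def check(refinement, value):
        if not refinement(value):
            raise TypeError("value does not inhabit this refinement")
        return value

    check(vec(3), [1, 2, 3])    # accepted
    # check(vec(3), [1, 2])     # rejected at run time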

  3. Regarding #27, I think that what I have in mind is slightly different. But I’m not completely sure, as I don’t think I know much about refinement/extrinsic types. Have a look at #26 and see if it is the same kind of thing.

    • CommentRowNumber29.
    • CommentAuthoratmacen
    • CommentTimeJan 9th 2019
    Re #26, I’m afraid I have no idea what you’re getting at. I started listing things that are unexpected to me about your code, but it was basically everything.
    • CommentRowNumber30.
    • CommentAuthorRichard Williamson
    • CommentTimeJan 9th 2019
    • (edited Jan 9th 2019)

    Re #29, OK, thanks for taking a look. Maybe this is why I consider that doing what I have in mind would be revolutionary :-).

    It is not supposed to compile in Python today, of course. The point is to illustrate how duck typing might work in a setting where we have dependent types (see especially the double method). If it is the use of things like 0 : self that is bothering you, which of course is not the way one instantiates a class today, that is very deliberate: I think it is clear that the ordinary object-oriented paradigm has to be expanded. One could still mix and match with today’s syntax, e.g. one might have

    class ConstantList:
        def __init__(self, x):
            empty : self
            append : self -> self
            self.x = x
    

    Here the self.x = x is today’s syntax; the other two lines are the new syntax that allows for dependent (or at least inductive) typing. If one removes those two lines, this will compile in today’s Python and is the correct way to do it.

    • CommentRowNumber31.
    • CommentAuthoratmacen
    • CommentTimeJan 9th 2019
    Also, warning: I don’t have much experience with Python, and it was some time ago.
    • CommentRowNumber32.
    • CommentAuthoratmacen
    • CommentTimeJan 9th 2019
    I think if you explain your thinking behind the features you’re adding and using, I can be more helpful. But maybe you should start a different thread.
    • CommentRowNumber33.
    • CommentAuthorRichard Williamson
    • CommentTimeJan 9th 2019
    • (edited Jan 9th 2019)

    Hehe, I’m just quickly sketching something in the hope that it might intrigue one or two of the type theorists reading, and be a slight help in encouraging this to be taken more seriously. It’s not something I have time to think about properly at the moment, so I’m happy to stop here.

    (It doesn’t really matter, but in case it reassures someone, Python is one of the languages I use professionally; the differences with today’s Python are deliberate :-)).

    I should say, though, that the double method does compile in today’s Python and will work perfectly well if n is an object which has an add method, and this is precisely my point. We should not need a whole different paradigm, we should be able to build on ones that already work well.

    Take plain functional languages like Haskell, for instance. Nobody I’ve heard of writes REST APIs in Haskell, but Haskell has had an enormously positive influence on the languages in which REST APIs are written. Monads, for instance, have been incorporated into Java (without being called that) in a way that makes sense for Java (for example the Optional and Stream types); Java has had first-class functions since Java 8; and so on. I think something similar is what eventually needs to happen with dependent types, and indeed my guess is that either it will one day happen, or else dependent types will never catch on in mainstream programming languages in their present incarnation and will be replaced in the end by something else.

    • CommentRowNumber34.
    • CommentAuthoratmacen
    • CommentTimeJan 10th 2019

    Re #22:

    in the context of type theory, Nuprl would be a good example of a system which inherits this characteristic of set theory. [breaking abstractions]

    Could you give an example of what you mean?

    • CommentRowNumber35.
    • CommentAuthoratmacen
    • CommentTimeJan 10th 2019

    Re #22:

    In response to atmacen’s comment about the ugly projection chains, I think the issue is that hiding these in the notation is not always enough unless there is some kind of definitional coherence rule which allows two versions of the “same” projection chain to be treated as the same.

    Maybe that’s the “no diamonds” restriction from Hales’ criticism #9. But even with diamonds, Lean should in that case show some projections so that it isn’t ambiguous, and that doesn’t mean it looks as bad as having no sugar at all.

    For the rest of your response to me, it sounds like you’re extrapolating from Hales’ criticisms to your own, much tougher problems that Lean happens to not solve.

    Coq’s “Foo is not equal to Foo” errors show that Coq’s pretty printing is ambiguous. This seems extremely Coq-specific, and they probably ought to fix it.

    • CommentRowNumber36.
    • CommentAuthoratmacen
    • CommentTimeJan 10th 2019

    Regarding “duck typing”, I don’t like that term. I’m pretty sure it’s a buzzword that’s not actually meaningful. The gist is that it’s some kind of type system that doesn’t make pedantic distinctions. But you do actually need to eventually say what type compatibility is, and there’s this whole rich “type theory” that studies this, and doesn’t ever talk about ducks. :)

    So for example, I have no idea what “duck dependent typing” is imagined to be.

    • CommentRowNumber37.
    • CommentAuthorDavidRoberts
    • CommentTimeJan 10th 2019

    @Richard #17

    If it walks like an abelian group and quacks like an abelian group, it might just as well be an abelian group!

    But a ring walks like an abelian group and quacks like one; yet sometimes one really wants the ring, and sometimes one really wants the underlying additive group. More richly structured objects (like a perfectoid ring, for instance) act like many things, and one has to be precise about which structure is being used at any one time. (Not to hijack the thread, but I feel this is a failing in Mochizuki’s writeup of IUTT: he uses all sorts of idiosyncratic mental crutches instead of making the various forgetful functors in play explicit.)

    • CommentRowNumber38.
    • CommentAuthorRichard Williamson
    • CommentTimeJan 10th 2019
    • (edited Jan 10th 2019)

    I wouldn’t describe duck typing as a buzzword; it is a standard term that I think accurately conveys what is happening. But ‘dynamic typing’ would be an alternative. I think #26 gives an indication of how it could look in practice for inductive types. I have no idea how to build an entire theory around it. :-) It may not be so much a type-theoretic question as a programming-language-design one.

    Re #37: if I understand what you have in mind, I think dynamic typing would handle this as well as static typing, just in a different way. Python today handles subclasses just fine, for example :-).

    • CommentRowNumber39.
    • CommentAuthoratmacen
    • CommentTimeJan 10th 2019
    Re #38,
    So by “duck typing”, you just mean “dynamic typing”?
  4. Yes. Type checking happens at run time.

    • CommentRowNumber41.
    • CommentAuthorAli Caglayan
    • CommentTimeJan 10th 2019

    I think dependent types in Haskell will need to happen before Python developers even hear about it. Supposedly it will be finished by next year…

    • CommentRowNumber42.
    • CommentAuthoratmacen
    • CommentTimeJan 10th 2019

    Re #17:

    If it walks like an abelian group and quacks like an abelian group, it might just as well be an abelian group!

    Richard, what’s your idea of an abelian group in a dynamically typed language? In particular, what happens to the carrier? This is not a wise guy question; module systems (support for structured types) and dependent type systems are closely related. Based on your answer, I might be able to recommend something.

    • CommentRowNumber43.
    • CommentAuthorAli Caglayan
    • CommentTimeJan 10th 2019

    Wouldn’t you need some sort of identity type to encode the notion of abelian groups in Python? Is a better question not “How do we implement inductive families”?

    • CommentRowNumber44.
    • CommentAuthoratmacen
    • CommentTimeJan 10th 2019

    Re #14:

    Criticism #8 (“Structure depends on notation”) is something that’s always rankled me about Lean. I literally couldn’t believe it when I first heard that they’d actually copy-and-pasted the library about “additive monoids” into a library about “multiplicative monoids”. It reminded me of some old arguments with Arnold Neumaier on the n-Café.

    There’s definitely something wrong if they’re compelled to copy the library for groups just to change the name of the operator. But I don’t see how this could be the fault of the technologies implemented, rather than a stupid misuse of them. In some shallow sense, a mathematical development inevitably depends on the notation: it needs to refer to things somehow. But it’s a strong convention to regard the name choice as meaningless, so that a copy using a different name is “obviously” redundant. Why didn’t they follow the convention?

    • CommentRowNumber45.
    • CommentAuthoratmacen
    • CommentTimeJan 10th 2019

    Re #43, I figured identity “types” would just be dynamically testing identity. I’m trying to get at what Richard expects from dependent types beyond refinement types. Decidable (actually, testable) equality types are available with dynamic refinement types.

    • CommentRowNumber46.
    • CommentAuthorMike Shulman
    • CommentTimeJan 10th 2019

    Why didn’t they follow the convention?

    I never completely understood; you’ll have to ask them. It’s true that this is somewhat about the use of the technology rather than the technology itself, but as a practical matter a programming language is hard to separate from its standard libraries, and it could be that they were pushed towards this choice by (mis)features of Lean itself.

    Coq’s “Foo is not equal to Foo” errors show that Coq’s pretty printing is ambiguous. This seems extremely Coq-specific, and they probably ought to fix it.

    I think Coq has gotten better about this in recent releases. I used to get a lot of “Foo is not equal to Foo” errors that boiled down to universe mismatches, and now it seems that Coq detects when the problem is a universe mismatch and automatically displays the universe parameters in such a case without my having to turn on Set Printing Universes.

  5. fix dead link

    Anonymous

    diff, v14, current

  6. Added the Liquid Tensor Experiment as another notable example of math formalized in Lean. Added a link to the Lean community/Mathlib to the references.

    Anonymous

    diff, v15, current

  7. Small change to previous edit

    Anonymous

    diff, v15, current

    • CommentRowNumber51.
    • CommentAuthorUrs
    • CommentTimeMar 27th 2023

    added these pointers:

    • Lean for the Curious Mathematician 2020, online event (Jul 2020) [web]

    • Lean for the Curious Mathematician 2022, Brown University (Jul 2022) [web]

    • Lean for the Curious Mathematician 2023, Düsseldorf Univ. (Sep 2023) [web]

    diff, v16, current

    • CommentRowNumber52.
    • CommentAuthorzskoda
    • CommentTimeApr 21st 2023

    Lean is presented in part of the ICM talk

    diff, v17, current

  8. linkfix

    Floris van Doorn

    diff, v18, current