I feel like this should be the analogue of the first few chapters of the HoTT book for the HoTT wiki. The aim should be to list all type-theoretic things here, giving the appropriate definitions and ideas with each article.
This will allow us to remark concretely, in pages about, say, synthetic homotopy theory, on type-theoretic results.
I have begun remodelling this page but I don’t really know what it will look like. I have made an effort to keep the old page in order to eventually incorporate it.
I remember when I was trying to understand what people meant by “type theory” a few years back, I found it a little frustrating because it often felt circular.
Now I feel the sentence
A type theory is a formal system in which every term has a ‘type’, and operations in the system are restricted to acting on specific types.
may have a similar problem. The issue is that once one knows what “type theory” means, it seems so utterly obvious that it is hard to recall how it was when one did not know it. But if one really does not know yet, then “every term has a type” is less of an explanation than the thing that should be explained. I believe.
And in set theory: what is a set?
analytic versus synthetic – analytic theories give you more than you want by defining concepts in terms of something else, whereas synthetic ones only define implicitly so you need to get a feel for the whole theory.
No, it is easily possible to write a decent explanation of type theory for outsiders.
Do you think we have such a decent explanation yet at type theory?
Certainly one can augment a formal theory with natural-language descriptions, which may perhaps resonate with other areas of the reader’s understanding.
That first paragraph at type theory gets closer to giving an idea, I am sure. It could easily be improved further.
I won’t do it (anymore). But I suggest to those who think of themselves as type theorists to go through the exercise of taking a step back and trying to see how to explain their field to the educated outsider. This is beneficial for both sides.
I always find myself lost in a fog of confusion over the most basic aspects of such systems. I mean, what is it to judge something, a, as being of type A?
I don’t get the sense, e.g., from our extrinsic/intrinsic type discussion, that the experts are all sitting there with a crystal-clear conception that they could choose to write down if they could be bothered.
It’s one thing to think about mathematics type-theoretically, but things must begin with my ability to judge something as being of a kind. And then I feel guilty for not having read vast swathes of Husserl.
But of course we mustn’t let ourselves be paralysed, and I’m very grateful you write things such as motivation for sheaves, cohomology and higher stacks.
So the idea is that in an age where kids are taught an idea of what it means to have a bool, an int, a string; the experts are left speechless, grappling for words? Lost the ability of colloquial explanation? Unable to make conceptual sense of their profession in the broad scheme of human endeavour, reduced to mechanically manipulating symbols?
You can’t seriously mean that?!
I am not sure what the “no” in #6 was in response to. I meant exactly the same thing: we explain set theory even though it defines sets synthetically rather than analytically, and we can do the same for type theory. I’ve attempted to write many such explanations myself, on blogs and in papers.
Personally, I don’t find it useful to descend into Martin-Löf-ian philosophy of what it means to “judge” something. Type theory is a formal system whose “basic act” is called “judging”, just as first-order logic is a formal system whose basic act is called “proving”.
What I mean is, “judging” and “proving” are undefined terms that are given meaning by the formal system, just like “type” and “set”.
Mike makes a very good point in #11: we “pretend” set theory is synthetic, and then defer the proof that “something exists” to the future.
I feel that - as most people have been exposed to set-theoretic intuitions - the best way to present type theory is to explain the difference between types and sets.
Something is in a set if it belongs to it. Once we assume that a set exists whose elements have certain properties, we can then begin to act in a more synthetic manner, e.g. we use these assumed properties to prove theorems about it. At some point someone should prove that such a set exists using some set-theoretic axioms (if one is so inclined, of course).
Type theory begins with types. We are not told what is in a type, but only how it works: what suffices to construct an element of it (introduction rule), how to use it (elimination rule), etc. In programming parlance we would say that we are given its interface, and not its specific implementation. Thus, type theory is - in one sense - about specifying how and when one is allowed to “interact” with mathematical objects.
I don’t know if I could explain that to a general audience, but to those coming from Computer Science - such as myself - this idea was more than natural. So, if we are ready to believe that little kids these days have a working grasp of int and bool, then maybe that is where one should begin.
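To make the “interface, not implementation” idea in #13 concrete, here is a minimal Haskell sketch (my own illustration, not from the thread; the names B, trueB, falseB, ifB are made up). The type B is given only by its introduction forms and an elimination form; we are never told what is “inside” it.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A bool-like type specified purely by its "interface": two ways to
-- construct an element (introduction) and one way to use it (elimination).
-- The representation (here, a Church encoding) is incidental and hidden.
newtype B = B { elimB :: forall r. r -> r -> r }

trueB, falseB :: B          -- introduction forms
trueB  = B (\t _ -> t)
falseB = B (\_ f -> f)

ifB :: B -> a -> a -> a     -- elimination form: the only way to "look at" a B
ifB b th el = elimB b th el
```

For instance, `ifB trueB "sweet" "sour"` yields `"sweet"`; a client of this interface can prove things about B using only these rules, without ever inspecting an element directly.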
Re #13: I think a lot of what you wrote here is very nice! I just wanted to say that, thinking of Java, say, a type would in my view correspond more to a class than an interface. In particular, I would say that an interface in Java does not have introduction rules.
I would say that a fundamental principle behind type theory, that should be accessible to a layman, is that things only make sense in context. Suppose for example that I know that when something is a fruit, I may ask whether it is sweet or sour. Then if I have never seen a lime before and you say “I have a lime”, I can say “What is a lime?”, and you can say “It is a fruit”, then I know that I may ask if it is sweet or sour. In type theory, there is always an answer to “What is X?”, and that answer is useful. Once I know that a lime is a fruit, I can infer many other things about it, without asking you to look at the lime. Whereas in set theory, one can answer “I have no idea what a lime is, it was just given to me by somebody”, and the only way to find out anything about your lime is to look at it itself and explore it. This is close to what lambdabetaeta was getting at in #13. It is of course closely related to the way things are in category theory: we cannot know objects in themselves by exploring them directly, we only know what category they belong to, and infer things from that.
I think I would only begin to know what it is “to have a bool” if you gave me some context. Am I a student writing a program and specifying a kind of input? Am I a child about to answer whether or not it was I who stole the pie? Am I an experimenter reporting whether my detector has fired?
Two questions:
(1) Type theory is useful for reasoning in which of the following: mathematics, computer science, physics, ordinary life? Elsewhere?
(2) Where it is useful, do we adopt an intrinsic or an extrinsic typing approach? See intrinsic and extrinsic views of typing.
Does it make sense to have a single term judged to belong to two different types? Husserl noted the difference between mathematical entities and physical objects, that the latter are never wholly given. Reasonable then perhaps to opt for extrinsic typing in ordinary life, where I can ask for more information about ’that X’, i.e., engage in type refinement. An easier case for intrinsic typing can be made in mathematics.
I think part of the difficulty is because there are so many different kinds of type theory and so many different intuitions for it. Intrinsic/extrinsic typing is just one example. Any given explanation or motivation will apply to some people and some type theories, but probably not all of them. I think several times while learning type theory I’ve had this experience of having an “ah-hah” moment, going to Dan Licata and saying “I understand now, in type theory things are thus-and-so!” only to have him reply “well, yes, sometimes, but sometimes we do it a different way too.”
We are not told what is in a type, but only how it works
I don’t think I’d agree. Don’t the introduction rules precisely tell us “what is in a type”? Maybe that’s what Richard #14 was saying too.
I think I would only begin to know what it is “to have a bool” if you gave me some context. Am I a student writing a program and specifying a kind of input? Am I a child about to answer whether or not it was I who stole the pie? Am I an experimenter reporting whether my detector has fired?
Well, on the one hand, maybe you are proving the point of #15 that “things only make sense in context”. But on the other hand I don’t think I understand in what way your answers to “what is a bool” would differ in those three cases. Isn’t a “bool” precisely the abstraction of what is common to all these sorts of situations?
Just to muddy the waters a little more, I’ve recently begun pondering whether “bidirectional type-checking” can be taken seriously as a philosophical aspect of the ontology of type theory, rather than just a technically convenient algorithm for manipulating syntax. The idea here is that there are two ways in which a term can belong to a type, called “inferring/synthesizing” and “checking”. Some terms “synthesize” their type, which means that if you know the term you can deduce algorithmically what type it has (or, at least, one “canonical” type that it has). Other terms don’t have any such canonical type, but if we are given a type we can “check” whether they have that type.
Lots of terms synthesize a type. In particular, when we assume a variable, we associate it to a type, and it thereafter synthesizes that type. Then, for instance, if f synthesizes a function type like A → B, and we can check that a has type A, then the application f a synthesizes the type B. Similarly, if p synthesizes a type like A × B, then fst p synthesizes type A, while snd p synthesizes the type B. Nearly all elimination forms synthesize, as do (I believe) the introduction rules of some positive types, like ℕ and bool. The main examples of terms that only check against a type are introduction forms for negative types, such as lambda-abstractions and pairs. For instance, the term λx. x describing the identity function doesn’t determine its type, but we can check that it has type A → A for any type A we want. Similarly, the term (0, 0) can be checked to have type ℕ × ℕ, but can also be checked to have type Σ(x:ℕ) F(x) for any F such that F(0) = ℕ (plenty of such F are definable by recursion on x).
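The synthesize/check division can be sketched as a tiny bidirectional checker for simply typed lambda terms. This is my own toy illustration of the idea, not code from any actual proof assistant; the constructor and function names are made up.

```haskell
-- Base type TBool and function types; a tiny term language.
data Ty = TBool | Ty :-> Ty deriving (Eq, Show)

data Tm
  = Var String        -- synthesizes: its type comes from the context
  | App Tm Tm         -- elimination form: synthesizes
  | Lam String Tm     -- introduction form of a negative type: only checks
  | Ann Tm Ty         -- annotation: turns a checking term into a synthesizing one
  deriving Show

type Ctx = [(String, Ty)]

-- "Synthesize": deduce a canonical type from the term alone, if one exists.
synth :: Ctx -> Tm -> Maybe Ty
synth g (Var x)   = lookup x g
synth g (App f a) = case synth g f of
                      Just (s :-> t) | check g a s -> Just t
                      _                            -> Nothing
synth g (Ann e t) = if check g e t then Just t else Nothing
synth _ (Lam _ _) = Nothing   -- a bare lambda has no canonical type

-- "Check": given a candidate type, verify the term against it.
check :: Ctx -> Tm -> Ty -> Bool
check g (Lam x e) (s :-> t) = check ((x, s) : g) e t
check _ (Lam _ _) _         = False
check g e t                 = synth g e == Just t   -- mode switch: fall back to synthesis
```

Here the identity term `Lam "x" (Var "x")` synthesizes nothing, yet checks against `TBool :-> TBool` and equally against `(TBool :-> TBool) :-> (TBool :-> TBool)`: one term, many types, with each typing judgment still determined by syntactic analysis.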
Of course, one can add annotations to such terms that do determine their type, and often one regards the un-annotated versions as simply “syntactic sugar” in which the “implicit arguments” are inferred by the (human or computer) reader before being given any meaning. But I think I can also imagine an ontology in which objects like λx. x and (0, 0) are meaningful objects before we check that they belong to any particular type, and despite the fact that we can check that they belong to many different types. Of course, in categorical semantics every morphism comes with a specified codomain, but I don’t think that’s inconsistent with such a view; we simply say that the objects we interpret into the semantics are “terms together with a type they can be checked to have”.
If one goes even further in this direction one arrives at “computational type theory” where the untyped terms belong to a language with an untyped notion of computation, and the type(s) that a term belongs to are determined by its behavior under that computation. For instance, in that world a term like “if true then 0 else (8 4)” belongs to ℕ because it evaluates to 0, which belongs to ℕ, despite the fact that syntactically the original term contains nonsense like 8 4 (by which I mean “8 applied to 4”, which is nonsense because 8 is not a function and so cannot be applied to anything). Computational type theory is problematic from a categorical perspective because it’s very closely tied to one particular model (the PER model constructed from the untyped computations). But right now I think one can go “partway there” with bidirectional type-checking and still believe in terms that can be judged to have multiple types, but nevertheless maintain the principle of such judgments being determined by syntactic analysis of the term rather than its behavior and thus preserve all the same categorical models.
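The behavioral notion of typing can itself be sketched in a few lines of Haskell (again my own toy illustration, with invented names): an untyped term language where “belonging to ℕ” is a fact about how a term evaluates, not about its syntax.

```haskell
-- An untyped language: numerals, functions, conditionals, and application
-- (which may well be nonsense, e.g. a numeral applied to a numeral).
data U = N Int | Fn (U -> U) | If Bool U U | Ap U U

eval :: U -> U
eval (If True  t _) = eval t
eval (If False _ e) = eval e
eval (Ap f a)       = case eval f of
                        Fn g -> eval (g a)
                        _    -> Ap f a     -- stuck: applying a non-function
eval v              = v

-- "Belongs to Nat" is defined behaviourally: does the term evaluate to a numeral?
inNat :: U -> Bool
inNat t = case eval t of N _ -> True
                         _   -> False
```

Here `inNat (If True (N 0) (Ap (N 8) (N 4)))` holds, even though the term contains the nonsense subterm `Ap (N 8) (N 4)`; that subterm on its own evaluates to a stuck state and so is not in ℕ.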
Anyway, I think the main reason I mention this is to make the point that the world of type theory contains so much variety that there’s no single way to “explain what type theory is”. Which, of course, doesn’t mean we shouldn’t write such explanations! Just that we should be aware, and perhaps make the readers aware, that no such explanation can be universally applicable or definitive.
Does it make sense to have a single term judged to belong to two different types?
For me, the answer to this at a philosophical level is clearly “yes”.
At a practical level, of course, many type theories do not include the possibility for this, but some do, e.g. those with coercion.
Re #19, sorry we’ve rather hijacked your thread. This should be an nLab discussion on type theory. It would be good to give there some account of the kinds of type theory out there, and then this page could point out which apply to HoTT.
Yes, the HoTT wiki page should certainly focus on homotopy type theories, but even among those there is lots of variation. For instance, there is a “computational higher type theory” of Harper et al. where the typing of untyped terms is determined by their behavior, and even aside from that one may make various choices about things like universe cumulativity that also affect whether a term can belong to multiple types. Book HoTT is only one type theory among many that are used for HoTT nowadays.
Maybe this would be more appropriate at a page like Book HoTT? I thought this page was intended to list all the type theories that have been used for HoTT.
(By the way, the software has actual redirects, you don’t need to make one page that say explicitly to “See” another one.)