Defining judgments on strings is certainly possible. It would be some kind of generative grammar. If algorithmic, it would be some kind of parser. But that's not what people usually do. Isn't type theory hard enough without simultaneously worrying about parsing ambiguities? :)

It may help if I note that, by contrast with some earlier comments, I see a big difference between working at the level of strings and working at the level of a set of atoms, precisely because a string is at the same primitive ontological level as that at which one actually writes/types when working in a type theory, i.e. it is all ultimately just writing in the ordinary human sense.

I should definitely clarify that! I mean that it is not a mathematical object in any pre-existing sense of mathematical as a formal system. It is mathematical in the deeper/human sense of defining and studying some entity according to certain rules, and everything expressed *in it* is mathematics. This is how it should be for a foundation that can stand alone!

But if that is too philosophical, one can also take the following working definition: the formal language can be defined in a programming language. That is unarguably at least as rigorous as defining things on paper/on a screen at the formality level of, say, #19.
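To make that working definition concrete, here is a minimal sketch (my own toy example, not the particular type theory discussed in this thread) of defining a formal language purely at the level of strings: judgments are literal strings, and a new judgment string can only be produced by invoking one of the given rules.

```python
# Judgments are strings; the only way to get one into `derived` is via
# a declaration or a rule whose premises have already been derived.

derived = set()  # the judgments derived so far, as literal strings

def declare_primitive_type(name):
    # axiomatic declaration: the string `name` is judged a type
    derived.add(name + " type")

def form_product(a, b):
    # rule: from "A type" and "B type", derive "(A * B) type"
    if a + " type" in derived and b + " type" in derived:
        derived.add("(" + a + " * " + b + ") type")
    else:
        raise ValueError("premises not yet derived")

declare_primitive_type("N")
form_product("N", "N")
```

The point of the sketch is only that nothing beyond strings and human-level agreement about characters is presupposed; the rules themselves are ordinary program code.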

It seems to me contradictory to say that what you’re doing is not a mathematical object and yet is completely formal and rigorous. I am not familiar with any non-mathematical notion of rigor that could be applied here.

Re #29: thanks very much for your reply, Mike! I find it very interesting, because I think the objections you raise are not in fact obstructions to understanding type theory as in #2. This is perhaps most clear in the following.

I still don’t see #2 as lower level or more general, though; in order to make it precise you need to say what you mean by “string”, which essentially means specifying a set like $Atom$.

The main aspect of what I was saying was that a set like $Atom$ is not needed to say precisely what a type theory is. I am taking a string as a primitive concept, at the level of human agreement: literally consecutive characters on a page (or a screen). This is why I say that what I am doing is lower level, and has no a priori mathematical significance: I am literally working at the level of writing symbols on a page/screen. In particular, I was not trying to define type theory as a mathematical object in some existing foundations: in that case, I completely agree that #19 is better.

Moreover, I think that substitution, variables, and all that are *not* needed to understand type theory formally. Of course they are being used in #19, but if one works at a lower, completely syntactic level (manipulating strings), as I do in #2, they are not needed, and in my opinion one obtains a considerable simplification ontologically.

Regarding #5, I don’t see how to regard $x : A, B(x) : Type, y : B(x) \vdash C(y) : Type$ even as well-formed raw syntax. All the typing declarations on the left side of a turnstile must be *variable* declarations, and $B(x)$ is not a variable. Nor, in this type theory, are we dealing with universes yet, so “$: Type$” can’t appear on the left side of a turnstile either.

As above, the objections here are not fundamental to understanding what a dependent type theory is. If we write instead $x : A, y : B(x) \vdash C(y) : Type$ or whatever one’s favourite way of writing it is, the meaning is the same. Note that I referred to this kind of syntax as shorthand/notation (‘we may write’), i.e. exactly what one chooses to write is not significant. The only thing that is really important is that one defines how to form new pieces of syntax given previous ones: whichever shorthand one chooses just has to allow one to understand why what is meant is legitimate.

…do in fact describe actual primitive rules that must be present in type theory. The first is the rule

$\frac{ \quad } {\Gamma \vdash \mathtt{A} \, type}$ which I mentioned in #17. Every element of the set $Atom$ must come with such a rule, otherwise there is no way to use it.

As above, there is no set $Atom$ in #2. What is a type in #2 is defined inductively (allowing primitive/axiomatic declarations), and is defined purely by being legitimate syntax which is a valid judgement. Once it is judged a type, it can be used. And again, I consider this (radical?) simplicity to be a much clearer way to proceed.

Similarly for the other rule: it is bookkeeping needed in a certain way of saying what a type theory is, but is not fundamentally necessary; and is not necessary if one proceeds as in #2.

In conclusion, I would like to strongly counter the objection that what is in #2 is not precise or formal. It is *completely* formal and rigorous, and completely precise: it is easy to write down code which will only allow one to make syntactical constructions/validity judgments as in #2, using one’s favourite language’s notion of a string.
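As a sketch of that last claim, one can write code in which the only way to obtain a validity judgment is through the permitted constructions. The particular judgment strings and the pairing rule below are my own illustrative choices, not the exact constructions of #2:

```python
# Validity judgments as strings; a judgment enters the set only via an
# axiomatic declaration or a rule whose premises were already derived.

judgments = set()

def axiom(judgment):
    # primitive/axiomatic declarations, e.g. "N type" or "zero : N"
    judgments.add(judgment)

def pair(a, A, b, B):
    # rule: from "a : A" and "b : B", derive "(a , b) : (A * B)"
    if f"{a} : {A}" in judgments and f"{b} : {B}" in judgments:
        judgments.add(f"({a} , {b}) : ({A} * {B})")
    else:
        raise ValueError("premises not derived")

axiom("zero : N")
pair("zero", "N", "zero", "N")
```

Attempting a construction whose premises have not been judged valid simply fails, which is the sense in which the code "will only allow" the syntactical constructions of #2.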

To summarise, what I would say (and this is what I was trying to convey in #9) is that type theory is ultimately a formal language, it is something that one writes down. What one cares about are judgements that a certain term belongs to a certain type, and one has rules for how to make such judgements. From #2, one can always produce a written down derivation that is the same as one which one would produce as in #19. The difference is that everything in #2 is completely explicitly part of the same syntax; there are no pre-existing notions of ‘variable’, ‘atomic type’, and so on.

Oh, I forgot to reply to this:

do you call the judgments that appear “above the bar” in the application of a rule *premises*, or *hypotheses* (or either/or)? And do you call the judgment “below the bar” the *conclusion* of that rule? Unless I’m advised differently, I think I would tend to say “premises” and “conclusion” (of the rule-application).

I think that “premises” and “conclusion” is also what I’ve been taught for the judgments above and below the bar of a rule, although I still have to stop and think every time I use them to make sure I’m not mixing them up with the “hypotheses” or “antecedents” on the left-hand side of a turnstile and the “consequent” on the right-hand side of a turnstile.

Right. It’s a choice whether to define all the raw types+terms together like that in one inductive definition, or to define the raw types and the raw terms separately by a pair of mutual inductive definitions. At this point in my life I am gravitating towards the former, partly because it seems to more naturally support universes a la Russell where the same syntax is used for a term belonging to a universe and for its corresponding type.
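The two styles can be sketched side by side. This is my own minimal illustration with invented labels, not the actual grammar of MLTT:

```python
# Style 1: a single inductive definition of raw expressions.
from dataclasses import dataclass

@dataclass
class Expr:
    label: str
    children: tuple = ()

# With Russell-style universes in mind, the same Expr node can later be
# judged either a term (belonging to a universe) or a type.
nat = Expr("Nat")
pi = Expr("Pi", (nat, Expr("Var", ("x",))))

# Style 2: raw types and raw terms as mutually inductive classes.
@dataclass
class Ty:
    label: str
    children: tuple = ()  # may contain both Ty and Tm nodes

@dataclass
class Tm:
    label: str
    children: tuple = ()  # may contain both Tm and Ty nodes

nat2 = Ty("Nat")
zero = Tm("zero")
id_ty = Ty("Id", (nat2, zero, zero))  # identity type over Nat
```

In the second style, a Russell universe is awkward because a term of the universe and the type it names live in different classes, requiring an explicit coercion between `Tm` and `Ty`; in the first style they are literally the same node.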

I guess #19 then is laying out the well-formed strings in the object syntax – or at least describing in a conventional shorthand the construction trees. Then it’s sort of left to the rules (for deriving judgments) to sort out what is what (which linguistic items correspond to terms, which to types, etc.).

It’s not just you!

I also appreciate that Todd started this thread.

Maybe for outsiders like myself —who were not trained to think carefully about syntax, let alone to treat it as a mathematical object— it might help to use some notational conventions that distinguish more clearly between syntax and meta-syntax. I’m imagining things like different fonts (or font colors) for metavariables, for substitution or for the stronger equality symbol that Mike used in #32.

Personally, it took me quite some time until I understood that the notation for substitution was not part of the syntax of type theory. And still now I’m not always sure if certain symbols are part of the “object syntax”, part of the “judgement syntax” or part of the even higher meta-level language that’s used to talk about judgements. But maybe it’s just me who gets confused by these things.

As you might guess, that’s also why I started this thread: to prepare myself for the Initiality Project. The Martin-Löf page is one of the most developed pages we have that treats the parts of type theory that CwF’s directly interact with.

I’m glad we’re having this conversation right now, by the way, because in the Initiality Project we’ll want to explain all these aspects of syntax as clearly as possible for newcomers/outsiders, since the goal is to be as expository as possible for a wide audience.

Thanks for those clarifications, Mike; that helped me.

So about those brackets. There seems to be a common (though not universal) convention whereby round parentheses are used for actual syntax and square brackets are used for meta-level operations on syntax. So for instance when we write $\lambda (x:A). a$, the parentheses are part of the syntax: pre-parsing they are actually tokens in a string, post-parsing they are just part of the notation for a node of an abstract syntax tree with the label $\lambda$ and the three children $x$ (a variable), $A$ (another syntax tree), and $a$ (another syntax tree). Similarly, when I wrote $\mathtt{B}(a)$, the parentheses are part of the syntax: $\mathtt{B}$ is an atomic type which is declared as part of the signature to depend on one argument of type $\mathtt{A}$ (say), and $\mathtt{B}(a)$ is a notation for a node of an abstract syntax tree with the label $\mathtt{B}$ and one child $a$ (another syntax tree).

Similarly again, $apply(f,a)$ is likewise a notation for a node labeled by $apply$ and with two children $f$ and $a$. But it gets confusing because often we simplify our notation by writing $apply(f,a)$ as just $f(a)$ (to match common practice in mathematics) or $f a$ (to match common practice in functional programming and $\lambda$-calculus). This is fine as long as we don’t get confused. In particular, my notation $\mathtt{B}(a)$ was *not* a simplification of $apply(\mathtt{B},a)$; the latter doesn’t even make sense, because $\mathtt{B}$ is not well-formed syntax on its own. Remember there are no universes at the moment. If there *were* a universe, called $Type$ say, then *instead* of an “atomic type” $\mathtt{B}$ depending on one argument of type $\mathtt{A}$, we could instead postulate an atomic *term* $\mathtt{B}$ of type $\mathtt{A} \to Type$, and then we could indeed write $apply(\mathtt{B},a)$ to obtain an element of $Type$, which we could then coerce (explicitly or implicitly) to an actual type. The HoTT Book uses Russell universes, meaning that the latter coercion is implicit, and an exhaustive tower of such, meaning that every type belongs to some universe, and thus we can use the latter mode almost exclusively. But they are distinct.

Now something that we often do with syntax is substitute for variables, and the common notations for this involve square brackets. I usually write $M[a/x]$ for the result of substituting the term $a$ for the variable $x$ in the term $M$; other notations include $M[x/a]$, $[a/x]M$, and $M[x\mapsto a]$. This is not part of the actual object-level syntax: it doesn’t represent any node of an abstract syntax tree. (Well, there are type theories in which “explicit substitution” is an object-level syntactic construction, but let’s keep it simple and not go there right now.) Instead it’s an operation that acts *on* such trees. For instance, if we substitute $apply(f,a)$ for $x$ in $\lambda(x:\mathbb{N}). (x+x)$, we get $\lambda(x:\mathbb{N}). (apply(f,a) + apply(f,a))$. We can thus write

$(\lambda(x:\mathbb{N}). (x+x))[apply(f,a) / x] = \lambda(x:\mathbb{N}). (apply(f,a) + apply(f,a))$

as long as we remember that this $=$ denotes a *syntactic equality of terms*, which is distinct from (and stronger than) both the object-level equality type and even the judgmental equality judgment.
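One can sketch substitution as a meta-level operation on syntax trees, which makes it vivid that it is not itself a node of any tree. This is naive substitution on an invented tree representation; the capture-avoidance needed around binders like $\lambda$ is deliberately left out to keep the sketch short:

```python
# Abstract syntax trees as labelled nodes; substitution is a Python
# function acting ON trees, not a label appearing IN trees.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str
    children: tuple = ()

def var(x):
    return Node("Var", (x,))

def subst(tree, a, x):
    """Compute tree[a/x]: replace occurrences of the variable x by the tree a."""
    if tree.label == "Var" and tree.children == (x,):
        return a
    return Node(tree.label,
                tuple(subst(c, a, x) if isinstance(c, Node) else c
                      for c in tree.children))

fa = Node("apply", (var("f"), var("a")))   # apply(f, a)
body = Node("plus", (var("x"), var("x")))  # x + x
result = subst(body, fa, "x")              # apply(f,a) + apply(f,a)
```

Note that the `==` one would use to compare `result` against the expected tree is exactly the *syntactic equality of terms* mentioned above: structural equality of trees, nothing to do with the object-level equality type or the judgmental equality judgment.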

Under these notational conventions, there’s nothing for $B[x]$ to mean. We can’t substitute something without knowing what we’re substituting it for. I think that probably when Todd wrote $B[x]$ he had something in mind more like my $\mathtt{B}(x)$. However, there are a couple of meta-level meanings that are sometimes given to notation like $B[x]$. One is that we might choose to write $B[x]$ to mean “a term which contains $x$ as a free variable”, allowing us to then write $B[a]$ instead of $B[a/x]$. More precisely, one could in this case consider $B$ to be a meta-variable for a “term with a hole”, and the notation $B[-]$ indicates the term obtained by substituting something in for that hole. We can thus regard a “term with a hole” as a presentation of a meta-level *function* from terms to terms, and $B[a]$ as the meta-level *application* of this function to the term $a$. If we go even further and represent terms using higher-order abstract syntax, then we could use the same notation $B[a]$ for the literal meta-level application of a function to a term, to distinguish it from the object-level notations like $f(a)$ for $apply(f,a)$.

Finally, regarding “depends on”: I would include the variable $x$ in the phrase if and only if the variable $x$ actually appears freely in the term under discussion. So, as Michael said, a term $B$ of type $A\to Type$ could be said to “depend on $A$”, as there is no variable $x$ around. On the other hand, a type $B$ in which $x$ occurs freely, and for which $x:A \vdash B \, type$ (or perhaps $x:A \vdash B : Type$) is valid, can be said to depend on $x:A$ because the variable name matters. We can get from the first to the second by application, and the second to the first by abstraction. In all of this, $B$ is a metavariable standing for a term which could contain free variables.

However, when I said that the atomic type $\mathtt{B}$ “depends on $\mathtt{A}$”, I meant something different from both of them: that the signature declares that $\mathtt{B}$ depends on one argument of type $\mathtt{A}$. The resulting syntax allows us to write $\mathtt{B}(a)$ whenever $a:\mathtt{A}$, and in particular if $x:\mathtt{A}$ is a variable then we can write $\mathtt{B}(x)$ to obtain a type that “depends on $x:\mathtt{A}$” in the second sense above. But $\mathtt{B}$ itself is not yet a well-formed piece of syntax, it’s an element of the signature which only becomes well-formed in the construction $\mathtt{B}(a)$. So in particular $\mathtt{B}$ does not contain any free variables, nor does it belong to a function type.

]]>Regarding notation: I was indeed using a different typeface $\mathtt{B}$ to distinguish atomic/axiomatic types (which belong to a set specified as part of the signature, and have no “internal structure” from the perspective of the theory) from arbitrary types (i.e. abstract syntax trees such that $B \, type$ is either valid or hoped to be valid). Although technically I believe *both* are metavariables: $\mathtt{B}$ is a metavariable ranging over atomic types, while $B$ is a metavariable ranging over syntax trees.

(More later about $B(x)$ and $B[x]$…)

I think one thing blocking my understanding was thinking primarily in terms of “pure” MLTT, the free structure generated from a discrete collection of (I suppose unparametrized) types. I kept thinking that insofar as the whole subject is about dependent/parametrized types, there should be an introduction rule that introduced a parametrization.

This is a good point. Although the rules *permit* dependent types, most of them do not *introduce* dependency where it was not already present. I believe the basic ways that type dependency arises in MLTT are:

- Axiomatic/atomic types with dependency
- Equality/identity types: $x:A, y:A \vdash (x=y) \, type$
- Universes: $X: Type \vdash el(X) \, type$

When I first read #2, I found it vague and confusing; but now after writing #19, I can see that #2 is expressing something of the same idea (though less precisely). In particular, the phrases “any string you like” and “permitted to write” seem vague to me and I wasn’t sure what they meant. Now I think I see that by “any string you like” you meant to refer to the elements of a set like the $Atom$ in #19, and that by “permitted to write” you meant that it is a well-formed string, i.e. parses to an abstract syntax tree in the sense of #19. Is that right? I still don’t see #2 as lower level or more general, though; in order to make it precise you need to say what you mean by “string”, which essentially means specifying a set like $Atom$.

Regarding #5, I don’t see how to regard $x : A, B(x) : Type, y : B(x) \vdash C(y) : Type$ even as well-formed raw syntax. All the typing declarations on the left side of a turnstile must be *variable* declarations, and $B(x)$ is not a variable. Nor, in this type theory, are we dealing with universes yet, so “$: Type$” can’t appear on the left side of a turnstile either.

Regarding #7, the two lines above “Binary product types”:

For any context $\Gamma$, if $A$ is an atomic type, $\Gamma \vdash A : \mathrm{Type}.$

If $x : A$ is a typing declaration in $\Gamma$, then we have $\Gamma \vdash x : A.$

do in fact describe actual primitive rules that must be present in type theory. The first is the rule

$\frac{ \quad } {\Gamma \vdash \mathtt{A} \, type}$ which I mentioned in #17. Every element of the set $Atom$ must come with such a rule, otherwise there is no way to use it. It sounds like maybe you would rather allow elements of $Atom$ (what you call “strings” in #2) that don’t necessarily come with such a rule; there seems little point to that, since then they cannot appear in any valid judgments, but anyway in that case you could just redefine $Atom$ to be the subset of atomic types that *do* come with such rules.

The other is the “variable rule”

$\frac{ (x:A) \in \Gamma} {\Gamma \vdash x:A }$ Both of them are necessary in order for any typing derivation to “get off the ground”.
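A sketch of how these two rules let derivations get off the ground: with no premises to discharge, they are the only rules applicable to an empty derivation. The set `Atom` and the context below are my own toy data, not the signature of any particular theory:

```python
# The two base rules as predicates on (context, subject) pairs; every
# larger derivation must bottom out in applications of these.

Atom = {"A", "B"}  # atomic types, each implicitly carrying its formation rule

def type_formation(ctx, A):
    # rule with no premises: for any context Gamma,  Gamma |- A type
    # whenever A is an element of Atom
    return A in Atom

def variable_rule(ctx, x, A):
    # variable rule:  Gamma |- x : A  whenever (x : A) appears in Gamma
    return (x, A) in ctx

ctx = [("x", "A")]
type_formation(ctx, "A")        # Gamma |- A type: holds
variable_rule(ctx, "x", "A")    # Gamma |- x : A: holds
variable_rule(ctx, "y", "A")    # no declaration of y: fails
```

Neither function consults any previously derived judgment, which is precisely what makes them usable as the leaves of a derivation tree.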

Well, let me just say that all the responses have been useful to me, in one way or another. In some cases, pertinent points had to be made by more than one person in more than one way, but together they formed a kind of Indra’s net in which the multiple reflections eventually clarified matters to me.

It’s probably not too important at this point, but I think one thing blocking my understanding was thinking primarily in terms of “pure” MLTT, the free structure generated from a discrete collection of (I suppose unparametrized) types. I kept thinking that insofar as the whole subject is about dependent/parametrized types, there should be an introduction rule that introduced a parametrization. I began to get unconfused on this point after pondering #23, even though aspects of this point were brought up earlier in the discussion.

I think maybe an example or two on the page would help future readers.

Speaking of notation… I’d seen $B[x]$ (as opposed to $B(x)$) in Seely’s old paper and thought nothing of the notational discrepancy, and still do regard it as an inessential difference of notational convention. Am I wrong? What do I need to be aware of here?

More interesting to me are the subtle changes in font, which evidently carry meaning. It seems the ordinary $B$ in mathmode is being used for a meta-variable, whereas $\mathtt{B}$ “is an axiomatic type that is specified, as part of the “signature”, to depend on one copy of the axiomatic type $\mathtt{A}$”. Assuming I’m reading things correctly here, certainly that kind of subtlety could slip past unnoticed (or even regarded as some personal idiosyncrasy or affectation) unless carefully pointed out.

Re #15: Thanks Alexis! Yes, this is exactly what I had in mind.

Re #19: I do not see this as vastly different from #2. Indeed, I would say there are the following differences (there may be more).

- What is in #2 is lower level, working directly with syntax
- What is in #2 is more general
- What is in #2 has a distinction between terms and types built into it

While it doesn’t really matter, I am not sure which parts of #2 and #5 would be considered to be misleading or beside the point? When I read #17, #22, and #26, for instance, I see a lot of confusion ensuing, and a higher level of sophistication. What is in #2 is very simple, and from that perspective, I gave in #5 what I would consider to be very clear answers to Todd’s questions, clearer to my mind than the alternative answers that have been suggested. I don’t mind at all if the page is rewritten to use something like #19, but it would be interesting to discuss precisely what is deficient about #2 (I did not write down all permissible syntax constructions, e.g. for forming dependent product types, but I considered it evident how to continue the pattern of #2 to do so) and #5.

Maybe some of the confusions stem from blurring syntax with meta-syntax as happens with notations like $B(a)$, $B[a]$, $\mathrm{apply}(B,a)$?

For instance, the nlab article uses the notations $B(a)$ and $\mathrm{apply}(B,a)$, which as far as I can tell mean different things (correct me if I’m wrong): $\mathrm{apply}(B,a)$ is part of MLTT syntax, and the $B$ and $a$ appearing in it are metavariables standing for terms of MLTT. On the other hand, $B(a)$ is not part of MLTT syntax, but meta-syntax standing for a term of MLTT that contains a free variable $a$. (Or is $a$ a meta-variable standing for an arbitrary MLTT term? so that $B(a)$ stands for a term that contains $a$ as a subterm? Later in the article a general term $t$ is substituted for $a$ in $B(a)$, which supports this view.) Assuming my interpretation of $B(a)$ is correct, then strictly speaking the $B$ in $B(a)$ cannot stand for a term of MLTT. Rather it would stand for a mapping that transforms terms to terms. This would be a meta-object, not part of the terms of MLTT. Todd is using the notation $B[a]$ and I suspect he means the same as $B(a)$?

Another related aspect that I have found somewhat confusing when reading the literature on MLTT is the slightly different use of the expression “$B$ depends on…” (or “$B$ is parametrised by…”).

I see Mike use it as “type $B$ depends on type $A$” (actually he writes $B$ is dependent on $A$). I assume this translates to the formal judgement $\Gamma \vdash B\colon A \to \mathrm{Universe}$ (I think this is also how the HoTT book uses the “depends” terminology). But I have also seen it used differently as “type $B$ depends on $x:A$”, which translates to $\Gamma, x\colon A \vdash B\colon \mathrm{Universe}$. One can pass from one formulation to the other using lambda abstraction resp. application, but conceptually there is a difference between saying $B$ is dependent on $A$ and saying $B$ is dependent on $x:A$. (I think of it as the difference between saying “$f$ is a function” and “$f(x)$ is a function of $x$”.) Anyway, I hope these remarks are not too much off track. I’m no expert in type theory, just someone trying to learn it mostly on my own.

Yes, that’s exactly the idea.

Oh, okay – maybe it’s beginning to percolate down to my brain. So we could take something mundane like the MLTT of categories (which roughly speaking is the syntax for the theory of categories in the doctrine of locally cartesian closed categories), where you have a type $O$ and a dependent type $M(x, y)$ where $x, y$ are variables of type $O$. That dependency is expressing the part of the signature of the theory which in ordinary parlance deals with the pairing $\langle dom, cod \rangle: Mor \to Ob \times Ob$. Does that seem more on the right track?

I was under the impression that Martin-Löf dependent type theory (which I’ll abbreviate to MLDTT) was well-defined enough so that one could speak of *the* theory,

Yes, I think that’s basically right. (The usual abbreviation is just MLTT; I don’t think any important non-dependent type theories are attributed to Martin-Löf.) What I meant was, the page as currently written describes *some* dependent type theory (with, as I see now, some omissions), but it’s not clear to me that this type theory coincides with MLTT.

Perhaps the best way to clarify things would be for me to rewrite the page the way I would write it, and then you could read that and see what questions remain. Unfortunately it might be a little while before I have the time for such an exercise, so let me try to respond quickly.

I’m not actually sure how “type theorists” use the word “atomic”; maybe a “real” type theorist can weigh in. It could be that some of them use it with an implication of non-dependency, which maybe was what the author of the page had in mind by distinguishing atomic from “parametrised”. Maybe “axiomatic” is a better word for what I meant? In any case my point was that whatever words we use for it, the important *distinction* is between types (possibly including dependent types) that are not assigned any “meaning” by the theory (hence could be interpreted by anything at all in a model) and type forming *operations* (including the nullary case) that are assigned a particular meaning (and hence must be interpreted by something with some universal property). If we think of syntax as a free model of some (essentially) algebraic theory, then the atomic/axiomatic types are the *generators* of the free structure (akin to the generators of a free group), while the operations/rules are the *operations* of the theory (akin to the group multiplication in a free group).

So does this mean that we can consider something like $(A \times B)(x)$ as being a well-formed expression [where I’m using $A, B$ as meta-variables for which we can substitute general types], and, I presume importantly, that this expression is considered an atomic type?

No. (-:

We can’t “introduce new dependencies” willy-nilly. There’s no way to start with a type that doesn’t depend on $x$ and “make it depend on $x$”. Each piece of syntax comes with information about what arguments it can be applied to, which includes the case of types depending on variables. For instance, take the “cases” construct that eliminates out of a sum type: we can only write it as $cases(s,c,c')$ with three arguments, we aren’t allowed to write $cases(s)$ or $cases(s,c,c',c'')$ or just $cases$. Similarly, if $\mathtt{B}$ is an axiomatic type that is specified, as part of the “signature”, to depend on one copy of the axiomatic type $\mathtt{A}$, then we can only write $\mathtt{B}(a)$ (where $a$ is of type $\mathtt{A}$); we aren’t allowed to write $\mathtt{B}(a,a')$ or $\mathtt{B}(a)(a')$ or just $\mathtt{B}$. This is what the rule

$\frac{\Gamma \vdash a:\mathtt{A}}{\Gamma \vdash \mathtt{B}(a) \, type}$ means: here $\mathtt{B}$ is a *specific* axiomatic dependent type, not a metavariable representing an arbitrary type, and the presence of this rule comes from a “declaration in the signature” that $\mathtt{B}$ depends on one copy of $\mathtt{A}$. A different signature for $\mathtt{B}$ would give rise to a different rule for forming types involving it, for instance if we declared $\mathtt{B}$ to depend on one copy of $\mathtt{A}$ and one copy of $\mathtt{C}$ then the rule would *instead* be

$\frac{\Gamma \vdash a:\mathtt{A} \quad \Gamma \vdash c:\mathtt{C}}{\Gamma \vdash \mathtt{B}(a,c) \, type}$

But for any given signature, each axiomatic type has only *one* formation rule, and there’s no general rule allowing us to “add parentheses” to an arbitrary type.
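The "exactly one formation rule per axiomatic type, read off from the signature" idea can be sketched as code. The concrete signature below, and the convention of passing the types of the arguments rather than the arguments themselves, are my own simplifications:

```python
# Each axiomatic type carries, in the signature, the list of argument
# types it must be applied to; its single formation rule is generated
# from that declaration, so too few or too many arguments are rejected.

signature = {"A": [], "B": ["A"], "C": []}   # B depends on one copy of A

def form_atomic(name, arg_types):
    # the formation rule generated by the signature: exactly the
    # declared arguments, no more and no fewer
    declared = signature.get(name)
    if declared is None or arg_types != declared:
        raise ValueError(f"{name} applied to wrong arguments")
    if arg_types:
        return f"{name}({','.join(arg_types)}) type"
    return f"{name} type"

form_atomic("B", ["A"])   # well-formed: B applied to one A
# form_atomic("B", [])    # rejected: bare B is not well-formed syntax
# form_atomic("B", ["A", "A"])  # rejected: too many arguments
```

In particular a bare `B` never becomes a judgment here, mirroring the point above that $\mathtt{B}$ on its own is an element of the signature rather than well-formed syntax.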

More later…

And now let me interact with Mike’s responses (I’m very glad to be hearing from you, Mike! – thanks for such a careful and thoughtful response). Please forgive any lawyer-like tone in what I might say; I sometimes feel as if I’m interrogating (well, I guess I am in a way).

Also, this is really my response to Mike’s #17. Mike’s #19 makes me question to some degree what I’ll be writing here and whether I’ve managed to confuse myself further, but let me just go ahead and expose those possible confusions.

I believe this article is intended to give a complete specification of *a* theory.

I was under the impression that Martin-Löf dependent type theory (which I’ll abbreviate to MLDTT) was well-defined enough so that one could speak of *the* theory, modulo the issue of treating equality extensionally versus intensionally. It certainly appears that the present article intends to describe the *extensional* version. Well, I should be more careful since I guess HoTT in all its variants are forms of MLDTT. So I guess what I mean is that I thought there was an acknowledged “core” of MLDTT that the page means to describe, which would include dependent products, dependent sums, and identity types, and maybe nothing more [such as positing inductive types]. Should that view of mine be amended?

Also I thought there was indeed something missing. I’ve scanned the page a few times and never saw any rule that looks anything like

$\frac{\Gamma \vdash a:\mathtt{A}}{\Gamma \vdash \mathtt{B}(a) \, type}$ where a dependent type gets introduced in the conclusion of a derivation. And that to me was missing something important, “tautology” or not. (Again, I’m trying to pay hard-nosed attention to the *formalism as written*, and not to intuitions along the lines of “tautology” or “it goes without saying that…”)

Oh, by the way: it’s been such a long time since I’ve used the language (that I dimly remember from sequent calculus) that I may have forgotten. In a derivation (is that what you call it in type theory?), do you call the judgments that appear “above the bar” in the application of a rule *premises*, or *hypotheses* (or either/or)? And do you call the judgment “below the bar” the *conclusion* of that rule? Unless I’m advised differently, I think I would tend to say “premises” and “conclusion” (of the rule-application).

writing just $\Gamma \vdash s=t:A$; writing $(s=t):A$ looks like you are judging “$s=t$” to be a term of type $A$

Thanks for that warning!

instead saying $\Gamma \vdash A \, type$

Ah, good; that comes as a slight relief. As long as the page is dealing with what I’m calling “core MLDTT”, and in particular not with universes, I’d feel better having it this way.

I would not regard $\top,\bot$ as “atomic”, just as in propositional logic I would not regard them as part of the “theory”. In the latter case they are nullary connectives; in type theory they are nullary type forming operations. I would not use the word “variable” for atomic types either, to avoid confusion with the “variables” that appear in a context and also the “meta-variables” (see below) that are used when specifying the theory. Sometimes people say “axiomatic” instead of “atomic”.

Okay, fair enough; thanks. I agree now that there was something old-fashioned about the way I was trying to use “atomic” there. (It reminds me: I don’t like how I sometimes see people use the term “literal”, as in both a variable $x$ and its negation $\neg x$ are called “literals” in discussions of for example disjunctive normal forms in the propositional calculus.) I’m happy to go along with however way the type theorists use the word “atomic”.

I guess I could explain it to myself [oh, but I’ll have to take this back in a minute by something further you say! so take the following merely as a first approximation] by saying that “atomic type” means a type “without connectives, including the nullary ones”, “without identity/equality” (as a type-forming operation), and “without quantifications”, by which I mean without dependent sums and products in the formula.

But: the current page does appear to make a clean distinction between “atomic types” and “parametrized types”. And according to what you say for example here:

I think what is meant here (at least, what *should* be meant) is that in a dependent type theory, some of the “atomic types” can be *dependent* on other atomic types, or on other types built from them.

it sounds at the very least that something important to get straight on is being sloughed over. Well, I guess we’ll get there in time.

But it seems one really has to pay *very* close attention! You said “atomic types can be dependent… on other types built from (other atomic types)”. Ah-hah! So does this mean that we can consider something like $(A \times B)(x)$ as being a well-formed expression [where I’m using $A, B$ as meta-variables for which we can substitute general types], and, I presume importantly, that this expression is considered an *atomic* type? To put it in slightly picturesque form: each time we freshly introduce a dependency in type-formation, that new type $B(x)$ is deemed atomic?

Well, I don’t know that I want to go too much farther down this path of pursuing what is meant by “atomic type”, because I’m beginning to wonder if that vocabulary isn’t simply an excellent way of engendering further confusion in the reader. I think the most I might venture at this point is my second approximation: the idea that a type $X$ could be considered “atomic” as long as the *last* application used to form $X$ is not one of the connectives, or formation of an identity type $id(s, t)$, or a “quantification”. In particular, if we start with any type expression $A$ and form the expression $A(x)$, then that denotes an atomic type. But that wouldn’t quite fit the scheme on the page, where it says “For any [valid] context $\Gamma$, if $A$ is an atomic type, [then] $\Gamma \vdash A$ [is a valid judgment]”. It would have to be a little more subtle than that in the case of *dependent* atomic types. (Hope it’s clear what I’m saying.)

Well, I’m sure you’ll let me know if I misinterpreted you here.

I don’t know that that usage is standard however.

I had been under the impression that any expression $t: A$ was called a type declaration, declaring that $t$ is of type $A$. Although maybe that would be considered silly: if you’re doing everything correctly, you never need to “declare” any such thing – only at the outset where you start with a variable term do you say, “okay, I declare that this $x$ here is going to be of type $A$”.

When being careful about this, people sometimes include a separate judgment for “being a valid context”, say “$\Gamma\,ok$”

Something about this I like. It almost feels like an interaction with a proof assistant (which I’ve never used, so I mean how I imagine a proof assistant behaving): you type in an expression, and the proof assistant lets you know if it’s happy with what you wrote.
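Since a proof assistant came up: here is a small hedged sketch of what that interaction looks like in Lean 4 (again with hypothetical names `A`, `B`). The elaborator in effect checks something like “$\Gamma\,ok$” as you extend the context, accepting or rejecting what you type.

```lean
-- A type and a type family dependent on it (hypothetical names).
axiom A : Type
axiom B : A → Type

-- Accepted: x : A is in the context before B x mentions it,
-- so the extended context is valid.
example (x : A) (y : B x) : B x := y

-- Rejected: if the declarations were reversed, `x` would not yet be
-- in scope when `B x` is written, and the context itself is invalid:
-- example (y : B x) (x : A) : B x := y   -- error: unknown identifier 'x'
```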

However, another approach is to regard the type- and term-validity judgments as ranging over judgments with possibly-invalid contexts. In that case, the notion of valid context does not have to be defined mutually-inductively; instead it can be defined simply on its own *after* the (mutual) type and term judgments have been stated. And (assuming the rules are stated correctly) we can prove, after the fact, that in any *valid* judgment $\Gamma \vdash A\,type$ or $\Gamma \vdash t:A$, it happens that $\Gamma$ is a valid context in this sense.

Okay, I think I sort of follow this – at least I easily believe something like that could be done. Not sure which approach I’d find more compelling or approachable. If the latter, then this remark definitely deserves to be made.
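For concreteness, the clauses defining context validity might be sketched as follows (a hedged reconstruction in the notation used here, not a quote from the page): the empty context is valid, and a valid context may be extended by a variable whose type is well-formed over it.

```latex
% Context validity: in the first approach these rules are stated
% mutually with the type judgment; in the second, they are stated
% afterwards, on their own.
\frac{}{\cdot \;\; ok}
\qquad\qquad
\frac{\Gamma \;\; ok \qquad \Gamma \vdash A \, type}{\Gamma, x\colon A \;\; ok}
```

The difference between the two approaches is then only *when* these clauses are given, not what they say.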

Before I try to respond to the next part of what you wrote, an understanding of which looks critical in dealing with some of the questions I’m asking, I’ll move to the last part:

this judgment is usually written without a colon as “$A\, true$”, with “$true$” just part of the judgment syntax (like the colon and the turnstile)

Okay, great. Might $true$ (or something along such lines) be a type in one of the HoTT elaborations of MLDTT? The usage here does remind me of the discussion surrounding propositional types.

(A naive and casually put question, surely – I admit I’m not trying hard to think it through on my own.)

There are really two issues here. The first is one of notation: your conclusion would generally be written as simply $\Gamma, x: A \vdash B: Type$.

Okay, I find myself starting to get confused by what you’re saying in that paragraph.

Maybe it would be wise or productive to assume or pretend that I didn’t know what I was talking about when I wrote the thing you are responding to here. At the time of writing, what I was trying to do is imagine what sort of rule (that seemed to me missing from the page) would allow one to have as its conclusion a judgment of the form $\Gamma \vdash B(x)\, Type$. (Because, after all, there are rules on the page like “If $\Gamma, x: X \vdash A(x)\, Type$, then $\Gamma \vdash (\prod x: X) A(x)\, Type$” – that premise had to come from *somewhere*.)

Now I think you probably already answered that question earlier in your response: that type theorists have a rule

$\frac{\Gamma \vdash a:\mathtt{A}}{\Gamma \vdash \mathtt{B}(a) \, type}$

and so, assuming $x: A$ is a typing declaration in $\Gamma$, we may infer $\Gamma \vdash x: A$ and hence we could derive $\Gamma \vdash B(x)\, Type$ as a special case of this rule. Are there other subtleties that need pointing out?
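Spelling out the two steps I have in mind (a hedged sketch, writing the usual variable rule explicitly rather than leaving it implicit):

```latex
% Step 1: the variable rule gives x its declared type;
% Step 2: the dependent-type rule quoted above, instantiated at a := x.
\frac{x\colon A \in \Gamma}{\Gamma \vdash x : A}
\qquad\text{and then}\qquad
\frac{\Gamma \vdash x : A}{\Gamma \vdash B(x) \, type}
```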

Again, thanks for taking time out of your day to answer questions!

]]>