What confused me about it at first was not so much using an example to denote an equivalence class, but the consequences for the rest of the type system of not explicitly passing to an equivalence class. Every open term actually refers to an equivalence class, and while you can always ask if an object is self-related by a PER (membership), there is generally not a meaningful membership question for an equivalence class. The result is that Nuprl is mostly like extensional type theory, but with clever additions based on a theory of membership which is AFAIK completely unique.

Philosophy gets itself very confused by this quotienting: how to switch between the groupoid of an equivalence relation and the set of equivalence classes. So you say ‘this direction’ while pointing to a line, and it seems hard to realise that you can point to a specific line, yet as existing in a type where identity is parallelism.

Wow, I really missed a good conversation (starting at around #60) in which to plug Nuprl-style systems. Better late than never? Conveniently, I had put up a bunch of stuff on prototype CLF last year, which is my work-in-progress Nuprl-style system. Informally, I distinguish between “members” and “elements” of types. Members are (relatively) intensional, while elements are extensional. Quotient types have the same members as the original type, but typed functions can distinguish members only when they denote different elements. (That is, they need to respect typed equality.) Membership reasoning is also allowed, subject to the constraint that a membership predicate is a typed function, so it needs to respect equality of some “prior” type. This does not preclude materialish subset types $B \coloneqq \{x:A\,|\,P[x]\}$, so that given $a\,:\,A$, $a \in B$ means $P[a]$.
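As a toy illustration of the member/element distinction (this is not CLF itself; the names `respects`, `same_parity`, and the parity quotient are mine), one can check whether a function on members respects the element-level equality:

```python
from itertools import product

def respects(f, members, eq):
    """A 'typed function' must send members that denote the same element
    (i.e. eq-related members) to equal results."""
    return all(f(x) == f(y) for x, y in product(members, repeat=2) if eq(x, y))

# Quotient of int by parity: every integer is a member, but two members
# denote the same element exactly when their parities agree.
members = range(-3, 4)
same_parity = lambda x, y: x % 2 == y % 2

is_even = lambda x: x % 2 == 0   # respects the quotient's equality
magnitude = lambda x: abs(x)     # does not: 1 and 3 denote one element

print(respects(is_even, members, same_parity))    # True
print(respects(magnitude, members, same_parity))  # False
```

So `magnitude` is a perfectly good function on members, but it is not a typed function out of the quotient.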

I put some discussion on the difference from material set theory at PER theory.

Bishop-like choice principles don’t seem to pop up. I think those are due to the interaction between Bishop-style quotients and Bishop-style universal quantifiers, whose proofs aren’t required to respect typed equality.

]]>I added a mention of the axiom of extensionality.

I have no quarrel with your comments about predicates, except that I would argue that the “narrow” definition of “predicate” is the *correct* one. So I would rather teach students that all predicates respect equality, and thereby explain why certain things that seem like predicates are not actually predicates. That’s the same thing we do with functions, after all; a predicate is just a truth-value-valued function.

Yes, I did want to make quotient sets new types, and in particular to take quotients of sets that might not be types. It didn’t occur to me at the time, but now I guess that that already entails the ability to make subsets into new types as well, by quotienting by the identity equivalence relation on a subset (or equivalently by a coreflexive PER on a type).
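A rough sketch of that last observation, with all names my own: a predicate on a type induces a coreflexive PER (relate $x$ to $y$ only when $x = y$ and the predicate holds), and the subset appears as the PER's domain of self-related terms:

```python
def per_from_predicate(P):
    """Coreflexive PER on a type: relate x to y only when x == y and P(x).
    Its domain (the self-related terms) is exactly the subset carved out by P."""
    return lambda x, y: x == y and P(x)

def is_per(R, carrier):
    """Check symmetry and transitivity (a PER need not be reflexive)."""
    sym = all(not R(x, y) or R(y, x) for x in carrier for y in carrier)
    trans = all(not (R(x, y) and R(y, z)) or R(x, z)
                for x in carrier for y in carrier for z in carrier)
    return sym and trans

carrier = range(10)
R = per_from_predicate(lambda n: n % 3 == 0)
print(is_per(R, carrier))               # True
print([x for x in carrier if R(x, x)])  # [0, 3, 6, 9]
```

Quotienting by such a PER then identifies nothing and merely discards the terms outside the subset.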

> I would rather teach students the idea that *all* predicates respect equality.

The problem is that this is just not true, or at least not true until you've defined ‘predicate’ sufficiently narrowly. Take the example of the numerator of a rational number (or whether the numerator is odd, to make it truth-value-valued). Besides teaching them that all predicates respect equality, you also have to teach them why that is *not* a predicate (in the narrow sense). And to my mind, the reason *why* it's not is ultimately that it doesn't respect equality! (That's how you define a predicate on a quotient set: define a predicate on the original set, and then check that it preserves equality in the quotient set.)
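That recipe can be mimicked concretely (a sketch; the pair encoding and helper names are mine): represent rationals by integer pairs with cross-multiplication as equality, and check whether a candidate predicate on representatives descends to the quotient:

```python
# Rationals as pairs (num, den) of integers, den != 0, with
# cross-multiplication as the quotient's equality.
def eq(p, q):
    return p[0] * q[1] == q[0] * p[1]

reps = [(1, 2), (2, 4), (3, 6), (1, 3), (-1, 3)]

def descends(P):
    """P is a genuine predicate on the quotient iff equal representatives
    always receive the same truth value."""
    return all(P(a) == P(b) for a in reps for b in reps if eq(a, b))

odd_numerator = lambda p: p[0] % 2 == 1
positive      = lambda p: (p[0] > 0) == (p[1] > 0)

print(descends(odd_numerator))  # False: (1, 2) and (2, 4) are the same rational
print(descends(positive))       # True
```

So ‘has an odd numerator’ is a predicate on pairs but not on rationals, while ‘is positive’ survives the quotient.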

> I wonder whether there could be some formal “abstraction mechanism” to define new types, like $\mathbb{R}$, which we eventually definitely want to treat as a type, but foundationally we want to construct as a subset or quotient of something or other else.

Although you didn't write it down, it seems as if you wanted to make quotient sets new types. You can do that with subsets too, and indeed did, with the concept of tabulations in SEAR. But if we do that *all* of the time, then we've lost some of the automatic simplification of notation. It needs to be something that we can choose to do or not according to how we want to think about things.

> somewhere there is this idea that, pedagogically, you have to point out whenever a predicate respects equality.

I would rather teach students the idea that *all* predicates respect equality.

> For the sorts of math classes that I teach now (remedial Algebra through multivariable Calculus for engineers), I take $\mathbb{R}$ as a type, with $\mathbb{Z}$ and $\mathbb{Q}$ merely sets (subsets of, in this case, $\mathbb{R}$), while fancier sets such as $\mathbb{C}$ and $\mathbb{R}^3$ must be constructed from $\mathbb{R}$ (and so are also types). But in my Discrete Math courses from a decade ago, the only primitive type would have been $\mathbb{N}$, with everything else constructed from that.

Yes, this is an interesting issue. I wonder whether there could be some formal “abstraction mechanism” to define new types, like $\mathbb{R}$, which we eventually definitely want to treat as a type, but foundationally we want to construct as a subset or quotient of something or other else.

Re #80:

That looks very elegant. Everything has a type, but that type is a sort of ambient set out of which we may carve smaller subsets. As the ambient set is a subset of itself, it is both a set and a type; but on the other hand, not every set is a type: some of them are just subsets of something larger.

For the sorts of math classes that I teach now (remedial Algebra through multivariable Calculus for engineers), I take $\mathbb{R}$ as a type, with $\mathbb{Z}$ and $\mathbb{Q}$ merely sets (subsets of, in this case, $\mathbb{R}$), while fancier sets such as $\mathbb{C}$ and $\mathbb{R}^3$ must be constructed from $\mathbb{R}$ (and so are also types). But in my Discrete Math courses from a decade ago, the only primitive type would have been $\mathbb{N}$, with everything else constructed from that.

@Mike #79:

> Re: #76

I assume that you mean #75.

> I think that — especially for pedagogical purposes — I would prefer to come down on the side of using a language that makes all predicates respect equality.

This is probably sound. Treating quotient sets similarly to subsets would be interesting to try but more difficult for the beginner.

But notice that a lot of math at the pre-undergraduate level is written in such a way that the substitution property of equality is not taken as a logical tautology but rather as a series of more specific properties. Thus Algebra textbooks will speak of the Additive Property of Equality separately from the Multiplicative Property of Equality and in essentially the same way that they treat the Additive Property of Inequality (but the Multiplicative Property of Inequality is more complicated). So somewhere there is this idea that, pedagogically, you have to point out whenever a predicate respects equality.

FWIW, this wasn’t intended to be a rigorous presentation of any formal system, just an informal description of how such a system might look.

Looks nice, but I can’t make any mathematically informed comments/criticism.

How about this as a basic framework?

- Every mathematical object has a type, which is another mathematical object. We write $x:A$. “Being a type” is a property of mathematical objects.
- There is an operation taking every type $A$ to a type $P A$ that is its type, i.e. $A:P A$.
- The objects of type $P A$ are called subsets of $A$, and there is a specified relation $\in$ between them and objects of type $A$, which satisfies the separation axiom. Moreover, for every $x:A$ we have $x\in A$; the latter is well-typed since $A:P A$.
- There is a monoid structure $(-,-)$ on all mathematical objects, and a monoid structure $\times$ on all sets (i.e. objects whose type is of the form $P A$) that preserves types (i.e. if $A$ and $B$ are types, so is $A\times B$). If $x:A$ and $y:B$, then $(x,y):A\times B$; and if $S:P A$ and $T:P B$ then $S\times T : P(A\times B)$. Note that these are associative, so that $S\times (T\times U) = (S\times T)\times U$ (and is written $S\times T\times U$), and $(x,(y,z)) = ((x,y),z)$ (and is written $(x,y,z)$). The elements of $S\times T$ (for any sets $S,T$, including but not restricted to types) are exactly those of the form $(x,y)$ for a unique $x\in S$ and $y\in T$, and $()$ is the unique element of $\mathbf{1}$ (the unit of $\times$).
- Mathematical objects are equipped with three partial binary operations $+$, $inl$ and $inr$. The object $S+T$ is defined iff $S$ and $T$ are sets, preserves types, and if $S:P A$ and $T:P B$ then $S+T:P(A+B)$. The object $inl_T(x)$ is defined iff $T$ is a set, and if $x:A$ and $T:P B$ then $inl_T(x):A+B$ and moreover $inl_T(x)=inl_B(x)$. Similarly, $inr_S(y)$ is defined iff $S$ is a set, and if $S:P A$ and $y:B$ then $inr_S(y):A+B$ and $inr_S(y)=inr_A(y)$. For any sets $S:P A$ and $T:P B$, the elements of $S+T$ are exactly those of the form $inl_B(x)$ or $inr_A(y)$ (but not both) for a unique $x\in S$ or $y\in T$. In addition, $+$ has a two-sided unit $\emptyset$. (I suppose we could consider making $+$ associative too.)
- Mathematical objects are equipped with two partial binary operations denoted by $S,T \mapsto PF(S,T)$ (“the set of partial functions”) and application $f,x \mapsto f(x)$. The object $PF(S,T)$ is defined iff $S$ and $T$ are sets, preserves types, and if $S:P A$ and $T:P B$ then $PF(S,T):P(PF(A,B))$. If application $f(x)$ is defined, then $f:PF(A,B)$ and $x:A$ and $f(x):B$ for some types $A,B$. Moreover, for any sets $S:P A$ and $T:P B$, the elements of $PF(S,T)$ are precisely those $f:PF(A,B)$ such that if $f(x)$ is defined, then $x\in S$ and $f(x)\in T$. Partial function comprehension holds: for any property $\phi(x,y)$ relating elements $x:A$ with $y:B$ such that for any $x$ there is at most one $y$ such that $\phi(x,y)$, there exists a unique $f:PF(A,B)$ such that $f(x)$ is defined iff there exists a $y$ such that $\phi(x,y)$ in which case $\phi(x,f(x))$. We write $T^S$ for the subset of $PF(S,T)$ consisting of the total functions.
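For what it's worth, a couple of these clauses can be prototyped on finite sets (purely illustrative; none of these names are part of the proposal): flat tuples make the pairing monoid literally associative, and finite dicts model partial functions with the stated membership condition and comprehension:

```python
# Flat tuples make the pairing monoid associative on the nose:
# (x, (y, z)) and ((x, y), z) are both stored as (x, y, z).
def pair(*xs):
    out = ()
    for x in xs:
        out += x if isinstance(x, tuple) else (x,)
    return out

assert pair(1, pair(2, 3)) == pair(pair(1, 2), 3) == (1, 2, 3)
assert pair() == ()  # the unit of the monoid

# Partial functions as finite dicts; membership in PF(S, T) as in the text:
# f is in PF(S, T) iff whenever f(x) is defined, x is in S and f(x) is in T.
def in_PF(f, S, T):
    return all(x in S and y in T for x, y in f.items())

# Partial function comprehension for a functional relation phi on finite A, B.
def comprehend(phi, A, B):
    f = {}
    for x in A:
        ys = [y for y in B if phi(x, y)]
        assert len(ys) <= 1, "phi must be functional"
        if ys:
            f[x] = ys[0]
    return f

halve = comprehend(lambda x, y: 2 * y == x, range(10), range(10))
print(halve)                                    # {0: 0, 2: 1, 4: 2, 6: 3, 8: 4}
print(in_PF(halve, {0, 2, 4, 6, 8}, range(5)))  # True
```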

You get the idea. I haven’t written out quotient sets in this style yet, but you can guess how they would go; they could take as input either an equivalence relation on a set, or a partial equivalence relation on a type (both defined as a particular subset of a cartesian product).
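A sketch of how that uniform treatment might look on finite carriers (names mine): the same quotient routine accepts either an equivalence relation on a set or a PER on a type, with terms that are not self-related contributing no class:

```python
def quotient(carrier, R):
    """Quotient of a finite carrier by a relation R. With an equivalence
    relation every term lands in a class; with a PER, terms outside the
    domain (not self-related) are simply discarded."""
    classes = []
    for x in carrier:
        if not R(x, x):          # outside the PER's domain
            continue
        for c in classes:
            if R(x, next(iter(c))):
                c.add(x)
                break
        else:
            classes.append({x})
    return [frozenset(c) for c in classes]

# Equivalence relation on a set: integers mod 3.
mod3 = lambda x, y: x % 3 == y % 3
print(sorted(sorted(c) for c in quotient(range(6), mod3)))
# [[0, 3], [1, 4], [2, 5]]

# PER on a "type": relate nonzero integers with the same sign; 0 is not
# self-related, so it disappears from the quotient.
sign_per = lambda x, y: x * y > 0
print(sorted(sorted(c) for c in quotient(range(-2, 3), sign_per)))
# [[-2, -1], [1, 2]]
```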

Re: #76, I think that — especially for pedagogical purposes — I would prefer to come down on the side of using a language that makes all predicates respect equality. If that means that we can’t formalize the abuse of notation whereby the elements of a quotient set are literally elements of the original set, then I’m okay with that; I was dubious about such an abuse already.

Treating $\mathbb{Z}$ as a subtype of $\mathbb{Q}$ as a subtype of $\mathbb{R}$ as a subtype of $\mathbb{C}$ (and treating $2$, for example, as simultaneously an element of all of them) can lead to confusion even in something as basic as the College Algebra classes that I teach. Consider this instruction:

Factor $2 x^4 + x^2 - 1$.

This actually has a different answer in each context:

$(2 x^2 - 1) (x^2 + 1) ,$

$2 (x^2 - 1/2) (x^2 + 1) ,$

$2 (x - \sqrt{2}/2) (x + \sqrt{2}/2) (x^2 + 1) ,$

$2 (x - \sqrt{2}/2) (x + \sqrt{2}/2) (x - \mathrm{i}) (x + \mathrm{i}) .$

The first (factoring over the integers) is what one normally does in Algebra; the second (factoring over the rational numbers) doesn't really come up, but I included it for the sake of the transition; the next two (factoring over the real numbers and factoring over the complex numbers) are things that the students are asked to do, but only in special cases.

Obviously, the problem is that the instruction ‘Factor’ is incomplete, and one should add ‘over the integers’ or whatever. But leaving that bit out is exactly the same kind of abuse as writing ‘$2$’ without specifying what system this $2$ is taken from.
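One can at least verify the integer factorization mechanically (a sketch of mine; polynomials are coefficient lists, lowest degree first):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# 2x^4 + x^2 - 1 is [-1, 0, 1, 0, 2]; the factorization over the integers:
lhs = poly_mul([-1, 0, 2], [1, 0, 1])  # (2x^2 - 1)(x^2 + 1)
print(lhs)  # [-1, 0, 1, 0, 2]
```

The factorizations over $\mathbb{R}$ and $\mathbb{C}$ would need irrational and complex coefficients, which is exactly the point: the answer depends on which system the coefficients are drawn from.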

> Blass’s theory T

That's more or less what I meant by

> the imprecisely defined naive set theory of the ordinary mathematician

in #39.

> Thinking about it some more, I’m having trouble imagining how we could prevent non-structurality errors without somehow having a notion of “typing” separate from membership. For instance, if we only want to consider $A\subseteq B$ when $A$ and $B$ are given as subsets of some ambient set $X$, then what is it that justifies our considering (in order to be “given” them) the statements $A\subseteq X$ and $B\subseteq X$, unless they are a sort of “typing judgment” with a different status?

Right: just as $x \in A$ does double duty as $x\colon A$ (the judgement that the term $x$ *is an element of* the set $A$) and as $x \in_X A$ (the proposition that the element $x$ of the set $X$ *belongs to* the subset $A$ of $X$), so $A \subseteq B$ also does double duty as $A\colon \mathcal{P}B$ (the judgement that the term $A$ *is a subset of* the set $B$) and as $A \subseteq_X B$ (the proposition that the subset $A$ of the set $X$ *is contained in* the subset $B$ of $X$). Somewhere around here, there's a discussion where somebody introduced $\colon\subseteq$ (which should be `\joinrel\colon\subseteq`, but iTeX won't parse `\joinrel`) for the typing judgement. (I thought that it might be at SEAR#SEPS, but it's not.)

In practice, you can usually tell which is meant. If $x$ is a term that has appeared before in the context and whose type you already know, then $x \in A$ should be the proposition (and you should also already know the type of $A$ and it should be the case that $x$ is an element of some set $X$ and that $A$ is a subset of $X$). On the other hand, if $x$ is a new variable, then $x \in A$ serves to enrich the context with a new variable $x$ for an element of the set $A$ (and you should already know the type of $A$ and it should be the case that $A$ is a set). Similar remarks apply to $A \subseteq B$. It gets trickier when $x$ is a term whose type you haven't yet established but which is more than just a new variable. In #51, I mentioned an ambiguous example with $\leq$ (rather than $\in$ or $\subseteq$).

> I think there are other problems with allowing the elements of a quotient to be elements of the original set. For instance, suppose $A \subseteq B \twoheadrightarrow C$ and we have $x=_C y$. How do we prevent the substitution law of equality from allowing us to conclude $x\in A \to y\in A$?

Back in #49, I wrote things like

> a $1$-place predicate $P$ about elements of $X$ that respects equality in $X$

I wasn't trying to formalize a system with a flexible notion of equality when I wrote that (it was a follow-up to #45, after all, in which I used $[x]_R \in X/R$ to distinguish this from $x \in X$). But clearly that was in the back of my mind somewhere.

Anyway, the substitution rule of equality does *not* allow you to conclude $P[x] \to P[y]$ from $x =_X y$ for *any* $1$-place predicate $P$ about elements of $X$, but only for a predicate *that respects equality* in $X$. Which is hardly even a rule, since the validity of this conclusion is what it *means* for $P$ to respect equality in $X$. (An example of a predicate about rational numbers that does *not* respect equality is ‹has an odd numerator›. Note that $\mathbb{Q}$ is a subquotient of $\mathbb{Z} \times \mathbb{Z}$, where that predicate *does* respect equality.)

To make math easy, you don't want to be checking all the time that your predicates respect equality. This is similar to how in analysis you don't always want to have to check that your functions are measurable. In both cases, you can rely on metatheorems saying that anything that you can write down in a particular language must have the desired property. We're not used to even thinking about this when it comes to equality, but we do often have to think about it at higher levels, with isomorphism in groupoids or equivalence in higher groupoids. (One thing that's great about HoTT is that it's a language that handles all of those.)
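The metatheorem idea can be illustrated in miniature (all names mine): predicates assembled only from equality-respecting pieces, using the logical connectives, automatically respect equality themselves:

```python
def respects(P, reps, eq):
    """Does P give equal truth values to eq-related representatives?"""
    return all(P(a) == P(b) for a in reps for b in reps if eq(a, b))

# Connectives preserve the property of respecting equality.
def AND(P, Q): return lambda x: P(x) and Q(x)
def OR(P, Q):  return lambda x: P(x) or Q(x)
def NOT(P):    return lambda x: not P(x)

# Rationals as integer pairs, with cross-multiplication as equality.
reps = [(1, 2), (2, 4), (3, 6), (1, 3), (-1, 3)]
eq = lambda p, q: p[0] * q[1] == q[0] * p[1]

positive = lambda p: (p[0] > 0) == (p[1] > 0)  # respects equality
is_zero  = lambda p: p[0] == 0                 # respects equality

# Every combination built from respectful pieces is again respectful:
for P in (positive, is_zero, AND(positive, NOT(is_zero)), OR(is_zero, positive)):
    assert respects(P, reps, eq)
print("all respectful")
```

A language restricted to such building blocks is exactly one where the well-behavedness never has to be checked by hand.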

But there's a trade-off between using a restricted language that only lets you write down well-behaved things and using abuses of notation that let you write down all kinds of crazy whatever. In a formal system in which the abuses are legitimized, the rules of inference have to get more fiddly to compensate.

I suppose you could represent at least some of the structure in that way, but it doesn’t seem to me like how we would want to approach formalizing it. The complaint about structural set theory is, roughly, that sometimes we want *more* than just functions between sets.

> It is a many-sorted first-order theory. The sorts include numbers (natural, real, complex), sets, ordered pairs and other tuples, functions, manifolds, projective spaces, Hilbert spaces, and whatnot. There are axioms asserting the basic properties of these and the relations between them. … There are no axioms that attempt to reduce one sort to another. In particular, nothing says, for example, that natural numbers or real numbers are sets of any kind.

I imagine a collection of types, with the axioms hard-coded via a sum type, and the relations between those types also being explicitly realized, so I add a selection of arrows between the types.

What?

Sounds like a category with types with* $\sum_X Axioms(X)$ as objects and embeddings as arrows.

*edited

By the way, I just stumbled across this comment by Andreas Blass in which he says

> Mathematicians generally reason in a theory T which (up to possible minor variations between individual mathematicians) can be described as follows. It is a many-sorted first-order theory. The sorts include numbers (natural, real, complex), sets, ordered pairs and other tuples, functions, manifolds, projective spaces, Hilbert spaces, and whatnot. There are axioms asserting the basic properties of these and the relations between them. For example, there are axioms saying that the real numbers form a complete ordered field, that any formula determines the set of those reals that satisfy it (and similarly with other sorts in place of the reals), that two tuples are equal iff they have the same length and equal components in all positions, etc.

> There are no axioms that attempt to reduce one sort to another. In particular, nothing says, for example, that natural numbers or real numbers are sets of any kind. (Different mathematicians may disagree as to whether, say, the real numbers are a subset of the complex ones or whether they are a separate sort with a canonical embedding into the complex numbers. Such issues will not affect the general idea that I’m trying to explain.) So mathematicians usually do not say that the reals are Dedekind cuts (or any other kind of sets), unless they’re teaching a course in foundations and therefore feel compelled (by outside forces?) to say such things.

So maybe what we are doing is trying to formalize Blass’s theory T (which I think is not nearly so “large and unwieldy” as he seems to think it is).

Thinking about it some more, I’m having trouble imagining how we could prevent non-structurality errors without somehow having a notion of “typing” separate from membership. For instance, if we only want to consider $A\subseteq B$ when $A$ and $B$ are given as subsets of some ambient set $X$, then what is it that justifies our considering (in order to be “given” them) the statements $A\subseteq X$ and $B\subseteq X$, unless they are a sort of “typing judgment” with a different status?

I think there are other problems with allowing the elements of a quotient to be elements of the original set. For instance, suppose $A \subseteq B \twoheadrightarrow C$ and we have $x=_C y$. How do we prevent the substitution law of equality from allowing us to conclude $x\in A \to y\in A$?
