Reference added to

Peter Freyd,

*Abelian Categories*, Harper & Row, New York 1964. (Reprinted with author’s comment as TAC Reprints no. 3 (2003).)

Duško Pavlović,

*On completeness and cocompleteness in and around small categories*, APAL **74** (1995), pp. 121–152.

Thanks, that seems right. I have tried to summarize this in the article. I wonder about two things.

(1) In these articles, the effects/recursion make things a bit more complicated. If we drop them, as I tried to do in the nLab article just now, does it become clearer, or even kind of obvious?

(2) In the interpretation of $\forall$, does it still work if we just use bijective relations instead of arbitrary relations? Lemma 5.2 in Rosolini-Simpson seems to only use the graph of a bijection, not an arbitrary relation. (And if so, is the interpretation really living in something like presheaves on the underlying groupoid of $\mathbf{C}$?) Obviously relational parametricity might not hold, but it might still be a model.

More about the Rosolini-Simpson model

Ah, I think I see! Their interpretation of $\forall X.B$ builds in invariance under (iso)morphism: instead of $\forall X.X$ being the product of all objects in the category, it’s more like the initial object of the category (the limit of the identity functor). And the way they do this is by feeding the relational-parametricity part of the model (the interpretation of types as relations) back into the interpretation of types as objects/sets, so that unlike in an ordinary relational model the two levels are all mixed up inductively. In particular, there’s no “underlying” interpretation of types as sets that they’re then building a relational model on top of; the simultaneous relational interpretation is crucial to making the set interpretation work. Does that seem right?

Regarding #7. I think there may be a typo in Fig 4 of the Møgelberg-Simpson paper. If you look instead at this Rosolini-Simpson paper, which the Møgelberg-Simpson paper cites, they instead interpret

$[[ t(\tau)]]_{\gamma,\rho} = gpd(\sigma, \gamma, i)\big(([[ t]]_{\gamma,\rho})_D\big)$, where $D$ is in the small category and $i \colon D \multimap [[ \tau]]_\gamma$ is an iso. Of course, there is something to check about the groupoids working out, but they seem to have thought about it. (Lemma 5.4 says everything is well-formed; Lemma 5.2 says that the iso is canonical; proofs are omitted, though.) Does that help?

Added some more references, and mentioned that to model polymorphic type theory one doesn’t really need completeness, only products over the set of objects.

I’m not convinced that this “alternative approach” of using a replete full subcategory of $Set$ actually works. The repletion of a weakly complete full subcategory of $Set$ is certainly a strongly complete one, which is indeed still essentially small if the original one was actually small. However, it seems to me that to model polymorphic type theory you need the category to be *actually* small, not just essentially so. Doesn’t a polymorphic type like $\forall X. B$ have to be modeled by the product over *all* objects in the category, not just a skeleton thereof? It seems to me that it does, because a term $t : \forall X. B$ has to have an application $t(A) : B[A/X]$ for *any* type $A$, not just those whose interpretation belongs to the skeleton.

The Møgelberg-Simpson paper seems to gloss over this: they define the interpretation of $\forall X.B$ (Figure 3, page 12) as a product over all objects in their skeleton $\mathbf{C}$ (so that this product exists), but then they define the interpretation of $t(A)$ (Figure 4, page 19) by evaluating the interpretation of $t$ at the interpretation of $A$, and I can’t find any claim that the latter must in fact be in the skeleton (nor would I expect it to be in general). Of course it’s *isomorphic* to an object of the skeleton, but the whole point is that we don’t have a *function* choosing such an isomorphism.
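Schematically, the mismatch I have in mind is the following (my paraphrase, with the indexing loosely following their Figures 3 and 4):

```latex
% Figure 3: \forall is a product indexed by the chosen skeleton C
[[ \forall X.B ]]_\gamma \;=\; \prod_{C \in \mathbf{C}} [[ B ]]_{\gamma, X \mapsto C}

% Figure 4: application evaluates at the interpretation of A ...
[[ t(A) ]]_\gamma \;=\; \big( [[ t ]]_\gamma \big)_{[[ A ]]_\gamma}
```

But $[[ A ]]_\gamma$ need only be *isomorphic* to some object of $\mathbf{C}$, so the second clause indexes the product at an object that may not be in the indexing set of the first.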

Added discussion of strong vs weak completeness, and a reference to the important Hyland-Robinson-Rosolini paper.

Added two references and a slightly longer discussion of models of impredicative type theory.

I just realized that the proof of Freyd’s theorem, in addition to being non-constructive, at least appears to be evil. We need the collection of all morphisms in the category to be a set (or a class, or something with an equality relation), which appears to require that the collection of all objects in the category be also a set, i.e. that the category be strict.
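(For reference, the non-constructive counting form of the argument, as I understand it: if $D$ is small and complete, and $r \neq s \colon x \to y$ are distinct parallel morphisms, then

```latex
\mathrm{Hom}\Big( x,\; \prod_{f \in \mathrm{Mor}(D)} y \Big)
  \;\cong\; \mathrm{Hom}(x,y)^{\mathrm{Mor}(D)}
```

has at least $2^{|\mathrm{Mor}(D)|}$ elements, yet it is a subset of $\mathrm{Mor}(D)$, contradicting Cantor’s theorem. Hence all parallel morphisms in $D$ are equal, and $D$ is a preorder.)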

It looks even eviller if you beta-reduce the application of Cantor’s theorem by diagonalization. Then you get something like this: assuming $r,s \colon x\to y$, define a morphism $g\colon x \to \prod_{f\in Mor(D)} y$ whose $f^{th}$ component is $r$ if $f$ is a morphism $x \to \prod_{f\in Mor(D)} y$ whose $f^{th}$ component is $s$, and $s$ otherwise. Then the $g^{th}$ component of $g$ is $r$ if it’s $s$ and $s$ if it’s $r$, hence $r=s$. But now in the definition of $g$ there is an explicit equality test on the domain and codomain of each $f\in Mor(D)$.
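Written out as a case split (just a transcription of the definition above, with components indexed by $f \in \mathrm{Mor}(D)$):

```latex
g_f \;=\;
\begin{cases}
  r & \text{if } f \colon x \to \prod_{h \in \mathrm{Mor}(D)} y
        \text{ and } f_f = s, \\
  s & \text{otherwise.}
\end{cases}
```

The side condition on $f$ is where the equality tests on domains and codomains enter.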

Is this evil irredeemable?

(Personal story: last night I read the corresponding beta-reduced proof of the generalized version for factorization structures, and spent a while feeling ill about its evilness. Eventually I realized it was a beta-reduced version of a generalization of Freyd’s theorem, which simultaneously made me feel more comfortable with the proof for factorization structures, and less comfortable with Freyd’s proof.)

Thanks.

I added to the latter that the adjoint functor theorem for posets still holds constructively, despite the classical reasoning that precedes it. (After all, the classical stuff was only going the *other* way, but it still confused me for a minute as to whether you had actually proved the theorem constructively.)

Created complete small category, and moved the proof of Freyd’s theorem to there from adjoint functor theorem.
