Consider the following axiom:
(Yoneda's axiom)
If C is a category, there exists a pair consisting of a free cocompletion of C and a distinguished full and faithful embedding of C into this free cocompletion.
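To be explicit about the universal property I intend by "free cocompletion" (writing P(C) for it and y_C : C → P(C) for the distinguished embedding):

$$\text{for every cocomplete } D \text{ and every functor } F \colon C \to D, \text{ there is a cocontinuous } \tilde{F} \colon P(C) \to D \text{ with } \tilde{F} \circ y_C \cong F, \text{ essentially uniquely.}$$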
Say that we adjoin this axiom to something like intuitionistic SEAR or intuitionistic ETCS (heck, even ETCC or FOLDs or something) (probably include ordinary ZFC as well just as a test case). Will this axiom
a.) necessitate a contradiction (make the theory inconsistent)
b.) imply the axiom of choice
c.) be any weaker than the universe axiom?
d.) even imply Yoneda's lemma for all categories?
I am purposely leaving out the smallness conditions because it is implicit in our definition that there is no such thing as a proper class.
I assume that by your final sentence you mean that “If C is a category” should be read as “If C is a small category,” i.e. a category whose objects and morphisms are both sets, and that the free cocompletion should likewise be small. In that case, I’m pretty sure the answer is (a), because the free cocompletion of the terminal category is the category of sets, and it is pretty surely inconsistent to assume that the category of sets is small. It’s certainly inconsistent with classical logic, since then any cocomplete small category is a poset, whereas the category of sets is not a poset.
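To spell out the classical argument: if a small cocomplete category had two distinct parallel arrows f ≠ g : a → b, then with κ the cardinality of its set of morphisms, the κ-fold copower of a would give

$$\hom\Big(\coprod_{\kappa} a,\; b\Big) \;\cong\; \hom(a,b)^{\kappa},$$

a set with at least 2^κ elements, which cannot fit inside the κ morphisms the category has altogether.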
Is there any sort of theory where we can bypass the notion of size?
Warning: non-rigorousness ahead (ignore this if you like, Harry :-)
It seems to me that at a foundational level there will always be things that are ’too big to obey the rules’. For example, you can’t treat ’collections’ (such as the collection of objects of a large category - genuinely large, no universes, no proper classes) as sets. Just apply the ’…and all of them’ operator, potentially an infinite number of times, until the resulting thing is too big.
Well, universes are a solution (as is strong Feferman set theory). It just seems strange that size doesn't really matter, yet it is hard to avoid talking about it.
The collection of all universes…? But you probably won’t need these in practice.
No, you just relativize and say that talking about "all sets" just means "U-Sets" for some fixed (perhaps very large) universe U. The benefit there is that every category is strictly a mathematical object. Mike also described strong Feferman set theory in a blog post on the Café, which is essentially an internal model of Grothendieck universes (I may not be using the term "internal model" correctly) with a constant that replaces quantification over all universes U.
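(The schema, if I'm remembering Mike's post correctly, says that for each formula φ,

$$\forall x_1, \dots, x_n \in S\; \big(\varphi(x_1,\dots,x_n) \;\leftrightarrow\; \varphi^{S}(x_1,\dots,x_n)\big),$$

where S is the new constant and φ^S is φ with all its quantifiers relativized to S; so quantification over all sets can be traded for quantification over S.)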
One doesn’t always want to relativise. One can talk about the category of all sets (really all sets), using non-strict categories and dependent types and such like. Whether you can then talk about the category of presheaves with values in that category is another question. The thing about being flexible with foundations is that you can’t be flexible and then always say ’and then use universes to solve size issues’. As for your question
Is there any sort of theory where we can bypass the notion of size?
I would take this to mean that there is no such tool as universes available, since that is more than implicitly referring to size. Strong Feferman set theory uses, as you are aware, universes only in an intermediate step, but without the presence of universes/inaccessible cardinals I don’t know if this is possible.
Hey David, just a pet peeve of mine, but if you quote an obvious typo that I've made, would you mind correcting it? I would really appreciate it =).
One doesn't always want to relativise. One can talk about the category of all sets (really all sets), using non-strict categories and dependent types and such like. Whether you can then talk about the category of presheaves with values in that category is another question. The thing about being flexible with foundations is that you can't be flexible and then always say 'and then use universes to solve size issues'. As for your question
I don't see why we wouldn't want to relativize. Consider the following:
You give me a non-relativized category of all sets. I will spit back a relativized form embedded in a new universe. I don't see why category theory in proper classes should be seen as anything more than two fixed stages in the continuum of universes.
Sorry about the typo: it wasn’t a dig.
One might as well say ’I don’t see why we don’t assume choice’ or, more appropriate to the present situation, Vopěnka’s principle or any other large cardinal axiom. I don’t see how you can take my category of all sets and embed it in a new universe: surely, given that universe, I then turn around and give you back a set not contained in either of the two universes you implemented. If I have all sets (say, ZF sets) then surely you cannot capture all of them in a single universe. Hmm… unless you are implicitly doing something like Gödel–Bernays set theory (in spirit if not in practice, or perhaps Moerdijk-type algebraic set theory), and replace ’proper classes’ with ’elements of the bigger of two universes’. But still this is assuming more axioms. I imagine your two-step universe situation is equiconsistent with my universeless setup, but you can certainly prove more (like the existence of a presheaf category).
But one thing I like about SEAR and other dependent-type-style foundations is that one has collections, about which you cannot assume anything, or do anything with, except form a category of them (Toby’s nonstrict categories, for example - in practice I like choice and equality and so forth, but if I am being philosophical that sort of thing appeals to me)
Getting back to the original thought of the post: I like this Yoneda-type axiom, if it can be straightened out so that Mike’s concerns are fixed.
I disagree with #5 that “size doesn’t really matter.” Size does really matter. Try stating the adjoint functor theorem without any reference to size. Size distinctions are fundamental to category theory.
However, you might be interested in this.
Sorry about the typo: it wasn't a dig.
I wasn't implying it was. Sometimes it's hard to convey exactly the right tone on the internet. That's the reason for smiley faces. =D
Try stating the adjoint functor theorem without any reference to size. Size distinctions are fundamental to category theory.
If I knew what the problem in the AFT was, I doubt I would have asked the question! Would you mind explaining a bit further?
Edit: Mike, I started reading the argument on that page, and I still don't accept the premise that we need honest large categories instead of "pseudo-large" or "relatively large" categories. In particular, we can avoid that entirely by replacing every statement about the category of all sets with "For all universes U, U=Set...".
Are you familiar with the statement and proof of the AFT? Take for instance the SAFT version: if F : C → D is a functor which preserves small colimits, where C is cocomplete, locally small, and well-copowered with a generating set and D is locally small, then F has a right adjoint. The notions of “cocomplete” and “locally small” and “well-copowered” and “generating set” all involve a size distinction, and if you get rid of it then the theorem becomes fairly trivial, since the only cocomplete small categories are posets.
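Spelled out as a single implication, so that every size-sensitive hypothesis is visible:

$$F \colon C \to D \text{ cocontinuous}, \quad C \text{ cocomplete, locally small, well-copowered, with a small generating set}, \quad D \text{ locally small} \;\Longrightarrow\; F \text{ has a right adjoint.}$$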
I am not taking any position in this discussion on whether we need “honest large categories” versus “relatively large” ones. My philosophical preference is to allow honest ones, but for purposes of applications, relatively large ones seem sufficient. But size distinctions do matter, whether they are relative or absolute. I thought you wanted to get away from using universes or anything of that sort to make size distinctions of either type, which is what the link is about and seemingly what your original question was about. Sorry if I misunderstood.
Oh, you were right that I was wondering if we could get rid of size restrictions! I was really exhausted when I wrote my last message and only read the first paragraph of that article. (I have yet to read the whole thing!) If it seems like I'm being argumentative, it's not on purpose. I really do want to know the answer.
Anyway, I'll get back to you when I read the whole article later today.
(Also, I never remember the whole AFT. I remember the SAFT for presheaf categories, but that seems to be by far the most used one (at least with respect to fields in which I have experience).)
Why is this discussion in category n-Forum rather than General?
This is in [general]. At the moment, Andrew has not split up the categories into different forums yet!
My mistake, sorry. I was reading the 'category' marker incorrectly.
Hi, I’m just returning after over a week of life away from the internet. Still sifting through hundreds of comments.
I must admit that I’ve very often felt size issues to be a nuisance, and I find it sort of fun to contemplate formal category theory without such constraints. On my personal web I’ve started something under “Epistemologies”, which is still very much under construction, but which is ultimately meant to pursue this.
For experts (especially looking at you, Mike (-: ), an epistemology is a compact closed symmetric monoidal 2-category M such that the locally full inclusion of the bicategory whose 1-cells are maps (= left adjoints in M), i : Map(M) → M, has a KZ right biadjoint P : M → Map(M). Here M should be thought of as something like a 2-category of profunctors, Map(M) as something like a 2-category of (Cauchy-complete) categories and functors, and then the unit would be like a formal Yoneda embedding, and the KZ-ness would mean that P behaves like a formal free cocompletion.
The compact closed structure on M can be exploited to yield an equivalence P(m) ≃ hom(m*, v) in Map(M), where v = P(1) (1 being the monoidal unit of M). It may be shown that v is an internal symmetric monoidal closed object in Map(M), and it’s possible to develop a lot of formal enriched category theory within Map(M) (which behaves something like a formal version of V-Cat).
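To keep the motivating example in view (waving aside the size issues that are the whole point of this thread), the dictionary I have in mind is

$$M = \mathrm{Prof}, \qquad \mathrm{Map}(M) \simeq \text{(Cauchy-complete) categories and functors}, \qquad P(C) = [C^{op}, \mathrm{Set}], \qquad v = P(1) = \mathrm{Set}.$$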
It’s not hard to show that nontrivial (i.e., non-locally-posetal) epistemologies exist, but the 10000 dollar question for me is whether there exist cartesian epistemologies which aren’t locally posetal. (“Cartesian” is in the sense of cartesian bicategories, seen as symmetric monoidal bicategories.) My instincts say they should exist, and in fact I think pretty big swaths of formal category theory could be developed within cartesian epistemology theory, especially as one begins to add on finite 2-completeness conditions.
I’d love to know how far this idea could be pushed!
Very cool. This was exactly the type of thing that I was thinking about. Edit: I also love the name "formal category theory".
Interesting! I would like to see your examples of nontrivial epistemologies. I feel like you’re likely to run into some of the same problems Mathieu and I had with the category of all sets. In particular, the argument that without size constraints, every endofunctor of Set has an initial algebra and hence a fixed point, contradicting Cantor’s theorem for powersets, seems to me like it is fairly robust and should pop up whenever you have good notions of limit and colimit.
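To spell out the shape of that argument: by Lambek’s lemma the structure map of an initial algebra is an isomorphism, so an initial algebra for the covariant powerset functor P would be a set A with

$$P(A) \;\cong\; A,$$

flatly contradicting Cantor’s theorem that |P(A)| > |A|.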
Of course, as usual, I also have to complain about the identification of functors with modules and the consequent Cauchy-completeness restriction. (-: To me it would be much more natural to talk about a proarrow equipment having a right lax-idempotent adjoint, than to consider only equipments of the form Map(M) ↪ M.
One source of nontrivial epistemologies is a symmetric monoidal 2-category where every 0-cell has a dual and every 1-cell has an (ambidextrous) dual, so for example the one where the 2-cells are 2-cobordisms. This may seem in some sense like a silly example (because there every 1-cell is already a map!), but its mere presence opens the door to other examples which may be less silly. For example, you could take a product of this with V-Prof where V is a commutative quantale, to get something slightly less silly. Or, perhaps better still: the mere fact that this shows that parallel 2-cells are not forced to be equal should mean that the same is true in term models, which therefore ought to have an extremely rich structure.
However, you’ve zeroed in on a lingering worry I’ve had with all this, especially for cartesian epistemologies, so I’d like to pursue this. If Map(M) admits the construction of a certain end for an arbitrary 1-cell f, as should be the case under a mild 2-limit assumption on M, then you can indeed construct an initial f-algebra. (Can you get this without some such completeness assumption? A while back I asked a question about using ends to construct initial algebras, and the secret origin of that question is precisely here.) Then you’d want to see how fatal fixed points for f are, say for f = hom(hom(−,b),b). Well, I don’t know! If v is an internal topos, then sure, this is fatal by taking b to be the truth-value object or even 2 = 1+1. But what if not?
In a general cartesian epistemology, one can construct 2 = 1+1, but I don’t see how to get from a fixed point of hom(hom(−,2),2) to a “fixed point” of hom(−,2), nor do I see that hom(x,2) needs to be a subquotient of x. Even if it is, that’s not yet fatal, since the worst you could conclude is that every morphism 2 → 2 has a fixed point.
Ultimately I’m hoping to find a denotational semantics for cartesian epistemology theory, roughly analogous to the role that Scott models or domains play for untyped lambda calculus. But I’d like to go a lot further with this! Ideally, I’d like to have the freedom to construct, given say a compact cartesian bicategory such as Prof (profunctors between small categories), a full embedding into a cartesian epistemology, or something like that, so that v plays the role of “virtual sets” (even if it can’t be an internal topos).
As for your last point: absolutely. Somewhat to my chagrin, I speak of proarrow equipments or virtual double categories at best stammeringly, but for the moment I’m treating this as a technical point which I’m happy to incorporate at a later stage, after some of these (to me) more pressing concerns have been settled.
Then you’d want to see how fatal fixed points for f are, say for f=hom(hom(−,b),b). Well, I don’t know! If v is an internal topos, then sure, this is fatal by taking b to be the truth-value object or even 2=1+1. But what if not?
For the truth value object, sure. But for 2 = 1 + 1, this doesn’t seem fatal in a non-Boolean topos (though perhaps I am misunderstanding you, since I just alighted upon this statement out of context without yet understanding the whole framework of “epistemologies”); for example, if I remember correctly, in the effective topos there is an isomorphism between N and 2^(2^N).
Oh! Thanks for pointing that one out, Sridhar; I jumped ahead, just considering the fact that the switch map has no fixed point, without considering that 2^N is (apparently) not a retract of N in the effective topos.
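(The classical principle lurking in the background is Lawvere’s fixed-point theorem:

$$e \colon A \to Y^{A} \ \text{point-surjective} \;\Longrightarrow\; \text{every } g \colon Y \to Y \text{ has a fixed point;}$$

with Y = 2 the fixed-point-free switch map would give the contradiction, provided such an e could be manufactured from the isomorphism N ≅ 2^(2^N).)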
So is it that the canonical “double dual embedding” N → 2^(2^N) is an iso?
No, it would have to be trickier than that; rather, I believe it’s that (even in the internal logic of the effective topos) the range of the double dual embedding N → 2^(2^N), when closed under the finitary Boolean operations, freely generates the entirety of 2^(2^N) (by the Rice–Shapiro theorem in some fashion), establishing 2^(2^N) as the free Boolean algebra on countably many generators, whence a bijection with N can be easily exhibited.
Bah, that was a very dumb question in retrospect (but I thank you for your answer!).
Good questions all; I don’t know. The analogy with denotational semantics is interesting; I guess the idea would be maybe that if you require enough “continuity” on everything in sight, then you can have a presheaf category of the same size as everything else?
It reminds me of doing math in the world of accessible categories, actually, where “continuous” means “accessible” i.e. preserving sufficiently highly filtered colimits. In playing around with the category of all sets I was especially intrigued by the theorem of Makkai-Pare that an accessible functor to Set gives rise to a discrete fibration whose total category is again accessible, and vice versa; thus in the “world of accessible things”, Set really is “the category of all sets.” But I wasn’t quite able to extract a concrete model of anything useful from that. Perhaps it would work for cartesian epistemologies, though, which may require less of a theory than the 2-pretopos structure I was looking for?
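In symbols, the correspondence is roughly

$$F \colon A \to \mathrm{Set} \ \text{accessible} \quad\longleftrightarrow\quad \textstyle\int F \to A \ \text{a discrete fibration with accessible total category},$$

where ∫F denotes the category of elements of F.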
It would be very interesting and convenient if one could build a consistent “sizeless” theory in this way, especially if the appropriate theorems could then be soundly interpreted back into ordinary “sizeful” mathematics by inserting size qualifiers in the appropriate places.