Let me say at the outset that, while I grok strict n-categories perfectly well, my grasp on weak n-categories is embarrassingly poor, and similarly, I have no real familiarity with homotopy either. So please forgive my blazing ignorance/naivete here, which I hope to begin to ameliorate somewhat.
It often seemed to me that one of the main motivations for an account of weak n-categories, in generality beyond a definition essentially equivalent to strict n-categories, is to be able to give an account of homotopy n-types. I gather the idea is that, if paths are thought of as maps out of [0, 1], then composition of paths is not, as one might naively assume, strictly associative (as a side effect of the reparametrization of composed paths back to length 1), but only associative up to homotopy, and this leaves enough room for there to be fundamental n-groupoids which are not equivalent to any strict n-category (e.g., I'm under the impression that the braided monoidal category of automorphisms of the identity 1-cell at any 0-cell of the fundamental 3-groupoid of the 2-sphere is not symmetric, and thus this weak 3-category is not equivalent to any strict one).
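To make the reparametrization point concrete, here is the standard unit-length concatenation written out (notation mine, not from the original posts):

\[
(\alpha \cdot \beta)(t) \;=\;
\begin{cases}
\alpha(2t), & 0 \le t \le \tfrac{1}{2},\\
\beta(2t-1), & \tfrac{1}{2} \le t \le 1,
\end{cases}
\]

so \((\alpha\cdot\beta)\cdot\gamma\) runs through \(\alpha\) on \([0,\tfrac14]\), \(\beta\) on \([\tfrac14,\tfrac12]\), and \(\gamma\) on \([\tfrac12,1]\), while \(\alpha\cdot(\beta\cdot\gamma)\) runs through them on \([0,\tfrac12]\), \([\tfrac12,\tfrac34]\), \([\tfrac34,1]\): the two composites are different maps \([0,1]\to X\), equal only after a reparametrizing homotopy.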
Which is all well and good. Except, it seems to me, in my naivete, that there is a sense in which path composition is strictly associative (including strict identity laws), so long as one allows paths to have domain [0, k] for any length k, so that the composition of paths of lengths x, y, z, …, is a path of length x + y + z + … . In thinking along such lines, it seemed to me, in my rough thoughts unhindered by experience, that perhaps the more appropriate way to think about homotopy would be via strict double (more generally, n-fold) categories, as opposed to mere n-categories (in particular, my inchoate thoughts were that [I apologize if the following doesn't make any sense; remember, I don't actually know what I'm talking about, so feel free to skip this parenthetical] such a framework is usefully able to encompass possibilities like there being a homotopy square, all of whose edges are the constant path of length 1 at a point p, which is not equivalent to any square all of whose edges are the identity path of length 0 at p, even though there is a homotopy between the constant path of length 1 at p and the identity path of length 0 at p, and that this sort of thing keeps the Eckmann-Hilton argument for braiding from implying the braiding's symmetry in the cases where it oughtn't).
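For comparison, here is the Moore-style composition just described, written out (again, my notation rather than the poster's): a path of length x is a map \(\alpha\colon [0,x]\to X\), and composition simply places domains end to end:

\[
(\beta \ast \alpha)(t) \;=\;
\begin{cases}
\alpha(t), & 0 \le t \le x,\\
\beta(t-x), & x \le t \le x+y,
\end{cases}
\qquad \alpha\colon [0,x]\to X,\quad \beta\colon [0,y]\to X,
\]

a path of length x + y. Both \((\gamma\ast\beta)\ast\alpha\) and \(\gamma\ast(\beta\ast\alpha)\) are then literally the same map \([0,x+y+z]\to X\), and the length-0 constant path is a strict two-sided identity, so no reparametrization is involved.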
Anyway, in looking for more information on thinking about higher homotopy using strict n-fold categories, I see it mentioned on the nLab that strict n-fold groupoids can indeed model all homotopy n-types. Which brings me to my main question: I'm not sure exactly what that statement technically amounts to, but since it seems to indicate that higher homotopy can be perfectly well modelled with suitable strict higher-dimensional structures, I'm left wondering what the motivation is again for using weak n-categories instead. Why do we want weak n-categories and not just strict n-fold categories, which can apparently model the motivating applications in homotopy just as well, with the added benefit of having a presumably much simpler/better-understood/universally accepted definition? In a nutshell, as someone who never quite fully got it (a terrible position to be in around here), remind me why weakness is really, inescapably important?
@Sridhar: The Moore category of a space (which is what you describe) is a strict category, but it has multiple copies of the same geometric thing, such as the constant paths of all lengths you mention. To get a more useful gadget one forms the fundamental groupoid, dividing out by the reindexing that gave the possibility of the multiple copies. If you try to do the same for '2-dimensional paths', life gets interesting. There is a construction by Hardie, Kamps, and Kieboom. What becomes clear is that the reindexing process is giving a space of choices for composites, reverses, etc., and that that space is contractible, so one is naturally inching toward the weak version of things. The constructions then start looking more and more like the singular complex of the space. There the composites are defined by the filling process, and you naturally get a weak infinity-groupoid in the process. You can work with Moore cubes (there is a recent preprint by Ronnie Brown on this), but there is a lot of technical complexity in that.
A different interpretation of your vague and rambling question(s) is that the strict models are good, but if you need to move around from one model of a space to another, you immediately need the extra room of weak structures to make your life easier. Using strict things the whole time is a constraint. Whitehead's combinatorial homotopy was an attempt to do homotopy theory in a combinatorial way, similar to the combinatorial group theory of Reidemeister. It hits the problem that there are homotopy equivalences that are not 'constructible' from elementary moves.
Another partial answer is that we want weak n-categories because they are there and we need to understand them. (I don't think we yet understand them at all deeply! I come back time and time again to saying that the higher coherences of n-category theory should be linked more closely with analogues of the Whitehead products, and then with the higher homotopy operations. Perhaps that would provide a neater way of understanding those structures, 'cause I don't fully understand them from the classical viewpoint of algebraic topology!)
The result of Loday (3rd para) that you mention is interesting because if you try to work with the crossed squares modelling a 3-type, it seems hard to determine how to get from one such to another in an elementary way. That is worth playing with, but it is infuriating, as the ideas are quite simple but the calculations hard. If you use the bisimplicial groups that correspond to encoding the compositions etc., then it is easier. You have given yourself more space to work in, but have also weakened the models to do so.
These 'answers' are as rambling and vague as your question, but I hope they suggest some answers (or some refinements of the question).
One answer, hinted at by Tim, and (presumably) sufficient for any mathematical structure, is that there are examples, and we want to think about them.
So for example, even while one can use Moore paths, one sometimes wants maps from [0, 1], and then one has a weak structure.
Or, many examples of monoidal categories, perhaps all of the usual ones, are weak, and these are weak 2-categories. Of course, every monoidal category is at least equivalent to a strict one, and indeed every weak 2-category is equivalent to a strict one, but one still needs weak 2-functors to make this work. And then Cat itself is an example of a weak 3-category!
For those of us who are interested in keeping the foundations of mathematics weak, another motivation is that making weak structures strict sometimes requires impredicative reasoning or the axiom of choice. (I don't know if this is the case with modelling weak homotopy types using strict n-fold categories, but I suspect not.) This reason is important to me, but I appreciate that it doesn't appeal much to most people.
I suppose that people talk about nonassociativity of path composition since it is easy to explain, but it is a poor example for the necessity of weakness, since as you note, path composition can be made strictly associative. This is more or less an instance of the same fact that every bicategory is equivalent to a strict 2-category. On the other hand, not every tricategory is equivalent to a strict 3-category, and similarly if you try to make a 3-groupoid out of paths and paths-of-paths and paths-of-paths-of-paths, you’ll find that you can’t make everything strict at once. If you make associativity and units all strict, such as with Moore paths, then the exchange law fails to hold on the nose – this leads you to the semi-strict notion of a Gray-category.
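For reference, here is the Eckmann-Hilton argument invoked above, written out as a standard derivation (not specific to anything in this thread). Suppose \(\circ\) and \(\ast\) are two operations on the same set, each with a unit, satisfying the strict interchange law \((a \ast b) \circ (c \ast d) = (a \circ c) \ast (b \circ d)\). Writing 1 and e for the two units, one first checks that they coincide:

\[
1 = 1 \circ 1 = (1 \ast e)\circ(e \ast 1) = (1 \circ e)\ast(e \circ 1) = e \ast e = e,
\]

and then, with the common unit 1,

\[
a \circ b = (a \ast 1)\circ(1 \ast b) = (a \circ 1)\ast(1 \circ b) = a \ast b,
\qquad
a \circ b = (1 \ast a)\circ(b \ast 1) = (1 \circ b)\ast(a \circ 1) = b \ast a,
\]

so the two operations coincide and are commutative. When units and interchange hold only up to coherent isomorphism rather than on the nose, the same moves produce only a braiding, which need not be a symmetry; this is why it matters whether the exchange law holds strictly.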
Regarding strict n-fold groupoids, to expand on Tim’s comment: it’s not at all clear to me that they “model higher homotopy” in the same way that weak n-groupoids do. I think this is an aspect of the homotopy hypothesis which is insufficiently emphasized: the way in which weak n-groupoids model homotopy n-types is by treating the n-groupoids categorically. In particular, the notion of “equivalence” of weak n-groupoids that is relevant (i.e. corresponds to the notion of (weak) homotopy equivalence) is the categorical notion of equivalence. If you’re willing to allow wackier kinds of “equivalence” then already strict 1-categories “model all homotopy types” as in the Thomason model structure. It’s not clear to me whether the type of “equivalence” used to get strict n-fold groupoids to model homotopy types is more “categorical” or “Thomason-like.”
By the way, even Moore paths are not completely strict; the associativity and units are strict, but the inverses are not! We have the theorem that the fundamental 1-groupoid of a space is equivalent (as every 1-groupoid is) to a strict 1-groupoid, but the Moore paths don't give this directly.
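Concretely (my example, not from the post): for a Moore path \(\alpha\colon[0,k]\to X\) with reverse \(\bar\alpha(t) = \alpha(k-t)\), the composite \(\bar\alpha \ast \alpha\) is the length-2k loop

\[
(\bar\alpha \ast \alpha)(t) =
\begin{cases}
\alpha(t), & 0 \le t \le k,\\
\alpha(2k - t), & k \le t \le 2k,
\end{cases}
\]

which is homotopic to, but not equal to, the length-0 constant path at \(\alpha(0)\); so inverses hold only up to homotopy even though associativity and units are strict.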
Sorry for not replying earlier; my Internet access is very spotty at the moment. I just wanted to say that I think these replies have actually managed to be quite helpful despite the vagueness of my opening questions, particularly the last part of Mike Shulman’s post. Thanks! I still have some (well, a lot) more questions I’d like to ask, but I’ll wait till I return home from winter break to more stable Internet access.
My own views are influenced by having tried for 9 years to define a homotopy double groupoid of a space and then finding with Philip Higgins in 1974 that we could do nicely with a homotopy double groupoid of a pointed pair of spaces, prove theorems, and obtain new explicit results on nonabelian second relative homotopy groups, and in fact calculate some crossed modules, i.e. homotopy 2-types, as colimits. Further work was needed to calculate second homotopy groups from this, thus turning around the traditional approach (first calculate π₁ and π₂ and then somehow calculate the k-invariant).
This led to the fundamental strict cubical ω-groupoid of a filtered space, quite tricky to do, but it works nicely, and so to the idea that one gets nice invariants from certain structured spaces. Also, filtered spaces are quite a common occurrence, for example on manifolds with a Morse function. Also, replacing a space by some kind of singular complex leads to a filtered space.
This fitted well with Loday's approach using n-cubes of spaces, and his cat^n-groups, which are special kinds of strict (n+1)-fold groupoids, and allowed new calculations and descriptions, but again proving that you get the right structure is non-trivial.
In the proofs of the van Kampen type theorems, the strictness is used in an essential way, it seems. And one can calculate exactly with these algebraic gadgets, because one has colimit-type theorems, involving structures covering a range of dimensions. In particular, one can calculate some homotopy n-types.
All this leads to a methodology different from the approach via weak structures, and one which does link with other areas of mathematics, cf. the nonabelian tensor product.
It will be interesting to see where it all leads!
To what extent is the (strict) Moore hyperrectangle ∞-category (call it M(X)) "equivalent" to the (weak) fundamental ∞-groupoid Π_∞(X)? M(X) is strict only as an ∞-category and not as an ∞-groupoid, since its n-arrow "inverses" hold only up to (n+1)-homotopy. On the other hand, Π_∞(X) is "fully weak" in inverses, units, and associativity. They are definitely not equivalent as they stand, since there are constant maps of every length in M(X), so any functor M(X) → Π_∞(X) will fail to be faithful; but are they equivalent if both are taken modulo reparametrization (or some variant thereof)? I.e., what are the "smallest" congruences ~ on M(X) and ≈ on Π_∞(X) such that M(X)/~ ≃ Π_∞(X)/≈? ("Smallest" congruence in the sense that it induces a "least forgetful" quotient functor.)