[New thread because, although the page has existed since 2012, pasting scheme appears not to have had a LatestChanges thread]
Started to expand pasting schemes. Intend to do more on this soon, in an integrated fashion with digraph and planar graph.
Please note: a page pasting schemes was accidentally created too, as a result of some arcane issues with pluralized names of still-empty pages. Please delete pasting schemes.
Deleted.
I’m now taking a look at A 2-categorical pasting theorem by Power.
Contrary to what is suggested at digraph, I think that by “directed graph” Power means quiver. After all, the basic picture of a 2-cell consists of two parallel arcs between a source and target, and surely Power wants his pasting schemes to be general enough to include this basic example. This is further indicated by his referring to the computad associated to a pasting scheme: a computad consists of sets $C_0, C_1, C_2$, a quiver structure $Q = (s_0, t_0: C_1 \rightrightarrows C_0)$, and (letting $F(Q)$ be the free category generated by $Q$) source and target functions $s_1, t_1: C_2 \to Mor(F(Q))$ such that $dom(s_1(c)) = dom(t_1(c))$ and $cod(s_1(c)) = cod(t_1(c))$ for all $c \in C_2$.
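For concreteness, the parallel-pair condition on 2-cells can be sketched as a small check (the encoding and all names are mine, not Power’s): morphisms of the free category $F(Q)$ are represented as lists of composable edges.

```python
# Sketch of a computad (C0, C1, C2); hypothetical encoding.
# A quiver Q = (s0, t0 : C1 => C0); morphisms of the free category F(Q)
# are directed paths, encoded as nonempty lists of composable edges.

def path_endpoints(path, s0, t0):
    """Return (dom, cod) of a nonempty path, checking composability."""
    for e, f in zip(path, path[1:]):
        assert t0[e] == s0[f], "edges must be composable"
    return s0[path[0]], t0[path[-1]]

def is_computad(C2, s1, t1, s0, t0):
    """Check that each 2-cell's source/target paths are parallel:
    dom(s1(c)) == dom(t1(c)) and cod(s1(c)) == cod(t1(c))."""
    for c in C2:
        d1, c1 = path_endpoints(s1[c], s0, t0)
        d2, c2 = path_endpoints(t1[c], s0, t0)
        if (d1, c1) != (d2, c2):
            return False
    return True

# The basic 2-cell: two parallel edges a, b : x -> y and one 2-cell between them.
s0 = {"a": "x", "b": "x"}
t0 = {"a": "y", "b": "y"}
print(is_computad(["alpha"], {"alpha": ["a"]}, {"alpha": ["b"]}, s0, t0))  # True
```

Note that this basic example already requires parallel arcs, which is exactly why the underlying "directed graph" must be a quiver.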
Thus I believe the discussion of Power’s theorem at digraph is misdirected. I am going to start some work soon on pasting scheme to clarify what Power means by the term.
Todd, Re 3,
thanks for this comment. I had a look at what the author of JAlg129 assumes as background, and yes, your comment was an eye-opener: if one takes the author at his word, then, precisely,
So, you are right in
Contrary to what is suggested at digraph, I think that by “directed graph” Power means quiver
right up to equivalence (again, Bondy–Murty’s formalization is unusual in that it resorts to functions and relations at the same time).
Thus I believe the discussion of Power’s theorem at digraph is misdirected.
You are right, if the JAlg129 proof is indeed correct for Bondy–Murty digraphs. I was not aware of that even half an hour ago. I was misled by the conjunction of
the illustrations JAlg129 provides as examples, which are digraphs
JAlg129 never saying explicitly that parallel arcs are allowed,
Until now I had only ever skimmed the accompanying article, and in skimming had mistaken the author’s use of “loop”, and his forbidding of “loops”, to mean that he forbids loops in the usual sense; but no, in the article the word “loop” is used in the unusual sense of “directed cycle”,
all of which led me down the garden path that JAlg129 works precisely with digraphs.
The essence of the whole project should not be affected by this terminological aberration though. Thanks again for pointing out this inaccuracy.
I have now stated the definition of pasting scheme (in Power’s sense), and stated his pasting theorem, over at pasting scheme.
Todd,
Re 5,
The formulation via inhabited hom-sets of the free category generated by the quiver is very nice (I think).
One small thing right away: to me it seems inadvisable to adopt the author’s terminology “source” and “sink”. I have given reasons elsewhere, I think, but here they are again: there are two usual meanings of “source” and “sink” nowadays, both unrelated to JAlg129’s “source” and “sink”, unrelated already for the strong reason that a JAlg129-source is a global notion, in that you have to know all of the quiver in order to decide whether a specified vertex $s$ is a JAlg129-source, while both of the following usual variants are local notions, in that you only need to know the immediate neighborhood of $s$ in order to decide whether it is a source. In the following, I omit the dual notions, for brevity.
Usual “source” in digraph-theory $=$ vertex without any in-neighbors $=$ vertex with in-degree zero. (cf. Bang-Jensen–Gutin, 2nd ed. p. 4 or Schrijver, Volume A, p. 30)
Usual “source” in flow-theory $=$ vertex at which the balance-vector of the flow has a strictly positive value (cf. B-J–G, p. 129)
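To make the local/global contrast concrete, here is a small sketch (the function names `local_sources` and `global_sources` are mine; the second implements JAlg129’s global notion, i.e. every vertex is reachable from the candidate):

```python
# Local source: in-degree zero. Global (JAlg129) "source": every vertex reachable.
from collections import defaultdict

def local_sources(vertices, edges):
    indeg = {v: 0 for v in vertices}
    for s, t in edges:
        indeg[t] += 1
    return {v for v in vertices if indeg[v] == 0}

def global_sources(vertices, edges):
    succ = defaultdict(list)
    for s, t in edges:
        succ[s].append(t)
    def reachable(v):
        seen, stack = {v}, [v]
        while stack:
            for w in succ[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    return {v for v in vertices if reachable(v) == set(vertices)}

# A directed 3-cycle separates the notions: no local source, yet every
# vertex is a global source.
V, E = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
print(local_sources(V, E), global_sources(V, E))  # set() {0, 1, 2}
```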
I seem to recall that someone working in digraph-theory confirmed that they think using “source” for “vertex from which any other vertex is reachable” might confuse people.
I have an email from someone saying he finds my suggestion of using “king” the right choice, and “intuitive”. The only reason against “king”, it seems, is that the word “king” can be mildly offensive to some, yet to me the pros seem to outweigh the cons here. Being totally true to the JAlg129 should not be a criterion, it seems, e.g. you took the liberty to use the hom-sets of the free category, which is not to be found in JAlg129.
Using “$\omega$-king” instead of source, and $\omega$-coking instead of “sink”, to me, seems the right technical term, in tune with the $k$-kings of digraph-theory.
to me it seems not advisable to adopt the author’s terminology “source” and “sink”.
…
Using “$\omega$-king” instead of source, and $\omega$-coking instead of “sink”, to me, seems the right technical term, in tune with the $k$-kings of digraph-theory.
Myself (without literature justification), I’ve used the terminology root and coroot to mean elements mapping to initial and terminal elements of the quiver’s skeleton. One might have to explain that these are not necessarily unique. I see no need to involve Peter’s digraph terminology when digraph theory is not directly involved.
Thanks for the responses. I don’t much care for the “$\omega$-(co)king” terminology, partly for aesthetic reasons and partly because unlike “source/sink” it doesn’t immediately suggest its intended meaning. (The best I came up with is that a king in chess can move to any neighboring square.) “Root” and “coroot” I like a little more aesthetically, but there too it’s not immediate from the word that arrows point out, not in, at a root.
I’m inclined to make up terminology, e.g. call them “global source” and “global sink”. I’d like it even more if it were a single apt word. Open to other suggestions.
Re 7 and 8
thanks for the comments.
Personally (but I’m just repeating myself here), I think $\omega$-(co)king is the right term.
Re:
because unlike “source/sink” it doesn’t immediately suggest its intended meaning. (The best I came up with is that a king in chess can move to any neighboring square.)
To me, it’s the opposite, and the intuitive meaning of source is just the local one. But this is terminology/psychology.
Re the rationale for “king”: to describe this would probably do a disservice to the proposed term.
But of course, the term should more or less be decided by consensus.
The only important thing is that, negative though this may be, I really caution against “source” and “sink” here. Of course, this would not be wrong; it is just confusing, to about the same degree as any terminological confusion: not harmful if readers read up on all the terms and do not let themselves be misled by their prior experience, but not in tune with current usage.
The strongest reason (to my mind) is that there is highly relevant graph-theoretical literature, even on progressive planar acyclic digraphs, in which “source” and “sink” are invariably used only for the local notion.
In particular, there is a notion of “source-sink-connected digraph” ($=$ digraph in which for each source $s$ and each sink $t$ there is at least one directed $s$-$t$-path), and herein, “source” and “sink” of course mean only the local variant. Incidentally, with the global variant this notion would be vacuous:
“digraph in which for each global-source s and each global-sink t there is at least one directed s-t-path” = “digraph”
Because of this, sticking to “source” and “sink”, in particular if parenthetically defined as in the current pasting scheme, would be really confusing to readers coming from a more combinatorial background. (Of course, this sociological issue is the only thing that is ’wrong’ with “source” and “sink” here.)
Literature examples for the usual “source-sink-connected” meaning: here
Also, root of course usually means a distinguished vertex $=$ specified map $1\rightarrow V$, so root also seems not the best term.
(Todd, a small thing re the term “progressive”, which I had never seen before, and which seems a rather rare usage: a usual term in combinatorics for this, in particular in the graph-drawing research literature, is upwards planar, and there is much literature on it.
Currently, strict 2-category contains an en-passant mention of “progressive”, as if it needed no definition.
Spending half a sentence in strict 2-category defining “progressive”, and mentioning the technical term “upward planar”, would to me seem an improvement, and could lead to progress by way of researchers in graph-drawing theory picking up on the topic of string diagrams.)
Two relevant references:
A very recent one
Rod, Todd,
it occurred to me that
and this reason is also something of an answer to
“Root” and “coroot” I like a little more aesthetically, but there too it’s not immediate from the word that arrows point out, not in, at a root.
in 8. Reason:
The choice of the spanning out-tree seems arbitrary, but if one such tree is chosen, then there is the root of this tree.
Therefore, I am now for
“(which we call the root and the coroot, respectively, these being the roots of a spanning out-tree or a spanning in-tree, respectively; at least one such tree exists and can be found by a depth-first search starting at the respective vertex).”
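The depth-first-search construction just alluded to can be sketched quickly (names mine): starting a DFS at a global source and recording, for each vertex, the edge along which it was first discovered yields a spanning out-tree whose root is that global source.

```python
# A depth-first search from a global source yields a spanning out-tree;
# the global source is then literally the root of that tree (sketch).
def spanning_out_tree(root, succ):
    """Return a parent map encoding a spanning out-tree rooted at `root`,
    assuming every vertex is reachable from `root`."""
    parent, stack = {root: None}, [root]
    while stack:
        v = stack.pop()
        for w in succ.get(v, []):
            if w not in parent:
                parent[w] = v
                stack.append(w)
    return parent

succ = {"r": ["a", "b"], "a": ["b"], "b": []}
tree = spanning_out_tree("r", succ)
print(tree)  # {'r': None, 'a': 'r', 'b': 'r'}
```

The dual statement (spanning in-tree rooted at a global sink) follows by reversing all edges.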
(Todd, Re 10 I should clarify that my suggestion is to mention “upwards planar” as an alternative for “progressive in the sense of Street”, not as an alternative for “progressive as in the analogous new variant meaning of progressive in pasting scheme”. For the latter, which per se does not have anything to do with planarity, a good term remains to be found.)
FWIW, source and sink have another category-theoretic meaning too.
What about weakly initial/final?
David, I thought about that too (referring to the free category $F(Q)$ on the quiver). I’m still hoping for something snappier.
As another data point: MathWorld speaks of local sinks vs. global sinks, and says the latter is often shortened to ’sink’. Harary is cited.
(Re
MathWorld speaks of local sinks vs. global sinks
FWIW: wow. But you see from my surprise: personally I still think this usage is very rare (and confusing); the Wolfram link you gave is only the second instance, after JAlg129, that I can remember having encountered this use of “sink”.)
(Todd, and please remember: with the term sink that Mike kindly reminded us of in 13, the count now stands at 3 for usual meanings of “sink”, one each in digraph theory, flow theory, and category theory. I think that this speaks against overloading “sink” even more.
Incidentally, sieve and sink seem not to know one another, let alone talk. The most basic connection of all: each sieve is a sink (EDIT: this is wrong if “sieve” is taken to mean a sieve as in Definition 2.1 of sieve, version November 13, 2016 00:25:34; it is true, though, if “sieve” is interpreted as a sieve on a single specified object, which is still a rather common use of “sieve” in the literature ENDOFEDIT), but not each sink is a sieve. I for one had known sieve for a long time, but not sink before Mike’s comment, probably because the two articles don’t communicate. My natural impulse is to suggest changing the emphasis in sink: currently it stresses that a sink is a collection of morphisms with common codomain for which smallness is not required, whereas the important thing seems to be that closedness with respect to precomposition is not required. (Incidentally, sieve mentions only the term “codomain”, while sink emphasizes “target”; this seems mismatched; in an ideal world at least, sieve and sink would correspond wherever they reasonably can when read side by side.) The emphasis on size seems the wrong emphasis. I might be missing a reason why it is presented this way, needless to say.)
The term “global sink” is used here. I see nothing wrong with it.
Sink is also used in dynamics and differential equations. So what? There is nothing to fear from “global sink”. One introduces the term on the page, the context is circumscribed, no one gets confused.
Anyway, I’m not asking permission. If someone has other suggestions, I’m still listening (I don’t plan on using “$\omega$-coking”, though).
Regarding #18, of course if you are comparing a sink to a sieve then precomposition is one important difference. However, the relevant thing to compare it to (as regards the way sinks are used) is not a sieve but rather a cone.
Your comment does point out to me that the page sink was wrong, or misleading, in one way, namely that where it said “collection” it should really say family, i.e. an indexed family. I’ve fixed it. This is another important difference between a sink and a sieve, and so the concepts are really so different that I don’t think it is worth interlinking them.
The question of overloading terminology is a tricky one. I’ve never thought about this explicitly before, but I think once a word has enough similar but distinct meanings, then giving it a new meaning that still falls in the same “similarity class” is perfectly acceptable, and indeed may be less confusing than introducing a totally new term, since the reader who has seen the word before will have some intuition for what it suggests, yet (at least if they have seen it in multiple places before) not expect the new meaning to coincide exactly with any they have seen before. Here “sink” is an example, but other examples I can think of in category theory include “cartesian” (cartesian square, cartesian morphism, cartesian transformation, cartesian monad) and “functor” (in modern terminology, the morphisms between categories, bicategories, etc. are often all called simply “functors”).
Actually, even more importantly, the morphisms in a sink must all have the same codomain, which is certainly not true of a sieve. So it’s not true that every sieve is a sink, hence there is no implication between the two.
I’ve been speaking with a lot of people who are working with pasting schemes recently, and they are unanimous that Steiner’s presentation is by far the absolute most useful and easy to work with. It is a modernization and improvement of all of the other previous methods (Street’s parity complexes, Crans’s pasting schemes, etc). It has also been used to generalize and extend a ton of the constructions of Street and others. I propose that it get a more prominent mention than the absolute bottom of the page.
Parity complexes, Power’s definition of pasting schemes, Crans’s versions, etc. are nowadays pretty much of only historical interest. Computads/polygraphs are what you want to use when you have loops, and if you’re loop-free, you want to be working with directed complexes.
That’s very interesting, Harry. Would you be able to work up something for directed complex?
Mike, re 22
importantly, the morphisms in a sink must all have the same codomain, which is certainly not true of a sieve. So it’s not true that every sieve is a sink,
many thanks for this comment. I do not quite understand it, though. All definitions of “sieve” that I have encountered rather explicitly imply that a sieve is a collection of morphisms all of which have the same codomain (and the collection has the relevant ideal-like property). Giving literature references here seems to make too much of the point.
The explanation however probably is simply this
you were referring to the current nLab definition of “sieve” simpliciter at sieve,
you are right that my claim “each sieve is a sink” is wrong with “sieve” interpreted to mean “sieve” simpliciter,
most literature references only ever define “sieve” within the composite “sieve on a specified object $o$”,
So yes, I was a victim of a dicto secundum quid ad dictum simpliciter here, sorry.
I am not asking for explanations, it seems clear that with “which is certainly not true of a sieve” you were referring to the most general nLab definition, not the most common one, so strictly speaking you were right. I will edit my comment accordingly.
Todd, re 15,
I’m still hoping for something snappier.
I am only suggesting the following because of this comment of yours, in view of the possibility that you like the suggestion, and since one can take the view that a “totally new term” is the way to go (I recognize that this is opposite to what Mike is suggesting; I tend to agree with Mike’s conceptual comment, yet am then led by Mike’s suggestion not to “global source” but rather to “outroot” and “inroot”, which are a refinement of Rod’s suggestion and are my current favourites):
Personally, “glink” and “glource” are not my favorites.
Todd, Mike, re Mike’s general ideas on whether to overload or not,
I think one should keep two ’dimensions’ separate
the more contingent/statistical one of what is currently frequently used
the less contingent ’dimension’ of what is the intuitive content of the term.
To my mind, “sink” has something against it in both these dimensions, but the “intuitive content” dimension is where “sink” scores particularly low: to me, a sink is in particular something out of which not much comes, yet, evidently, for any cardinal number $\kappa$ there exists a quiver $Q_\kappa$ containing a ’global sink’ such that
the out-degree of said sink is $\kappa$,
the in-degree of said sink is $1$.
Example: the quiver with vertex set $V=2\sqcup\kappa$, edge set $\{ a_i \colon i\in\kappa \}\sqcup \{ b_i\colon i\in\kappa \} \sqcup \{c\}$, and images of the walking-quiver morphisms given by, for all $i\in\kappa$: $s(a_i)=1$, $t(a_i)=i$, $s(b_i)=i$, $t(b_i)=0$, $s(c)=0$, $t(c)=1$.
Here, $1\in V$ is a ’global sink’ with in-degree 1 and out-degree $\kappa$.
Now that’s a ’global sink’ in the sense of e.g. arXiv:0801.3306v4, yet seems counter-intuitive.
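For a finite instance, here is the quiver above with $\kappa = 3$ (encoding mine), confirming that vertex $1$ is a ’global sink’ of in-degree $1$ and out-degree $3$:

```python
# Finite instance (kappa = 3) of the quiver Q_kappa described above:
# vertices {0, 1} and v_0, v_1, v_2; edges a_i: 1 -> v_i, b_i: v_i -> 0, c: 0 -> 1.
from collections import defaultdict

kappa = 3
V = [0, 1] + [f"v{i}" for i in range(kappa)]
E = ([(1, f"v{i}") for i in range(kappa)]     # a_i
     + [(f"v{i}", 0) for i in range(kappa)]   # b_i
     + [(0, 1)])                              # c

out_deg = sum(1 for s, t in E if s == 1)
in_deg = sum(1 for s, t in E if t == 1)
print(out_deg, in_deg)  # 3 1

succ = defaultdict(list)
for s, t in E:
    succ[s].append(t)

def reaches(v, target):
    """Is `target` reachable from `v`?"""
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        if u == target:
            return True
        for w in succ[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

print(all(reaches(v, 1) for v in V))  # True: 1 is a 'global sink'
```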
If you adopt ’global sink’, I think it is a fact that this will be a term validating the red herring principle to a rather extreme, indeed quantifiable, degree: a ’global sink’ is not necessarily a ’sink’, and can be arbitrarily far from one degree-wise.
[removed philosophizing comment]
Todd,
Yeah, sure. I have to go back through it and re-familiarize myself with it all anyway, since I’ve been away from mathematics for a spell. Also, Ara and Maltsiniotis, in a really amazing paper that gets us the elusive $\omega$-join (which had been as elusive as the lax tensor product until Steiner), tighten the conditions on Steiner’s complexes to ones with a strong Steiner system. I’m aware only of their result; I haven’t had time to sit down and read through the proof.
Basically, here’s why they are magical:
If you have two directed complexes $A,B$ a la Steiner, the underlying chain complex of the lax tensor product is just
$A\otimes B$
If you have two augmented directed complexes $A,B$ equipped with strong Steiner systems a la Ara-Maltsiniotis, the underlying augmented chain complex of the join is given by
$\Sigma^{-1}(\Sigma A \otimes \Sigma B)$
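For what it’s worth, the chain-level mechanism behind this is the usual Koszul-signed boundary on a tensor product, $d(a\otimes b) = da\otimes b + (-1)^{\deg a}\, a\otimes db$; here is a minimal sketch over basis elements (the dict representation is mine):

```python
# Boundary of a tensor of basis chains: d(a (x) b) = da (x) b + (-1)^{deg a} a (x) db.
# Chains are dicts {basis_element: integer coefficient}.

def tensor_boundary(a, da, b, db, deg_a):
    """Return the boundary of a (x) b as a dict keyed by (x, y) pairs."""
    out = {}
    for x, cx in da.items():                 # da (x) b
        for y, cy in b.items():
            out[(x, y)] = out.get((x, y), 0) + cx * cy
    sign = -1 if deg_a % 2 else 1            # Koszul sign (-1)^{deg a}
    for x, cx in a.items():                  # +/- a (x) db
        for y, cy in db.items():
            out[(x, y)] = out.get((x, y), 0) + sign * cx * cy
    return out

# Two 1-cells e, f with d(e) = t_e - s_e and d(f) = t_f - s_f:
d = tensor_boundary({"e": 1}, {"t_e": 1, "s_e": -1},
                    {"f": 1}, {"t_f": 1, "s_f": -1}, deg_a=1)
print(d)  # {('t_e','f'): 1, ('s_e','f'): -1, ('e','t_f'): -1, ('e','s_f'): 1}
```

This is the sense in which a pair of 1-cells produces a single 2-cell $e\otimes f$, with the sign alternation supplying the orientation data.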
I’m aware that Crans gave a definition of the lax tensor product in his thesis using his generalization of Power’s pasting schemes, but have you ever tried reading the paper? It’s basically the most confusing thing I’ve ever seen in my life. In Steiner’s version, you know that this is the right notion and pretty much has to be, because the lax tensor product, if you really think about it, just has to behave like the tensor product of chain complexes. It’s the only product I know of in mathematics that takes a pair of 1-cells to something containing one 2-cell. Also, the boundary map in chain complexes always splits the boundary into positive and negative parts. It’s just such an obviously correct presentation of loop-free pasting schemes, much of it for free.
If you’re doing pasting computations and you’re not using directed complexes, you’re missing out. It reduces a lot of these pasting problems to linear algebra.
(Harry, Todd: incidentally, since it can be stated in two lines, Harry’s comment reminded me of it, and someone reading this may like to solve it: I think it is still a difficult open problem whether Crans-semistrict 4-categories are equivalent to Trimble tetracategories.)
To whom it may concern: I plan to write something summarizing and conceptual about pasting theorems, yet to prevent myself from burying the following apparently unaddressed specific question in more general comments:
(One could imagine some pasting theorems being nicer mathematically while at the same time harder to apply decision-problem-wise.)
I am not asking this in the negative hope of undecidability results (as in undecidability of the word problem), but rather in the positive hope for practical algorithms to decide whether a given higher-dimensional expression is sensical. (Of course, by brutally cutting down on the admissible structures $G$ one can artificially get such algorithms; the question is about the more usual pasting theorems on record, which allow quite a large class of expressions.
EDIT: and of course, the complexity is at least as high as deciding acyclicity of a given finite digraph, yet this is not much of a barrier, since, roughly speaking, the complexity of this is linear in the size of the structure, and also practically so, no huge hidden constants here.)
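For reference, the linear-time acyclicity check alluded to is Kahn’s algorithm (sketch): repeatedly remove vertices of in-degree zero; the digraph is acyclic iff every vertex gets removed.

```python
# Kahn's algorithm: linear-time acyclicity check for a finite digraph.
from collections import deque

def is_acyclic(vertices, edges):
    indeg = {v: 0 for v in vertices}
    succ = {v: [] for v in vertices}
    for s, t in edges:
        indeg[t] += 1
        succ[s].append(t)
    queue = deque(v for v in vertices if indeg[v] == 0)
    removed = 0
    while queue:
        v = queue.popleft()
        removed += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return removed == len(vertices)  # acyclic iff all vertices removed

print(is_acyclic([0, 1, 2], [(0, 1), (1, 2)]))          # True
print(is_acyclic([0, 1, 2], [(0, 1), (1, 2), (2, 0)]))  # False
```

Running time is $O(|V| + |E|)$ with small constants, which is the practical point made above.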
re 29: Harry, you note: ’It’s the only product I know of in mathematics that takes a pair of 1-cells to something containing one 2-cell.’ There is also the closely related Brown–Loday tensor product of crossed modules. This is best viewed as taking its values in the category of crossed squares. I think also that ordinary tensor products of chain complexes do something similar, depending on how you look at them.
I tried for some time to construct a tensor product of simplicially enriched groupoids, but the complexity of the Dwyer-Kan way of going from simplicial sets to such gadgets made the formulae unlikely to give rise to an Eilenberg-Zilber theorem which is why I needed/wanted them.
(To whom it may concern, and sort-of-re comment 23: does anyone think that the fact that every bicategory is equivalent to a strict 2-category can fruitfully be used to simplify a presentation of the topic of pasting theorems up to and including bicategories? I have not seen this aspect stressed in any of the pasting-theorem papers, in particular not in Steiner’s, which have unifying aspirations. Since there exists a tricategory not equivalent to any strict 3-category, committing oneself to such an approach does not scale, as they say, yet it may have some pragmatic value. Something like writing an exposition of the most general pasting theorem for strict 2-categories (what that is remains to be determined), then using strictification theory to derive a pasting theorem for bicategories from it?)
Re #29, #32: I’m not sure what Harry meant either. Wouldn’t the category of cubes and its free cocompletion cubical sets qualify as well?
@Tim I tried the exact same idea in 2011 or 2012 with the Dwyer-Kan/Eilenberg-Zilber comparison, inducing another tensor product on directed complexes, and I couldn’t get anything useful or interesting out of it.
Also, I agree that it’s no surprise that the Brown-Loday tensor product also looks like a tensor product of chain complexes. I do find it strange that it’s popping up here, but it works! I once tried asking a question on Mathoverflow about ’why’ it works, but I couldn’t come up with anything that didn’t sound stupid. I’ll have to look into this crossed squares idea. Maybe it will shed some light!
@Todd Chain complexes have a certain pattern to their tensor product with an interval, which you should just be able to recognize. Basically, you get a suspended copy of the thing you’re tensoring with the interval in the middle of the cylinder, and it alternates orientation forwards and backwards based on dimension (because the sign of the righthand chain complex’s boundary map alternates). The simplest example of such a tensor product that has that property is the tensor product of chain complexes of Abelian groups, like you saw in your very first algebraic topology class. I don’t know of a more elementary or well-known example of a tensor product with that property. The tensor product of cubical sets has a property of going up in dimension as you say, but it doesn’t have that weird alternating property on the orientation, as far as I know at least. Also cubical sets are hard to use to model lots of different shapes of pasting diagrams, but what do I know.
Hi Harry,
My last comment on cubes was in response to this sentence:
It’s the only product I know of in mathematics that takes a pair of 1-cells to something containing one 2-cell.
Signs and orientations are of course a different matter. Ross Street also discussed products of parity complexes in which the sign conventions familiar from products of chain complexes come into play, and this applies for example to parity complex structures on cubes. The “magic” mentioned in #29 applies also to parity complexes and the functor which takes a parity complex to the chain complex associated with it.
Why are parity complexes considered now to be only of historical interest, and what are the decisive advantages of directed complexes over them?
Hi Todd,
The problem with parity complexes is that they’re just tough to work with. Morphisms of directed complexes are actual on-the-nose chain homomorphisms of free abelian groups that restrict to actual on-the-nose commutative-monoid homomorphisms. The boundary maps are compatible, and the monoid structure of positive elements allows you to distinguish between sources and targets (positive elements (in the monoid) are sources, negative ones are targets). Everything you can do with a parity complex, you can also do with a directed complex, but the additional structure makes things much easier to work with, reducing it to the linear algebra of chain complexes of finite free abelian groups. That’s why Steiner was able to construct the lax tensor product so easily for directed complexes, and why Dimitri Ara and Georges Maltsiniotis were equally able to easily construct the (op)lax join of strict $\omega$-categories with them (because these constructions happen to be related to the respective underlying operations on chain complexes).
Parity complexes offer no advantages and significant disadvantages, but hey, use what you like I guess. To build the orientals with parity complexes, you have to figure out some difficult combinatorics. For directed complexes, it comes from the familiar chain complex associated with a representable n-simplex.
Thanks, Harry: that’s helpful. Actually my experience is that indeed parity complexes are not easy to work with (for example verifying the axioms in nontrivial cases), and the combinatorics surrounding them, as in Street’s papers, are also not easy to read.
But I don’t think I ever really knew the definition of directed complexes. Can we just state that easily here and now?
@Todd: Sure:
An augmented directed complex (ADC) is a chain complex of abelian groups $K_n$ concentrated in nonnegative dimensions, together with an augmentation $e \colon K_0 \to \mathbb{Z}$ and a distinguished submonoid $K^*_n \subseteq K_n$ in each dimension.
A morphism of ADCs is a homomorphism of chain complexes $f:K\to L$ such that $f(K^*_n) \subseteq L^*_n$.
From there, Steiner moves on to the case of ADCs with a basis, then continues classifying more nice properties that the basis should have, so in particular, in practice, the chain complexes in question are always chain complexes of free abelian groups. The elements of the distinguished submonoid play the role of “nonnegative elements” vis-a-vis the basis. Using the submonoid, this gives us a partial order on the abelian group $K_n$ where $y\geq x$ if and only if $y-x \in K^*_n$.
Then we want conditions on bases relating to loop-freeness, which amounts to being able to refine the order structure mentioned above in different ways.
Eventually, Steiner shows that the ADCs with unital strongly loop-free basis form a full dense subcategory of strict $\omega$-cat.
There’s quite a bit more structure to it than parity complexes, but the additional structure makes keeping track of everything a dream in comparison.
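For a degree with a chosen basis, one can take $K^*_n$ to be the nonnegative combinations of basis elements; the induced order $y \geq x$ iff $y - x \in K^*_n$ then becomes a coordinatewise check (a sketch; the dict encoding of chains is mine):

```python
# Order induced by the positivity submonoid on a free abelian group with basis:
# y >= x iff y - x lies in the submonoid of nonnegative combinations, i.e. iff
# every coordinate of y - x is nonnegative.
# Chains are dicts {basis_element: integer coefficient}.

def leq(x, y):
    """x <= y in the order induced by the positivity submonoid."""
    keys = set(x) | set(y)
    return all(y.get(k, 0) - x.get(k, 0) >= 0 for k in keys)

print(leq({"a": 1}, {"a": 2, "b": 1}))  # True:  difference a + b is nonnegative
print(leq({"a": 2}, {"a": 1, "b": 3}))  # False: difference has coefficient -1 on a
```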
The paper in question:
Richard Steiner, Omega-categories and chain complexes, Homology, Homotopy and Applications, vol. 6(1), 2004, pp. 175–200 (communicated by Ronald Brown).
The paper of Dimitri Ara and Maltsiniotis is on the arXiv and is something along the lines of “Joins and slices for strict $\omega$-categories” (in this case they mean (op)lax).
Very helpful indeed, Harry. Thanks once again. I’ll have a look.
FWIW, the ordinary cartesian product of topological spaces also takes a pair of 1-cells to a 2-cell. As does Gray tensor product of 2-categories, the smash product of spectra, the cartesian product of simplicial sets (well, there the product has two 2-simplices, but the topological content is the same), etc. etc. This sort of thing is all over the place.
@Mike It’s hard to describe, but I mean products that increase dimension while keeping an orientation, up to a sign. The cartesian products of simplicial and cellular sets do not carry orientation data in their riffles, nor do products of topological spaces.
The only cases that truly do compare are the (lax) Gray tensor product of 2-categories (obviously a special case) and the smash product of spectra (I think closely related to the tensor product of chain complexes by the correspondence between A-infinity-modules and dg-modules).
@Todd: I was showing a family member some of your old tetracategory diagrams, and I reread the attached introduction. In that introduction, you mentioned the difficulty of working with pasting diagrams (I assume parity complexes). Would something like this help at all for formulating the coherence diagrams (perhaps in a more computable form than gigantic 4-dimensional commutative diagrams patched together with parity complexes)?
In particular, if there is a natural way to represent the Stasheff polytopes as chain complexes, you might end up with a crazy result (like how the orientals fall out for free by letting the basis at each level $0\leq k\leq n$ be the nondegenerate k-simplices of $\Delta^n$) that gives exactly the correct orientation (though this is not much more than speculation at this point).
Harry, yes it might help.
There are undoubtedly many parity/directed complex structures $S_n$ that can be put on the associahedra $K_n$, but I never developed a complete set of rules that would specify more than a few terms of such a sequence $S_n$. The cellular structure of the higher associativity and higher unit structures has long been known to me, but sensible choices of the orientations are still somewhat mysterious to me, except for a few ad hoc rules.
When I say there should be many possible parity/directed complex structures, I mean for example that if we have a description of the $K_{n+2}$ as polyhedra in $\mathbb{R}^n$ (as done by e.g. Loday), then one might be able to read off negative and positive boundary cells relative to a generic choice of coordinates (a basis $e_1, \ldots, e_n$ ) for $\mathbb{R}^n$. So let’s see, we could for example call an $(n-1)$-dimensional face $c$ of $K_{n+2}$ “negative” if every line $f: t \mapsto v_1 + t e_1$ that intersects $c$ does so at the point $f(s)$ where $s = \min \{t: v_1 + t e_1 \in K_{n+2}\}$, and positive if $s = \max \{t: v_1 + t e_1 \in K_{n+2}\}$. (Some genericity of the basis is used to ensure that every $(n-1)$-dimensional face is transverse to line segments parallel to $e_1$.) Then we project each such cell $c$ onto the space spanned by $e_2, \ldots, e_n$, and proceed inductively to define which cells on its boundary $\partial c$ are negative and which are positive.
My guess is that such a prescription does give a parity/directed complex structure, for any polyhedron in $\mathbb{R}^n$ (which I define to be a finite intersection of closed half-spaces which is compact and $n$-dimensional). If Steiner’s directed complexes give a reasonable way to interpret and prove that guess, I’d be interested!
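For convex polytopes, the min/max prescription above reduces, facet by facet, to the sign of the $e_1$-component of the outward normal: a line moving in the $+e_1$ direction enters through facets whose outward normal has negative first coordinate and exits through those with positive first coordinate. A 2-dimensional sanity check (the encoding and names are mine):

```python
# Classify facets of a convex polytope as "negative" (entry) or "positive"
# (exit) with respect to a generic direction e_1, via the sign of the first
# coordinate of each outward facet normal.
import math

def classify_facets(normals):
    """normals: dict facet_name -> outward normal vector (tuple).
    Genericity: no normal has first coordinate zero."""
    assert not any(n[0] == 0 for n in normals.values()), "direction not generic"
    neg = {f for f, n in normals.items() if n[0] < 0}
    pos = {f for f, n in normals.items() if n[0] > 0}
    return neg, pos

# Unit square rotated slightly, so that no edge is parallel to e_1:
theta = 0.1
rot = lambda v: (math.cos(theta) * v[0] - math.sin(theta) * v[1],
                 math.sin(theta) * v[0] + math.cos(theta) * v[1])
normals = {"left": rot((-1, 0)), "right": rot((1, 0)),
           "bottom": rot((0, -1)), "top": rot((0, 1))}
neg, pos = classify_facets(normals)
print(neg, pos)  # {'left', 'top'} {'right', 'bottom'}
```

Whether iterating this projection inductively always yields a valid parity/directed complex structure is exactly the guess above; the sketch only handles the top-dimensional step.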
No idea. I do know that Dimitri Ara wrote a small Haskell program when he was trying to find a “non-orientalable” object of $\Theta$ that was able to check for loops while brute-forcing orientations. He told me he found such a counterexample around four years ago (it was never published because the avenue of research he had been pursuing ended up not working at a later stage, something involving normalized lax $n$-functors), so as far as automating computations goes, it might be useful. I don’t even know how you’d do those kinds of computations with parity complexes.
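Just to note, the loop-checking half of such a program needs nothing category-specific: once a candidate orientation is fixed, “no loops” is plain cycle detection in a finite directed graph, for example by a three-colour depth-first search. A generic sketch (I have not seen Ara’s program; this is only the standard algorithm):

```python
def has_cycle(succ):
    """Cycle detection in a finite directed graph {node: successors}
    by three-colour depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}

    def visit(v):
        colour[v] = GREY
        for w in succ.get(v, ()):
            c = colour.get(w, WHITE)
            if c == GREY:
                return True          # back edge: a directed loop
            if c == WHITE and visit(w):
                return True
        colour[v] = BLACK
        return False

    return any(colour.get(v, WHITE) == WHITE and visit(v) for v in succ)

acyclic = {"x": ["y"], "y": ["z"]}   # no loops
cyclic = {"x": ["y"], "y": ["x"]}    # a loop
```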
I was just wondering if the associahedra already had a description as a chain complex, haha. But life is never so easy, I guess!
Edit: He was trying to find an example of an object $[t] \in \Theta$ that couldn’t be turned into a higher-dimensional oriental. That is, there is no extension of the oriental functor $\Delta \to \omega\operatorname{-cat}$ to $\Theta$.
Edit 2: Do you know a source that has a drawing of the oriented $K_5$?
Re edit 2: well, there’s page 3 here.
Ah, excellent, thanks! But yeah, it doesn’t look like there’s a straightforward way that this gets turned into a chain complex by magic, haha. I was sort of hoping someone might have given a chain complex description of it, say as some kind of simplicial complex.
It’s good to see you back, Harry.
Just out of a general interest: Where is the work on strict $\omega$-categories that you mention headed? I know people in “rewriting theory” regard strict $\omega$-categories as a tool for their purposes, but this is not what you are after, is it? Is there some tacit argument/hope that the constructions on strict $\omega$-categories are going to be useful for the case of genuine higher categories?
Re 48
Where is the work on strict $\omega$-categories that you mention headed? I know people in “rewriting theory” regard strict $\omega$-categories as a tool for their purposes, but this is not what you are after, is it?
A perception of mine is that one direction in recent work on strict $\omega$-categories is
This view of mine is somewhat supported by
the additional structure makes things much easier to work with, reducing it to linear algebra of chain complexes of finite free abelian groups.
in 37. Correct me if you think this is clearly a misapprehension.
This comment does not claim that this is the direction where the theory of $\omega$-categories is headed.
I also recognize that this comment does not say anything about the question
Is there some tacit argument/hope that the constructions on strict $\omega$-categories are going to be useful for the case of genuine higher categories?
in 48.
@Urs We already have a genuine model of weak omega-categories now, but the main problem in extending the theory to be as useful as the $(\infty,1)$ case is understanding things a lot better in the strict case. A big motivation for this is that Joyal and I seem to both have hit on a new comparison functor between the weak and strict cases in the past two months independently, according to Andrea Gagna (it uses a modified adjunction of Metayer to produce “un-orientals”, then uses that as the basis for a new nerve construction). This new nerve fixes many of the apparent defects of the $\Theta$-quasicategories.
Also, Ara and Maltsiniotis have used Steiner’s theory to extend the theories of lax transfors and lax slices into the weak case as well, so it is already a richer theory than that of simplicial quasicategories. There’s a conference at Marseilles in September where we will all be comparing notes, and Joyal is staying in France for a month to work with Dimitri on making a concerted push forward on things like translating higher fibrations into a language that works with $\Theta$. In my opinion at least, straightening/unstraightening is the fundamental theorem of HTT, and if we have it for $\Theta$-quasicategories, there’s a lot of open country in front of us to advance upon.
Basically the strict case works as a staging area to figure things out, and we can then transfer them to the homotopical case by decoding things as lifts. Usually what’s true in the strict case is also true in the weak case, up to homotopy at least.
Dimitri and his student Andrea Gagna are also working heavily on a Thomason theorem for n-categories, so they also have direct interest in rewriting, but I’m no expert. It involves lax functors, joins, and transfors somehow though.
Registration for the conference is closed, but maybe email Dimitri if you want to go? He was able to register me after it closed too.
The conference is at CIRM: “Higher categories and rewriting”, Sept 24–29.
Also, Urs! I’m moving to Regensburg right after the conference. I don’t know where you are in Germany, but hit me up!
Also, Peter, no, that’s wrong. The Steiner complexes are just a very useful computational tool for working with “parity complexes” (you can, iirc, go between parity complexes and Steiner complexes by looking at a parity complex as the basis for a Steiner complex). The main difference is that you can do arithmetic to find the boundaries and whiskerings, and the tensor product and join of complexes happen to be exactly the (op)lax versions.
Everyone I talk to who has used Steiner complexes in a computation is confused how they went so long without hearing about them. They’re pretty awesome.
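To give a flavour of that arithmetic: everything lives in a chain complex of free abelian groups with a chosen basis, and the tensor product is the chain-level one with Leibniz signs, $\partial(x \otimes y) = \partial x \otimes y + (-1)^{|x|}\, x \otimes \partial y$. A toy sketch under that description (my own naming, nobody’s library), tensoring the directed interval with itself to get the boundary of the lax square:

```python
# Toy augmented-directed-complex arithmetic: basis elements are
# (degree, label) pairs, chains are dicts {basis element: integer coeff}.

def add(c1, c2):
    """Sum of two chains, dropping zero coefficients."""
    out = dict(c1)
    for k, v in c2.items():
        out[k] = out.get(k, 0) + v
        if out[k] == 0:
            del out[k]
    return out

def scale(s, c):
    return {k: s * v for k, v in c.items()}

def tensor_chain(c1, c2):
    """Tensor of two chains: degrees add, labels pair up."""
    out = {}
    for (p, x), u in c1.items():
        for (q, y), v in c2.items():
            key = (p + q, (x, y))
            out[key] = out.get(key, 0) + u * v
    return out

def boundary_tensor(x, y, d):
    """Leibniz rule: d(x (x) y) = dx (x) y + (-1)^deg(x) * x (x) dy."""
    return add(tensor_chain(d.get(x, {}), {y: 1}),
               scale((-1) ** x[0], tensor_chain({x: 1}, d.get(y, {}))))

# The directed interval: 0-cells a, b and a 1-cell e with d(e) = b - a.
a, b, e = (0, "a"), (0, "b"), (1, "e")
d = {e: {b: 1, a: -1}}

# Tensoring the interval with itself: the boundary of the square's 2-cell
# e (x) e comes out as b(x)e - a(x)e - e(x)b + e(x)a, the lax square.
sq = boundary_tensor(e, e, d)
```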
Everyone I talk to who has used Steiner complexes in a computation is confused how they went so long without hearing about them. They’re pretty awesome.
From what I recall from about 1993 or so, the older version of Steiner’s directed complexes was definitionally more complicated than the new algebraic version, so it could be that many people who had heard of directed complexes didn’t know there was a more pliable and conceptually simpler version which had been worked out in the meantime.
Ah yeah. I have that paper. It’s just as rough to use as the Power, Street, and Crans versions. He probably shouldn’t have called them something that sounds almost identical to the earlier one, when the new version is so much better.
(And this helps to explain why I at any rate hadn’t bothered trying to dig out the definition and put it on the nLab: I was behind the times!)
Harry, Todd,
there is a publication presenting a formal verification of “the theory of parity complexes”.
Some of you are very likely already aware of that.
It nevertheless seems on-topic to mention Buckley’s work, in view of
Re
nowadays pretty much of only historical interest
in 23
Why are parity complexes considered now to be only of historical interest, and what are the decisive advantages of directed complexes over them?
in 36
and
The problem with parity complexes is they’re just tough to work with.
and
Parity complexes offer no advantages and significant disadvantages, but hey, use what you like I guess. To build the orientals with parity complexes, you have to figure out some difficult combinatorics. For directed complexes, it comes from the familiar chain complex associated with a representable n-simplex.
in 37
and
experience is that indeed parity complexes are not easy to work with (for example verifying the axioms in nontrivial cases), and the combinatorics surrounding them, as in Street’s papers, are also not easy to read.
in 38.
An article like the one cited at the beginning of course does not contradict any of the opinions expressed about parity complexes, new though it is; quite the contrary. (One can argue it both ways, either emphasizing that parity complexes are ’only’ of historical interest, or that Buckley’s article shows that they are still of interest.) To my mind, such a work supports all of the opinions expressed (they are a consistent method, yet difficult to the point of necessitating machine help).
To me, it just seemed that a reader not very well versed in this would find it useful to be told of the existence of a verification effort for parity complexes.
Peter, I don’t understand what you’re talking about. Parity complexes might attract a formal verification effort because they are unwieldy, difficult to use, and easy to make mistakes with. Because ADCs have additional structure that is well understood, all of the difficult algorithms for excision and other things that you need to use for parity complexes become just graded arithmetic in free modules over the integers. They don’t need to be formally verified, because they are pretty straightforward to work with and the proofs are rigorous enough to satisfy any professional mathematician.
Harry, thanks for the informative reply!
We already have a genuine model of weak omega-categories now
Which one is this? Are you referring to weak complicial sets?
Has it meanwhile been proven that suitably restricted weak complicial sets are equivalent to (any of the many models for) $(\infty,n)$-categories, for given $n$?
@Urs: No, I’m talking about the Quillen-equivalent pair of model categories given by “complete $\Theta$-Segal spaces” (Rezk/Bergner) and $\Theta$-quasicategories (Joyal/Cisinski/Ara/Gindi). We don’t have a model yet for cellularly enriched categories, but one route I’m looking at is the easier comparison of first looking at complete Segal objects enriched in cellular sets (weakly-enriched categories). I think the fundamental theorem for these guys is going to be straightening/unstraightening, from which we can get representability, weak $\omega$-Yoneda, and all that good stuff.
Has it meanwhile been proven that suitably restricted weak complicial sets are equivalent to (any of the many models for) $(\infty,n)$-categories, for given $n$?
Also, yes, at least David Oury has done something like that using fragments of $\Theta_n$ for finite $n$. We also know exactly what we want to localize at to model $(\infty,n)$ in cellular sets: the embedding of the $n$-disk into the freestanding isomorphism of a pair of parallel $(n-1)$-disks (that is, the fibrant objects will end up being the ones where the projection of the space of $n$-equivalences of $X$ onto the space of all $n$-cells of $X$ is a trivial fibration of cellular sets). So a comparison between the $\Theta_n$-sets model structure and the localized $\Theta$-model structure should be a Quillen equivalence (I think this will be extremely straightforward to show).
Re 56:
Harry, thanks for the explanations.
Peter, I don’t understand what you’re talking about. Parity complexes might attract a formal proof of verification because they are unwieldy, difficult to use, and easy to make mistakes with.
Harry, I think this is just what I was trying to say in 55: formal verification efforts can be seen as evidence supporting your view on parity complexes.
I in turn do not quite understand why you are emphasizing
Because ADCs have additional structure that is well-understood, all of the difficult algorithms for excision and other things that you need to use for parity complexes are just graded arithmetic in free modules over the integers. They don’t need to be formally verified because they are pretty straightforward to work with and the proofs are rigorous enough to satisfy any professional mathematician.
since I did not say anywhere that ADCs were being formally verified, or in need of being verified.
Please do not take this as ’talking back to you’ or getting into an argument about who said what, I am just trying to understand.
I take it that you made the remark on ADCs to emphasize that ADCs are simpler and in no need of machine verification.
I recognize you did not say that I said ADCs are (in need of) formal verification.
Thanks for the explanations again.
Harry,
Re
Also, Peter, no, that’s wrong. The Steiner complexes are just a very useful computational tool
Thanks for the explanations.
Could you please briefly say whether you mean ’wrong in the sense of this not being the main thrust of current research on strict $\omega$-categories’, or ’wrong in the sense that the scheme
to find some perceived-to-be-natural full subcategory $C$ of the category of strict $\omega$-categories, then show that $C$ is equivalent to some even more perceived-to-be-natural category $D$ which is traditionally equipped with extra structure $S$, and then let $S$ inform the research on $C$, or even on all strict $\omega$-categories. This is the important theme of avoiding making arbitrary definitions. (paradigm)
I outlined in 49 has something essentially mathematically wrong with it’?
I take it that
Since I am trying to keep it shorter, I will not try to summarize why I think (example) is an example of (paradigm), unless asked to do so, in particular since you probably all know this well.
I would appreciate something like ’Yes, it roughly fits the scheme, but simplicial nerves of omega-categories are ten years old and just not to be considered a new direction’, or something like that.
Dominic Verity and Emily Riehl are the only people I know working with weak complicial sets, and I don’t know anything about it. Not to cast aspersions, but I am personally allergic to the way that stratified simplicial sets work.
That’s the simplicial nerve route.
There is a Theta-nerve route that works pretty well by building a homotopy theory of its presheaves (or simplicial ones), but $\Theta$ is a much more complicated Reedy category than the simplex category. It’s not generated by the join operation, and the join is not monoidal in $\Theta$ itself (you have to pass to these Steiner complexes for the actual monoidal category). It has to be more complicated, because every object of $\Theta$ represents the composite of some directed family of composable cells in a strict omega-category.
However, the associated Theta-nerve functor does not work, because it doesn’t translate higher equivalences in the folk model structure on strict omega-cats. So for a while we thought this was a big problem, but it is actually an important property, since if it did do so, it would mean our theory only captured strict functors rather than pseudofunctors.
I know Mitch Buckley personally and he wrote that paper during his PhD thesis because he was interested in Coq, at Macquarie university (where Street is) and needed a relevant project. It’s not an active arm of research as far as I can see (unlike, for comparison, the UniMath or Mathematical Components projects).
David, Harry, Todd, Urs, thanks for these comments, they were helpful, to me at least.
While I think it nearly impossible that this is new to you, the following can be stated in two lines, and it seems almost remiss not to add it at this point of the discussion, being so new and relevant: there is this two-month-old MO thread on comparing various approaches to higher-dimensional pasting.
It’s not an active arm of research as far as I can see (unlike, for comparison, the UniMath or Mathematical Components projects).
By “it” I meant formalisation of pasting schemes of any sort; this should be clear from my comparison to the other large-scale formal proof projects.
@Peter Thanks for the heads up! I went to the MO question and threw in my tuppence ha’penny.