In talking and thinking about second derivatives recently, I've been led towards a different point of view on higher differentials. We (or Toby, really) invented cogerm differential forms (including cojet ones) in order to make sense of an equation like

$$d^2 f = f''(x)\,dx^2 + f'(x)\,d^2x,$$

with $dx^2 = (dx)^2$ an object that's actually being squared in a commutative algebra. But it seems to be difficult to put these objects together with exterior $n$-forms for $n \geq 2$ in a sensible way. Moreover, they don't seem to work so well for defining differentiability: the definition of the cogerm differential in terms of curves is an extension of directional derivatives, but it's not so clear how to ask that a function (or form) be "differentiable" in a sense stronger than having directional derivatives.
The other point of view that I'm proposing is to use the iterated tangent bundles instead of the cojet bundles: define a $k$-form to be a function $T^k M \to \mathbb{R}$ on the $k$-fold iterated tangent bundle. A point of $T^k M$ includes the data of $k$ tangent vectors, so an exterior $k$-form in the usual sense can be regarded as a $k$-form in this sense. And for each $k$, such functions are still an algebra and can be composed with functions on $\mathbb{R}$, so that we still have things like $|dx|$ and $\sqrt{dx^2 + dy^2}$ as 1-forms.
However, the difference is in the differentials. Now the most natural differential is one that takes a $k$-form to a $(k+1)$-form: since $T^k M$ is a manifold (at least if $M$ is smooth), we can ask that a function $\omega : T^k M \to \mathbb{R}$ be differentiable in the sense of having a linear approximation, and if so its differential is a map $d\omega : T^{k+1} M = T(T^k M) \to \mathbb{R}$ that is fiberwise linear.
I think it would make the most sense to denote this differential by $d$, because in general it is neither symmetric nor antisymmetric. For instance, if $\omega = y\,dx$ is a 1-form, then $d\omega = dy\,dx + y\,d^2x$, where $dy\,dx$ is the 2-form that multiplies the $y$-component of the first underlying vector of its argument by the $x$-component of the second underlying vector, and $d^2x$ is the 2-form that takes the $x$-component of the "second-order" part of its argument (which looks in coordinates like another tangent vector, but doesn't transform like one). However, if $f$ is a twice differentiable function, then $d^2 f$ is symmetric, e.g. we have

$$d^2 f = f_{xx}\,dx\,dx + f_{xy}\,(dx\,dy + dy\,dx) + f_{yy}\,dy\,dy + f_x\,d^2x + f_y\,d^2y.$$

So this amounts to regarding the second differential as a bilinear form (plus the $d^2x$ bit) rather than a quadratic form, which appears to be necessary in order to characterize second-differentiability.
In general, we can symmetrize or antisymmetrize $d\omega$, by adding or subtracting its images under the action of the symmetric group $S_{k+1}$ on $T^{k+1}M$. The first gives a result that we might call the symmetric differential; while the second restricts to the exterior derivative on exterior forms, so we may call it the antisymmetric or exterior differential. So in this way, the symmetric and the antisymmetric differential fit into the same picture.
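To make this concrete (a sketch in coflare coordinates, using $d_1, d_2$ for the two first-order slots and $d_{12}$ for the second-order slot, under one choice of ordering convention): for the 1-form $\omega = y\,dx$ above,

$$d\omega = d_2y\,d_1x + y\,d_{12}x.$$

Since $d_{12}x$ is fixed by the flip that exchanges the two underlying vectors, antisymmetrizing kills the second term and leaves (up to a conventional factor) $d_2y\,d_1x - d_1y\,d_2x$, i.e. $dy \wedge dx$; symmetrizing instead gives $\tfrac{1}{2}(d_1y\,d_2x + d_2y\,d_1x) + y\,d_{12}x$.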
I think this is basically the same approach taken in the reference cited by the answer here.
Wow, this might be just what I've been looking for all along! I was early on very suspicious about having just a symmetric notion of higher differential, but it was working, so I decided to accept it. But I did once hope for a non-symmetric version whose antisymmetric part would give exterior forms.
I’m glad you like it! I’m also enthusiastic about this point of view. I’m having some expositional/pedagogical issues in thinking about how to use it, though.
(1) What are these forms called? I want to call them "co-X differential forms" where "X" is the name of a point of an iterated tangent bundle. But what is that called? The only thing I can think of is that in SDG they are "microcubes", i.e. maps out of $D^n$, where $D$ is the space of nilsquare infinitesimals; but do they have a name in classical differential geometry?
(2) How to notate the nonsymmetric differential of a 1-form (for example)? I think I can say "given a 1-form like $f(x)\,dx$, we take its differential by treating $x$ and $dx$ as separate variables, just like $x$ and $y$, only now $d(dx)$ is $d^2x$." But in this setup, the differential of $x$ that appears when we take the second differential is different from the previous variable $dx$. We could call them $d_1x$ and $d_2x$, or $dx$ and $d'x$, but nothing I've thought of looks very pleasant to me.

Perhaps the most consistent thing would be to say that $d_k$ is the differential from $k$-forms to $(k+1)$-forms, but that we omit the subscript when it is zero. So we would have $d^2 f = d_1 d f$, and we could abbreviate $d_1 d$ by $d^2$ and observe that $d^2 f = f''\,dx\,dx + f'\,d^2x$ (where the product of a pair of 1-forms means the 2-form that applies them to the two underlying vectors and multiplies the results). But this notation is still heavier than I'd prefer.
Yeah, it's all still a bit crazy. Sorry, I don't have any answers. Although it's good to see that the two $d$s in $d^2$ need not be the same operator; that was tripping me up when I tried to do this before.
I'm thinking about ways to finesse (2). Suppose we restrict attention to "differential polynomials" of the sort that one gets by repeated differentiation of a function, and then just say that multiplication of differentials doesn't commute. So for instance we have

$$d^2 f = f''\,dx\,dx + f'\,d^2x,$$

in which $dx\,dx \neq (dx)^2$ (the two factors being secretly a $d_1x$ and a $d_2x$). This already breaks for general 3-forms, however: in

$$d^3 f = f'''\,dx\,dx\,dx + f''\,(d^2x\,dx + dx\,d^2x + d^2x\,dx) + f'\,d^3x,$$

the two "$d^2x\,dx$" terms are actually different. If we coordinatize $T^3 M$ by $(d_1x, d_2x, d_3x, d_{12}x, d_{13}x, d_{23}x, d_{123}x)$, then one of them gives $d_{12}x\,d_3x$ while the other gives $d_{13}x\,d_2x$.

However, if we symmetrize, then these two terms become equal, and the same ought to be true more generally. So for taking symmetric differentials, maybe we don't need to worry about heavy notation. And possibly the same is true for antisymmetric ones, once we have a Leibniz rule for the wedge product. I'm not entirely positive, but now I have to run off and give a final…
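For the record, here is the fully indexed computation behind this (a sketch, iterating the coflare differential in one variable):

$$df = f'\,d_1x, \qquad d^2f = f''\,d_1x\,d_2x + f'\,d_{12}x,$$

$$d^3f = f'''\,d_1x\,d_2x\,d_3x + f''\,(d_{12}x\,d_3x + d_{13}x\,d_2x + d_1x\,d_{23}x) + f'\,d_{123}x.$$

Symmetrizing identifies the three middle terms, recovering the classical $d^3f = f'''\,dx^3 + 3f''\,dx\,d^2x + f'\,d^3x$.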
For (1), so far my best invention is “(co)flare” — like a jet, but less unidirectional.
I think it does basically work.
The symmetric and antisymmetric $n$-forms are vector subspaces of all $n$-forms, and symmetrization and antisymmetrization are linear retractions onto them. There is a tensor product from $m$-forms and $n$-forms to $(m+n)$-forms, obtained by restricting along the two projections $T^{m+n}M \to T^m M$ and $T^{m+n}M \to T^n M$ and then multiplying. Symmetrizing and antisymmetrizing this makes the symmetric and antisymmetric forms into algebras as well.
In a coordinate system $(x^i)$, the coordinates of $T^n M$ are $d_S x^i$, where $S \subseteq \{1,\dots,n\}$ (the case $S = \emptyset$ giving the coordinate functions $x^i$ themselves). We may write $d_k x^i$ for $d_{\{k\}} x^i$. Say that a (degree-$n$ homogeneous) differential polynomial is an $n$-form that is a sum of terms of the form

$$f \cdot d_{S_1} x^{i_1} \cdots d_{S_r} x^{i_r},$$

where $\{S_1, \dots, S_r\}$ is a partition of $\{1,\dots,n\}$. The $n$-th differential $d^n f$ of a function $f$ is a symmetric differential polynomial.
Each injection $\{1,\dots,n\} \hookrightarrow \{1,\dots,m\}$ determines maps $T^m M \to T^n M$, allowing us to unambiguously regard a symmetric degree-$n$ differential polynomial as a degree-$m$ differential polynomial. In particular, for any change of coordinates we can substitute the expansion of each $d_S y^i$ in terms of the $d_S x^i$ into a differential polynomial in $y$, obtaining a differential polynomial in $x$. Since this is how $n$-forms transform, the notion of "differential polynomial" is well-defined independent of coordinates.
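Concretely, in one variable with a change of coordinates $y = y(x)$, the substitutions in question are (a sketch of the low ranks):

$$d_1y = y'\,d_1x, \qquad d_{12}y = y'\,d_{12}x + y''\,d_1x\,d_2x,$$

$$d_{123}y = y'\,d_{123}x + y''\,(d_{12}x\,d_3x + d_{13}x\,d_2x + d_{23}x\,d_1x) + y'''\,d_1x\,d_2x\,d_3x,$$

each right-hand side being itself a differential polynomial, so that substitution stays within the class.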
Now the differential of a differential polynomial is again a differential polynomial, as is the symmetrization or antisymmetrization. The symmetrization of $d_{\{1,2\}}x\,d_{\{3\}}x$ is

$$\tfrac{1}{3}\big(d_{\{1,2\}}x\,d_{\{3\}}x + d_{\{1,3\}}x\,d_{\{2\}}x + d_{\{2,3\}}x\,d_{\{1\}}x\big),$$

and terms like this are a basis for the symmetric degree-$n$ differential polynomials (with functions as coefficients). However, this symmetrized term is also the product $d^2x \cdot dx$ in the algebra of symmetric forms, and if we use this notation then the symmetric differential polynomials look much cleaner.
The antisymmetric case is even easier: since the antisymmetrization of $d_S x$ is $0$ if $|S| \geq 2$, the antisymmetric differential polynomials are just the ordinary exterior $n$-forms, and the antisymmetric differential is the usual exterior differential.
I don't think we've run into any situations yet where we want to take differentials of forms that aren't differential polynomials. We want to integrate forms like $|dx|$ and $\sqrt{dx^2 + dy^2}$, but we haven't found a good meaning/use for their differentials yet.
So for taking symmetric differentials, maybe we don't need to worry about heavy notation. And possibly the same is true for antisymmetric ones, once we have a Leibniz rule for the wedge product.
Yes, that's the conclusion that I came to. So they live in different systems. But you are saying that maybe you can make them work together after all.
It's interesting that exterior differentials are already known to be the antisymmetric part of a more general algebra. But that algebra (consisting of what physicists call covariant tensors) acts on $TM \times_M \cdots \times_M TM$, not on $T^k M$. And there is no natural notion of differential of a general form in this algebra; but if you pick a notion (using an affine connection), then the antisymmetrization of the differential of an exterior form always matches the exterior differential. I was always afraid of recreating that, which would have to be wrong.
But you are saying that you have a natural notion of differential of the general form; it's a different kind of form but the antisymmetric ones are still precisely the exterior forms.
Yes, I agree with all of that. Your observation about covariant tensors and connections in #9 is interesting, and makes me wonder how the general “coflare differential” might be related to affine connections. Can a connection be equivalently formulated as some way of “restricting the coflare differential to act on tensors”?
Oh, of course: a connection is just a fiberwise-bilinear section of the projection $T^2 M \to TM \times_M TM$, so we can obtain the covariant derivative of a 1-form by taking the coflare differential and precomposing with the connection. Presumably something analogous works for other covariant derivatives.
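In coordinates this comes out as follows (a sketch, with one common sign convention for the Christoffel symbols $\Gamma^k_{ij}$). The connection gives the fiberwise-bilinear section $(v, w) \mapsto (v, w, -\Gamma^k_{ij}v^iw^j)$ of $T^2M \to TM \times_M TM$. For a 1-form $\omega = \omega_k\,dx^k$ the coflare differential is

$$d\omega = \partial_i\omega_k\,d_1x^k\,d_2x^i + \omega_k\,d_{12}x^k,$$

and precomposing with the section substitutes $-\Gamma^k_{ij}\,d_1x^i\,d_2x^j$ for $d_{12}x^k$, leaving

$$(\partial_i\omega_k - \Gamma^l_{ik}\,\omega_l)\,d_1x^k\,d_2x^i,$$

which is the usual covariant derivative $\nabla\omega$.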
Brilliant!
[Edit for people reading through the discussion history: Note the dates; the conversation has been resumed after a pause of nearly a year, and in fact I'd kind of forgotten about everything in this thread during that break!]
Wow, apparently I never clicked through to the M.O answer that you mention in the top comment, where they quote somebody quoting anonymous 'classical calculus books' as writing

$$d^2 u = u''(x)\,dx^2 + u'(x)\,d^2x.$$

This is our $d^2x$!
Yep!
So how do we integrate these "coflare" forms? For coflare forms the order is bounded above by the rank (terminology from here), so a rank-1 coflare form (a function on $TM$) is roughly the same as a 1-cojet form, and can be integrated in either the "genuine" or "affine" way as proposed on the page cogerm differential form. For a rank-2 coflare form, a function on $T^2M$, my first thoughts in the "affine" direction are to cover the parametrizing surface by a grid of points $x_{i,j}$, regard our form as a function on a point and a triple of vectors (though the third one doesn't transform as a vector), and sum up the values of

$$\omega\big(x_{i,j};\ \Delta_{1}x_{i,j},\ \Delta_{2}x_{i,j},\ \Delta_{12}x_{i,j}\big),$$

where

$$\Delta_{1}x_{i,j} = x_{i+1,j} - x_{i,j},\qquad \Delta_{2}x_{i,j} = x_{i,j+1} - x_{i,j},\qquad \Delta_{12}x_{i,j} = x_{i+1,j+1} - x_{i+1,j} - x_{i,j+1} + x_{i,j}.$$
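Here is a quick numerical sketch of this Riemann sum (all names hypothetical; the exterior form $dx \wedge dy$ is encoded by its antisymmetric coflare representative, for which the sum converges to the usual integral):

```python
import numpy as np

def integrate_rank2(omega, phi, m=200, n=200):
    """Affine Riemann sum for a rank-2 coflare form over a surface
    phi: [0,1]^2 -> R^k, sampled on an m-by-n grid of points."""
    s = np.linspace(0.0, 1.0, m + 1)
    t = np.linspace(0.0, 1.0, n + 1)
    x = np.array([[phi(si, tj) for tj in t] for si in s])
    total = 0.0
    for i in range(m):
        for j in range(n):
            p = x[i, j]
            v1 = x[i + 1, j] - p                 # first difference in s
            v2 = x[i, j + 1] - p                 # first difference in t
            # second difference: the "second-order vector" slot
            v12 = x[i + 1, j + 1] - x[i + 1, j] - x[i, j + 1] + p
            total += omega(p, v1, v2, v12)
    return total

# dx /\ dy as an antisymmetric coflare 2-form (v12 is ignored):
wedge = lambda p, v1, v2, v12: v1[0] * v2[1] - v1[1] * v2[0]

# Integrate over the unit square; the result tends to 1.
print(integrate_rank2(wedge, lambda s, t: np.array([s, t])))
```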
I suspect that there won’t be many forms other than exterior ones that will have convergent integrals. Are there any that we should expect?
Well, I'd really want the absolute forms to converge, and to the correct value. Anything that doesn't recreate integration of exterior and absolute forms isn't doing its job.
Right, I agree, thanks. I think it's reasonable to hope that the absolute forms work too, although it'll probably be some work to check. Maybe there is a Lipschitz condition like in the rank-1 case.
Here's a proposal for how to integrate coflare 2-forms, which I conjecture should suffice to integrate exterior and absolute forms, and also be invariant under orientation-preserving twice-differentiable reparametrization for Lipschitz forms. Let a "partition" be an $m \times n$ grid of points $x_{i,j}$ such that the quadrilaterals with vertices $x_{i,j}, x_{i+1,j}, x_{i+1,j+1}, x_{i,j+1}$ partition the region disjointly and have the correct orientation. Tag it with points $t_{i,j}$ in each such quadrilateral. If $\delta$ is a gauge (a positive-real-valued function) on the region, say the tagged partition is $\delta$-fine if for all $i,j$ the quadrilateral tagged by $t_{i,j}$ is contained in the ball of radius $\delta(t_{i,j})$ around $t_{i,j}$.
The Riemann sum of a 2-form over such a tagged partition is defined as in #15, and the integral is the limit of these over gauges as usual.
Note that because grids can be rotated arbitrarily, there's no way that a form like $dx\,dy$ is going to be integrable with this definition. We essentially need to perform some antisymmetrization to make the pieces rotationally invariant, but we should be free to insert norms or absolute values afterwards to get absolute forms. I don't know whether this integral will satisfy any sort of Fubini theorem, though.
I’m not finished looking at this integral, but I want to record here a fact about second-order differentials for when we write an article on them:
+-- {: .num_theorem #2ndDTest}
###### The second-differential test
Given a twice-differentiable space $X$, a twice-differentiable quantity $f$ defined on $X$, and a point $p$ in $X$. If $f$ has a local minimum at $p$, then $df$ is zero at $p$ and $d^2f$ is positive semidefinite at $p$. Conversely (mostly), if $df$ is zero at $p$ and $d^2f$ is positive definite at $p$, then $f$ has a local minimum at $p$.
=--
Here we may view $d^2f$ as either a cojet $2$-form (the question is whether $d^2f(c)$ is always (semi)positive for any curve $c$ through $p$) or a coflare $2$-form (the question is whether $d^2f(p; v, v, a)$ is always (semi)positive for any vector $v$ (note the repetition) and second-order vector $a$).

Of course, this is well-known in the case of the second derivative (which does not act on the second-order vector $a$, but of course the action on $a$ is zero when $df$ is zero).
Actually, this is still more complicated than it needs to be! The theorem is simply this:
+-- {: .num_theorem #2ndDTest}
###### The second-differential test
Given a twice-differentiable space $X$, a twice-differentiable quantity $f$ defined on $X$, and a point $p$ in $X$. If $f$ has a local minimum at $p$, then $d^2f$ is positive semidefinite at $p$. Conversely (mostly), if $d^2f$ is positive definite at $p$, then $f$ has a local minimum at $p$.
=--
In terms of the cojet version, $d^2f(c)$ is (semi)positive¹ for all twice-differentiable curves $c$ through $p$ iff the Hessian of $f$ is positive (semi)definite at $p$ and $df$ is zero at $p$. (Just look at examples where $c'$ or $c''$ is zero.) In terms of the coflare version, $d^2f(p; v, v, a)$ is (semi)positive for all $v$ and all $a$ iff the same condition holds. (Inasmuch as the cojet version of $d^2f$ is the symmetrization of the coflare version, properties of values of the cojet form correspond precisely to properties of values of the coflare form when the two first-order vectors are set to the same value.)
So one never needs to mention $df$ as such at all!
¹ where 'semipositive' means $\geq 0$, which hopefully was obvious in context
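To see concretely why positive definiteness of the full $d^2f$ forces $df$ to vanish (a one-variable sketch): $d^2f = f''(p)\,dx^2 + f'(p)\,d^2x$, so along a curve $c$ through $p$ with $c' = 0$ and $c'' = a$ we get $d^2f(c) = f'(p)\,a$, which takes both signs as $a$ varies unless $f'(p) = 0$; and along curves with $c'' = 0$ we get $f''(p)\,c'^2$, which forces the Hessian condition.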
Nice! The improved version of the theorem gives another argument for why the $f'\,d^2x$ terms should be included in $d^2f$.
I like the word “semipositive”; maybe I will start using it in other contexts instead of the unlovely “nonnegative”.
It also works better when the context consists of more than just the real numbers. For example, the imaginary unit $i$ is nonnegative, but it's not semipositive. (Another term that I've used is 'weakly positive', which goes with the French-derived 'strictly positive' rather than the operator-derived 'semidefinite'.)
Maybe it's because I'm more familiar with operators than with French, but "semipositive" is more intuitive for me than "weakly positive". The word I've most often heard used for $\geq$ (in opposition to "strictly" for $>$) is the equally unlovely "non-strictly".
Wow, ‘non-strictly’!? As far as I know, ‘weakly’ here doesn't come from French; in French, they say simply ‘positif’ for the weak concept (which is why they must say ‘strictement positif’ for the strict one). But it seems to me that ‘weakly’ is the established antonym of ‘strictly’.
Unless it’s “strongly” or “pseudo”…
In that funny way that sometimes happens on the forum/Lab/Café where separate conversations turn out to be related, is there anything you guys are finding out in these threads which makes contact with differential cohesion?
To the extent that this is about higher differentials, it might be (I haven’t followed closely) that the concepts here overlap with Kochan-Severa’s “differential gorms and differential worms” (arXiv:math/0307303). To the extent that this is so, then this is seamlessly axiomatized in solid cohesion.
Well that paper
From a more conceptual point of view, differential forms are functions on the superspace of maps from the odd line to M
seems to relate to our discussion here, where I was wondering about how the shift to the super gave rise to differential forms.
I may have misunderstood your aim there. It seemed to me that you were wondering there about the role of super-Euclidean field theory in this business, to which I replied that I find it a red herring in the present context.
That the $\mathbb{Z}/2$-graded algebra of differential forms on a (super-)manifold $X$ is $C^\infty(\mathbf{Maps}(\mathbb{R}^{0|1}, X))$ is a standard fact of supergeometry, and the observation that the natural $\mathbf{Aut}(\mathbb{R}^{0|1})$-action on this gives the refinement to $\mathbb{Z}$-grading as well as the differential originates in Kontsevich 97, and was long amplified by Severa, see also at odd line – the automorphism supergroup.

In the article on gorms and worms they generalize this by replacing $\mathbb{R}^{0|1}$ by $\mathbb{R}^{0|2}$ (and more generally $\mathbb{R}^{0|n}$).
Re #29, I returned the discussion to the original thread.
I think we just had a drive-by categorification. (-:
The "higher" in this thread refers essentially to roughly the same number $n$ that indexes the usual 1-forms, 2-forms, $n$-forms; the generalization is rather to eliminate the requirement of antisymmetry. It sounds from the abstract like Kochan-Severa instead keep the "antisymmetry" (encoded using super-ness) and generalize in a different direction.
On the other hand, re #26 as stated, coflare differential forms and their differentials make perfect sense in SDG, whereas integration is always tricky in such contexts.
It sounds from the abstract like Kochan-Severa instead keep the “antisymmetry”
They also get commuting differentials: the algebra is spanned over the smooth functions by the two odd-graded generators $d_1 x$ and $d_2 x$ as well as their even-graded product $d_1 d_2 x$. Under the identification of its elements with second order differential forms, the first two generators give anticommuting forms, but the last gives commuting second order forms.

This seems close to what you start with in #1, but I haven't really thought through it.
And another issue that I've been thinking of:
I've occasionally seen the claim (and now I wish that I knew where, and it may have only ever been oral) that the higher differentials are the terms in the Taylor series; that is (for $f$ analytic at $x$ with a radius of convergence greater than $|\Delta x|$),

$$f(x + \Delta x) = f(x) + df(x) + \frac{1}{2}\,d^2f(x) + \cdots + \frac{1}{n!}\,d^nf(x) + \cdots.$$

This works if $d^n f = f^{(n)}(x)\,dx^n$ with $dx$ evaluated at $\Delta x$, but our claim is that $d^n f$ is much more complicated. However, the extra terms all involve higher differentials of $x$, so if we evaluate with not only $dx = \Delta x$ but also $d^n x = 0$ for $n \geq 2$, then it still works.
@Toby: Yes, I agree. In general, the sum of the higher differentials seems like some generalization of a power series in which you have not only a "first-order-small variable" $dx$ but also a separate $n$th-order-small variable $d^n x$ for each $n$.
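One check that the extra terms are doing their job (a sketch): the corrected series is stable under substitution. If $x = g(t)$ with $dt = \Delta t$ and $d^n t = 0$ for $n \geq 2$, then $dx = g'\,dt$ and $d^2x = g''\,dt^2$ are no longer "pure", and

$$d^2f = f''(x)\,dx^2 + f'(x)\,d^2x = \big(f''(g)\,g'^2 + f'(g)\,g''\big)\,\Delta t^2 = (f \circ g)''(t)\,\Delta t^2,$$

which is exactly the coefficient that the Taylor series in $t$ requires; the naive $d^2f = f''\,dx^2$ would get this wrong.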
Please help me: to which extent is the following definition (first full paragraph on p. 4 of arXiv:math/0307303) what you are after, or to which extent is it not what you are after:
for $X$ a Cartesian space, say that its $(\mathbb{Z}\times\mathbb{Z})$-bigraded commutative differential algebra of second order differential forms is the smooth algebra freely generated over the smooth functions on $X$ in even degree from two differentials $d_1$ and $d_2$. In particular for $X = \mathbb{R}$ the generators are then $x$, $d_1 x$, $d_2 x$ and $d_1 d_2 x$, where $d_1 x$ is in bidegree $(1,0)$, $d_2 x$ is in bidegree $(0,1)$ and $d_1 d_2 x$ is in bidegree $(1,1)$.
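For comparison with the $d^2 f$ of this thread (a formal sketch, ignoring any sign subtleties that the bigrading may introduce): applying $d_2$ and then $d_1$ to a function gives

$$d_2 f = f'(x)\,d_2x, \qquad d_1 d_2 f = f''(x)\,d_1x\,d_2x + f'(x)\,d_1 d_2 x,$$

which has the same shape as the coflare $d^2 f = f''\,d_1x\,d_2x + f'\,d_{12}x$ above.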
Re #35: here bidifferential means that the two differentials anti-commute, right?
Another approach to higher differentials, which I don't know whether it has been mentioned already, is in Laurent Schwartz, "Géométrie différentielle du 2ème ordre, semi-martingales et équations différentielles stochastiques sur une variété différentielle", p. 3. There he defines $d^k f$, for a scalar-valued function $f$ on some manifold $M$, to be the $k$-th jet of $f$ "modulo 0-th order information". More precisely: if $f$ vanishes at $x$, then $d^k f(x)$ is the equivalence class of $f$ modulo all functions that vanish to $k$-th order at $x$. If $f$ does not vanish at $x$, then $d^k f(x)$ is the equivalence class of $f - f(x)$ modulo functions vanishing to $k$-th order at $x$. In coordinates this should amount to the same as considering the $k$-th Taylor polynomial of $f$, but forgetting the constant term.

In this approach it is not immediately clear (to me) what the product of two higher differentials is, and in which sense, say, $d^2 f = d(df)$, etc. But I thought I'll add it to the mix.
Urs, I need you to back up a moment, as I'm not used to thinking of differential forms in this way. Even for ordinary differential forms, how do you get the partial derivatives of $f$ appearing from only algebraic assumptions? I can see it if $f$ is a polynomial, but in general $f$ might not even be analytic!
I found Schwartz’s paper here: http://www.numdam.org/item?id=SPS_1982__S16__1_0.
It seems to me that Schwartz's differentials contain the same information as mine (if I may lay claim to them even after seeing citations to them in 19th-century books). In particular, the space of order-$2$ forms at a given point in a manifold of dimension $n$ is $\mathbb{R}^{n + n(n+1)/2}$, that is $n$ for the order-$2$ differentials $d^2x^i$ of the local coordinates and $n(n+1)/2$ for the products of two order-$1$ differentials. (These are the symmetrized products, producing my cojet forms, not Mike's coflare forms.)

Perhaps Schwartz is one of the "classical calculus books" mentioned in the answer linked in #1 and referred to in #13.

Schwartz has the good point that the space of order-$1$ forms (at a given point) is a quotient of the space of order-$2$ forms; the kernel consists of those forms which appear in the order-$2$ terms of a Taylor series.
Re #40: is that something special about 1 and 2, or is it true more generally?
Mike, the reasoning with the graded infinitesimals here is the same as for the non-graded ones familiar from SDG, e.g. here.
In local coordinates, the quotient map is

$$a_i\,d^2x^i + b_{ij}\,dx^i\,dx^j \;\mapsto\; a_i\,dx^i.$$
In dimension $1$, an analogous quotient map from order-$3$ forms to order-$2$ forms would be

$$a\,d^3x + b\,dx\,d^2x + c\,dx^3 \;\mapsto\; a\,d^2x + b\,dx^2,$$

but that is not coordinate-invariant (most simply, if $y = x^2$, then $d^3y = 2x\,d^3x + 6\,dx\,d^2x$ maps to $2x\,d^2x + 6\,dx^2$, yet $d^2y = 2x\,d^2x + 2\,dx^2$). And I don't even know how I would write down a map from order $3$ to order $2$ in $n$ dimensions (in particular, for $n > 1$).
However, it seems to me that we can make a quotient map from order-$k$ forms to order-$1$ forms for any $k$ (sending $d^kx^i$ to $dx^i$ and all products to $0$), so it's just special about order $1$. But in this case, I don't see anything especially special about the kernel.
H'm, but this map from order $3$ to order $2$ is invariant:

$$a\,d^3x + b\,dx\,d^2x + c\,dx^3 \;\mapsto\; 3a\,d^2x + b\,dx^2$$

(note the factor of $3$). I'm not sure what to make of that!
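A quick check of the invariance in one variable, with $y = y(x)$:

$$d^3y = y'\,d^3x + 3y''\,dx\,d^2x + y'''\,dx^3 \;\mapsto\; 3y'\,d^2x + 3y''\,dx^2 = 3\,d^2y,$$

$$dy\,d^2y = y'^2\,dx\,d^2x + y'y''\,dx^3 \;\mapsto\; y'^2\,dx^2 = dy^2, \qquad dy^3 = y'^3\,dx^3 \;\mapsto\; 0,$$

so the same formula holds in both coordinate systems, precisely because of the factor of $3$.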
Thanks Urs, that was the reminder I needed.
The formula for the second differential does indeed look just like the one Toby and I are working with. But is Michael also right that the differentials anticommute, $d_1x\,d_2x = -\,d_2x\,d_1x$? and/or $(d_1x)^2 = 0$? And does their setup include nonlinear differential forms such as $\sqrt{dx^2 + dy^2}$?
From the coflare perspective, the general linear order-2 form is

$$a\,d_{12}x + b\,d_1x\,d_2x,$$

where $d_{12}x$ is shorthand for $d_{\{1,2\}}x$, and so on. From this perspective the factor of 3 arises because your map to order-2 forms would have to give something like

$$d_{123}x \;\mapsto\; d_{12}x + d_{13}x + d_{23}x,$$

whereas under a coordinate change $d_{12}x$, $d_{13}x$, and $d_{23}x$ all get their own correction term involving the second derivative of the coordinate change.

Just making stuff up, I notice that sending the above order-2 form to

$$a\,(d_{12}x + d_{13}x + d_{23}x) + b\,(d_1x\,d_2x + d_1x\,d_3x + d_2x\,d_3x)$$

will also be coordinate-invariant, and there seems to be some sense in which there are two ways to "forget" from $T^2M$ to $TM$ ("send $d_{12}x$ to $0$, and turn $dx$ into either $d_1x$ or $d_2x$") and six ways to "forget" from $T^3M$ to $T^2M$ (the six injective maps from a 2-element set to a 3-element set).
Thanks, Mike, I tried to figure out something like that, but I wasn't coming up with the right combinatorics. I like your version. It's not immediately obvious why it should work that way, but it does seem to work!
Here's an even better explanation, again from the coflare POV. At the 2-to-1 stage, there is a map $TM \to T^2M$ that sends a tangent vector $v$ to the point whose two first-order components are zero and whose second-order component is $v$. Second-order tangent vectors don't transform like vectors in general, but they do if the associated first-order tangent vectors are zero, so this makes sense. The quotient map from 2-forms to 1-forms is just precomposition with this map.

Now at the 3-to-2 stage, there are six natural maps $T^2M \to T^3M$, one for each of the six injections of a 2-element set into a 3-element set: each places the two tangent vectors of a point of $T^2M$ into two of the three first-order slots, its second-order part into the matching higher slots, and zero elsewhere (or the same with the two vectors switched). The formula above is the sum of the precompositions with all six of these maps. (But it would probably be more natural, in a general theory, to consider all these maps separately rather than only when added up.)
But is Michael also right that the differentials anticommute?
Yes.
And does their setup include nonlinear differential forms such as $\sqrt{dx^2 + dy^2}$?
Only for the second order differentials $d_1 d_2 x$, yes, since only these are commuting forms in their setup. In section 4.4 you see them consider Gaussians of these.
(Do you want first order differentials to not square to 0? That would sound a dubious wish to me, but I don’t really know where you are coming from here.)
tl;dr: We seek an algebra that encompasses both exterior forms and equations such as the first displayed equation in the top post of this thread.
The main motivating examples for what Mike and I have been doing (beyond the exterior calculus) are these from classical Calculus:

$$d^2 f = f''(x)\,dx^2$$

(which is wrong but can be fixed) and

$$ds = \sqrt{dx^2 + dy^2}$$

(which is correct except for the implication that $ds$ is the differential of some global $s$). Anything that treats $d^2x$ as zero is not what we're looking for.

Having determined that

$$d^2 f = f''(x)\,dx^2 + f'(x)\,d^2x$$

is correct, we're now looking more carefully at it and deciding that (if we wish to fit exterior forms into the same algebra) the two $d$s are not exactly the same operator, and that furthermore it matters which of these appears in $d^2x$.

The current working hypothesis is that

$$f''(x)\,dx^2 = d^2 f - f'(x)\,d^2x$$

(the previous equation, rearranged) is the symmetrization (in the algebra of cojet forms) of

$$f''(x)\,d_1x\,d_2x = d_2 d_1 f - f'(x)\,d_{12}x$$

(in the algebra of coflare forms).
And does their setup include nonlinear differential forms such as $\sqrt{dx^2 + dy^2}$?
Only for the second order differentials $d_1 d_2 x$, yes
What does that mean? Does $\sqrt{dx^2 + dy^2}$ exist or doesn't it? Here $dx$ and $dy$ are first-order differentials, but their squares are of course second order, and then when we take a square root we get back to "first order" in a suitable sense.
By second order differentials I mean those with second derivatives, as in $d_1 d_2 x$. In their algebra we have $(d_1 x)^2 = 0 = (d_2 x)^2$, but no power of $d_1 d_2 x$ vanishes.
So in their setup $\sqrt{dx^2 + dy^2}$ doesn't exist, because $(dx)^2 = 0$?
I'm also confused, because $(dx)^2 = 0$ seems to contradict the formula cited in #35, which ought to have a nonzero term with coefficient $f''(x)$ when the two differentials are identified.
Wait, I'm now in doubt whether in Ševera's setting we have that the differentials $d_1$, $d_2$ anti-commute. Wouldn't the commutation rule in a bigraded-commutative algebra read $\alpha\beta = (-1)^{p_1 q_1 + p_2 q_2}\,\beta\alpha$, where $(p_1, p_2)$ is the bi-degree of $\alpha$ and $(q_1, q_2)$ that of $\beta$? If that's the case then elements of bi-degree $(1,0)$ commute with things of bi-degree $(0,1)$. ?
Sorry, yes, I suppose you are right.
Somehow I am not fully focusing on this discussion here…
I think I am just suggesting that if in search of a generalization of anything (here: exterior calculus) it helps to have some guiding principles. What Kochan-Ševera do has the great advantage that by construction it is guaranteed to have much internal coherence, because in this approach one doesn’t posit the new rules by themselves, but derives them from some more abstract principle (of course one has to derive them correctly, I just gave an example of failing to do that :-)
Which “you” of us is right, #53 or #54?
I think that what Toby and I are doing has perfectly valid guiding principles. One of those principles is, I think, that it should be as simple and concrete as possible, so that it can be taught to calculus students who don’t know what a graded commutative algebra is. (-:
Those two guiding principles (a structure with internal coherence, simple calculations) should really be in concert.
Sure; I think coflare forms have plenty of internal coherence as well. We aren’t “positing” any rules, just taking seriously the idea of iterated tangent bundles and dropping any requirement of linearity on forms.
As long as I'm over here, let me record this observation here which I made in the other thread: the view of a connection in #11 gives a coordinate-invariant way (which of course depends on the chosen connection) to "forget the order-2 part" of a rank-2 coflare form, by substituting

$$d_{12}x^k \;\mapsto\; -\,\Gamma^k_{ij}\,d_1x^i\,d_2x^j$$

wherever it appears. The transformation rule for Christoffel symbols precisely ensures that this is coordinate-invariant. Are there analogues in higher rank?
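The invariance boils down to the standard fact that, under a change of coordinates $x \mapsto x'$, the inhomogeneous term in the transformation of $d_{12}x^k$ is exactly cancelled by the inhomogeneous term in the transformation of the Christoffel symbols:

$$d_{12}x'^k + \Gamma'^k_{ij}\,d_1x'^i\,d_2x'^j = \frac{\partial x'^k}{\partial x^l}\,\big(d_{12}x^l + \Gamma^l_{ij}\,d_1x^i\,d_2x^j\big),$$

so this combination transforms as a vector, and substituting $d_{12}x^k \mapsto -\Gamma^k_{ij}\,d_1x^i\,d_2x^j$ amounts to setting that vector to zero, a coordinate-independent condition.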
Michael, re #54: wait, no, the algebra in the end is the superalgebra of functions on the mapping space, and as such is $\mathbb{Z}/2$-graded.
I'll have a few spare minutes tomorrow morning when I am on the train. I'll try to sort it out then and write it cleanly into the nLab entry.
it should be as simple and concrete as possible, so that it can be taught to calculus students who don’t know what a graded commutative algebra is.
Returning the favour, allow me to add the advice: first you need to understand it, only then should it be made simple for exposition. Not the other way around! :-)
The graded algebra is there to guide the concept. The concept in the end exists also without graded algebra.
I’ll try to find time tomorrow to extract the relevant essence in Kochan-Severa.
The simple Calculus exposition already exists, particularly if you go back to the 19th century. It is internally inconsistent, but perhaps it can be cleaned up to be consistent. (Already parts of it have been, but can the whole thing be bundled into a single system?) A unifying conceptual idea would be very valuable here, and so far the best idea that Mike and I have is coflare forms.
Urs, re #59: you are probably right, I was trying to recall from vague memory how the formula for graded commutativity works. It should follow from the definition at graded commutative algebra once we agree how a $(\mathbb{Z}\times\mathbb{Z})$-graded vector space is graded, which is probably by mapping the bi-degree $(p_1, p_2)$ to $p_1 + p_2$. Then I guess the right commutativity formula is $\alpha\beta = (-1)^{(p_1+p_2)(q_1+q_2)}\,\beta\alpha$.
I ran across a math help page that at least tries to justify the incorrect traditional formula for the second differential; it argues 'When we calculate differentials it is important to remember that $dx$ is arbitrary and independent from number $x$ [sic]. So, when we differentiate with respect to $x$ we treat $dx$ as constant.'. Now, that's not how differentials work; you're not differentiating with respect to anything in particular when you form a differential, and so you treat nothing as constant. (If you do treat something as constant, then you get a partial differential.) But at least they realize that there is something funny going on here, and that it might be meaningful to not treat $dx$ as constant.
The formulation on that page is indeed confusing. On the other hand, treating $dx$ as constant ($d^2x = 0$) does lead to the incorrect traditional formula. That's also how Leibniz, Euler etc. arrived at the (generally incorrect) equation $d^2 f = f''(x)\,dx^2$. Nowadays that equation is true by notational convention.
That Math.StackExchange answer that you linked is interesting, as are the comments by Francois Ziegler. I'm glad to see that the early Calculators explicitly stated that they were making an assumption, equivalent to $d^2x = 0$, in deriving that formula.
This seems to be the right thread to record something about notation for coflare forms.
Here is a diagram of the coordinate functions on $T^3\mathbb{R}$ (which generalizes to $T^n\mathbb{R}$ and then to a coordinate patch on any manifold, but I only want to think about one coordinate at a time on the base manifold right now), in the notation indicated by Mike in comment #7, where we write the differential operator $d_S$ with a finite set $S$ of natural numbers as the index:

    x --------- d_1x
    | \         | \
    |  d_2x ----|-- d_12x
    |  |        |   |
    d_3x ------ d_13x
       \ |        \ |
        d_23x ----- d_123x

Although I didn't label the arrows, moving right adjoins $1$ to the subscript, moving outwards (which is how I visualize the arrows pointing down-right in the 2D projection, although you might see them as inwards depending on how you visualize this Necker cube) adjoins $2$ to the subscript, and moving downwards adjoins $3$ to the subscript.
I might like to start counting with $0$ instead of with $1$, but I won't worry about that now (although Mike did that too starting in comment #46). I'm looking at convenient abbreviations of these instead. Some of these abbreviations are well established: $d_{\emptyset}x$ is of course usually just called $x$; $d_{\{1\}}x$ is usually called just $dx$; $d_{\{1,2\}}x$ is traditionally called $d^2x$; and $d_{\{1,2,3\}}x$ is traditionally called $d^3x$. These abbreviations also appear in Mike's comment #7, where we can also write $d_k x$ for $d_{\{k\}}x$. Note that these examples are all coordinates on the jet bundle.

Another obvious abbreviation is that $dx\,dy$ can mean $d_1x\,d_2y$. There is some precedent for this at least as regards $dx$ and $dy$ in discussion of exterior differential forms; for example, $dx \wedge dy$ can be explained as $d_1x\,d_2y - d_1y\,d_2x$ (or half that, depending on your conventions for antisymmetrization). But $dx \wedge dy$ can also be written as $dx\,dy - dy\,dx$, and I'm not sure that there's much precedent for $dx\,dy$ by itself.

What there really is no precedent for is something like $dx\,d^2y$. But I think that I know what this should mean; it should be $d_1x\,d_{\{2,3\}}y$, just as $dx\,dy$ means $d_1x\,d_2y$.

Finally, $d_{\{2,3\}}x\,d_{\{1\}}x$ also has an abbreviation that I've used before, as in this StackExchange answer, and which Mike used as well, way back in comment #1: it's $d(dx)\,dx$ (or $d^2x\,dx$ to be even shorter). This is because the first $d$ (from the right) is $d_1$, the next would be $d_2$ if it were just $dx$ (aka $d_2x$), but the tensor product pushes the order back one more.

Except that that's not what I was thinking when I wrote that answer! I was thinking more like $d_{\{1,2\}}x\,d_{\{3\}}x$ (but not wanting to use the subscripts, since I hadn't introduced them in that answer). That is, I'm thinking of $d$, $d^2$, etc. as operators that can be applied to any generalized differential form. They can't just adjoin a fixed element to the subscript, since the subscript might already have that element. So $d(dx)$ must adjoin a fresh index, which is why $d^2x$ must be $d_{\{1,2\}}x$, etc.
So here's another way to label the vertices on the cube above:

    x --------- dx
    | \         | \
    |  d_2x ----|-- d^2x
    |  |        |   |
    d_3x ------ d_2dx
       \ |        \ |
        d_2^2x ---- d^3x

Note that there's no contradiction here with the notation in the previous cube! The rule is that a $d$ immediately in front of $x$ adjoins $1$ to the subscript set, while a $d$ in the next position to the left adjoins $2$ to the subscript, etc. So repeating $d$ (which you can also abbreviate using exponents, as in $d^2x = ddx$) never tries to adjoin something that's already in the set.
At least, that's the case if the subscript integers come in weakly decreasing order. But what if we apply $d$ to $d_2x$; in other words, what is $d(d_2x)$? I think that the answer has to be $d_{\{1,2\}}x$, that is $d^2x$. So to interpret the operator properly, you first need to rearrange all of the individual differentials in the proper order.

The other direction is easier. To interpret $d_S x$, where $S$ is a set of indices, as the result of applying some operations to $x$, each element $k$ of $S$ is interpreted as $d_{k-i}$, where $i$ is the number of elements of $S$ that are less than $k$. So a special case is that $d_{\{k\}}x$ is $d_k x$, as Mike used in #7 (but thinking in the reverse direction). But also $d_{\{1,3\}}x$ is $d_2 d_1 x$ (or equivalently $d_2 dx$), where the $d_1$ comes from $1$ and the $d_2$ comes from $3$, because there's $1$ element of $S$ that's less than $3$ (namely, $1$), so we use $d_{3-1} = d_2$.

In fact, we can define an operator $d_S$ for any set $S$ of indices. Again, each element $k$ of $S$ gives us an operator $d_{k-i}$, where $i$ is the number of elements of $S$ that are less than $k$. (And since these operators commute, we can apply them in any order.) For example, $d_{\{1,3\}}(d_{\{1,3\}}x)$ means $d_2 d_1 (d_2 d_1 x)$. This is because the element $1$ of the first $S$ gives us $d_1$ (since nothing in $S$ is less than $1$), the element $3$ of the first $S$ gives us $d_2$ (since $1 \in S$ and $1 < 3$), the element $1$ of the second $S$ gives us $d_1$ (since nothing in $S$ is less than $1$), and the element $3$ of the second $S$ also gives us $d_2$ (since $1 \in S$ and $1 < 3$). And this is also equal to $d_{\{1,2,3,5\}}x$.

I don't know about these $d_S$ operators so much, but my intuition about coflare forms is largely based on the $d_k$ operators now. That's what I'm talking about in that StackExchange answer when I say that certain differentials represent velocities, accelerations, and so forth. There are potentially infinitely many different variations of $x$ (although we only consider finitely many at a time in a coflare form), and $d_k$ represents an infinitesimal change in the $k$th of these. And such a change can apply to any coflare form.