This seems to be the right thread to record something about notation for coflare forms.

Here is a diagram of the $2^3$ coordinate functions on ${\mathrm{T}^3}\mathbb{R}$ (which generalizes to $\mathbb{R}^n$ and then to a coordinate patch on any manifold, but I only want to think about one coordinate at a time on the base manifold right now), in the notation indicated by Mike in comment #7, where we write the differential operator ${\mathrm{d}}$ with a finite set of natural numbers as the index:

Although I didn't label the arrows, moving to the right adjoins $1$ to the subscript, moving outwards (which is how I visualize the arrows pointing down-right in the 2D projection, although you might see them as pointing inwards, depending on how you view this Necker cube) adjoins $2$ to the subscript, and moving downwards adjoins $3$ to the subscript.

I might like to start counting with $0$ instead of with $1$, but I won't worry about that now (although Mike did that too starting in comment #46). I'm looking at convenient abbreviations of these instead. Some of these abbreviations are well established: ${\mathrm{d}_{\emptyset}}x$ is of course usually just called $x$; ${\mathrm{d}_{\{1\}}}x$ is usually called just ${\mathrm{d}}x$; ${\mathrm{d}_{\{1,2\}}}x$ is traditionally called ${\mathrm{d}^{2}}x$; and ${\mathrm{d}_{\{1,2,3\}}}x$ is traditionally called ${\mathrm{d}^{3}}x$. These abbreviations also appear in Mike's comment #7, where we can also write ${\mathrm{d}^{0}}x$ for $x$. Note that these examples are all coordinates on the jet bundle.
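To make the bookkeeping concrete, here is a small Python sketch (the helper names are mine, not established notation) that enumerates the $2^3$ subscript sets for coordinates on $\mathrm{T}^3\mathbb{R}$ and attaches the traditional abbreviations discussed above:

```python
from itertools import combinations

def subsets(n):
    """All subsets of {1, ..., n}, ordered by size then lexicographically."""
    for k in range(n + 1):
        yield from (set(c) for c in combinations(range(1, n + 1), k))

def abbreviation(I):
    """Traditional abbreviation of d_I x, where one exists."""
    if not I:
        return "x"  # d_{emptyset} x is just x
    if I == set(range(1, len(I) + 1)):
        # an initial segment {1,...,k} is the k-th iterated differential
        return "d^%d x" % len(I) if len(I) > 1 else "d x"
    return "d_%s x" % sorted(I)  # no traditional shorthand

for I in subsets(3):
    print(sorted(I), "->", abbreviation(I))
```

Only the four sets that form an initial segment of the natural numbers get a classical name; the other four vertices of the cube need the subscript notation.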

Another obvious abbreviation is that ${\mathrm{d}_{i}}x$ can mean ${\mathrm{d}_{\{i\}}}x$. There is some precedent for this at least as regards ${\mathrm{d}_{1}}x$ and ${\mathrm{d}_{2}}x$ in discussion of exterior differential forms; for example, ${\mathrm{d}}x \wedge {\mathrm{d}}y$ can be explained as ${\mathrm{d}_{1}}x {\mathrm{d}_2}y - {\mathrm{d}_{2}}x {\mathrm{d}_{1}}y$ (or half that, depending on your conventions for antisymmetrization). But ${\mathrm{d}_{1}}x {\mathrm{d}_{2}}y$ can also be written as ${\mathrm{d}}x \otimes {\mathrm{d}}y$, and I'm not sure that there's much precedent for ${\mathrm{d}_{2}}x$ by itself.

What there really is no precedent for is something like ${\mathrm{d}_{2}^{2}}x$. But I think that I know what this should mean; it should be ${\mathrm{d}_{\{2,3\}}}x$, just as ${\mathrm{d}_{1}^{2}}x$ means ${\mathrm{d}_{\{1,2\}}}x$.

Finally, ${\mathrm{d}_{\{1,3\}}}x$ also has an abbreviation that I've used before, as in this StackExchange answer; and which Mike used as well, way back in comment #1: it's ${\mathrm{d}} \otimes {\mathrm{d}}x$ (or ${\mathrm{d}^{{\otimes}2}x}$ to be even shorter). This is because the first (from the right) ${\mathrm{d}}$ is ${\mathrm{d}_{1}}$, the next would be ${\mathrm{d}_{2}}$ if it were just ${\mathrm{d}\mathrm{d}}x$ (aka ${\mathrm{d}^{2}}x$), but the tensor product pushes the order back one more.

Except that that's not what I was thinking when I wrote that answer! I was thinking more like ${\mathrm{d}_{2}\mathrm{d}_{1}}x$ (but not wanting to use the subscripts, since I hadn't introduced them in that answer). That is, I'm thinking of ${\mathrm{d}_{1}}$, ${\mathrm{d}_{2}}$, etc as operators that can be applied to any generalized differential form. They can't just adjoin an element to the subscript, since the subscript might already have that element. So ${\mathrm{d}_{1}\mathrm{d}_{\{1\}}}x = {\mathrm{d}_{\{1,2\}}}x$, which is why ${\mathrm{d}_{2}\mathrm{d}_{\{1\}}}x$ must be ${\mathrm{d}_{\{1,3\}}}x$ etc.

So here's another way to label the vertices on the cube above:

Note that there's no contradiction here with the notation in the previous cube! The rule is that ${\mathrm{d}_{i}}$ immediately in front of $x$ adjoins $i$ to the subscript set, while ${\mathrm{d}_{i}}$ in the next position to the left adjoins $i + 1$ to the subscript, etc. So repeating ${\mathrm{d}_{i}}$ (which you can also abbreviate using exponents) never tries to adjoin something that's already in the set.

At least, that's the case if the subscript integers come in weakly decreasing order. But what if we apply ${\mathrm{d}_{1}}$ to ${\mathrm{d}_{2}x}$; in other words, what is ${\mathrm{d}_{1}\mathrm{d}_{2}}x$? I think that this has to be ${\mathrm{d}_{2}\mathrm{d}_{1}}x$, that is ${\mathrm{d}_{\{1,3\}}}x$. So to interpret the operator ${\mathrm{d}_{i}}$ properly, you first need to rearrange all of the individual differentials into the proper order.

The other direction is easier. To interpret ${\mathrm{d}_{I}}x$, where $I$ is a set of indices, as the result of applying some operations ${\mathrm{d}_{i}}$ to $x$, each element $i$ of $I$ is interpreted as ${\mathrm{d}_{i-j}}$, where $j$ is the number of elements of $I$ that are less than $i$. So a special case is that ${\mathrm{d}_{\{1,\ldots,k\}}}x$ is ${\mathrm{d}_{1}^{k}}x$, as Mike used in #7 (but thinking in the reverse direction). But also ${\mathrm{d}_{\{1,3\}}}x$ is ${\mathrm{d}_{1}\mathrm{d}_{2}}x$ (or equivalently ${\mathrm{d}_{2}\mathrm{d}_{1}}x$), where the ${\mathrm{d}_{1}}$ comes from $1 \in \{1,3\}$ and the ${\mathrm{d}_{2}}$ comes from $3 \in \{1,3\}$, because there's $1$ element of $\{1,3\}$ that's less than $3$ (namely, $1 \lt 3$), so we use ${\mathrm{d}_{3-1}}$.

In fact, we can define an *operator* ${\mathrm{d}_{I}}$ for any set $I$ of indices. Again, each element $i$ of $I$ gives us an operator ${\mathrm{d}_{i-j}}$, where $j$ is the number of elements of $I$ that are less than $i$. (And since these operators commute, we can apply them in any order.) For example, ${\mathrm{d}_{\{1,3\}}\mathrm{d}_{\{2,3\}}}x$ means ${\mathrm{d}_{2}^{3}\mathrm{d}_{1}}x$. This is because the element $1$ of $\{1,3\}$ gives us ${\mathrm{d}_{1}}$ (since nothing in $\{1,3\}$ is less than $1$), the element $3$ of $\{1,3\}$ gives us ${\mathrm{d}_{3-1}}$ (since $\{i \in I \;|\; i \lt 3\} = \{1\}$ and ${|{\{1\}}|} = 1$), the element $2$ of $\{2,3\}$ gives us ${\mathrm{d}_{2}}$ (since nothing in $\{2,3\}$ is less than $2$), and the element $3$ of $\{2,3\}$ also gives us ${\mathrm{d}_{3-1}}$ (since $\{i \in I \;|\; i \lt 3\} = \{2\}$ and ${|{\{2\}}|} = 1$). And this is also equal to ${\mathrm{d}_{\{1,3,4,5\}}}x$.
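The two translations (from a subscript set to individual operators, and back) are mechanical enough to sketch in Python. The helper names below are mine; composition of operators just takes the multiset union of the individual operator indices:

```python
def ops_from_set(I):
    """Decompose d_I x into individual operators d_i: element i of I
    contributes d_{i-j}, where j is the number of elements of I below i.
    Returned sorted, since the operators commute."""
    return sorted(i - j for j, i in enumerate(sorted(I)))

def set_from_ops(ops):
    """Inverse direction: a multiset of operator indices determines the
    subscript set of the resulting coordinate d_I x."""
    return {o + m for m, o in enumerate(sorted(ops))}

def apply_op_set(I, J):
    """The operator d_I applied to the coordinate d_J x."""
    return set_from_ops(ops_from_set(I) + ops_from_set(J))

# d_{{1,...,k}} x is d_1^k x:
assert ops_from_set({1, 2, 3}) == [1, 1, 1]
# d_{{1,3}} x is d_2 d_1 x:
assert ops_from_set({1, 3}) == [1, 2]
# d_2 applied to d_{{1}} x gives d_{{1,3}} x:
assert apply_op_set({2}, {1}) == {1, 3}
# d_{{1,3}} d_{{2,3}} x = d_2^3 d_1 x = d_{{1,3,4,5}} x:
assert apply_op_set({1, 3}, {2, 3}) == {1, 3, 4, 5}
```

The last two assertions reproduce the worked examples above.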

I don't know about these operators ${\mathrm{d}_{I}}$ so much, but my intuition about coflare forms is largely based on the operators ${\mathrm{d}_{i}}$ now. That's what I'm talking about in that StackExchange answer when I say that certain differentials represent velocities, accelerations, and so forth. There are potentially infinitely many different variations of $x$ (although we only consider finitely many at a time in a coflare form), and ${\mathrm{d}_{i}}$ represents an infinitesimal change in the $i$th of these. And such a change can apply to any coflare form.

That math.stackexchange answer that you linked is interesting, as are the comments by Francois Ziegler. I'm glad to see that the early Calculators explicitly stated that they were making an assumption, equivalent to $d^2{x}=0$, in deriving that formula.

The formulation on that page is indeed confusing. On the other hand, treating $d x$ as constant ($d d x=0$) does lead to the incorrect traditional formula. That’s also how Leibniz, Euler etc. arrived at the (generally incorrect) equation $\frac{d}{d x}\frac{d y}{d x} = \frac{d^2 y}{d x^2}$. Nowadays that equation is true by notational convention.

I ran across a math help page that at least tries to justify the incorrect traditional formula for the second differential; it argues ‘When we calculate differentials it is important to remember that $d x$ is arbitrary and independent from $x$ number [sic]. So, when we differentiate with respect to $x$ we treat $d x$ as constant.’. Now, that's not how differentials work; you're not differentiating with respect to anything in particular when you form a differential, and so you treat nothing as constant. (If you do treat something as constant, then you get a *partial* differential.) But at least they realize that there is something funny going on here, and that it might be meaningful to *not* treat $d x$ as constant.

Urs re #59 you are probably right, I was trying to recall from vague memory how the formula for graded commutativity works. It should follow from the definition at graded commutative algebra once we agree how a $\mathbb{Z}^2$ graded vector space is $\mathbb{Z}_2$ graded, which is probably by mapping the bi-degree $(n_1,n_2)\mapsto n_1 + n_2 \bmod 2$. Then I guess the right commutativity formula is $a\cdot b=(-1)^{(n_1+n_2)(k_1+k_2)}b\cdot a$.
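For what it's worth, here is a tiny Python comparison (my own) of the two candidate sign conventions: the total-degree one above, versus the componentwise pairing $|a|\cdot|b| := n_1 k_1 + n_2 k_2$ raised elsewhere in this thread. They disagree exactly on whether bi-degrees $(1,0)$ and $(0,1)$ anticommute:

```python
def sign_total(deg_a, deg_b):
    """Koszul sign via the total degree n1 + n2 mod 2:
    a.b = sign * b.a."""
    return (-1) ** (sum(deg_a) * sum(deg_b))

def sign_componentwise(deg_a, deg_b):
    """Koszul sign via the componentwise pairing
    |a|.|b| = n1*k1 + n2*k2."""
    return (-1) ** (deg_a[0] * deg_b[0] + deg_a[1] * deg_b[1])

# Elements of bi-degree (1,0) against elements of bi-degree (0,1):
print(sign_total((1, 0), (0, 1)))          # anticommute under total degree
print(sign_componentwise((1, 0), (0, 1)))  # commute under componentwise pairing
```

So the choice of convention is exactly the question of whether $d_1 x$ and $d_2 x$ anticommute.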

The simple Calculus exposition already exists, particularly if you go back to the 19th century. It is internally inconsistent, but perhaps it can be cleaned up to be consistent. (Already parts of it have been, but can the whole thing be bundled into a single system?) A unifying conceptual idea would be very valuable here, and so far the best idea that Mike and I have is coflare forms.

it should be as simple and concrete as possible, so that it can be taught to calculus students who don’t know what a graded commutative algebra is.

Returning the favour, allow me to add the advice: first you need to understand it, only then should it be made simple for exposition. Not the other way around! :-)

The graded algebra is there to guide the concept. The concept in the end exists also without graded algebra.

I’ll try to find time tomorrow to extract the relevant essence in Kochan-Ševera.

Michael, re #54 wait, no, the algebra in the end is the *superalgebra* $C^\infty([\mathbb{R}^{0|2},X])$ and as such is $\mathbb{Z}/2\mathbb{Z}$-graded.

I’ll have a few spare minutes tomorrow morning when I am on the train. I’ll try to sort it out then and write it cleanly into the $n$Lab entry.

Sure; I think coflare forms have plenty of internal coherence as well. We aren’t “positing” any rules, just taking seriously the idea of iterated tangent bundles and dropping any requirement of linearity on forms.

As long as I’m over here, let me record this observation here which I made in the other thread: the view of a connection in #11 gives a coordinate-invariant way (which of course depends on the chosen connection) to “forget the order-2 part” of a rank-2 coflare form, by substituting

$d^2x^i \mapsto \Gamma^i_{j k} d x^j \otimes d x^k$

wherever it appears. The transformation rule for Christoffel symbols precisely ensures that this is coordinate-invariant. Are there analogues in higher rank?

Those two guiding principles (a structure with internal coherence, simple calculations) should really be in concert.

Which “you” of us is right, #53 or #54?

I think that what Toby and I are doing has perfectly valid guiding principles. One of those principles is, I think, that it should be as simple and concrete as possible, so that it can be taught to calculus students who don’t know what a graded commutative algebra is. (-:

Sorry, yes, I suppose you are right.

Somehow I am not fully focusing on this discussion here…

I think I am just suggesting that if in search of a generalization of anything (here: exterior calculus) it helps to have some guiding principles. What Kochan-Ševera do has the great advantage that by construction it is guaranteed to have much internal coherence, because in this approach one doesn’t posit the new rules by themselves, but derives them from some more abstract principle (of course one has to derive them correctly, I just gave an example of failing to do that :-)

Wait, I’m now in doubt whether in Ševera's setting the differentials $d_1$, $d_2$ really do anti-commute. Wouldn’t the commutation rule in a $\mathbb{Z}^2$ graded commutative algebra read $a\cdot b=(-1)^{|a|\cdot |b|}b\cdot a$, where $|a|=(n_1,n_2)$ is the bi-degree of $a$ and $|a|\cdot|b|:=n_1 k_1+ n_2 k_2$, for $|b|=(k_1,k_2)$? If that’s the case, then elements of bi-degree $(1,0)$ commute with things of bi-degree $(0,1)$. ?

So $\sqrt{d x^2 + d y^2}=0$ in their setup, because $d x^2 = d y^2 = 0$?

I’m also confused because $d x^2=0$ seems to contradict the formula cited in #35, which ought to have a nonzero term with coefficient $\partial^2 f / (\partial x^i)^2$ when $i=j$.

By second order differentials I mean those with second derivatives, as in $d_1 d_2 x$. In $C^\infty([\mathbb{R}^{0|2},\mathbb{R}^1])$ we have $(d_1 x) (d_1 x) = 0$ but no power of $d_1 d_2 x$ vanishes.

And does their setup include nonlinear differential forms such as $\sqrt{d x^2 + d y^2}$?

Only for the second order differentials $d_1 d_2 x$, yes

What does that mean? Does $\sqrt{d x^2 + d y^2}$ exist or doesn’t it? Here $d x$ and $d y$ are first-order differentials, but their squares are of course second order, and then when we take a square root we get back to “first order” in a suitable sense.

tl;dr: We seek an algebra that encompasses both exterior forms and equations such as the first displayed equation in the top post of this thread.

The main motivating examples for what Mike and I have been doing (beyond the exterior calculus) are these from classical Calculus:

$f''(x) = \frac{\mathrm{d}^2f(x)}{\mathrm{d}x^2}$

(which is wrong but can be fixed) and

$\mathrm{d}s = \sqrt{\mathrm{d}x^2 + \mathrm{d}y^2}$

(which is correct except for the implication that $\mathrm{d}s$ is the differential of some global $s$). Anything that treats $\mathrm{d}x^2$ as zero is not what we're looking for.

Having determined that

$f''(x) = \frac{\mathrm{d}^2f(x)}{\mathrm{d}x^2} - f'(x) \frac{\mathrm{d}^2x}{\mathrm{d}x^2}$

is correct, we're now looking more carefully at $\mathrm{d}^2$ and deciding that (if we wish to fit exterior forms into the same algebra) the two $\mathrm{d}$s are not exactly the same operator, and that furthermore it matters which of these appears in $\mathrm{d}x$.
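As a sanity check, here is a small Python computation (my own, with the differentials read off along an arbitrary parametrization $x = x(t)$ and the $\mathrm{d}t$s suppressed, which is one way to interpret them) contrasting the naive and corrected formulas for $f(x) = x^3$ with $x(t) = t^2$:

```python
# f(x) = x**3 composed with x(t) = t**2, so f(x(t)) = t**6.
# Differentials are taken along the parametrization: d g = g'(t) dt,
# d^2 g = g''(t) dt^2, with the dt's suppressed below.
def check(t):
    fp  = lambda x: 3 * x**2        # f'
    fpp = lambda x: 6 * x           # f''
    x   = t**2
    dx  = 2 * t                     # x'(t)
    d2x = 2.0                       # x''(t)
    d2f = 30 * t**4                 # (f o x)''(t), since f(x(t)) = t^6
    naive     = d2f / dx**2                         # d^2 f / dx^2
    corrected = d2f / dx**2 - fp(x) * d2x / dx**2   # minus the f' d^2x / dx^2 term
    return fpp(x), naive, corrected

exact, naive, corrected = check(2.0)
assert abs(corrected - exact) < 1e-9   # the corrected formula recovers f''
assert abs(naive - exact) > 1          # the naive formula does not
```

Since the parametrization was arbitrary, the correction term is exactly what makes the formula independent of it.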

The current working hypothesis is that

$\mathrm{d}^2f(x) = f''(x) \,\mathrm{d}x^2 + f'(x) \,\mathrm{d}^2x$

(the previous equation, rearranged) is the symmetrization (in the algebra of cojet forms) of

$\mathrm{d}_1\mathrm{d}_0f(x) = f''(x) \,\mathrm{d}_1x \,\mathrm{d}_0x + f'(x) \,\mathrm{d}_1\mathrm{d}_0x$

(in the algebra of coflare forms).

But is Michael also right that the differentials anticommute?

Yes.

And does their setup include nonlinear differential forms such as $\sqrt{d x^2 + d y^2}$?

Only for the second order differentials $d_1 d_2 x$, yes, since only these are commuting forms in their setup. In section 4.4 you see them consider Gaussians of these.

(Do you want first order differentials to not square to 0? That would sound a dubious wish to me, but I don’t really know where you are coming from here.)

Here’s an even better explanation, again from the coflare POV. At the 2-to-1 stage, there is a map $T X \to T^2 X$ that sends a tangent vector $(x;v)$ to $(x;0,0;v)$. Second-order tangent vectors don’t transform like vectors in general, but they do if the associated first-order tangent vectors are zero, so this makes sense. The quotient map from 2-forms to 1-forms is just precomposition with this map.

Now at the 3-to-2 stage, there are *six* natural maps $T^2 X \to T^3 X$, sending $(x;u,v;w)$ to $(x;u,0,0;0,0,v;w)$ or $(x;0,u,0;0,v,0;w)$ or $(x;0,0,u;v,0,0;w)$ or the same with $u$ and $v$ switched. The formula $2(B+C+D) \mathrm{d}_{1\cdot 0} x + 6E \mathrm{d}_{10} x$ is the *sum* of the precomposition with all six of these maps. (But it would probably be more natural, in a general theory, to consider all these maps separately rather than only when added up.)

Thanks, Mike, I tried to figure out something like that, but I wasn't coming up with the right combinatorics. I like your version. It's not immediately obvious *why* it should work that way, but it does seem to work!

From the coflare perspective, the general linear order-2 form is

$A \,\mathrm{d}_{2\cdot 1\cdot 0}x + B \,\mathrm{d}_{21\cdot 0}x + C \,\mathrm{d}_{2\cdot 10}x + D \,\mathrm{d}_{1\cdot 20}x + E \,\mathrm{d}_{210} x$

where $\mathrm{d}_{2\cdot 10}x$ is shorthand for $\mathrm{d}_2x \, \mathrm{d}_{1}\mathrm{d}_0 x$, and so on. From this perspective the factor of 3 arises because your map to order-2 forms would have to give something like

$(B+C+D) \,\mathrm{d}_{1\cdot 0} x + E \,\mathrm{d}_{10} x$

whereas under a coordinate change $B$, $C$, and $D$ all get their own correction term involving $E$.

Just making stuff up, I notice that sending the above order-2 form to

$2(B+C+D) \,\mathrm{d}_{1\cdot 0} x + 6E \,\mathrm{d}_{10} x$

will also be coordinate-invariant, and there seems to be some sense in which there are two ways to “forget” from $21\cdot 0$ to $1\cdot 0$ (“send $0$ to $0$, and send $1$ to either $1$ or $2$”) and six ways to “forget” from $210$ to $10$ (the six injective maps from a 2-element set to a 3-element set).

Thanks Urs, that was the reminder I needed.

The formula for the second differential does indeed look just like the one Toby and I are working with. But is Michael also right that the differentials anticommute? $d_1d_2 = -d_2d_1$ and/or $d_1x \, d_2x = - d_2 x\, d_1x$? And does their setup include nonlinear differential forms such as $\sqrt{d x^2 + d y^2}$?

H'm, but *this* map from order $3$ to order $2$ *is* invariant:

$A \,\mathrm{d}x^3 + B \,\mathrm{d}^2x \,\mathrm{d}x + C \,\mathrm{d}^3 x \mapsto B \,\mathrm{d}x^2 + 3 C \,\mathrm{d}^2x$

(note the factor of $3$). I'm not sure what to make of that!

In local coordinates, the quotient map is

$\sum_{i \leq j} A_{i,j} \,\mathrm{d}x_i \,\mathrm{d}x_j + \sum_i B_i \,\mathrm{d}^2x_i \mapsto \sum_i B_i \,\mathrm{d}x_i .$

In $1$ dimension, an analogous quotient map from order-$3$ forms to order-$2$ forms would be

$A \,\mathrm{d}x^3 + B \,\mathrm{d}^2x \,\mathrm{d}x + C \,\mathrm{d}^3 x \mapsto B \,\mathrm{d}x^2 + C \,\mathrm{d}^2x ,$

but that is not coordinate-invariant (most simply, if $A, B = 0$, $C = 1$, and $x = t^2$). And I don't even know how I would write down a map from order $4$ to order $3$ in $2$ dimensions (in particular, for $\mathrm{d}^2x \,\mathrm{d}^2y$).
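The failure of invariance can be checked numerically. The sketch below is my own computation, assuming the usual transformation of iterated differentials in the symmetrized algebra, $\mathrm{d}x = x'\,\mathrm{d}t$, $\mathrm{d}^2x = x''\,\mathrm{d}t^2 + x'\,\mathrm{d}^2t$, $\mathrm{d}^3x = x'''\,\mathrm{d}t^3 + 3x''\,\mathrm{d}t\,\mathrm{d}^2t + x'\,\mathrm{d}^3t$; the candidate map fails to commute with the coordinate change $x = t^2$, while rescaling the $\mathrm{d}^2x$ term by $3$ makes it commute, matching the factor-of-$3$ observation elsewhere in the thread:

```python
def transform(A, B, C, xp, xpp, xppp):
    """Rewrite A dx^3 + B d^2x dx + C d^3x in t-coordinates, given the
    derivatives x', x'', x''' of the coordinate change at a point.
    Returns the coefficients of (dt^3, d^2t dt, d^3t)."""
    return (A * xp**3 + B * xp * xpp + C * xppp,
            B * xp**2 + 3 * C * xpp,
            C * xp)

def quotient(B, C, k):
    """Candidate quotient to order 2: B d^2x dx + C d^3x -> B dx^2 + k C d^2x."""
    return B, k * C

def as_order2_in_t(B, C, xp, xpp):
    """Rewrite B dx^2 + C d^2x in t-coordinates: coefficients of (dt^2, d^2t)."""
    return B * xp**2 + C * xpp, C * xp

# The example from the post: A = B = 0, C = 1 (the form d^3x), x = t^2,
# evaluated at t = 2, so x' = 4, x'' = 2, x''' = 0.
xp, xpp, xppp = 4.0, 2.0, 0.0
At, Bt, Ct = transform(0, 0, 1, xp, xpp, xppp)

for k in (1, 3):
    # route 1: quotient in x-coordinates, then change coordinates
    route1 = as_order2_in_t(*quotient(0, 1, k), xp, xpp)
    # route 2: change coordinates first, then quotient in t-coordinates
    route2 = quotient(Bt, Ct, k)
    print("k =", k, "invariant:", route1 == route2)
```

Only the $k = 3$ map makes the two routes agree, so the naive map (the $k = 1$ case displayed above) is indeed not coordinate-invariant.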

However, it seems to me that we can make a quotient map from order-$p$ forms to order-$1$ forms for any $p$, so that much is special about order $1$. But in this case, I don't see anything especially special about the kernel.
