Actually, the most interesting part of that paper is not the application near the end but the historical discussion at the very end, showing how this stuff was known to Leibniz and Euler before it was forgotten.

Someone wrote a paper containing our work on the correct way to understand second derivatives, later than us but probably independently of us: Extending the Algebraic Manipulability of Differentials by Bartlett and Khurshudyan. Note that they also have an application at the end to solving some second-order differential equations.

Re #87:
The coefficients appearing here are those that appear in Bell polynomials, and they are well known (although not to me, until yesterday) both to come from counting partitions and to give a formula for the higher derivatives of a composite function, Faà di Bruno's formula. This formula gives the higher cojet differentials of $f(x)$, where $f$ is a real-valued function of a real variable, differentiable at least $n$ times, and $x$ is a real-valued quantity (technically a real-valued function on some manifold), also differentiable at least $n$ times:
$\mathrm{d}^n\big(f(x)\big) = \sum_\pi f^{({|\pi|})}(x) \prod_{B\in{\pi}} \mathrm{d}^{|B|}x ,$ where the sum is taken over the set of all partitions of $\{1,\ldots,n\}$, each partition $\pi$ being thought of as a subset of the powerset of $\{1,\ldots,n\}$ (so that both $\pi$ and any $B \in \pi$ have a cardinality given by ${|{\cdot}|}$).
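For small $n$, this partition formula is easy to check by machine against ordinary iterated differentiation, reading $\mathrm{d}^k x$ as $x^{(k)}(t)$ (i.e. evaluating on a curve with $\mathrm{d}t = 1$). A sketch in Python with sympy; the particular $f(y) = y^5 + y$ and $x(t) = \sin t$ are arbitrary illustrations, and `faa_di_bruno` is just my name for the partition sum:

```python
from sympy import symbols, sin, diff, simplify
from sympy.utilities.iterables import multiset_partitions

t, y = symbols('t y')
F = y**5 + y   # f, an arbitrary illustration
X = sin(t)     # x, an arbitrary illustration

def faa_di_bruno(n):
    # sum over set partitions pi of {1, ..., n}:
    # f^(|pi|)(x) times the product of d^|B| x over blocks B
    total = 0
    for pi in multiset_partitions(list(range(n))):
        term = diff(F, y, len(pi)).subs(y, X)
        for B in pi:
            term *= diff(X, t, len(B))
        total += term
    return total

# with dt = 1, d^n(f(x)) is the n-th t-derivative of f(x(t))
for n in range(1, 6):
    assert simplify(faa_di_bruno(n) - diff(F.subs(y, X), t, n)) == 0
```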
A partly multivariable version of the formula may be adapted to coflare forms. First some notation: if $A = \{i_1,i_2,\ldots,i_n\}$ is a finite multisubset of $\mathbb{N}$, then write $\mathrm{d}^A{u}$ for $\mathrm{d}_{i_1}\mathrm{d}_{i_2}{\cdots}\mathrm{d}_{i_n}u$ (which is unambiguously defined if $u$ is at least $n$ times differentiable). Also, if $B \subseteq \{1,\ldots,n\}$ (a set, not any multiset), then let $i_B$ be $\{i_j \;|\; j \in B\}$ (a multiset). With this notation,
$\mathrm{d}^A\big(f(x)\big) = \sum_\pi f^{({|\pi|})}(x) \prod_{B\in{\pi}} \mathrm{d}^{i_B}x ,$ a partial decategorification of the cojet version.
A fully multivariable version of the formula would also allow $f$ to be a function of $m$ variables, with $\mathrm{d}\big(f(x_1,\ldots,x_m)\big) = \nabla{f}(x_1,\ldots,x_m) \cdot \langle{\mathrm{d}x_1,\ldots,\mathrm{d}x_m}\rangle = \sum_{j=1}^m \mathrm{D}_j{f}(x_1,\ldots,x_m) \mathrm{d}x_j$ as the order-$1$ case, but I haven't tried to think that through yet.
ETA: You can take $A$ and $i_B$ to be tuples rather than multisets, if you prefer. But the order doesn't matter, just as with partial derivatives.

Trying to make the previous comment work with second derivatives:
Suppose that $u$ is a function of $x$. Then
$\mathrm{d}u = \frac{\partial u}{\partial x}\,\mathrm{d}x,$ so
$\frac{\partial u}{\partial x} = \frac{\mathrm{d}u}{\mathrm{d}x}.$ Thus,
$\frac{\partial^2 u}{\partial x^2} = \frac{\partial\left(\frac{\partial u}{\partial x}\right)}{\partial x} = \frac{\mathrm{d}\left(\frac{\mathrm{d}u}{\mathrm{d}x}\right)}{\mathrm{d}x},$ which expands to
$\frac{\partial^2 u}{\partial x^2} = \frac{\mathrm{d}x\,\mathrm{d}^2 u - \mathrm{d}u\,\mathrm{d}^2 x}{\mathrm{d}x^3}.$ On the other hand,
$\mathrm{d}^2 u = \frac{\partial^2 u}{\partial x^2}\,\mathrm{d}x^2 + \frac{\partial u}{\partial x}\,\mathrm{d}^2 x,$ so
$\mathrm{d}^2 u \wedge \mathrm{d}^2 x = \frac{\partial^2 u}{\partial x^2}\,\mathrm{d}x^2 \wedge \mathrm{d}^2 x,$ so
$\frac{\partial^2 u}{\partial x^2} = \frac{\mathrm{d}^2 u \wedge \mathrm{d}^2 x}{\mathrm{d}x^2 \wedge \mathrm{d}^2 x}.$ Now suppose that $u$ is a function of $x$ and $y$. Then
$\mathrm{d}u = \frac{\partial u}{\partial x}\,\mathrm{d}x + \frac{\partial u}{\partial y}\,\mathrm{d}y,$ so
$\mathrm{d}u \wedge \mathrm{d}y = \frac{\partial u}{\partial x}\,\mathrm{d}x \wedge \mathrm{d}y,$ so
$\frac{\partial u}{\partial x} = \frac{\mathrm{d}u \wedge \mathrm{d}y}{\mathrm{d}x \wedge \mathrm{d}y}.$ Thus,
$\frac{\partial^2 u}{\partial x^2} = \frac{\partial\left(\frac{\partial u}{\partial x}\right)}{\partial x} = \frac{\mathrm{d}\left(\frac{\mathrm{d}u \wedge \mathrm{d}y}{\mathrm{d}x \wedge \mathrm{d}y}\right) \wedge \mathrm{d}y}{\mathrm{d}x \wedge \mathrm{d}y},$ which unfortunately can't be expanded without abandoning the $\wedge$ notation.
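As a sanity check on the single-variable quotient formula above, everything can be parametrized by $t$, so that $\mathrm{d}u$, $\mathrm{d}^2 u$, $\mathrm{d}x$, $\mathrm{d}^2 x$ become $t$-derivatives with $\mathrm{d}t = 1$. A sympy sketch; the choices of $f$ and $x(t)$ are arbitrary illustrations:

```python
from sympy import symbols, sin, exp, diff, simplify

t, y = symbols('t y')
X = exp(t)            # x(t), an arbitrary illustration
F = sin(y)            # f, an arbitrary illustration
U = F.subs(y, X)      # u = f(x(t))

du, d2u = diff(U, t), diff(U, t, 2)   # du, d^2 u  (with dt = 1)
dx, d2x = diff(X, t), diff(X, t, 2)   # dx, d^2 x

lhs = diff(F, y, 2).subs(y, X)        # d^2 u / d x^2 = f''(x)
rhs = (dx*d2u - du*d2x) / dx**3
assert simplify(lhs - rhs) == 0
```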
On the other hand,
$\mathrm{d}^2 u = \frac{\partial^2 u}{\partial x^2}\,\mathrm{d}x^2 + 2\frac{\partial^2 u}{\partial x\,\partial y}\,\mathrm{d}x\,\mathrm{d}y + \frac{\partial^2 u}{\partial y^2}\,\mathrm{d}y^2 + \frac{\partial u}{\partial x}\,\mathrm{d}^2 x + \frac{\partial u}{\partial y}\,\mathrm{d}^2 y,$ so
$\mathrm{d}^2 u \wedge \mathrm{d}x\,\mathrm{d}y \wedge \mathrm{d}y^2 \wedge \mathrm{d}^2 x \wedge \mathrm{d}^2 y = \frac{\partial^2 u}{\partial x^2}\,\mathrm{d}x^2 \wedge \mathrm{d}x\,\mathrm{d}y \wedge \mathrm{d}y^2 \wedge \mathrm{d}^2 x \wedge \mathrm{d}^2 y,$ so
$\frac{\partial^2 u}{\partial x^2} = \frac{\mathrm{d}^2 u \wedge \mathrm{d}x\,\mathrm{d}y \wedge \mathrm{d}y^2 \wedge \mathrm{d}^2 x \wedge \mathrm{d}^2 y}{\mathrm{d}x^2 \wedge \mathrm{d}x\,\mathrm{d}y \wedge \mathrm{d}y^2 \wedge \mathrm{d}^2 x \wedge \mathrm{d}^2 y}.$

On the subject of partial derivatives, John Denker makes the interesting point that
$\Big(\frac{\partial{u}}{\partial{x}}\Big)_{y,z} = \frac{\mathrm{d}u \wedge \mathrm{d}y \wedge \mathrm{d}z}{\mathrm{d}x \wedge \mathrm{d}y \wedge \mathrm{d}z}$at http://www.av8n.com/physics/partial-derivative.htm#sec-wedge-ratio. This is easy enough to verify by calculation, but also check out the pictorial explanation.
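One way to do the verification by calculation: on a parametrized region, each wedge of three differentials evaluates to a Jacobian determinant, so the wedge ratio becomes a ratio of determinants. A sympy sketch, where $g$ and the expressions for $x, y, z$ in parameters $(a, b, c)$ are arbitrary illustrations:

```python
from sympy import symbols, sin, exp, diff, simplify, Matrix

a, b, c, x, y, z = symbols('a b c x y z')
G = x**2*sin(y) + z                   # u = g(x, y, z), arbitrary illustration
X, Y, Z = a + b*c, b - a, c*exp(a)    # x, y, z in the parameters, arbitrary
repl = [(x, X), (y, Y), (z, Z)]
U = G.subs(repl)

def wedge(fs):
    # du1 ^ du2 ^ du3 evaluated on (a, b, c): the Jacobian determinant
    return Matrix([[diff(f, v) for v in (a, b, c)] for f in fs]).det()

ratio = wedge([U, Y, Z]) / wedge([X, Y, Z])
assert simplify(ratio - diff(G, x).subs(repl)) == 0
```

Wedging with $\mathrm{d}y \wedge \mathrm{d}z$ kills the $\partial u/\partial y$ and $\partial u/\partial z$ terms of $\mathrm{d}u$, which is exactly why the determinant ratio collapses to the partial derivative.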

Or simply that $\mathrm{d}_0 \neq \mathrm{d}_1$. Either will do, since the first nontrivial coefficient comes from combining $\mathrm{d}_2\mathrm{d}_1x \,\mathrm{d}_0x$, $\mathrm{d}_1x \,\mathrm{d}_2\mathrm{d}_0x$, and $\mathrm{d}_2x \,\mathrm{d}_1\mathrm{d}_0x$, where already for each pair there are two differences between them.

Yes, that’s true; I think I meant to say something like $d_1d_0 \neq d_2d_0$.

In coflare differentials, I don't think that $\mathrm{d}_0\mathrm{d}_1x$ makes sense at all; in any case, it doesn't show up in $\mathrm{d}^{n}f(x)$. That's just as well, since the Stirling number doesn't count $\{\{0,1\}\}$ and $\{\{1,0\}\}$ as distinct partitions of $2$ into $1$ nonempty subset.

Re: #87, the sum of the coefficients of terms in $\mathrm{d}^n f$ involving $f^{(k)}(x)$ is the Stirling number of the second kind $S(n,k)$: the number of ways to partition an $n$-element set into $k$ nonempty subsets. The coefficients themselves are simply the further classification of these partitions according to the multiset of cardinalities of the $k$ nonempty subsets (which feels like it ought to have something to do with Young tableaux). This is more obvious if we use the coflare differentials where $d_1 d_0 \neq d_0 d_1$: then none of the terms can be combined, and each term like $\mathrm{d}_{2}\mathrm{d}_0 x \, \mathrm{d}_3 \mathrm{d}_1 x$ evidently represents a particular partition of an $n$-element set into $k$ nonempty subsets.
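This refinement of the Stirling numbers is quick to tabulate by machine. A Python sketch (sympy supplies $S(n,k)$ and the set partitions), using $n = 4$, whose coefficients $1, 6, 3, 4, 1$ appear in the expansion in #87:

```python
from collections import Counter
from sympy.functions.combinatorial.numbers import stirling
from sympy.utilities.iterables import multiset_partitions

n = 4
by_shape = Counter()
for pi in multiset_partitions(list(range(n))):
    # classify each set partition by the multiset of its block sizes
    by_shape[tuple(sorted(len(B) for B in pi))] += 1

# block-size multisets vs. the coefficients of d^4 f: 1, 6, 3, 4, 1
assert by_shape == Counter({(1, 1, 1, 1): 1, (1, 1, 2): 6,
                            (2, 2): 3, (1, 3): 4, (4,): 1})

# summing over shapes with k blocks recovers the Stirling number S(n, k)
for k in range(1, n + 1):
    assert sum(v for s, v in by_shape.items() if len(s) == k) == stirling(n, k)
```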

Looks good! I discussed it in a thread dedicated to it. (Mike already noticed this, but I record it for the sake of future generations.)

I thought it was about time to record some of this discussion, so I created cogerm differential form.

It feels weird to me that we have the world of cogerm 1-forms with the commutative $d$, and the world of exterior forms with the exterior $d\wedge$, which agree in the world of linear degree-1 1-forms and the differential of functions, but are thereafter completely unrelated.
There is some more overlap if you look at symmetric bilinear forms (rather than only the antisymmetric ones that are exterior $2$-forms). Some cojet (or cogerm) forms are linear, and these agree with the exterior $1$-forms; but some cojet forms are quadratic, and these agree with the symmetric bilinear forms. Of course, these are viewed as functions of different things, but they are equivalent by the polarization identities. An arbitrary bilinear form is then given by a quadratic cojet form together with an exterior $2$-form.
This doesn't go so easily into higher rank.

Can an absolute 1-form be regarded as a cojet form like $|d x|$ defined by
$\langle {|\omega|} ; c\rangle = {\Big|\langle \omega ; c\rangle\Big|}?$
I would certainly accept this definition of ${|\omega|}$ in line with the previous discussion of $f(\omega)$ (where $\omega$ is a cojet form, or more generally a finite list of such, and $f$ is a differentiable function); there's no reason that $f$ has to be differentiable (we just can't conclude that $f(\omega)$ is differentiable).
So I guess that your question is: if $\omega$ is an exterior $1$-form, then is this ${|\omega|}$ the absolute $1$-form called ${|\omega|}$ on the absolute differential form page? And the answer is Yes; at least, it certainly does the right thing to a curve.
But not every absolute $1$-form arises in this way! Besides multiplying by an arbitrary $0$-form (so that an absolute $1$-form need not be positive semidefinite), even some positive definite forms, such as $\sqrt{\mathrm{d}x^2 + \mathrm{d}y^2}$, don't arise in this way.
Nevertheless, any absolute $1$-form does have an action on curves (via their tangent vectors, if you follow the definition at absolute differential form), and this is homogeneous of degree $1$, so your integration formula does integrate them.

Probably “$|c|^2$” should instead be the area enclosed by $c$. But having thought about it a little more, I realized those limits don’t really make sense unless the integrals are invariant under reparametrization. So maybe the exterior differential doesn’t really make sense except for degree-1 1-forms? And is there any sort of commutative differential on 2-forms? Would we hope or expect it to behave in any particular way? It feels weird to me that we have the world of cogerm 1-forms with the commutative $d$, and the world of exterior forms with the exterior $d\wedge$, which agree in the world of linear degree-1 1-forms and the differential of functions, but are thereafter completely unrelated.

Over in the other thread, David R posted a link to an MO answer which reminded me to look back at Arnold’s book on classical mechanics, which suggests the following definition of the exterior differential of a cojet (or perhaps “cogerm” would be more appropriate) 1-form:
$\langle d\wedge \eta {|} S \rangle = \lim_{c\to 0} \frac{1}{{|c|}^2} \oint_{S\circ c} \eta$ where $c$ is a loop inside the parametrized surface $S$ which shrinks to nothing around $(0,0)$. (It might be a rectangle or parallelogram, but from the general perspective that restriction seems unaesthetic.)
Comparing this to the definition of the differential $d$ from cogerm 1-forms to cogerm 1-forms, and its relationship to the exterior differential acting from 0-forms to 1-forms, suggests the following operation from cogerm 2-forms to cogerm 2-forms:
$\langle d \omega {|} S \rangle = \lim_{c\to 0} \frac{1}{{|c|}^2} \int_{t=a}^b \langle \omega {|} S_{c(t)} \rangle$ where $c$ is a loop as before, with domain $[a,b]$, and $S_{(u,v)}(s,t) = S(s+u,t+v)$ is a shifted version of the surface. Is this a 2-form version of the cogerm differential?
Just throwing stuff out there at the moment, hoping sometime soon I’ll have time to think about it all carefully.

Suppose I start with a function and take its cojet differential over and over again.
$d f(x) = f'(x) dx$
$d^2f(x) = f''(x) dx^2 + f'(x)d^2x$
$d^3f(x) = f'''(x) dx^3 + 3 f''(x) dx\cdot d^2x + f'(x) d^3x$
$d^4f(x) = f^{(4)}(x) dx^4 + 6 f'''(x) dx^2 d^2x + f''(x)(3(d^2x)^2 + 4 dx \cdot d^3x) + f'(x) d^4x$
$d^5f(x) = f^{(5)}(x) dx^5 + 10 f^{(4)}(x) dx^3 \cdot d^2 x + f'''(x)(15dx\cdot (d^2x)^2 + 10 dx^2 \cdot d^3x) + f''(x) (10 d^2x \cdot d^3x + 5 dx \cdot d^4x) + f'(x) d^5 x$
It appears that each term in $d^n f(x)$ is of the form
$a f^{(k)}(x) d^{i_1}x \cdot d^{i_2}x \cdot \cdots \cdot d^{i_k}x$ for some $k\le n$ and some (unordered) partition $i_1 + i_2 + \cdots + i_k = n$. Are the coefficients appearing here some well-known combinatorial numbers associated to partitions?
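One way to explore the question by machine is to iterate the differential formally. A sympy sketch, where `xs[i]` stands for $d^{i+1}x$ and `fs[k]` for $f^{(k)}(x)$ (all names here are my own, purely illustrative):

```python
from sympy import symbols, expand

N = 6
xs = symbols(f'x1:{N + 2}')   # xs[i] stands for d^(i+1) x
fs = symbols(f'f0:{N + 2}')   # fs[k] stands for f^(k)(x)

def d(expr):
    """Apply the commutative cojet differential formally, as a derivation."""
    out = 0
    for i in range(N):
        out += expr.diff(xs[i]) * xs[i + 1]          # d(d^i x) = d^(i+1) x
    for k in range(N):
        out += expr.diff(fs[k]) * fs[k + 1] * xs[0]  # d(f^(k)) = f^(k+1) dx
    return expand(out)

e = fs[0]
for _ in range(4):
    e = d(e)
# e is now d^4 f(x); compare with the expansion above
expected = (fs[4]*xs[0]**4 + 6*fs[3]*xs[0]**2*xs[1]
            + fs[2]*(3*xs[1]**2 + 4*xs[0]*xs[2]) + fs[1]*xs[3])
assert expand(e - expected) == 0
```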

I still haven’t decided whether I can justify talking about exterior differential forms at all, given that our standard textbook does everything the traditional way in terms of vectors. Is there a good multivariable calculus textbook that uses differential forms?
I don't know of one; even Dray & Manogue don't go that far.
My justification is that they're already integrating differential forms; the classical expression $\int \mathbf{F} \cdot d\mathbf{r}$ is already the integral of a differential form; you just need to take it literally. All of the formulas are in my handout (where Page 6 is strictly time-permitting … which so far it hasn't been).

Re: #80, the wedge product of two cojet 1-forms $\omega$ and $\eta$ ought probably to be the “cojet 2-form” defined on a surface germ $c$ by
$\langle \omega\wedge\eta {|} c \rangle =\langle\omega {|} \lambda s.c(s,0) \rangle \cdot \langle\eta {|} \lambda s.c(0,s) \rangle - \langle\omega {|} \lambda s.c(0,s) \rangle \cdot \langle\eta {|} \lambda s.c(s,0) \rangle$

One issue with my proposed notion of integration in #81 is that in general, it will depend on the parametrization of the curve, whereas the integral of an ordinary 1-form along a curve does not (though it does depend on its orientation). However, it does include integration with respect to $ds = \sqrt{dx^2+dy^2}$, which is also parametrization-invariant — I guess what matters for that is not linearity but “degree-1 homogeneity”.
Does it also include integration of absolute 1-forms? Can an absolute 1-form be regarded as a cojet form like $|dx|$ defined by
$\langle {|\omega|} ; c\rangle = {\Big|\langle \omega ; c\rangle\Big|}?$ (I changed your notation $\langle \omega | c \rangle$ to $\langle \omega ; c \rangle$ to avoid confusion with the absolute value bars.)

these operations do depend only on the jets, even when the germs differ
That’s true if by “these operations” you mean the ones constructed from functions by applying the cojet $d$ and algebra operations. In #72 you suggested generating a subring, so I guess this is what you’re thinking of. Although $e^{dx}$ wouldn’t be in that subring, nor would $\sqrt{dx^2 + dy^2}$; we’d need to close up under more functions than the ring operations. The whole ring of operations-on-germs, of course, might include operations that really do depend on the whole germ rather than only the jets, although I can’t think of any examples off the top of my head.
In my Calculus classes, I've been using $\mathrm{d} \wedge \eta$ for the exterior differential of $\eta$
That’s good! I might do the same when I get to exterior derivatives. (Although I still haven’t decided whether I can justify talking about exterior differential forms at all, given that our standard textbook does everything the traditional way in terms of vectors. Is there a good multivariable calculus textbook that uses differential forms?)
the main reason for using differentials in class is that people use them in applied fields
Hmm, that’s one good reason, but I think another good reason is that they just make the concepts easier to understand and the computations easier to do. However, it’s not clear to me that higher cojet differentials would be much use in single-variable calc for either of those purposes. The main advantage I see right now is if I could somehow avoid talking about derivatives at all and use only differentials, but to be really effective that would require a supporting textbook.

You used $\langle{\eta{|}c}\rangle$ up in #74 here…
Oops, never mind, that was me, not you!
a germ is not determined by its $k$-jets for $k\lt\infty$, is it?
Ah, no, I must have been implicitly assuming that every function (or at least every smooth function) is analytic, and we wouldn't want to restrict to analytic curves. Still, these operations do depend only on the jets, even when the germs differ. But germs are a simpler concept.
if I teach my calc 1 or calc 2 students to calculate with cojet differentials, aren’t they going to be confused when they get to multivariable and I tell them that now $d^2=0$?
In my Calculus classes, I've been using $\mathrm{d} \wedge \eta$ for the exterior differential of $\eta$. They've already seen $\eta \wedge \zeta$ by this point, and this gives the right idea regarding skew-commutativity. (In particular, the signs in the product rule
$\mathrm{d} \wedge (\eta \wedge \zeta) = (\mathrm{d} \wedge \eta) \wedge \zeta + (-1)^{|\eta|} \eta \wedge (\mathrm{d} \wedge \zeta) = (-1)^{(1 + {|\eta|}){|\zeta|}} \zeta \wedge \mathrm{d} \wedge \eta + (-1)^{|\eta|} \eta \wedge \mathrm{d} \wedge \zeta$ come out right that way. Not that I ever write down anything like this in that class.) So $\mathrm{d} \wedge \mathrm{d} \wedge \eta = 0$, but this is very different from $\mathrm{d}^2 \eta = \mathrm{d} (\mathrm{d} \eta)$.
I do tell them that people usually don't put the wedge in there (and that they sometimes don't put the wedge in the wedge product either), and this is OK because they're restricting attention to exterior differential forms.
But even though I don't actually use higher differentials in my Calculus classes^{1}, they do see differential forms that aren't exterior forms. There are the absolute differential forms, of course, but there's more; consider
$đs = \sqrt{\mathrm{d}x^2 + \mathrm{d}y^2} .$ It would be criminal not to introduce that in class! But what is $\mathrm{d}x^2$? (or ${|\mathrm{d}x|}^2$). It can be thought of as a symmetric bilinear form, but it's also a cojet form. (The two operations, one on a pair of curves and one on a single curve, are related by polarization.)
Now that I understand them better, I might. But expressing, say, the second derivative test for extreme values in terms of differentials instead of derivatives looks so different that it may be too difficult, when it's not in the book. Anyway, the main reason for using differentials in class is that people use them in applied fields, so it's not so justifiable to bring in something that you and I invented ourselves. ↩
Here’s another thought: can we integrate an arbitrary cojet form? Suppose $\omega$ is a real-valued operator on germs of curves, and let $c$ be a curve defined on $(a-\epsilon,b+\epsilon)$. Then we have a function $f:[a,b]\to\mathbb{R}$ defined by
$f(x) = \langle \omega {|} c_{x} \rangle$ and we could define
$\oint_c \omega = \int_{a}^b f(x) dx$ if the RHS exists. It seems like it ought to follow that
$\oint_c d\omega = \langle \omega {|} c_b \rangle - \langle \omega {|} c_a \rangle.$ (where $d$ is the commutative cojet differential). But it’s late at night, so I could be spewing nonsense…
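Not nonsense, at least in examples: for, say, $\omega = \mathrm{d}x^2$ we have $\langle \omega {|} c_x \rangle = c'(x)^2$ and $d\omega = 2\,\mathrm{d}x\,\mathrm{d}^2x$, so the claimed identity reduces to the fundamental theorem of calculus. A sympy sketch of that instance, with an arbitrary illustrative curve:

```python
from sympy import symbols, sin, exp, diff, integrate, simplify

t, a, b = symbols('t a b')
c = exp(t) + sin(t)                 # arbitrary illustrative curve x = c(t)
f = diff(c, t)**2                   # <omega | c_t>  for omega = dx^2
df = 2*diff(c, t)*diff(c, t, 2)     # <d omega | c_t> = <2 dx d^2x | c_t>
lhs = integrate(df, (t, a, b))      # the proposed integral of d omega over c
rhs = f.subs(t, b) - f.subs(t, a)   # <omega | c_b> - <omega | c_a>
assert simplify(lhs - rhs) == 0
```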

I wonder whether cojet forms and exterior forms could be unified in a larger framework? In some sense, all these cojet forms are still only 1-forms: even though they involve higher derivatives, they only act on curves. But we could consider instead real-valued operators on germs of parametrized surfaces or hypersurfaces as well. For instance, if $\omega$ is an operator on germs of curves, we could define its exterior differential $\hat{d}\omega$ as an operator on germs of surfaces by
$\langle \hat{d}\omega {|} c \rangle = \lim_{t\to 0} \frac{ \langle \omega {|} \lambda s.c(s,0) \rangle + \langle \omega {|} \lambda s.c(t,s) \rangle - \langle \omega {|} \lambda s.c(s,t) \rangle - \langle \omega {|} \lambda s.c(0,s) \rangle }{t}$ or perhaps in the case when $\omega$ might be nonlinear it would be better to say
$\langle \hat{d}\omega {|} c \rangle = \lim_{t\to 0} \frac{ \langle \omega {|} \lambda s.c(s,0) \rangle + \langle \omega {|} \lambda s.c(t,s) \rangle + \langle \omega {|} \lambda s.c(-s,t) \rangle + \langle \omega {|} \lambda s.c(0,-s) \rangle }{t}$ I haven’t checked that this is at all sensible. But it also starts (unsurprisingly) to make me think of the Weil algebras that define the infinitesimal objects in SDG.

You mean that $c$ is smooth at $0$
Yes, thanks.
Certainly you borrowed notation from an off-site file linked only in the other thread!
Really? What notation? You used $\langle{\eta{|}c}\rangle$ up in #74 here…
spaces of jets (the limit of which is the space of germs)
Technicality again, but that doesn’t seem quite right to me; at least, I can’t see a sense in which it’s true. In particular, a germ is not determined by its $k$-jets for $k\lt\infty$, is it?
We certainly can call them differential forms
Okay, I see the point that it’s historically fine, but my experience is that nowadays mathematicians pretty universally say “differential form” to mean “exterior differential form”. I guess “cojet differential form” would suffice to clarify, which might get abbreviated to “cojet form”.
I think my main worry is using the same symbol $d$ for the cojet differential and the exterior differential. For instance, pedagogically speaking, if I teach my calc 1 or calc 2 students to calculate with cojet differentials, aren’t they going to be confused when they get to multivariable and I tell them that now $d^2=0$?