
Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeMay 14th 2014

    In talking and thinking about second derivatives recently, I’ve been led towards a different point of view on higher differentials. We (or Toby, really) invented cogerm differential forms (including cojet ones) in order to make sense of an equation like

    $$\mathrm{d}^2f(x) = f''(x)\, \mathrm{d}x^2 + f'(x)\, \mathrm{d}^2x$$

    with $\mathrm{d}x$ an object that’s actually being squared in a commutative algebra. But it seems to be difficult to put these objects together with $k$-forms for $k \gt 1$ in a sensible way. Moreover, they don’t seem to work so well for defining differentiability: the definition of the cogerm differential in terms of curves is an extension of directional derivatives, but it’s not so clear how to ask that a function (or form) be “differentiable” in a sense stronger than having directional derivatives.

    The other point of view that I’m proposing is to use the iterated tangent bundles instead of the cojet bundles: define a $k$-form to be a function $T^k X \to \mathbb{R}$. A point of $T^k X$ includes the data of $k$ tangent vectors, so an exterior $k$-form in the usual sense can be regarded as a $k$-form in this sense. And for each $k$, such functions are still an algebra and can be composed with functions on $\mathbb{R}$, so that we still have things like $\sqrt{\mathrm{d}x^2 + \mathrm{d}y^2}$ and ${|\mathrm{d}x|}$ as 1-forms.

    However, the difference is in the differentials. Now the most natural differential is one that takes a $k$-form to a $(k+1)$-form: since $T^k X$ is a manifold (at least if $X$ is smooth), we can ask that a function $\omega \colon T^k X \to \mathbb{R}$ be differentiable in the sense of having a linear approximation, and if so its differential is a map $T^{k+1} X \to \mathbb{R}$ that is fiberwise linear.

    I think it would make the most sense to denote this differential by $\mathrm{d}\otimes \omega$, because in general it is neither symmetric nor antisymmetric. For instance, if $\omega = x \, \mathrm{d}y$ is a 1-form, then $\mathrm{d}\otimes \omega = \mathrm{d}x \otimes \mathrm{d}y + x\, \mathrm{d}^2 y$, where $\mathrm{d}x \otimes \mathrm{d}y$ is the 2-form that multiplies the $x$-component of the first underlying vector of its argument by the $y$-component of the second underlying vector, and $\mathrm{d}^2 x$ is the 2-form that takes the $x$-component of the “second-order” part of its argument (which looks in coordinates like another tangent vector, but doesn’t transform like one). However, if $f$ is a twice differentiable function, then $\mathrm{d}\otimes \mathrm{d}f$ is symmetric, e.g. we have

    $$\mathrm{d}\otimes \mathrm{d}f(x) = f''(x)\, \mathrm{d}x \otimes \mathrm{d}x + f'(x)\, \mathrm{d}^2x.$$

    So this amounts to regarding the second differential as a bilinear form (plus the $\mathrm{d}^2x$ bit) rather than a quadratic form, which appears to be necessary in order to characterize second-differentiability.
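    The second differential above can be checked mechanically by treating $x$, its first-order differentials, and the second-order part as independent symbols. Here is a minimal sketch in Python with sympy; the names `d1x`, `d2x` (the two first-order differentials) and `d21x` (the second-order part $\mathrm{d}^2x$) are my own encoding, not standard notation.

```python
import sympy as sp

x, d1x, d2x, d21x = sp.symbols('x d1x d2x d21x')
f = sp.Function('f')

def d(expr, pairs):
    """One step of the coflare differential: differentiate with respect to
    each generator and multiply by that generator's own differential."""
    return sum(sp.diff(expr, v) * dv for v, dv in pairs)

# first differential: x has differential d1x
df = d(f(x), [(x, d1x)])                  # f'(x) * d1x
# second differential: x now gets d2x, and d1x gets the second-order d21x
ddf = d(df, [(x, d2x), (d1x, d21x)])      # f''(x)*d1x*d2x + f'(x)*d21x
print(ddf)
```

    Expanding, `ddf` is exactly $f''(x)\,\mathrm{d}x\otimes\mathrm{d}x + f'(x)\,\mathrm{d}^2x$, with the two tensor slots kept as separate symbols rather than collapsed to a square.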

    In general, we can symmetrize or antisymmetrize $\mathrm{d}\otimes \omega$ by adding or subtracting its images under the action of the symmetric group $S_{k+1}$ on $T^{k+1} X$. The first gives a result that we might denote $\mathrm{d}\omega$, while the second restricts to the exterior derivative on exterior forms, so we may denote it $\mathrm{d}\wedge \omega$. In this way, the symmetric and the antisymmetric differential fit into the same picture.
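    As a concrete check of the symmetrization and antisymmetrization, one can carry out the $\omega = x\,\mathrm{d}y$ example symbolically. A sketch in Python with sympy; which of the two tangent slots counts as “first” is a labeling convention, and the symbol names are mine:

```python
import sympy as sp

x, y = sp.symbols('x y')
d1x, d1y, d2x, d2y, d21x, d21y = sp.symbols('d1x d1y d2x d2y d21x d21y')

omega = x * d1y          # the 1-form x dy, eating the first tangent vector

# coflare differential: base coordinates get d2, first-order generators
# get the second-order part d21
domega = (sp.diff(omega, x) * d2x + sp.diff(omega, y) * d2y
          + sp.diff(omega, d1x) * d21x + sp.diff(omega, d1y) * d21y)
# up to the labeling of the two slots, this is dx (x) dy + x d^2 y

# the S_2 action swaps the two first-order slots, fixing the
# second-order part
swap = {d1x: d2x, d2x: d1x, d1y: d2y, d2y: d1y}
sym = (domega + domega.xreplace(swap)) / 2     # symmetric part
alt = (domega - domega.xreplace(swap)) / 2     # antisymmetric part
```

    The antisymmetric part comes out as $\tfrac{1}{2}(\mathrm{d}x\otimes\mathrm{d}y - \mathrm{d}y\otimes\mathrm{d}x)$ with the $\mathrm{d}^2y$ term cancelled, as one expects for (a multiple of) the exterior derivative of $x\,\mathrm{d}y$.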

    I think this is basically the same approach taken in the reference cited by the answer here.

    • CommentRowNumber2.
    • CommentAuthorTobyBartels
    • CommentTimeMay 21st 2014

    Wow, this might be just what I've been looking for all along! I was early on very suspicious about having just a symmetric notion of higher differential, but it was working, so I decided to accept it. But I did once hope for a non-symmetric version whose antisymmetric part would give exterior forms.

    • CommentRowNumber3.
    • CommentAuthorMike Shulman
    • CommentTimeMay 21st 2014

    I’m glad you like it! I’m also enthusiastic about this point of view. I’m having some expositional/pedagogical issues in thinking about how to use it, though.

    • What are these forms called? I want to call them “co-X differential forms” where “X” is the name of a point of an iterated tangent bundle. But what is that called? The only thing I can think of is that in SDG they are “microcubes”, i.e. maps out of $D^k$ where $D$ is the space of nilsquare infinitesimals; but do they have a name in classical differential geometry?

    • How to notate the nonsymmetric differential of a 1-form (for example)? I think I can say “given a 1-form like $x^2\,\mathrm{d}x$, we take its differential by treating $x$ and $\mathrm{d}x$ as separate variables, just like $\mathrm{d}(x^2 y) = 2x y\,\mathrm{d}x + x^2\,\mathrm{d}y$, only now $y$ is $\mathrm{d}x$.” But in this setup, the differential of $x$ that appears when we take the second differential is different from the previous variable $\mathrm{d}x$. We could call them $\mathrm{d}_1 x$ and $\mathrm{d}_2 x$, or $\mathrm{d}x$ and $\hat{\mathrm{d}}x$, but nothing I’ve thought of looks very pleasant to me.

      Perhaps the most consistent thing would be to say that $\mathrm{d}_k$ is the differential from $k$-forms to $(k+1)$-forms, but that we omit the subscript $k$ when it is zero. So we would have $\mathrm{d}_1(x^2\,\mathrm{d}x) = 2x\,\mathrm{d}_1x\,\mathrm{d}x + x^2\,\mathrm{d}_1\mathrm{d}x$, and we could abbreviate $\mathrm{d}_1\mathrm{d}x$ by $\mathrm{d}^2x$ and observe that $\mathrm{d}_1x\;\mathrm{d}x = \mathrm{d}x\;\mathrm{d}_1x = \mathrm{d}x\otimes \mathrm{d}x$ (where $\otimes$ on a pair of 1-forms means the 2-form that applies them to the two underlying vectors and multiplies the results). But this notation is still heavier than I’d prefer.

    • CommentRowNumber4.
    • CommentAuthorTobyBartels
    • CommentTimeMay 21st 2014

    Yeah, it's all still a bit crazy. Sorry, I don't have any answers. Although it's good to see that $\mathrm{d}_1 x \ne \mathrm{d}x$; that was tripping me up when I tried to do this before.

    • CommentRowNumber5.
    • CommentAuthorMike Shulman
    • CommentTimeMay 21st 2014

    I’m thinking about ways to finesse (2). Suppose we restrict attention to “differential polynomials” of the sort that one gets by repeated differentiation of a function, and then just say that multiplication of differentials doesn’t commute. So for instance we have

    $$\mathrm{d}(x^2\, \mathrm{d}y) = 2x\,\mathrm{d}x\,\mathrm{d}y + x^2\, \mathrm{d}^2y$$

    in which $\mathrm{d}x \,\mathrm{d}y \neq \mathrm{d}y \,\mathrm{d}x$ (being secretly a $\otimes$). This already breaks for general 3-forms, however: in

    $$\mathrm{d}(2x \, \mathrm{d}x\, \mathrm{d}y + x^2 \,\mathrm{d}^2y) = 2 \,\mathrm{d}x^2\,\mathrm{d}y + 2x\,\mathrm{d}^2x\,\mathrm{d}y + 2x\,\mathrm{d}x\,\mathrm{d}^2y + 2x\,\mathrm{d}x\,\mathrm{d}^2 y + x^2\,\mathrm{d}^3 y$$

    the two “$\mathrm{d}x\,\mathrm{d}^2y$” terms are actually different. If we coordinatize $T^3X$ by $(v^1,v^2,v^3,v^{12},v^{13},v^{23},v^{123})$, then one of them gives $v^2_x v^{13}_y$ while the other gives $v^1_x v^{23}_y$.
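    One can watch the two terms stay distinct symbolically. Below is a sketch (Python with sympy) that encodes the coordinates of $T^3X$ as symbols like `d13y` (my own naming, and the index labels may differ from the $v^I$ convention above by the order in which the differentials are applied) and applies the three differentials in turn to $x^2\,\mathrm{d}y$:

```python
import sympy as sp

x, y = sp.symbols('x y')
IDX = ['1', '2', '3', '12', '13', '23', '123']
# coordinates of T^3 X: x, y together with d_I x, d_I y for nonempty I
g = {f'd{I}{v}': sp.Symbol(f'd{I}{v}') for v in 'xy' for I in IDX}

def d(expr, k):
    """The k-th coflare differential: x -> d_k x, and d_I -> d_(I u {k})."""
    out = sp.diff(expr, x) * g[f'd{k}x'] + sp.diff(expr, y) * g[f'd{k}y']
    for I in IDX:
        if str(k) not in I:
            J = ''.join(sorted(I + str(k)))
            for v in 'xy':
                out += sp.diff(expr, g[f'd{I}{v}']) * g[f'd{J}{v}']
    return out

w = x**2 * g['d1y']             # the 1-form x^2 dy
dw = d(w, 2)                    # 2x d_2x d_1y + x^2 d_12 y
ddw = sp.expand(d(dw, 3))
# the two "dx d^2y" terms, 2x*d2x*d13y and 2x*d3x*d12y, carry different
# index sets and remain distinct monomials:
print(ddw)
```

    The five monomials of `ddw` match the five terms of the displayed third differential, with the two “$\mathrm{d}x\,\mathrm{d}^2y$” terms genuinely distinct generators.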

    However, if we symmetrize, then these two terms become equal, and the same ought to be true more generally. So for taking symmetric differentials, maybe we don’t need to worry about heavy notation. And possibly the same is true for antisymmetric ones, once we have a Leibniz rule for the wedge product. I’m not entirely positive, but now I have to run off and give a final…

    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeMay 21st 2014

    For (1), so far my best invention is “(co)flare” — like a jet, but less unidirectional.

    • CommentRowNumber7.
    • CommentAuthorMike Shulman
    • CommentTimeMay 21st 2014

    I think it does basically work.

    The symmetric and antisymmetric $k$-forms are vector subspaces of all $k$-forms, and symmetrization and antisymmetrization are linear retractions onto them. There is a tensor product from $k$-forms $\otimes$ $\ell$-forms to $(k+\ell)$-forms, obtained by restricting along the two projections $T^{k+\ell}X \to T^k X$ and $T^{k+\ell}X \to T^{\ell} X$ and then multiplying. Symmetrizing and antisymmetrizing this makes the symmetric and antisymmetric forms into algebras as well.

    In a coordinate system $(x^a)$, the coordinates of $T^k X$ are $(\mathrm{d}_I x^a)$, where $I\subseteq \{1,2,\dots,k\}$ (the case $I=\emptyset$ giving the coordinate functions $x^a$). We may write $\mathrm{d}^k x^a$ for $\mathrm{d}_{\{1,\dots,k\}} x^a$. Say that a (degree-$k$ homogeneous) differential polynomial is a $k$-form that is a sum of terms of the form

    $$f(x)\; \mathrm{d}_{I_1} x^{a_1} \cdots \mathrm{d}_{I_m} x^{a_m}$$

    where $I_1,\dots,I_m$ is a partition of $\{1,2,\dots,k\}$. The $k^{\mathrm{th}}$ differential of a function is a symmetric differential polynomial.

    Each $I\subseteq \{1,2,\dots,k\}$ determines ${|I|}!$ maps $T^{k} X \to T^{|I|}X$, allowing us to unambiguously regard a symmetric degree-${|I|}$ differential polynomial as a degree-$k$ differential polynomial. In particular, for any change of coordinates we can substitute the ${|I|}^{\mathrm{th}}$ differential of $x^a(u)$ for each $\mathrm{d}_{I} x^a$ in a differential polynomial in $x$, obtaining a differential polynomial in $u$. Since this is how $k$-forms transform, the notion of “differential polynomial” is well-defined independent of coordinates.

    Now the differential of a differential polynomial is again a differential polynomial, as is the symmetrization or antisymmetrization. The symmetrization of $\mathrm{d}_{I_1} x^{a_1} \cdots \mathrm{d}_{I_m} x^{a_m}$ is

    $$\frac{1}{k!} \sum_{\sigma \in S_k} \mathrm{d}_{\sigma(I_1)} x^{a_{\sigma(1)}} \cdots \mathrm{d}_{\sigma(I_m)} x^{a_{\sigma(m)}}$$

    and terms like this are a basis for the symmetric degree-$k$ differential polynomials (with functions as coefficients). However, this symmetrized term is also the product

    $$\mathrm{d}^{|I_1|}x^{a_1}\cdots \mathrm{d}^{|I_m|}x^{a_m}$$

    in the algebra of symmetric forms, and if we use this notation then the symmetric differential polynomials look much cleaner.

    The antisymmetric case is even easier: since the antisymmetrization of $\mathrm{d}_I x^a$ is $0$ if ${|I|}\gt 1$, the antisymmetric differential polynomials are just the ordinary exterior $k$-forms, and the antisymmetric differential is the usual exterior differential.

    I don’t think we’ve run into any situations yet where we want to take differentials of forms that aren’t differential polynomials. We want to integrate forms like $|\mathrm{d}x|$ and $\sqrt{\mathrm{d}x^2+\mathrm{d}y^2}$, but we haven’t found a good meaning/use for their differentials yet.

    • CommentRowNumber8.
    • CommentAuthorTobyBartels
    • CommentTimeMay 22nd 2014
    • (edited May 22nd 2014)

    So for taking symmetric differentials, maybe we don't need to worry about heavy notation. And possibly the same is true for antisymmetric ones, once we have a Leibniz rule for the wedge product.

    Yes, that's the conclusion that I came to. So they live in different systems. But you are saying that maybe you can make them work together after all.

    • CommentRowNumber9.
    • CommentAuthorTobyBartels
    • CommentTimeMay 22nd 2014
    • (edited May 22nd 2014)

    It's interesting that exterior differentials are already known to be the antisymmetric part of a more general algebra. But that algebra (consisting of what physicists call covariant tensors) acts on $(T X)^k$, not on $T^k X$. And there is no natural notion of differential of a general form in this algebra; but if you pick a notion (using an affine connection), then the antisymmetrization of the differential of an exterior form always matches the exterior differential. I was always afraid of recreating that, which would have to be wrong.

    But you are saying that you have a natural notion of differential of the general form; it's a different kind of form but the antisymmetric ones are still precisely the exterior forms.

    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeMay 22nd 2014

    Yes, I agree with all of that. Your observation about covariant tensors and connections in #9 is interesting, and makes me wonder how the general “coflare differential” might be related to affine connections. Can a connection be equivalently formulated as some way of “restricting the coflare differential to act on tensors”?

    • CommentRowNumber11.
    • CommentAuthorMike Shulman
    • CommentTimeMay 22nd 2014

    Oh, of course: a connection is just a fiberwise-bilinear section of the projection $T^2 X\to (T X)^2$, so we can obtain the covariant derivative of a 1-form by taking the coflare differential and precomposing with the connection. Presumably something analogous works for other covariant derivatives.
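    This identification can be checked in coordinates. The sketch below (Python with sympy) uses the round metric on the unit sphere as sample data; the sign convention $w^a = -\Gamma^a_{bc} u^b v^c$ for the second-order part of the connection's section is my assumption, chosen so that the comparison with the textbook covariant derivative $\nabla_b\omega_a = \partial_b\omega_a - \Gamma^c_{ab}\omega_c$ comes out on the nose:

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
coords = (th, ph)
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round metric on the sphere
ginv = g.inv()

def Gamma(a, b, c):
    """Christoffel symbols of the Levi-Civita connection."""
    return sum(ginv[a, e] * (sp.diff(g[e, b], coords[c])
                             + sp.diff(g[e, c], coords[b])
                             - sp.diff(g[b, c], coords[e]))
               for e in range(2)) / 2

om = [sp.sin(th) * sp.cos(ph), th * ph]       # a sample 1-form omega_a dx^a
u = sp.symbols('u0 u1')
v = sp.symbols('v0 v1')

# the connection as a section of T^2 X -> (TX)^2: over (u, v) it picks the
# point of T^2 X with second-order part w^a = -Gamma^a_bc u^b v^c
w = [-sum(Gamma(a, b, c) * u[b] * v[c] for b in range(2) for c in range(2))
     for a in range(2)]
# coflare differential of omega, evaluated on that point of T^2 X
coflare = (sum(sp.diff(om[a], coords[b]) * u[b] * v[a]
               for a in range(2) for b in range(2))
           + sum(om[a] * w[a] for a in range(2)))

# ordinary covariant derivative (nabla omega)(u, v)
nabla = sum((sp.diff(om[a], coords[b])
             - sum(Gamma(c, a, b) * om[c] for c in range(2))) * u[b] * v[a]
            for a in range(2) for b in range(2))
print(sp.simplify(coflare - nabla))
```

    The difference simplifies to zero; the agreement only uses the symmetry $\Gamma^a_{bc} = \Gamma^a_{cb}$ of the connection in its lower indices.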

    • CommentRowNumber12.
    • CommentAuthorTobyBartels
    • CommentTimeMay 22nd 2014

    Brilliant!

    • CommentRowNumber13.
    • CommentAuthorTobyBartels
    • CommentTimeMay 5th 2015
    • (edited May 20th 2015)

    [Edit for people reading through the discussion history: Note the dates; the conversation has been resumed after a pause of nearly a year, and in fact I'd kind of forgotten about everything in this thread during that break!]

    Wow, apparently I never clicked through to the M.O answer that you mention in the top comment, where they quote somebody quoting anonymous ‘classical calculus books’ as writing

    $$d^2f = f'_x \,d^2x + f'_y \,d^2y + f''_{x^2} \,d{x}^2 + 2f''_{x y} \,d{x} \,d{y} + f''_{y^2} \,d{y}^2 .$$

    This is our $\mathrm{d}^2f$!

    • CommentRowNumber14.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2015

    Yep!

    • CommentRowNumber15.
    • CommentAuthorMike Shulman
    • CommentTimeMay 6th 2015

    So how do we integrate these “coflare” forms? For coflare forms the order is bounded above by the rank (terminology from here), so a rank-1 coflare form (a function on $T X$) is roughly the same as a 1-cojet form, and can be integrated in either the “genuine” or “affine” way as proposed on the page cogerm differential form. For a rank-2 coflare form, a function on $T T X$, my first thoughts in the “affine” direction are to cover the parametrizing surface by a grid of points $t_{i,j}$, regard our form $\omega$ as a function on a point and a triple of vectors (though the third one doesn’t transform as a vector), and sum up the values of

    $$\omega(c(t^*_{i,j}), \Delta^1 c_{i,j}, \Delta^2 c_{i,j}, \Delta^{12} c_{i,j})$$

    where

    $$\Delta^1 c_{i,j} = c(t_{i,j}) - c(t_{i,j-1})$$
    $$\Delta^2 c_{i,j} = c(t_{i,j}) - c(t_{i-1,j})$$
    $$\Delta^{12} c_{i,j} = c(t_{i,j}) - (c(t_{i-1,j-1}) + \Delta^1 c_{i,j} + \Delta^2 c_{i,j}).$$

    I suspect that there won’t be many forms other than exterior ones that will have convergent integrals. Are there any that we should expect?
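    For exterior forms, at least, the sum does converge numerically. A sketch (Python with numpy; the parametrization and test form are my own choices, and the grid here is uniform rather than gauge-fine) integrating $\mathrm{d}x\wedge\mathrm{d}y$, viewed as an antisymmetric coflare 2-form, over a parametrized unit disk:

```python
import numpy as np

def riemann_sum(omega, c, n=200):
    """Riemann sum omega(point, D1 c, D2 c, D12 c) over an n-by-n grid,
    with the difference operators defined as in the comment above."""
    t = np.linspace(0.0, 1.0, n + 1)
    total = 0.0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            p = c(t[i], t[j])
            d1 = p - c(t[i], t[j - 1])                   # step in j
            d2 = p - c(t[i - 1], t[j])                   # step in i
            d12 = p - (c(t[i - 1], t[j - 1]) + d1 + d2)  # second-order part
            total += omega(p, d1, d2, d12)
    return total

# dx ^ dy as an antisymmetric coflare 2-form (it ignores the second-order slot)
wedge = lambda p, u, v, w: u[0] * v[1] - u[1] * v[0]

# parametrize the unit disk by angle fraction a and radius r
disk = lambda a, r: np.array([r * np.cos(2 * np.pi * a),
                              r * np.sin(2 * np.pi * a)])

area = riemann_sum(wedge, disk, n=200)   # tends to the disk's area, pi
```

    With $n = 200$ the sum is within a few hundredths of $\pi$, and the error shrinks as the grid is refined.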

    • CommentRowNumber16.
    • CommentAuthorTobyBartels
    • CommentTimeMay 6th 2015

    Well, I'd really want the absolute forms to converge, and to the correct value. Anything that doesn't recreate integration of exterior and absolute forms isn't doing its job.

    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeMay 6th 2015

    Right, I agree, thanks. I think it’s reasonable to hope that the absolute forms work too, although it’ll probably be some work to check. Maybe there is a Lipschitz condition like in the rank-1 case.

    • CommentRowNumber18.
    • CommentAuthorMike Shulman
    • CommentTimeMay 8th 2015

    Here’s a proposal for how to integrate coflare 2-forms, which I conjecture should suffice to integrate exterior and absolute forms, and also be invariant under orientation-preserving twice-differentiable reparametrization for Lipschitz forms. Let a “partition” be an $N\times M$ grid of points $t_{i,j}$ such that the quadrilaterals with vertices $t_{i,j}, t_{i+1,j}, t_{i+1,j+1}, t_{i,j+1}$ partition the region disjointly and have the correct orientation. Tag it with points $t^*_{i,j}$ in each such quadrilateral. If $\delta$ is a gauge ($\mathbb{R}$-valued function) on the region, say the tagged partition is $\delta$-fine if for all $i,j$:

    1. $t_{i+1,j} - t_{i,j}$ and $t_{i,j+1} - t_{i,j}$ are both less than $\delta(t^*_{i,j})$, and also
    2. $t_{i+1,j+1} - t_{i+1,j} - t_{i,j+1} + t_{i,j}$ is less than $\delta(t^*_{i,j})^2$.

    The Riemann sum of a 2-form $\omega$ over such a tagged partition is defined as in #15, and the integral is the limit of these over gauges as usual.

    Note that because grids can be rotated arbitrarily, there’s no way that a form like $\mathrm{d}x \otimes \mathrm{d}y$ is going to be integrable with this definition. We essentially need to perform some antisymmetrization to make the pieces rotationally invariant, but we should be free to insert norms or absolute values afterwards to get absolute forms. I don’t know whether this integral will satisfy any sort of Fubini theorem, though.

    • CommentRowNumber19.
    • CommentAuthorTobyBartels
    • CommentTimeMay 12th 2015

    I’m not finished looking at this integral, but I want to record here a fact about second-order differentials for when we write an article on them:

    +-- {: .num_theorem #2ndDTest}
    ###### The second-differential test
    

    Given a twice-differentiable space $X$, a twice-differentiable quantity $u$ defined on $X$, and a point $P$ in $X$. If $u$ has a local minimum at $P$, then $\mathrm{d}u|_P$ is zero and $\mathrm{d}^2u|_P$ is positive semidefinite. Conversely (mostly), if $\mathrm{d}u|_P$ is zero and $\mathrm{d}^2u|_P$ is positive definite, then $u$ has a local minimum at $P$.

    =--
    

    Here we may view $\mathrm{d}^2u$ as either a cojet $1$-form (the question is whether $\langle{\mathrm{d}^2u|C}\rangle$ is always (semi)-positive for any curve $C$ through $P$) or a coflare $2$-form (the question is whether $\mathrm{d}^2u|_{R = P, \mathrm{d}R = \mathbf{v}, \mathrm{d}_1R = \mathbf{v}, \mathrm{d}^2R = \mathbf{w}}$ is always (semi)-positive for any vector $\mathbf{v}$ (note the repetition) and second-order vector $\mathbf{w}$).

    Of course, this is well-known in the case of the second derivative (which does not act on $\mathbf{w}$, but of course the action on $\mathbf{w}$ is zero when $\mathrm{d}u = 0$).

    • CommentRowNumber20.
    • CommentAuthorTobyBartels
    • CommentTimeMay 12th 2015

    Actually, this is still more complicated than it needs to be! The theorem is simply this:

    +-- {: .num_theorem #2ndDTest}
    ###### The second-differential test
    

    Given a twice-differentiable space $X$, a twice-differentiable quantity $u$ defined on $X$, and a point $P$ in $X$. If $u$ has a local minimum at $P$, then $\mathrm{d}^2u|_P$ is positive semidefinite. Conversely (mostly), if $\mathrm{d}^2u|_P$ is positive definite, then $u$ has a local minimum at $P$.

    =--
    

    In terms of the cojet version, $\langle{\mathrm{d}^2u|C}\rangle = u''(P) \cdot C'(0) \cdot C'(0) + u'(P) \cdot C''(0)$ is (semi)-positive^1 for all twice-differentiable $C$ iff $u''(P)$ is positive (semi)-definite and $u'(P)$ is zero. (Just look at examples where $C'$ or $C''$ is zero.) In terms of the coflare version, $\mathrm{d}^2u|_{R = P, \mathrm{d}R = \mathbf{v}, \mathrm{d}_1R = \mathbf{v}, \mathrm{d}^2R = \mathbf{w}} = u''(P) \cdot \mathbf{v} \cdot \mathbf{v} + u'(P) \cdot \mathbf{w}$ is (semi)-positive for all $\mathbf{v}$ and all $\mathbf{w}$ iff the same condition holds. (Inasmuch as the cojet version of $\mathrm{d}^2u$ is the symmetrization of the coflare version, properties of values of the cojet form correspond precisely to properties of values of the coflare form when $\mathrm{d}R$ and $\mathrm{d}_1R$ are set to the same value.)

    So one never needs to mention $\mathrm{d}u$ as such at all!


    1. where ‘semipositive’ means $\geq 0$, which hopefully was obvious in context 
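    For a concrete instance, here is a sketch (Python with sympy; the sample quantity $u$ and symbol names are mine) of the coflare value $u''(P)\cdot\mathbf{v}\cdot\mathbf{v} + u'(P)\cdot\mathbf{w}$ at a strict local minimum, where the $\mathbf{w}$ term drops out because $u'(P) = 0$:

```python
import sympy as sp

x, y, v1, v2, w1, w2 = sp.symbols('x y v1 v2 w1 w2', real=True)
u = x**2 + x*y + y**2          # sample quantity with a strict minimum at 0

grad = sp.Matrix([u.diff(x), u.diff(y)])
hess = sp.hessian(u, (x, y))
v = sp.Matrix([v1, v2])
w = sp.Matrix([w1, w2])

# the coflare 2-form d^2 u evaluated at dR = d_1 R = v and d^2 R = w
d2u = (v.T * hess * v)[0] + (grad.T * w)[0]

at_P = sp.expand(d2u.subs({x: 0, y: 0}))   # the w-part vanishes at P
print(at_P)
```

    Here `at_P` is a quadratic form in $\mathbf{v}$ alone ($2v_1^2 + 2v_1 v_2 + 2v_2^2$), and it is positive definite, so the test certifies the minimum.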

    • CommentRowNumber21.
    • CommentAuthorMike Shulman
    • CommentTimeMay 12th 2015

    Nice! The improved version of the theorem gives another argument for why the $\mathrm{d}^2 x$ terms should be included in $\mathrm{d}^2 u$.

    I like the word “semipositive”; maybe I will start using it in other contexts instead of the unlovely “nonnegative”.

    • CommentRowNumber22.
    • CommentAuthorTobyBartels
    • CommentTimeMay 13th 2015

    It also works better when the context consists of more than just the real numbers. For example, $2 + 3\mathrm{i}$ is nonnegative, but it's not semipositive. (Another term that I've used is ‘weakly positive’, which goes with the French-derived ‘strictly positive’ rather than the operator-derived ‘semidefinite’.)

    • CommentRowNumber23.
    • CommentAuthorMike Shulman
    • CommentTimeMay 13th 2015

    Maybe it’s because I’m more familiar with operators than with French, but “semipositive” is more intuitive for me than “weakly positive”. The word I’ve most often heard used for $\le$ (in opposition to “strictly” for $\lt$) is the equally unlovely “non-strictly”.

    • CommentRowNumber24.
    • CommentAuthorTobyBartels
    • CommentTimeMay 13th 2015

    Wow, ‘non-strictly’!? As far as I know, ‘weakly’ here doesn't come from French; in French, they say simply ‘positif’ for the weak concept (which is why they must say ‘strictement positif’ for the strict one). But it seems to me that ‘weakly’ is the established antonym of ‘strictly’.

    • CommentRowNumber25.
    • CommentAuthorMike Shulman
    • CommentTimeMay 13th 2015

    Unless it’s “strongly” or “pseudo”…

    • CommentRowNumber26.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 14th 2015

    In that funny way that sometimes happens on the $n$Forum/Lab/Café where separate conversations turn out to be related, is there anything you guys are finding out in these threads which makes contact with differential cohesion?

    • CommentRowNumber27.
    • CommentAuthorUrs
    • CommentTimeMay 14th 2015
    • (edited May 14th 2015)

    To the extent that this is about higher differentials, it might be (I haven’t followed closely) that the concepts here overlap with Kochan-Severa’s “differential gorms and differential worms” (arXiv:math/0307303). To the extent that this is so, then this is seamlessly axiomatized in solid cohesion.

    • CommentRowNumber28.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 14th 2015

    Well that paper

    From a more conceptual point of view, differential forms are functions on the superspace of maps from the odd line to M

    seems to relate to our discussion here, where I was wondering about how the shift to the super gave rise to differential forms.

    • CommentRowNumber29.
    • CommentAuthorUrs
    • CommentTimeMay 14th 2015
    • (edited May 14th 2015)

    I may have misunderstood your aim there. It seemed to me that you were wondering there about the role of “$(0\vert 1)$-Euclidean field theory” in this business, to which I replied that I find it a red herring in the present context.

    That the $\mathbb{Z}/2\mathbb{Z}$-graded algebra of differential forms on a (super-)manifold $X$ is $C^\infty([\mathbb{R}^{0\vert 1},X])$ is a standard fact of supergeometry, and the observation that the natural $Aut(\mathbb{R}^{0\vert 1})$-action on this gives the refinement to $\mathbb{Z}$-grading as well as the differential originates in Kontsevich 97, and was long amplified by Severa; see also at odd line – the automorphism supergroup.

    In the article on gorms and worms they generalize this by replacing $\mathbb{R}^{0|1}$ by $\mathbb{R}^{0|q}$.

    • CommentRowNumber30.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 14th 2015

    Re #29, I returned the discussion to the original thread.

    • CommentRowNumber31.
    • CommentAuthorMike Shulman
    • CommentTimeMay 14th 2015

    I think we just had a drive-by categorification. (-:

    The “higher” in this thread refers to essentially the same number that indexes the usual 1-forms, 2-forms, $n$-forms; the generalization is rather to eliminate the requirement of antisymmetry. It sounds from the abstract like Kochan–Severa instead keep the “antisymmetry” (encoded using super-ness) and generalize in a different direction.

    On the other hand, re #26 as stated, coflare differential forms and their differentials make perfect sense in SDG, whereas integration is always tricky in such contexts.

    • CommentRowNumber32.
    • CommentAuthorUrs
    • CommentTimeMay 14th 2015
    • (edited May 14th 2015)

    It sounds from the abstract like Kochan-Severa instead keep the “antisymmetry”

    They also get commuting differentials: the algebra $C^\infty(\mathbb{R}^{0\vert 2})$ is spanned over $\mathbb{R}$ by the two odd-graded coordinate functions $\theta_1$ and $\theta_2$ as well as their even-graded product $\theta_1 \theta_2$. Under the identification of $C^\infty([\mathbb{R}^{0\vert 2}, X])$ with second order differential forms, the first two generators give anticommuting forms, but the product gives commuting second order forms.

    This seems close to what you start with in #1, but I haven’t really thought through it.

    • CommentRowNumber33.
    • CommentAuthorTobyBartels
    • CommentTimeMay 18th 2015
    • (edited Dec 13th 2019)

    And another issue that I've been thinking of:

    I've occasionally seen the claim (and now I wish that I knew where, and it may have only ever been oral) that the higher differentials are the terms in the Taylor series; that is (for ff analytic at cc with a radius of convergence greater than hh),

    $$f(c + h) = \sum_{i=0}^\infty \frac{1}{i!} f^{(i)}(c) h^i = \sum_{i=0}^\infty \frac{1}{i!} \,\mathrm{d}^{i}f(x)\Big|_{x\coloneqq{c},\, \mathrm{d}x\coloneqq{h}} = \mathrm{e}^{\mathrm{d}}f(x)\Big|_{x\coloneqq{c},\, \mathrm{d}x\coloneqq{h}}.$$

    This works if $\mathrm{d}^{i}f(x) = f^{(i)}(x) \,\mathrm{d}x^i$, but our claim is that $\mathrm{d}^{i}f(x)$ is much more complicated. However, the extra terms all involve higher differentials of $x$, so if we evaluate $\mathrm{e}^{\mathrm{d}}f(x)$ with not only $x \coloneqq c$ and $\mathrm{d}x \coloneqq h$ but also $\mathrm{d}^{i}x \coloneqq 0$ for $i \geq 2$, then it still works.
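    This evaluation can be checked symbolically. A sketch in Python with sympy; the operator `D` below is my encoding of the symmetric (cojet) differential, sending $x \mapsto \mathrm{d}x$ and $\mathrm{d}^i x \mapsto \mathrm{d}^{i+1}x$:

```python
import sympy as sp

x, h = sp.symbols('x h')
N = 6                                  # how many terms to compare
dxs = sp.symbols(f'dx1:{N + 1}')       # dx1 = dx, dx2 = d^2 x, ...

def D(expr):
    """Symmetric (cojet) differential on differential polynomials in x."""
    out = sp.diff(expr, x) * dxs[0]
    for i in range(N - 1):
        out += sp.diff(expr, dxs[i]) * dxs[i + 1]
    return out

f = sp.sin(x)
terms, e = [f], f
for i in range(1, N):
    e = D(e)                           # e is now the i-th differential,
    terms.append(e)                    # with all its cross terms

# evaluate e^d f with dx := h and d^i x := 0 for i >= 2
at_h = {dxs[0]: h, **{dx: 0 for dx in dxs[1:]}}
series = sum(t.subs(at_h) / sp.factorial(i) for i, t in enumerate(terms))
taylor = sum(sp.diff(f, x, i) * h**i / sp.factorial(i) for i in range(N))
print(sp.simplify(series - taylor))
```

    The difference simplifies to zero: once the higher differentials of $x$ are set to zero, each $\mathrm{d}^i f$ collapses to $f^{(i)}(x)\,h^i$ and the sum is the Taylor polynomial. (For instance, $D^2(\sin x)$ itself is $-\sin x\,\mathrm{d}x^2 + \cos x\,\mathrm{d}^2x$, with the extra term killed by the substitution.)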

    • CommentRowNumber34.
    • CommentAuthorMike Shulman
    • CommentTimeMay 18th 2015

    @Toby: Yes, I agree. In general, the sum of the higher differentials seems like some generalization of a power series in which you have not only a “first-order-small variable” but also a separate $n$th-order-small variable for all $n$.

    • CommentRowNumber35.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2015
    • (edited May 19th 2015)

    Please help me: to which extent is the following definition (first full paragraph on p. 4 of arXiv:math/0307303) what you are after, or to which extent is it not what you are after:

    for $X = \mathbb{R}^n$ a Cartesian space, say that its $\mathbb{Z}/2\mathbb{Z}$-bigraded commutative differential algebra of second order differential forms is the smooth algebra freely generated over the smooth functions on $X$ in even degree from two differentials $d_1$ and $d_2$. In particular for $f \colon \mathbb{R}^n \to \mathbb{R}$ then

    $$d_1 d_2 f = \frac{\partial ^2 f}{\partial x^i \partial x^j} d_1 x^i d_2 x^j + \frac{\partial f}{\partial x^i} d_1 d_2 x^i \,,$$

    where $d_1 x^i$ is in bidegree $(1,0)$, $d_2 x^i$ is in bidegree $(0,1)$ and $d_1 d_2 x^i$ is in bidegree $(1,1)$.

  1. Re #35: here bidifferential means that the two differentials anti-commute, $d_1 d_2 = -d_2 d_1$, right?

    Another approach to higher differentials, which I don’t know if it has been mentioned already, is in Laurent Schwartz, “Geometrie differentielle du 2ème ordre, semi-martingales et equations differentielles stochastiques sur une variete differentielle”, p. 3. There he defines $d^k f$, for a scalar-valued function $f$ on some manifold $M$, to be the $k$-th jet of $f$ “modulo 0-th order information”. More precisely: if $f$ vanishes at $p\in M$, then $d^k f(p)$ is the equivalence class of $f$ modulo all functions that vanish to $k$-th order at $p$. If $f$ does not vanish at $p$, then $d^k f(p)$ is the equivalence class of $f - f(p)$ modulo functions vanishing to order $k$ at $p$. In coordinates this should amount to the same as considering the $k$-th Taylor polynomial of $f$, but forgetting the constant term.

    In this approach it is not immediately clear (to me) what the product of two higher differentials is, and in which sense, say, $d^2 = dd$, etc. But I thought I’d add it to the mix.

    • CommentRowNumber37.
    • CommentAuthorMike Shulman
    • CommentTimeMay 19th 2015

    Urs, I need you to back up a moment, as I’m not used to thinking of differential forms in this way. Even for ordinary differential forms, how do you get the partial derivatives of $f$ appearing from only algebraic assumptions? I can see it if $f$ is a polynomial, but in general, $f$ might not even be analytic!

    • CommentRowNumber38.
    • CommentAuthorTobyBartels
    • CommentTimeMay 19th 2015

    I found Schwartz’s paper here: http://www.numdam.org/item?id=SPS_1982__S16__1_0.

    It seems to me that Schwartz's differentials contain the same information as mine (if I may lay claim to them even after seeing citations to them in 19th-century books). In particular, the space of order-$2$ forms at a given point in a manifold of dimension $n$ has dimension $n + n(n+1)/2$: $n$ for the order-$2$ differentials of the local coordinates and $n(n+1)/2$ for the products of two order-$1$ differentials. (These are the symmetrized products, producing my cojet forms, not Mike's coflare forms.)
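That dimension count is easy to verify mechanically; a small sketch for the illustrative case $n = 4$, counting the symmetrized products $\mathrm{d}x_i \,\mathrm{d}x_j$ ($i \leq j$) as multisets:

```python
from itertools import combinations_with_replacement

n = 4
second_diffs = n   # the order-2 differentials d^2 x_i of the coordinates

# Symmetrized products dx_i dx_j with i <= j are exactly the size-2
# multisets drawn from the n coordinates:
products = len(list(combinations_with_replacement(range(n), 2)))

print(products == n*(n+1)//2)    # True
print(second_diffs + products)   # 14, i.e. n + n(n+1)/2 for n = 4
```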

    • CommentRowNumber39.
    • CommentAuthorMike Shulman
    • CommentTimeMay 19th 2015

    Perhaps Schwartz is one of the “classical calculus books” mentioned in the answer linked in #1 and referred to in #13.

    • CommentRowNumber40.
    • CommentAuthorTobyBartels
    • CommentTimeMay 19th 2015

    Schwartz has the good point that the space of order-$1$ forms (at a given point) is a quotient of the space of order-$2$ forms; the kernel consists of those forms which appear in the order-$2$ terms of a Taylor series.

    • CommentRowNumber41.
    • CommentAuthorMike Shulman
    • CommentTimeMay 19th 2015

    Re #40: is that something special about 1 and 2, or is it true more generally?

    • CommentRowNumber42.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2015

    Mike, the reasoning with the graded infinitesimals here is the same as for the non-graded ones familiar from SDG, e.g. here.

    • CommentRowNumber43.
    • CommentAuthorTobyBartels
    • CommentTimeMay 19th 2015

    In local coordinates, the quotient map is

    $$ \sum_{i \leq j} A_{i,j} \,\mathrm{d}x_i \,\mathrm{d}x_j + \sum_i B_i \,\mathrm{d}^2 x_i \;\mapsto\; \sum_i B_i \,\mathrm{d}x_i \,. $$

    In $1$ dimension, an analogous quotient map from order-$3$ forms to order-$2$ forms would be

    $$ A \,\mathrm{d}x^3 + B \,\mathrm{d}^2 x \,\mathrm{d}x + C \,\mathrm{d}^3 x \;\mapsto\; B \,\mathrm{d}x^2 + C \,\mathrm{d}^2 x \,, $$

    but that is not coordinate-invariant (most simply, if $A, B = 0$, $C = 1$, and $x = t^2$). And I don't even know how I would write down a map from order $4$ to order $3$ in $2$ dimensions (in particular, for $\mathrm{d}^2 x \,\mathrm{d}^2 y$).

    However, it seems to me that we can make a quotient map from order-$p$ forms to order-$1$ forms for any $p$, so it's just $1$ that's special. But in this case, I don't see anything especially special about the kernel.

    • CommentRowNumber44.
    • CommentAuthorTobyBartels
    • CommentTimeMay 19th 2015

    H'm, but this map from order $3$ to order $2$ is invariant:

    $$ A \,\mathrm{d}x^3 + B \,\mathrm{d}^2 x \,\mathrm{d}x + C \,\mathrm{d}^3 x \;\mapsto\; B \,\mathrm{d}x^2 + 3 C \,\mathrm{d}^2 x $$

    (note the factor of $3$). I'm not sure what to make of that!
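This invariance can be spot-checked numerically. The sketch below uses the transformation rules $\mathrm{d}x = \phi'\,\mathrm{d}t$, $\mathrm{d}^2x = \phi''\,\mathrm{d}t^2 + \phi'\,\mathrm{d}^2t$, $\mathrm{d}^3x = \phi'''\,\mathrm{d}t^3 + 3\phi''\,\mathrm{d}t\,\mathrm{d}^2t + \phi'\,\mathrm{d}^3t$, with a made-up concrete change of coordinates (any $\phi$ with $\phi' \neq 0$ would do):

```python
# Check the factor-3 map under x = phi(t) = t**3 + 2*t at t = 1.5.
t = 1.5
p1, p2, p3 = 3*t**2 + 2, 6*t, 6.0   # phi', phi'', phi'''

A, B, C = 2.0, -3.0, 5.0            # arbitrary coefficients in x-coordinates

# The same order-3 form in t-coordinates, w.r.t. the basis
# (dt^3, d^2t dt, d^3t); only B_t and C_t matter for the map below.
B_t = B*p1**2 + 3*C*p2
C_t = C*p1

# Factor-3 map (A, B, C) |-> B dx^2 + 3C d^2x, coeffs w.r.t. (dt^2, d^2t):
route1 = (B*p1**2 + 3*C*p2, 3*C*p1)   # map in x, then change coordinates
route2 = (B_t, 3*C_t)                 # change coordinates, then map in t
print(route1 == route2)               # True: coordinate-invariant

# The naive map B dx^2 + C d^2x from the previous comment fails:
naive1 = (B*p1**2 + C*p2, C*p1)
naive2 = (B_t, C_t)
print(naive1 == naive2)               # False
```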

    • CommentRowNumber45.
    • CommentAuthorMike Shulman
    • CommentTimeMay 20th 2015

    Thanks Urs, that was the reminder I needed.

    The formula for the second differential does indeed look just like the one Toby and I are working with. But is Michael also right that the differentials anticommute? $d_1 d_2 = -d_2 d_1$ and/or $d_1 x \, d_2 x = -d_2 x \, d_1 x$? And does their setup include nonlinear differential forms such as $\sqrt{d x^2 + d y^2}$?

    • CommentRowNumber46.
    • CommentAuthorMike Shulman
    • CommentTimeMay 20th 2015

    From the coflare perspective, the general linear order-2 form is

    $$ A \,\mathrm{d}_{2\cdot 1\cdot 0}x + B \,\mathrm{d}_{21\cdot 0}x + C \,\mathrm{d}_{2\cdot 10}x + D \,\mathrm{d}_{1\cdot 20}x + E \,\mathrm{d}_{210}x $$

    where $\mathrm{d}_{2\cdot 10}x$ is shorthand for $\mathrm{d}_2 x \, \mathrm{d}_1 \mathrm{d}_0 x$, and so on. From this perspective the factor of 3 arises because your map to order-2 forms would have to give something like

    $$ (B + C + D) \,\mathrm{d}_{1\cdot 0}x + E \,\mathrm{d}_{10}x $$

    whereas under a coordinate change $B$, $C$, and $D$ all get their own correction term involving $E$.

    Just making stuff up, I notice that sending the above order-2 form to

    $$ 2(B + C + D) \,\mathrm{d}_{1\cdot 0}x + 6 E \,\mathrm{d}_{10}x $$

    will also be coordinate-invariant, and there seems to be some sense in which there are two ways to “forget” from $21\cdot 0$ to $1\cdot 0$ (“send $0$ to $0$, and send $1$ to either $1$ or $2$”) and six ways to “forget” from $210$ to $10$ (the six injective maps from a 2-element set to a 3-element set).
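The two counts invoked here can be checked by brute enumeration (a throwaway sketch; the variable names are illustrative):

```python
from itertools import permutations

# permutations(range(n), k) enumerates exactly the injective maps from a
# k-element set into an n-element set, as tuples (f(0), ..., f(k-1)).
injections_2_into_3 = list(permutations(range(3), 2))
print(len(injections_2_into_3))   # 6, the coefficient in front of E

# The "two ways" are the injections that send 0 to 0 (and 1 to 1 or 2):
fixing_zero = [f for f in injections_2_into_3 if f[0] == 0]
print(len(fixing_zero))           # 2, the coefficient in front of B + C + D
```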

    • CommentRowNumber47.
    • CommentAuthorTobyBartels
    • CommentTimeMay 20th 2015

    Thanks, Mike, I tried to figure out something like that, but I wasn't coming up with the right combinatorics. I like your version. It's not immediately obvious why it should work that way, but it does seem to work!

    • CommentRowNumber48.
    • CommentAuthorMike Shulman
    • CommentTimeMay 20th 2015

    Here’s an even better explanation, again from the coflare POV. At the 2-to-1 stage, there is a map $T X \to T^2 X$ that sends a tangent vector $(x; v)$ to $(x; 0, 0; v)$. Second-order tangent vectors don’t transform like vectors in general, but they do if the associated first-order tangent vectors are zero, so this makes sense. The quotient map from 2-forms to 1-forms is just precomposition with this map.

    Now at the 3-to-2 stage, there are six natural maps $T^2 X \to T^3 X$, sending $(x; u, v; w)$ to $(x; u, 0, 0; 0, 0, v; w)$ or $(x; 0, u, 0; 0, v, 0; w)$ or $(x; 0, 0, u; v, 0, 0; w)$, or the same with $u$ and $v$ switched. The formula $2(B+C+D) \,\mathrm{d}_{1\cdot 0}x + 6E \,\mathrm{d}_{10}x$ is the sum of the precompositions with all six of these maps. (But it would probably be more natural, in a general theory, to consider all these maps separately rather than only when added up.)

    • CommentRowNumber49.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2015
    • (edited May 20th 2015)

    But is Michael also right that the differentials anticommute?

    Yes.

    And does their setup include nonlinear differential forms such as $\sqrt{d x^2 + d y^2}$?

    Only for the second order differentials $d_1 d_2 x$, yes, since only these are commuting forms in their setup. In section 4.4 you see them consider Gaussians of these.

    (Do you want first order differentials to not square to $0$? That would sound like a dubious wish to me, but I don’t really know where you are coming from here.)

    • CommentRowNumber50.
    • CommentAuthorTobyBartels
    • CommentTimeMay 20th 2015

    tl;dr: We seek an algebra that encompasses both exterior forms and equations such as the first displayed equation in the top post of this thread.

    The main motivating examples for what Mike and I have been doing (beyond the exterior calculus) are these from classical Calculus:

    $$ f''(x) = \frac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2} $$

    (which is wrong but can be fixed) and

    $$ \mathrm{d}s = \sqrt{\mathrm{d}x^2 + \mathrm{d}y^2} $$

    (which is correct except for the implication that $\mathrm{d}s$ is the differential of some global $s$). Anything that treats $\mathrm{d}x^2$ as zero is not what we're looking for.

    Having determined that

    $$ f''(x) = \frac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2} - f'(x) \, \frac{\mathrm{d}^2 x}{\mathrm{d}x^2} $$

    is correct, we're now looking more carefully at $\mathrm{d}^2$ and deciding that (if we wish to fit exterior forms into the same algebra) the two $\mathrm{d}$s are not exactly the same operator, and that furthermore it matters which of these appears in $\mathrm{d}x$.

    The current working hypothesis is that

    $$ \mathrm{d}^2 f(x) = f''(x) \,\mathrm{d}x^2 + f'(x) \,\mathrm{d}^2 x $$

    (the previous equation, rearranged) is the symmetrization (in the algebra of cojet forms) of

    $$ \mathrm{d}_1 \mathrm{d}_0 f(x) = f''(x) \,\mathrm{d}_1 x \,\mathrm{d}_0 x + f'(x) \,\mathrm{d}_1 \mathrm{d}_0 x $$

    (in the algebra of coflare forms).
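The corrected formula can be spot-checked numerically along a curve $x = \phi(t)$, taking $t$ as the independent variable (so $\mathrm{d}^2 t = 0$, $\mathrm{d}x = \phi'\,\mathrm{d}t$, $\mathrm{d}^2 x = \phi''\,\mathrm{d}t^2$). This is only a sketch with made-up concrete choices ($f(x) = x^3$, $\phi(t) = t^2 + 1$, $t = 0.5$):

```python
# Along x = phi(t), write y = f(phi(t)); then df = y' dt, d^2f = y'' dt^2.
t = 0.5
x = t**2 + 1
dphi, d2phi = 2*t, 2.0                  # phi'(t), phi''(t)

fp  = 3*x**2                            # f'(x)   for f(x) = x**3
fpp = 6*x                               # f''(x), the value we want to recover

d2y = fpp*dphi**2 + fp*d2phi            # (f o phi)''(t), by the chain rule

dx, d2x, d2f = dphi, d2phi, d2y         # coefficients w.r.t. dt, dt^2

naive     = d2f / dx**2                          # classical d^2f/dx^2
corrected = d2f / dx**2 - fp * (d2x / dx**2)     # the corrected formula

print(naive == fpp)       # False: the naive formula depends on the parameter
print(corrected == fpp)   # True: the corrected formula is invariant
```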

    • CommentRowNumber51.
    • CommentAuthorMike Shulman
    • CommentTimeMay 20th 2015

    And does their setup include nonlinear differential forms such as $\sqrt{d x^2 + d y^2}$?

    Only for the second order differentials $d_1 d_2 x$, yes

    What does that mean? Does $\sqrt{d x^2 + d y^2}$ exist or doesn’t it? Here $d x$ and $d y$ are first-order differentials, but their squares are of course second order, and then when we take a square root we get back to “first order” in a suitable sense.

    • CommentRowNumber52.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2015

    By second order differentials I mean those with second derivatives, as in $d_1 d_2 x$. In $C^\infty([\mathbb{R}^{0|2}, \mathbb{R}^1])$ we have $(d_1 x)(d_1 x) = 0$, but no power of $d_1 d_2 x$ vanishes.

    • CommentRowNumber53.
    • CommentAuthorMike Shulman
    • CommentTimeMay 20th 2015

    So $\sqrt{d x^2 + d y^2} = 0$ in their setup, because $d x^2 = d y^2 = 0$?

    I’m also confused because $d x^2 = 0$ seems to contradict the formula cited in #35, which ought to have a nonzero term with coefficient $\partial^2 f / (\partial x^i)^2$ when $i = j$.

    • CommentRowNumber54.
    • CommentAuthorMichael_Bachtold
    • CommentTimeMay 21st 2015
    • (edited May 21st 2015)

    Wait, I’m now in doubt whether in Ševera's setting the differentials $d_1$, $d_2$ really anti-commute. Wouldn’t the commutation rule in a $\mathbb{Z}^2$-graded commutative algebra read $a \cdot b = (-1)^{|a| \cdot |b|} \, b \cdot a$, where $|a| = (n_1, n_2)$ is the bi-degree of $a$ and $|a| \cdot |b| := n_1 k_1 + n_2 k_2$ for $|b| = (k_1, k_2)$? If that’s the case, then elements of bi-degree $(1,0)$ commute with things of bi-degree $(0,1)$. ?

    • CommentRowNumber55.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2015
    • (edited May 21st 2015)

    Sorry, yes, I suppose you are right.

    Somehow I am not fully focusing on this discussion here…

    I think I am just suggesting that in search of a generalization of anything (here: exterior calculus), it helps to have some guiding principles. What Kochan–Ševera do has the great advantage that by construction it is guaranteed to have much internal coherence, because in this approach one doesn’t posit the new rules by themselves, but derives them from some more abstract principle (of course one has to derive them correctly; I just gave an example of failing to do that :-)

    • CommentRowNumber56.
    • CommentAuthorMike Shulman
    • CommentTimeMay 21st 2015

    Which “you” of us is right, #53 or #54?

    I think that what Toby and I are doing has perfectly valid guiding principles. One of those principles is, I think, that it should be as simple and concrete as possible, so that it can be taught to calculus students who don’t know what a graded commutative algebra is. (-:

    • CommentRowNumber57.
    • CommentAuthorTobyBartels
    • CommentTimeMay 21st 2015

    Those two guiding principles (a structure with internal coherence, simple calculations) should really be in concert.

    • CommentRowNumber58.
    • CommentAuthorMike Shulman
    • CommentTimeMay 22nd 2015

    Sure; I think coflare forms have plenty of internal coherence as well. We aren’t “positing” any rules, just taking seriously the idea of iterated tangent bundles and dropping any requirement of linearity on forms.

    As long as I’m over here, let me record here an observation which I made in the other thread: the view of a connection in #11 gives a coordinate-invariant way (which of course depends on the chosen connection) to “forget the order-2 part” of a rank-2 coflare form, by substituting

    $$ d^2 x^i \mapsto \Gamma^i_{j k} \, d x^j \otimes d x^k $$

    wherever it appears. The transformation rule for Christoffel symbols precisely ensures that this is coordinate-invariant. Are there analogues in higher rank?
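Here is a 1D numeric sanity check of that claim, sketched under the standard transformation law $\Gamma_t = (\Gamma_x \phi'^2 + \phi'')/\phi'$ for $x = \phi(t)$; note that with this convention the invariant substitution comes out as $d^2 x \mapsto -\Gamma \, d x \otimes d x$ (the overall sign depends on one's sign convention for $\Gamma$). All concrete numbers are made up:

```python
# Change of coordinates x = phi(t) = t**2 + 1 at t = 1.
t = 1.0
dphi, d2phi = 2*t, 2.0                    # phi', phi''

Gx = 0.25                                 # arbitrary Gamma in x-coordinates
Gt = (Gx*dphi**2 + d2phi) / dphi          # standard transformation law

A, E = 2.0, 5.0                           # rank-2 form  A dx⊗dx + E d^2x

# Route 1: substitute d^2x -> -Gx dx⊗dx in x-coordinates, then change
# coordinates (dx⊗dx = phi'^2 dt⊗dt):
route1 = (A - E*Gx) * dphi**2
# Route 2: change coordinates first (d^2x = phi'' dt^2 + phi' d^2t),
# then substitute d^2t -> -Gt dt⊗dt:
route2 = A*dphi**2 + E*d2phi - E*dphi*Gt

print(route1 == route2)    # True: the correction terms cancel exactly
```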

    • CommentRowNumber59.
    • CommentAuthorUrs
    • CommentTimeMay 22nd 2015
    • (edited May 22nd 2015)

    Michael, re #54: wait, no, the algebra in the end is the superalgebra $C^\infty([\mathbb{R}^{0|2}, X])$ and as such is $\mathbb{Z}/2\mathbb{Z}$-graded.

    I’ll have a few spare minutes tomorrow morning when I am on the train. I’ll try to sort it out then and write it cleanly into the $n$Lab entry.

    • CommentRowNumber60.
    • CommentAuthorUrs
    • CommentTimeMay 22nd 2015
    • (edited May 22nd 2015)

    it should be as simple and concrete as possible, so that it can be taught to calculus students who don’t know what a graded commutative algebra is.

    Returning the favour, allow me to add the advice: first you need to understand it; only then should it be made simple for exposition. Not the other way around! :-)

    The graded algebra is there to guide the concept. The concept in the end exists also without graded algebra.

    I’ll try to find time tomorrow to extract the relevant essence of Kochan–Ševera.

    • CommentRowNumber61.
    • CommentAuthorTobyBartels
    • CommentTimeMay 22nd 2015

    The simple Calculus exposition already exists, particularly if you go back to the 19th century. It is internally inconsistent, but perhaps it can be cleaned up to be consistent. (Already parts of it have been, but can the whole thing be bundled into a single system?) A unifying conceptual idea would be very valuable here, and so far the best idea that Mike and I have is coflare forms.

    • CommentRowNumber62.
    • CommentAuthorMichael_Bachtold
    • CommentTimeMay 22nd 2015
    • (edited May 22nd 2015)

    Urs, re #59: you are probably right; I was trying to recall from vague memory how the formula for graded commutativity works. It should follow from the definition at graded commutative algebra once we agree how a $\mathbb{Z}^2$-graded vector space is $\mathbb{Z}_2$-graded, which is probably by mapping the bi-degree $(n_1, n_2) \mapsto n_1 + n_2 \bmod 2$. Then I guess the right commutativity formula is $a \cdot b = (-1)^{(n_1+n_2)(k_1+k_2)} \, b \cdot a$.
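That sign rule is easy to tabulate; a minimal sketch (the function name is invented) showing that, with total degree mod 2, bidegrees $(1,0)$ and $(0,1)$ indeed anticommute while the even bidegree $(1,1)$ commutes with everything:

```python
def koszul_sign(a, b):
    """Sign for commuting homogeneous elements of bidegrees a and b in a
    Z^2-graded algebra viewed as Z/2-graded via total degree mod 2."""
    return (-1) ** ((a[0] + a[1]) * (b[0] + b[1]))

# d_1 x has bidegree (1,0) and d_2 x has bidegree (0,1): they anticommute.
print(koszul_sign((1, 0), (0, 1)))   # -1

# d_1 d_2 x has bidegree (1,1), total degree even: it is central.
print(koszul_sign((1, 1), (1, 0)))   # 1
```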

    • CommentRowNumber63.
    • CommentAuthorTobyBartels
    • CommentTimeApr 24th 2019
    • (edited Apr 24th 2019)

    I ran across a math help page that at least tries to justify the incorrect traditional formula for the second differential; it argues ‘When we calculate differentials it is important to remember that $d x$ is arbitrary and independent from $x$ number [sic]. So, when we differentiate with respect to $x$ we treat $d x$ as constant.’. Now, that's not how differentials work; you're not differentiating with respect to anything in particular when you form a differential, and so you treat nothing as constant. (If you do treat something as constant, then you get a partial differential.) But at least they realize that there is something funny going on here, and that it might be meaningful to not treat $d x$ as constant.

    • CommentRowNumber64.
    • CommentAuthorMichael_Bachtold
    • CommentTimeApr 25th 2019
    • (edited Apr 25th 2019)

    The formulation on that page is indeed confusing. On the other hand, treating $d x$ as constant ($d\,d x = 0$) does lead to the incorrect traditional formula. That’s also how Leibniz, Euler, etc. arrived at the (generally incorrect) equation $\frac{d}{d x}\frac{d y}{d x} = \frac{d^2 y}{d x^2}$. Nowadays that equation is true by notational convention.

    • CommentRowNumber65.
    • CommentAuthorTobyBartels
    • CommentTimeMay 2nd 2019

    That math.stackexchange answer that you linked is interesting, as are the comments by François Ziegler. I'm glad to see that the early Calculators explicitly stated that they were making an assumption, equivalent to $d^2 x = 0$, in deriving that formula.

    • CommentRowNumber66.
    • CommentAuthorTobyBartels
    • CommentTimeNov 16th 2023

    This seems to be the right thread to record something about notation for coflare forms.

    Here is a diagram of the $2^3$ coordinate functions on $\mathrm{T}^3 \mathbb{R}$ (which generalizes to $\mathbb{R}^n$ and then to a coordinate patch on any manifold, but I only want to think about one coordinate at a time on the base manifold right now), in the notation indicated by Mike in comment #7, where we write the differential operator $\mathrm{d}$ with a finite set of natural numbers as the index:

    \begin{tikzcd}
    {\mathrm{d}_{\varnothing}}x \ar[rr] \ar[dr] \ar[dd] & & {\mathrm{d}_{\{1\}}}x \ar[dr] \ar[dd] \\
    & {\mathrm{d}_{\{2\}}}x \ar[rr] \ar[dd] & & {\mathrm{d}_{\{1,2\}}}x \ar[dd] \\
    {\mathrm{d}_{\{3\}}}x \ar[rr] \ar[dr] & & {\mathrm{d}_{\{1,3\}}}x \ar[dr] \\
    & {\mathrm{d}_{\{2,3\}}}x \ar[rr] & & {\mathrm{d}_{\{1,2,3\}}}x
    \end{tikzcd}

    Although I didn't label the arrows, moving to the right adjoins $1$ to the subscript, moving outwards (which is how I visualize the arrows pointing down-right in the 2D projection, although you might see them as inwards depending on how you visualize this Necker cube) adjoins $2$ to the subscript, and moving downwards adjoins $3$ to the subscript.

    I might like to start counting with $0$ instead of with $1$, but I won't worry about that now (although Mike did that too, starting in comment #46). I'm looking at convenient abbreviations of these instead. Some of these abbreviations are well established: $\mathrm{d}_{\varnothing}x$ is of course usually just called $x$; $\mathrm{d}_{\{1\}}x$ is usually called just $\mathrm{d}x$; $\mathrm{d}_{\{1,2\}}x$ is traditionally called $\mathrm{d}^2 x$; and $\mathrm{d}_{\{1,2,3\}}x$ is traditionally called $\mathrm{d}^3 x$. These abbreviations also appear in Mike's comment #7, where we can also write $\mathrm{d}^0 x$ for $x$. Note that these examples are all coordinates on the jet bundle.

    Another obvious abbreviation is that $\mathrm{d}_i x$ can mean $\mathrm{d}_{\{i\}}x$. There is some precedent for this, at least as regards $\mathrm{d}_1 x$ and $\mathrm{d}_2 x$, in discussion of exterior differential forms; for example, $\mathrm{d}x \wedge \mathrm{d}y$ can be explained as $\mathrm{d}_1 x \,\mathrm{d}_2 y - \mathrm{d}_2 x \,\mathrm{d}_1 y$ (or half that, depending on your conventions for antisymmetrization). But $\mathrm{d}_1 x \,\mathrm{d}_2 y$ can also be written as $\mathrm{d}x \otimes \mathrm{d}y$, and I'm not sure that there's much precedent for $\mathrm{d}_2 x$ by itself.

    What there really is no precedent for is something like $\mathrm{d}_2^2 x$. But I think that I know what this should mean; it should be $\mathrm{d}_{\{2,3\}}x$, just as $\mathrm{d}_1^2 x$ means $\mathrm{d}_{\{1,2\}}x$.

    Finally, $\mathrm{d}_{\{1,3\}}x$ also has an abbreviation that I've used before, as in this StackExchange answer, and which Mike used as well, way back in comment #1: it's $\mathrm{d} \otimes \mathrm{d}x$ (or $\mathrm{d}^{\otimes 2}x$ to be even shorter). This is because the first (from the right) $\mathrm{d}$ is $\mathrm{d}_1$, the next would be $\mathrm{d}_2$ if it were just $\mathrm{d}\mathrm{d}x$ (aka $\mathrm{d}^2 x$), but the tensor product pushes the order back one more.

    Except that that's not what I was thinking when I wrote that answer! I was thinking more like $\mathrm{d}_2 \mathrm{d}_1 x$ (but not wanting to use the subscripts, since I hadn't introduced them in that answer). That is, I'm thinking of $\mathrm{d}_1$, $\mathrm{d}_2$, etc. as operators that can be applied to any generalized differential form. They can't just adjoin an element to the subscript, since the subscript might already contain that element. So $\mathrm{d}_1 \mathrm{d}_{\{1\}}x = \mathrm{d}_{\{1,2\}}x$, which is why $\mathrm{d}_2 \mathrm{d}_{\{1\}}x$ must be $\mathrm{d}_{\{1,3\}}x$, etc.

    So here's another way to label the vertices on the cube above:

    \begin{tikzcd}
    x \ar[rr] \ar[dr] \ar[dd] & & {\mathrm{d}_{1}}x \ar[dr] \ar[dd] \\
    & {\mathrm{d}_{2}}x \ar[rr] \ar[dd] & & {\mathrm{d}_{1}\mathrm{d}_{1}}x \ar[dd] \\
    {\mathrm{d}_{3}}x \ar[rr] \ar[dr] & & {\mathrm{d}_{2}\mathrm{d}_{1}}x \ar[dr] \\
    & {\mathrm{d}_{2}\mathrm{d}_{2}}x \ar[rr] & & {\mathrm{d}_{1}\mathrm{d}_{1}\mathrm{d}_{1}}x
    \end{tikzcd}

    Note that there's no contradiction here with the notation in the previous cube! The rule is that $\mathrm{d}_i$ immediately in front of $x$ adjoins $i$ to the subscript set, while $\mathrm{d}_i$ in the next position to the left adjoins $i+1$ to the subscript, etc. So repeating $\mathrm{d}_i$ (which you can also abbreviate using exponents) never tries to adjoin something that's already in the set.

    At least, that's the case if the subscript integers come in weakly decreasing order. But what if we apply $\mathrm{d}_1$ to $\mathrm{d}_2 x$; in other words, what is $\mathrm{d}_1 \mathrm{d}_2 x$? I think that this has to be $\mathrm{d}_2 \mathrm{d}_1 x$, that is $\mathrm{d}_{\{1,3\}}x$. So to interpret the operator $\mathrm{d}_i$ properly, you first need to rearrange all of the individual differentials into the proper order.

    The other direction is easier. To interpret $\mathrm{d}_I x$, where $I$ is a set of indices, as the result of applying some operations $\mathrm{d}_i$ to $x$, each element $i$ of $I$ is interpreted as $\mathrm{d}_{i-j}$, where $j$ is the number of elements of $I$ that are less than $i$. So a special case is that $\mathrm{d}_{\{1,\ldots,k\}}x$ is $\mathrm{d}_1^k x$, as Mike used in #7 (but thinking in the reverse direction). But also $\mathrm{d}_{\{1,3\}}x$ is $\mathrm{d}_1 \mathrm{d}_2 x$ (or equivalently $\mathrm{d}_2 \mathrm{d}_1 x$), where the $\mathrm{d}_1$ comes from $1 \in \{1,3\}$ and the $\mathrm{d}_2$ comes from $3 \in \{1,3\}$, because there's $1$ element of $\{1,3\}$ that's less than $3$ (namely, $1 \lt 3$), so we use $\mathrm{d}_{3-1}$.

    In fact, we can define an operator $\mathrm{d}_I$ for any set $I$ of indices. Again, each element $i$ of $I$ gives us an operator $\mathrm{d}_{i-j}$, where $j$ is the number of elements of $I$ that are less than $i$. (And since these operators commute, we can apply them in any order.) For example, $\mathrm{d}_{\{1,3\}}\mathrm{d}_{\{2,3\}}x$ means $\mathrm{d}_2^3 \mathrm{d}_1 x$. This is because the element $1$ of $\{1,3\}$ gives us $\mathrm{d}_1$ (since nothing in $\{1,3\}$ is less than $1$), the element $3$ of $\{1,3\}$ gives us $\mathrm{d}_{3-1}$ (since $\{i \in I \mid i \lt 3\} = \{1\}$ and $|\{1\}| = 1$), the element $2$ of $\{2,3\}$ gives us $\mathrm{d}_2$ (since nothing in $\{2,3\}$ is less than $2$), and the element $3$ of $\{2,3\}$ also gives us $\mathrm{d}_{3-1}$ (since $\{i \in I \mid i \lt 3\} = \{2\}$ and $|\{2\}| = 1$). And this is also equal to $\mathrm{d}_{\{1,3,4,5\}}x$.
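For what it's worth, this bookkeeping can be sketched in code (the helper names are invented). Applying $\mathrm{d}_i$ to $\mathrm{d}_S x$ adjoins the $i$-th positive integer absent from $S$, which reproduces the examples above once a word of operators is first rearranged into weakly decreasing order:

```python
def apply_d(i, S):
    """Apply the operator d_i to the coflare coordinate d_S x:
    adjoin the i-th positive integer (counting from 1) not already in S."""
    n, count = 0, 0
    while count < i:
        n += 1
        if n not in S:
            count += 1
    return S | {n}

def d_word(indices, S=frozenset()):
    """Apply a word of operators d_{i_1} d_{i_2} ... to d_S x, first sorting
    the word into weakly decreasing order (so the rightmost, smallest
    operators act first), as described above."""
    result = set(S)
    for i in sorted(indices):   # smallest first = rightmost first
        result = apply_d(i, result)
    return frozenset(result)

print(sorted(apply_d(1, {1})))        # [1, 2]:  d_1 d_{1} x = d_{1,2} x
print(sorted(apply_d(2, {1})))        # [1, 3]:  d_2 d_{1} x = d_{1,3} x
print(sorted(d_word([1, 2])))         # [1, 3]:  d_1 d_2 x rearranges to d_2 d_1 x
print(sorted(d_word([2, 2, 2, 1])))   # [1, 3, 4, 5]:  d_2^3 d_1 x = d_{1,3,4,5} x
```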

    I don't know about these operators $\mathrm{d}_I$ so much, but my intuition about coflare forms is largely based on the operators $\mathrm{d}_i$ now. That's what I'm talking about in that StackExchange answer when I say that certain differentials represent velocities, accelerations, and so forth. There are potentially infinitely many different variations of $x$ (although we only consider finitely many at a time in a coflare form), and $\mathrm{d}_i$ represents an infinitesimal change in the $i$th of these. And such a change can apply to any coflare form.