    • CommentRowNumber1.
    • CommentAuthorTobyBartels
    • CommentTimeFeb 24th 2014

    I added some fine print, examples, and counterexamples to cogerm differential form.

    • CommentRowNumber2.
    • CommentAuthorTobyBartels
    • CommentTimeFeb 24th 2014

    Note to newbies: there is previous discussion of these at What is a variable? and related discussion at differentials. (Both of these began as discussions about teaching Calculus but got much more abstract near the end.)

    • CommentRowNumber3.
    • CommentAuthorMike Shulman
    • CommentTimeFeb 24th 2014

    Thanks!

    • CommentRowNumber4.
    • CommentAuthorMike Shulman
    • CommentTimeMar 6th 2014

    I added a bit more about the disappointing “exterior differentials” of cogerm forms.

    • CommentRowNumber5.
    • CommentAuthorTobyBartels
    • CommentTimeMar 12th 2014

    Here’s a way in which we want to be able to take exterior (wedge) products of cogerm forms. Recall that a Riemannian metric, while usually thought of as a symmetric bilinear covector form, can be equivalently thought of as a quadratic covector form, hence in particular a cogerm 1-form. I argued years ago, in the great forms thread on s.p.r with Eric Forgy, that the volume pseudo-form on an (unoriented) $n$-dimensional Riemannian manifold is the principal square root of the $n$-fold wedge product of the metric with itself. I'm not sure why I said that, exactly; you also have to divide by $n!$. At the time, I considered that this calculation took place in the second symmetric power of the exterior algebra, which is correct enough; but I'd also like it to take place in the exterior cogerm algebra. Even better, since the arclength element is the principal square root of the metric, I'd like the volume pseudo-form to simply be the $n$-fold wedge product of the arclength element (divided by $\sqrt{n!}$).

    Here is an example, to help you follow me. Given $g = \mathrm{d}x^2 + \mathrm{d}y^2$ on $\mathbb{R}^2$,

    $$\mathrm{vol} = \sqrt{\frac{g \wedge g}{2!}} = \sqrt{\frac{(\mathrm{d}x^2 + \mathrm{d}y^2) \wedge (\mathrm{d}x^2 + \mathrm{d}y^2)}{2}} = \sqrt{\frac{\mathrm{d}x^2 \wedge \mathrm{d}x^2 + \mathrm{d}x^2 \wedge \mathrm{d}y^2 + \mathrm{d}y^2 \wedge \mathrm{d}x^2 + \mathrm{d}y^2 \wedge \mathrm{d}y^2}{2}} = \sqrt{\frac{(\mathrm{d}x \wedge \mathrm{d}x)^2 + (\mathrm{d}x \wedge \mathrm{d}y)^2 + (\mathrm{d}y \wedge \mathrm{d}x)^2 + (\mathrm{d}y \wedge \mathrm{d}y)^2}{2}} = \sqrt{\frac{(0)^2 + (\mathrm{d}x \wedge \mathrm{d}y)^2 + (-\mathrm{d}x \wedge \mathrm{d}y)^2 + (0)^2}{2}} = \sqrt{\frac{2 (\mathrm{d}x \wedge \mathrm{d}y)^2}{2}} = \sqrt{(\mathrm{d}x \wedge \mathrm{d}y)^2} = {|\mathrm{d}x \wedge \mathrm{d}y|} = {|\mathrm{d}x|} \wedge {|\mathrm{d}y|} .$$

    (This last expression is traditionally called ‘$\mathrm{d}x \,\mathrm{d}y$’, but the absolute value is really there, as you can see because reversing the sign of one variable leaves the area unchanged.)

    That's how I did it with Eric (except for the last step which I would not have allowed then); here's how I'd also like to do it:

    $$\mathrm{vol} = \frac{đ s \wedge đ s}{\sqrt{2!}} = \frac{\sqrt{\mathrm{d}x^2 + \mathrm{d}y^2} \wedge \sqrt{\mathrm{d}x^2 + \mathrm{d}y^2}}{\sqrt{2}} = \sqrt{\frac{(\mathrm{d}x^2 + \mathrm{d}y^2) \wedge (\mathrm{d}x^2 + \mathrm{d}y^2)}{2}} = \cdots = {|\mathrm{d}x|} \wedge {|\mathrm{d}y|} .$$

    In all of these calculations, operations that normally distribute over multiplication (square, principal square root, and their composite, absolute value) distribute over the wedge product, and it works.

    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeMar 12th 2014

    It’s nice to have an example! Now let’s see whether we can make it an example of anything. I wonder whether germs of parametrized surfaces are too general?

    • CommentRowNumber7.
    • CommentAuthorMike Shulman
    • CommentTimeMar 13th 2014

    Here’s one idea I had had for the exterior product of cogerm forms. Let $c$ be a parametrized surface; then for any angle $\theta$ we have a curve defined by $c_\theta(t) = c(t\cos\theta, t\sin\theta)$. Now given cogerm 1-forms $\eta$ and $\omega$, we evaluate them on the “perpendicular” curves $c_\theta$ and $c_{\theta+\pi/2}$ respectively, multiply the results, and average over $\theta$:

    $$\frac{1}{2\pi}\, \int_{\theta=0}^{2\pi} \langle \eta {|} c_{\theta} \rangle \cdot \langle \omega {|} c_{\theta+\pi/2} \rangle \, \mathrm{d}\theta.$$

    I think for exterior 1-forms this gives the right answer, working it out in coordinates and using the facts that $\int_0^{2\pi} \cos^2(\theta)\, \mathrm{d}\theta = \int_0^{2\pi} \sin^2(\theta)\, \mathrm{d}\theta = \pi$ and $\int_0^{2\pi} \sin(\theta) \cos(\theta)\, \mathrm{d}\theta = 0$. Does it give anything sensible for the metric or the arclength element?
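
    Here is a quick numerical sanity check of that claim for exterior (linear) 1-forms, with $c$ taken to be a linear parametrized surface so that $\langle\eta{|}c_\theta\rangle$ is just $\eta$ applied to the tangent vector $\cos\theta\, v + \sin\theta\, w$. The particular forms, vectors, and the crude uniform quadrature are arbitrary choices for the sketch. The average comes out to $\frac{1}{2}\big(\eta(v)\omega(w) - \eta(w)\omega(v)\big)$, i.e. $(\eta\wedge\omega)(v,w)$ with the $\frac{1}{2!}$ convention.

```python
import numpy as np

# arbitrary linear 1-forms on R^2 and an arbitrary pair of tangent vectors
eta   = lambda u: 2.0*u[0] - 1.0*u[1]      # eta   = 2 dx - dy
omega = lambda u: 0.5*u[0] + 3.0*u[1]      # omega = 0.5 dx + 3 dy
v = np.array([1.0, 0.2])
w = np.array([-0.3, 1.5])

def tangent(theta):
    # tangent vector at 0 of c_theta(t) = c(t cos(theta), t sin(theta))
    # when c(s,t) = s*v + t*w is a linear surface
    return np.cos(theta)*v + np.sin(theta)*w

# average of <eta | c_theta> <omega | c_{theta + pi/2}> over theta
thetas = np.linspace(0.0, 2*np.pi, 20000, endpoint=False)
average = np.mean([eta(tangent(t)) * omega(tangent(t + np.pi/2)) for t in thetas])

half_wedge = 0.5*(eta(v)*omega(w) - eta(w)*omega(v))   # (eta ^ omega)(v,w), 1/2! convention
print(average, half_wedge)                             # agree to quadrature accuracy
```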

    • CommentRowNumber8.
    • CommentAuthorMike Shulman
    • CommentTimeMar 19th 2014

    Suppose we had a wedge product of cogerm forms which, when restricted to exterior 1-forms $\eta$ and $\omega$, yields a cogerm 2-form which acts like the ordinary wedge product of exterior forms:

    $$\langle\eta\wedge\omega{|}c\rangle = \eta(v)\, \omega(w) - \eta(w)\,\omega(v)$$

    where $v, w$ are the canonical tangent vectors to $c$ at $0$, i.e. $v = D c(e_0)$ and $w = D c(e_1)$. Then we would have

    $$\langle(\eta\wedge\omega)^2{|}c\rangle = \langle\eta\wedge\omega{|}c\rangle^2 = (\eta(v) \omega(w) - \eta(w)\omega(v))^2 = \eta^2(v) \omega^2(w) + \eta^2(w) \omega^2(v) - 2 \eta(v) \omega(w)\eta(w)\omega(v)$$

    which seems to say that the cogerm 2-form $(\eta\wedge\omega)^2$ doesn’t depend only on $\eta^2$ and $\omega^2$, but also on $\eta$ and $\omega$. This makes it seem unlikely that it could be equal to $\eta^2 \wedge \omega^2$. Is this a wrong conclusion to draw?

    • CommentRowNumber9.
    • CommentAuthorMike Shulman
    • CommentTimeMar 19th 2014

    By the way, I just had occasion to look at the book A geometric approach to differential forms by David Bachman, and noticed that although he talks mostly about ordinary exterior forms, he’s very explicit about there being other sorts which one may want to integrate:

    …a more general integral [over a domain parametrized by $\phi$] would be $\int f(\phi(a))\, \omega(\frac{d\phi}{d a})\, d a$, where $f$ is a function of points and $\omega$ is a function of vectors. It is not the purpose of the present work to undertake a study of integrating with respect to all possible functions, $\omega$. However, as with the study of functions of real variables, a natural place to start is with linear functions. This is the study of differential forms… The strength of differential forms lies in the fact that their integrals do not depend on a choice of parametrization. (p31)

    In the appendix, he considers the arc length element and the unoriented area form as particular examples of non-linear $\omega$ that one may want to integrate, but he doesn’t give any general theory.

    • CommentRowNumber10.
    • CommentAuthorTobyBartels
    • CommentTimeMar 20th 2014

    Those two examples share the strength of differential forms —that they don't depend on parametrization— and improve it: they don't depend on orientation either. (This is because they are absolute differential forms.)

    I need to go and think about $(\omega \wedge \eta)^2$ right now.

    • CommentRowNumber11.
    • CommentAuthorMike Shulman
    • CommentTimeMar 20th 2014

    Yes, I was surprised that he didn’t point that out. In the appendix he seems to change his mind a bit and writes

    The thing that makes (linear) differential forms so useful is the generalized Stokes’ Theorem. We do not have anything like this for non-linear forms…

    • CommentRowNumber12.
    • CommentAuthorMike Shulman
    • CommentTimeMar 24th 2014

    Today I decided that I don’t like the definition of integration which I proposed in the other thread (and wrote on the page). Here’s why.

    Right now I’m teaching my students to apply integrals to calculate volumes, arc lengths, etc., and I gave them the following method: slice up the quantity you want to compute into small pieces, find a differential form that approximates the value of each piece to first order, then integrate that differential form. For instance, in the basic case when finding an area, we slice it into pieces which are rectangles to first-order, hence have area $f(x)\, dx$ (where $f(x)$ is the height and $dx$ the width), then we integrate that. But suppose we instead used a differential form that is a better approximation. For instance, if we instead approximate that piece by a trapezoid, then its area would be $f(x)\, dx + \frac{1}{2}(f(x+dx)-f(x))\, dx$, which is about $f(x)\, dx + \frac{1}{2} f'(x)\, dx^2$. I’d like to say that if we integrate this differential form, we should get the same area; but with the definition of integration that I proposed before, we don’t.

    This argument suggests that integrating a cogerm 1-form of “degree $\gt 1$” should always result in zero. Symmetrically, one might argue that integrating a cogerm 1-form of “degree $\lt 1$” should always result in divergence.

    Here’s a definition of integration which does have that property, and at least looks vaguely reasonable. As before, if $c$ is a curve, denote by $c_h$ the curve with $c_h(t) = c(t+h)$. Also, denote by $h \cdot c$ the curve with $(h \cdot c)(t) = c(h t)$. Then we can say that a cogerm 1-form $\omega$ is “degree-$n$ oriented-homogeneous” if $\langle \omega {|} h \cdot c\rangle = h^n \langle \omega{|}c\rangle$, and “degree-$n$ absolute-homogeneous” if $\langle \omega {|} h \cdot c\rangle = {|h|}^n \langle \omega{|}c\rangle$.

    Suppose $\omega$ is a cogerm 1-form and $c$ a curve on $[a,b]$, and say we have a tagged partition $a = x_0 \lt x_1 \lt \cdots \lt x_N = b$, with tags $t_i \in [x_{i-1},x_i]$. Write $\Delta x_i = x_i - x_{i-1}$, and define the Riemann sum of $\omega$ over this partition to be

    $$\sum_{i=1}^N \langle \omega {|} \Delta x_i \cdot c_{t_i} \rangle .$$

    Now the integral $\oint_c \omega$ is just the limit of these Riemann sums as the mesh goes to zero, if it exists. (We can take this limit either in the Riemann way or the Henstock way.)

    If $\omega$ is degree-1 homogeneous of either sort, then this should agree with the other definition of the integral: just pull the $\Delta x_i$ out by homogeneity and you have the ordinary integral over $[a,b]$. But this definition should also give zero if $\omega$ is degree-$n$ homogeneous with $n\gt 1$, and diverge if it is degree-$n$ homogeneous with $n\lt 1$, at least if $\omega$ is otherwise well-behaved (e.g. continuous).
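
    Here is a small computational illustration of these Riemann sums. The encoding of a cogerm 1-form as a functional on curve germs (evaluated by finite differences) is an ad hoc choice for the sketch, not part of the definition; it is only meant to show the degree-1 versus degree-2 behaviour as the mesh shrinks.

```python
import numpy as np

def scaled_shifted(c, t, h):
    """The curve (h . c_t)(s) = c(t + h*s)."""
    return lambda s: c(t + h*s)

def form_f_dx(f, eps=1e-6):
    # <f(x) dx | gamma> = f(gamma(0)) * gamma'(0), derivative by central difference
    return lambda g: f(g(0.0)) * (g(eps) - g(-eps)) / (2*eps)

def form_f_dx2(f, eps=1e-6):
    # <f(x) dx^2 | gamma> = f(gamma(0)) * gamma'(0)^2, a degree-2 form
    return lambda g: f(g(0.0)) * ((g(eps) - g(-eps)) / (2*eps))**2

def riemann_sum(omega, c, a, b, n):
    # sum_i <omega | Delta x_i . c_{t_i}> over a uniform partition with left tags
    xs = np.linspace(a, b, n + 1)
    return sum(omega(scaled_shifted(c, xs[i], xs[i+1] - xs[i])) for i in range(n))

c = lambda t: t          # the identity curve on [0, 1]
print(riemann_sum(form_f_dx(np.cos),  c, 0.0, 1.0, 2000))   # ~ sin(1) = 0.8414...
print(riemann_sum(form_f_dx2(np.cos), c, 0.0, 1.0, 2000))   # ~ 0 as the mesh shrinks
```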

    I believe this integral also has another nice property: if $\oint_c \omega$ exists for some $\omega$, then it is invariant under orientation-preserving affine reparametrization of $c$.

    I don’t know whether this would help to solve any of the other problems. It does reinforce the idea that the exterior differential only makes sense on degree-1 forms (and not even all of those): if we define $d\wedge \omega$ in the “Arnold” way as the limit of integrals of $\omega$ over shrinking loops, then it will automatically vanish if $\omega$ has degree $\gt 1$ and diverge if it has degree $\lt 1$, because the integral does so.

    • CommentRowNumber13.
    • CommentAuthorMike Shulman
    • CommentTimeMar 26th 2014

    My new proposed definition of integration in #12 would mean that only linear 1-forms satisfy FTC. But perhaps that’s not too bothersome in light of the fact that FTC is a special case of Stokes’ theorem, and the general form of Stokes’ theorem requires the exterior differential.

    I also think there’s the germ of a nice proof of the 1-variable FTC somewhere in this. Consider the fact that for any function $f$, the difference $f(x+dx) - f(x)$ is itself a cogerm form, and differentiability of $f$ says that this form differs from the differential $d(f(x)) = f'(x)\, dx$ by something of second order. Thus, given a general theorem like “the integral of a form depends only on its first-order part”, we could say that $\int_a^b f'(x)\, dx = \int_a^b (f(x+dx) - f(x))$, and by choosing a suitable Riemann sum the latter simply telescopes to $f(b) - f(a)$. That’s not very precise, of course, but I like that way of thinking about it.

    • CommentRowNumber14.
    • CommentAuthorTobyBartels
    • CommentTimeMar 29th 2014

    This argument suggests that integrating a cogerm 1-form of “degree $\gt 1$” should always result in zero. Symmetrically, one might argue that integrating a cogerm 1-form of “degree $\lt 1$” should always result in divergence.

    OK, I'll buy this, for integrals along curves. But then a cogerm $p$-form of homogeneous degree $p$ should still be integrable over a $p$-dimensional submanifold.

    • CommentRowNumber15.
    • CommentAuthorMike Shulman
    • CommentTimeMar 29th 2014

    Yes, I agree.

    • CommentRowNumber16.
    • CommentAuthorMike Shulman
    • CommentTimeApr 6th 2014

    Re #5, can you say anything intuitive about why you expect this to be the case, other than the fact that the formal calculations work if you make certain assumptions? Having thought about it a bit more, my intuition doesn’t match. For instance, normally the exterior square of anything is zero; why would you expect the nonlinear case to be different? Also, the main intuition that I’ve been able to think of for why the area element should be the square of the arc length element is that area is length times width — but that’s only true for rectangles, not for arbitrary parallelograms, and a 2-form acts on all pairs of vectors (or cojets or cogerms), not just orthogonal ones.

    • CommentRowNumber17.
    • CommentAuthorTobyBartels
    • CommentTimeApr 6th 2014

    normally the exterior square of anything is zero

    That's not true for, for example, scalar fields (or more generally exterior forms with even rank). And scalar fields are, besides being exterior $0$-forms, also cogerm $1$-forms. So, we already know that the square of a cogerm $1$-form need not be $0$, and it shouldn't be surprising if $g$ and $đ s$ are further counterexamples.

    area is length times width — but that’s only true for rectangles, not for arbitrary parallelograms

    That's just the sort of thing that we'd expect the wedge product to handle. Indeed, the exterior square is zero (when it is zero) because it gives us a flat parallelogram.

    But you're mainly right, that my main reason for believing this is that the calculation seems to work. I mean, the original calculation definitely works, in the second symmetric power of the exterior calculus. But to do it in an exterior cogerm calculus, well, I don't know that calculus, so I can only say that it seems to work.

    In particular, I don't know how to justify the factorial, other than that it has to be there to make the answer come out correctly.

    • CommentRowNumber18.
    • CommentAuthorTobyBartels
    • CommentTimeApr 6th 2014

    Here I try to give a geometric justification of the factorial, for a low-level sense of ‘geometric’ and a very hand-waving sense of ‘justification’.

    Start with the usual picture of $d{s}$ as the hypotenuse of a right triangle with legs $d{x}$ and $d{y}$. Now double this right triangle to a rectangle, with diagonal $d{s}$ and area $d{A} = d{x} \,d{y}$. In this picture, how do we express $d{A}$ in terms of $d{s}$?

    In one sense, we can't, of course. If the rectangle is a square, then $d{A} = d{s}^2/\sqrt{2}$, just as I want. But if the rectangle is flat, then $d{A} = 0$ even though $d{s} \ne 0$.

    But my claim is not that $đ A = đ s^2/\sqrt{2}$ but that $đ A = đ s \wedge đ s/\sqrt{2}$. And the exterior product only counts the part that's perpendicular. (That's why the exterior square is zero, after all, at least in the simplest case.) But how can any part of $đ s$ be perpendicular to itself?

    We have a rectangle with a diagonal. But this has another diagonal, which is also $d{s}$. When the rectangle is a square, these diagonals are perpendicular, and we get $d{A} = d{s}^2/\sqrt{2}$. But when the rectangle is flat, these are parallel, and we get $d{A} = 0$. In general, $d{A} = d{s}^2 \sin \theta/\sqrt{2}$, where $\theta$ is the angle¹ that the two diagonals make with each other. And that's exactly the perpendicular part; it's just what the wedge product ought to give us.

    Somehow, this should reflect something about the exterior cogerm calculus. But I don't know how. (Particularly when all of this talk of angles and orthogonality refers to the metric, or at least to the conformal structure, which we have when we talk about $đ s$ but which should not be available in the exterior cogerm calculus itself.)


    1. Of course, they cross and so make four angles, which are only equal in pairs. But these angles' sines are all equal. 

    • CommentRowNumber19.
    • CommentAuthorTobyBartels
    • CommentTimeApr 6th 2014
    • (edited Apr 6th 2014)

    Well, that's extremely hand-wavy when you realize that every time that I wrote $\sqrt{2}$ in that last comment, I was really only justified in writing $2$. And if I generalize to $n$ dimensions, it becomes an argument that $\mathrm{vol} = đ s \wedge \cdots \wedge đ s/n$ instead of $\mathrm{vol} = đ s \wedge \cdots \wedge đ s/\sqrt{n!}$ as my algebraic calculation gave. So I don't know what's up with that!
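
    (A small symbolic check of that correction, using the diagonals of an $a$-by-$b$ rectangle; the coordinates chosen for the diagonals are mine. It confirms that $d{s}^2 \sin\theta$ is $2 a b$, i.e. twice the area, so the honestly justified divisor is indeed $2$ rather than $\sqrt{2}$.)

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
d1 = sp.Matrix([a, b])       # one diagonal of the a-by-b rectangle
d2 = sp.Matrix([-a, b])      # the other diagonal
sin_theta = sp.Abs(d1[0]*d2[1] - d1[1]*d2[0]) / (d1.norm() * d2.norm())
ds2 = a**2 + b**2            # ds^2, the squared length of either diagonal
print(sp.simplify(ds2 * sin_theta))   # 2*a*b, i.e. twice the area dA = a*b
```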

    • CommentRowNumber20.
    • CommentAuthorMike Shulman
    • CommentTimeApr 6th 2014

    I haven’t digested #18 yet, but re #17, you’re right that I should have said the exterior square of any 1-form is zero. However, if we are to embed the cogerm 1-forms in some context that also contains cogerm $k$-forms for other $k$, then it might be that we will have to regard scalar functions as cogerm 0-forms separately from their incarnation as cogerm 1-forms. It would then be perfectly consistent for the exterior square of a scalar function qua cogerm 0-form to be nonzero (being another cogerm 0-form), while its square qua cogerm 1-form is zero (specifically, the zero cogerm 2-form). For instance, that’s what happens for the “cojet differential forms” which I proposed in the other thread.

    A notational question: is there a deep reason that you usually write $đ s$ for the arc length element? Is it just that you want a symbol that looks like the traditional notation $\mathrm{d}s$ but doesn’t suggest that it is actually the differential of something?

    • CommentRowNumber21.
    • CommentAuthorTobyBartels
    • CommentTimeApr 7th 2014

    Yes, that's it. It's a notation that's used in thermodynamics; for example in the first law of thermodynamics

    $$\mathrm{d}U = đ Q + đ W$$

    (the total change in the energy of a subsystem is the sum of the heat transferred into that subsystem and the mechanical work done on that subsystem). It's an important point that (contrary to the caloric theory of heat) there is no such thing as $Q$, only $đ Q$.

    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeApr 7th 2014

    Nice, thanks.

    • CommentRowNumber23.
    • CommentAuthorMike Shulman
    • CommentTimeApr 12th 2014

    Here’s an improvement of my vague remarks in #12. Let’s say that a cogerm 1-form $\omega$ is “$o(dx^n)$” if for any curve $c$, we have

    $$\lim_{h\to 0} \frac{\langle \omega {|} h\cdot c\rangle}{h^n} = 0 .$$

    A degree-$n$ homogeneous form is clearly $o(dx^k)$ for any $k\lt n$. I claim that if $\omega$ is $o(dx)$, then it is Henstock integrable over any curve $c$ (defined on a closed interval $[a,b]$), and $\oint_c \omega = 0$.

    Suppose $\epsilon\gt 0$; then for any $t\in [a,b]$ there is a $\delta(t)$ such that ${\langle \omega {|} h \cdot c_t\rangle} \lt \frac{\epsilon}{b-a} h$ for any $0\lt h\lt \delta(t)$. This defines a gauge $\delta$ on $[a,b]$. Now suppose we have a $\delta$-fine tagged partition $a = x_0 \lt \cdots \lt x_n = b$ with tags $t_i \in [x_{i-1},x_i]$, so that $\Delta x_i = x_i - x_{i-1} \lt \delta(t_i)$. Then the corresponding Riemann sum is, by definition, $\sum_{i} \langle \omega {|} \Delta x_i \cdot c_{t_i}\rangle$. Since $\Delta x_i \lt \delta(t_i)$, each $\langle \omega {|} \Delta x_i \cdot c_{t_i}\rangle \lt \frac{\epsilon}{b-a} \Delta x_i$. Thus, when we sum them up, we get something less than $\frac{\epsilon}{b-a} \sum_i \Delta x_i = \frac{\epsilon}{b-a} (b-a) = \epsilon$. Thus, for any $\epsilon$ there is a gauge $\delta$ such that the Riemann sum over any $\delta$-fine tagged partition is $\lt\epsilon$; so the Henstock integral is zero.

    It follows that if any such $\omega$ is Riemann integrable, then its Riemann integral is zero. I suspect that there are probably forms that are $o(dx)$ but not Riemann integrable, but I haven’t come up with any yet.

    I think my suggested proof of FTC in #13 can also be made to work. Say we assume that $df$ is continuous, hence integrable (I think the usual proof that continuous functions are integrable will show that any continuous linear 1-form, such as the differential of a function, is integrable). Then by definition, $df$ differs from $f(x+dx)-f(x)$ by something $o(dx)$, and the sum of integrable forms is integrable; thus $f(x+dx)-f(x)$ is integrable. But any left Riemann sum of $f(x+dx)-f(x)$ telescopes to $f(b) - f(a)$; hence that must be its integral, and thus also the integral of $df$.

    • CommentRowNumber24.
    • CommentAuthorMike Shulman
    • CommentTimeApr 13th 2014

    Here’s another way to approach the question of #5: suppose we know how to measure lengths (with some 1-form $đ l$); how can we measure (say) areas? The first thing we might do is use the polarization identity to recover the inner product:

    $$v\cdot w = \textstyle\frac{1}{4}(đ l(v+w)^2 - đ l(v-w)^2)$$

    Note that if we replace $đ l$ by a linear form $\eta$, then this would give $\eta(v)\,\eta(w)$.

    Now $v\cdot w = \Vert v\Vert \,\Vert w\Vert \,\cos \theta$, whereas the area spanned by $v$ and $w$ should be $\Vert v\Vert \,\Vert w\Vert \,\sin \theta$. Thus, we can compute the area as

    $$\sqrt{đ l(v)^2\, đ l(w)^2 - (v\cdot w)^2}.$$

    And if we replace $đ l$ by a linear form $\eta$ here, we would get zero. So at least we have some operation on “nonlinear covector” 1-forms that yields a “nonlinear covector 2-form”, which when applied to $đ l$ gives the area form, and when applied to a linear 1-form gives zero, so we might hope to call it the “exterior square”.

    But I can’t think of a way to tease an operation on pairs of 1-forms out of this. I’m also ambivalent about it anyway because it doesn’t seem canonical; e.g. there are several forms of polarization, which are the same for $đ l$ or a linear form, but it seems that they might be different on other forms.

    • CommentRowNumber25.
    • CommentAuthorMike Shulman
    • CommentTimeApr 13th 2014

    Re: #18, it doesn’t seem right to me to say “the exterior product only counts the part that’s perpendicular”. There is no notion of “perpendicular” until we have a metric, and the exterior product is defined without any metric. I feel like $\mathrm{d}x\wedge \mathrm{d}y$ is measuring area in one way (sort of declaring $\mathrm{d}x$ and $\mathrm{d}y$ to “be perpendicular” for the purposes of area), while a “product” $đ A = đ l \wedge đ l$ would be measuring area in a different way (obtaining a notion of “perpendicular” from the factors $đ l$ rather than declaring them to be perpendicular to each other).

    Here’s a question: suppose a manifold has two metrics $g_1$ and $g_2$, giving rise to two length elements $đ l_1$ and $đ l_2$. What would you expect $đ l_1 \wedge đ l_2$ to measure?

    • CommentRowNumber26.
    • CommentAuthorMike Shulman
    • CommentTimeApr 13th 2014
    • (edited Apr 17th 2014)

    Re #24, here’s something. Maybe I was half asleep last night, because now I think this is the “obvious” thing to try. Given two covector 1-forms $\alpha$ and $\beta$, consider the covector 2-form defined by

    $$(\alpha\cdot\beta)(v,w) = \textstyle\frac{1}{4}(\alpha(v+w)\beta(v+w) - \alpha(v-w)\beta(v-w)).$$

    If $\alpha=\beta=đ l$, this gives the inner product as in #24. But if $\alpha$ and $\beta$ are linear, then this gives the symmetrized product

    $$(\alpha\cdot\beta)(v,w) = \textstyle\frac{1}{2}(\alpha(v)\beta(w) + \beta(v)\alpha(w)).$$

    Now consider the covector 2-form

    $$(\alpha\barwedge\beta)(v,w) = \sqrt{|\alpha(v)\beta(v)\alpha(w)\beta(w) - (\alpha\cdot\beta)(v,w)^2|}.$$

    If $\alpha=\beta=đ l$, this gives the area element as in #24. But if $\alpha$ and $\beta$ are linear, then it gives the absolute value of their exterior product,

    $${|\alpha\wedge\beta|}(v,w) = \textstyle\frac{1}{2} | \alpha(v)\beta(w) - \beta(v)\alpha(w)|$$

    (with the unpopular convention $\frac{1}{2!}$ rather than $\frac{1}{1!1!}$ for the factorial factors). This raises a bunch of questions:

    1. Can it be extended to higher dimensions to get volume forms?
    2. Can it be extended to cojet or cogerm forms, since we can’t add and subtract jets or germs?
    3. What’s up with the absolute value? Is there any way to get the non-absolute-valued exterior product?
    4. What’s up with the absolute value inside the square root? I put it there because if I’m not mistaken, the relative signs of the terms being subtracted are different in the metric case and in the linear case. Is there some sense in which the two are “imaginary rotations” of each other?
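
    Before getting to those, here is a quick numerical check of the two special cases above (the standard metric on $\mathbb{R}^2$ and two arbitrary linear forms; the specific choices are mine):

```python
import numpy as np

def dl(u):                      # length element of the standard metric on R^2
    return np.linalg.norm(u)

def dot_form(alpha, beta):      # (alpha . beta)(v,w), by the polarization formula above
    return lambda v, w: 0.25*(alpha(v+w)*beta(v+w) - alpha(v-w)*beta(v-w))

def barwedge(alpha, beta):      # (alpha barwedge beta)(v,w)
    d = dot_form(alpha, beta)
    return lambda v, w: np.sqrt(abs(alpha(v)*beta(v)*alpha(w)*beta(w) - d(v, w)**2))

v = np.array([1.0, 2.0])
w = np.array([3.0, -0.5])

# metric case: should be the (unoriented) area of the parallelogram spanned by v, w
print(barwedge(dl, dl)(v, w), abs(v[0]*w[1] - v[1]*w[0]))

# linear case: should be (1/2)|alpha(v) beta(w) - beta(v) alpha(w)|
alpha = lambda u: 2.0*u[0] + 1.0*u[1]
beta  = lambda u: -1.0*u[0] + 4.0*u[1]
print(barwedge(alpha, beta)(v, w), 0.5*abs(alpha(v)*beta(w) - beta(v)*alpha(w)))
```
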
    • CommentRowNumber27.
    • CommentAuthorTobyBartels
    • CommentTimeApr 17th 2014

    there are several forms of polarization, which are the same for $đ l$ or a linear form, but it seems that they might be different on other forms

    They are the same if and only if the parallelogram identity holds. Which, of course, it typically won't. (This result is for any operation on an abelian group in which $2$ is invertible; the operation doesn't have to have any properties of the square of a norm. And while I'm at it, $2$ doesn't even have to be invertible if you take the polarization identities to define $4$ times the inner product.)

    Your version of the polarization identity is perhaps the most even-handed. But in general, it's not even-handed enough; besides $v + w$ and $v - w$, you also ought to include $-v + w$ and $-v - w$.

    $$(\alpha\barwedge\beta)(v,w) = \sqrt{|\alpha(v)\beta(v)\alpha(w)\beta(w) - (\alpha\cdot\beta)(v,w)|}.$$

    You mean

    $$(\alpha\barwedge\beta)(v,w) = \sqrt{|\alpha(v)\beta(v)\alpha(w)\beta(w) - (\alpha\cdot\beta)(v,w)^2|}$$

    (squaring the dot product).

    What’s up with the absolute value? Is there any way to get the non-absolute-valued exterior product?

    It looks like getting the non-absolute-valued (or rather non-normed) cross product. This is fixed (up to orientation, when even that can be done) by requiring the cross product of two vectors to also be orthogonal to both vectors. But I don't see an analogue of that condition here.

    • CommentRowNumber28.
    • CommentAuthorTobyBartels
    • CommentTimeApr 17th 2014

    it doesn’t seem right to me to say “the exterior product only counts the part that’s perpendicular”

    Not in general, but in this particular case we do have a metric; you talked this way in #16 too. But I'm pretty much convinced that #18 is nonsense anyway; it gives the wrong answer, after all.

    Here’s a question: suppose a manifold has two metrics $g_1$ and $g_2$, giving rise to two length elements $đ l_1$ and $đ l_2$. What would you expect $đ l_1 \wedge đ l_2$ to measure?

    Good question! I don't know, but I think that I can calculate it.

    For definiteness, let the manifold be $\mathbb{R}^2$, so we have

    $$g_1 = A \,\mathrm{d}x^2 + 2 B \,\mathrm{d}x \,\mathrm{d}y + C \,\mathrm{d}y^2 , \qquad g_2 = D \,\mathrm{d}x^2 + 2 E \,\mathrm{d}x \,\mathrm{d}y + F \,\mathrm{d}y^2 ,$$

    with $A, C, D, F \gt 0$, $A C \gt B^2$, and $D F \gt E^2$.

    I need to establish a lemma. I have already surmised that

    $$\alpha^2 \wedge \beta^2 = (\alpha \wedge \beta)^2 ,$$

    but what about $\alpha \beta$? (which is also a symmetric bilinear form and so may appear in a metric). Polarization lets me rewrite this using squares:

    $$\begin{aligned}
    (2 \alpha \beta) \wedge (2 \gamma \delta) &= ((\alpha + \beta)^2 - \alpha^2 - \beta^2) \wedge ((\gamma + \delta)^2 - \gamma^2 - \delta^2) \\
    &= (\alpha + \beta)^2 \wedge (\gamma + \delta)^2 - \alpha^2 \wedge (\gamma + \delta)^2 - \beta^2 \wedge (\gamma + \delta)^2 - (\alpha + \beta)^2 \wedge \gamma^2 + \alpha^2 \wedge \gamma^2 + \beta^2 \wedge \gamma^2 - (\alpha + \beta)^2 \wedge \delta^2 + \alpha^2 \wedge \delta^2 + \beta^2 \wedge \delta^2 \\
    &= ((\alpha + \beta) \wedge (\gamma + \delta))^2 - (\alpha \wedge (\gamma + \delta))^2 - (\beta \wedge (\gamma + \delta))^2 - ((\alpha + \beta) \wedge \gamma)^2 + (\alpha \wedge \gamma)^2 + (\beta \wedge \gamma)^2 - ((\alpha + \beta) \wedge \delta)^2 + (\alpha \wedge \delta)^2 + (\beta \wedge \delta)^2 \\
    &= (\alpha \wedge \gamma + \alpha \wedge \delta + \beta \wedge \gamma + \beta \wedge \delta)^2 - (\alpha \wedge \gamma + \alpha \wedge \delta)^2 - (\beta \wedge \gamma + \beta \wedge \delta)^2 - (\alpha \wedge \gamma + \beta \wedge \gamma)^2 + (\alpha \wedge \gamma)^2 + (\beta \wedge \gamma)^2 - (\alpha \wedge \delta + \beta \wedge \delta)^2 + (\alpha \wedge \delta)^2 + (\beta \wedge \delta)^2 \\
    &= (\alpha \wedge \gamma)^2 + 2 (\alpha \wedge \gamma) (\alpha \wedge \delta) + 2 (\alpha \wedge \gamma) (\beta \wedge \gamma) + 2 (\alpha \wedge \gamma) (\beta \wedge \delta) + (\alpha \wedge \delta)^2 + 2 (\alpha \wedge \delta) (\beta \wedge \gamma) + 2 (\alpha \wedge \delta) (\beta \wedge \delta) + (\beta \wedge \gamma)^2 + 2 (\beta \wedge \gamma) (\beta \wedge \delta) + (\beta \wedge \delta)^2 - (\alpha \wedge \gamma)^2 - 2 (\alpha \wedge \gamma) (\alpha \wedge \delta) - (\alpha \wedge \delta)^2 - (\beta \wedge \gamma)^2 - 2 (\beta \wedge \gamma) (\beta \wedge \delta) - (\beta \wedge \delta)^2 - (\alpha \wedge \gamma)^2 - 2 (\alpha \wedge \gamma) (\beta \wedge \gamma) - (\beta \wedge \gamma)^2 + (\alpha \wedge \gamma)^2 + (\beta \wedge \gamma)^2 - (\alpha \wedge \delta)^2 - 2 (\alpha \wedge \delta) (\beta \wedge \delta) - (\beta \wedge \delta)^2 + (\alpha \wedge \delta)^2 + (\beta \wedge \delta)^2 \\
    &= 2 (\alpha \wedge \gamma) (\beta \wedge \delta) + 2 (\alpha \wedge \delta) (\beta \wedge \gamma) ;
    \end{aligned}$$

    dividing by $4$, $\alpha \beta \wedge \gamma \delta = \frac{1}{2} (\alpha \wedge \gamma) (\beta \wedge \delta) + \frac{1}{2} (\alpha \wedge \delta) (\beta \wedge \gamma)$. Note the special case $\alpha \beta \wedge \gamma^2 = (\alpha \wedge \gamma) (\beta \wedge \gamma)$ (and similarly with the square on the other side).

    Then I get

    $$\begin{aligned}
    g_1 \wedge g_2 &= (A \,\mathrm{d}x^2 + 2 B \,\mathrm{d}x \,\mathrm{d}y + C \,\mathrm{d}y^2) \wedge (D \,\mathrm{d}x^2 + 2 E \,\mathrm{d}x \,\mathrm{d}y + F \,\mathrm{d}y^2) \\
    &= A D \,\mathrm{d}x^2 \wedge \mathrm{d}x^2 + 2 A E \,\mathrm{d}x^2 \wedge \mathrm{d}x \,\mathrm{d}y + A F \,\mathrm{d}x^2 \wedge \mathrm{d}y^2 + 2 B D \,\mathrm{d}x \,\mathrm{d}y \wedge \mathrm{d}x^2 + 4 B E \,\mathrm{d}x \,\mathrm{d}y \wedge \mathrm{d}x \,\mathrm{d}y + 2 B F \,\mathrm{d}x \,\mathrm{d}y \wedge \mathrm{d}y^2 + C D \,\mathrm{d}y^2 \wedge \mathrm{d}x^2 + 2 C E \,\mathrm{d}y^2 \wedge \mathrm{d}x \,\mathrm{d}y + C F \,\mathrm{d}y^2 \wedge \mathrm{d}y^2 \\
    &= A D (\mathrm{d}x \wedge \mathrm{d}x)^2 + 2 A E (\mathrm{d}x \wedge \mathrm{d}x) (\mathrm{d}x \wedge \mathrm{d}y) + A F (\mathrm{d}x \wedge \mathrm{d}y)^2 + 2 B D (\mathrm{d}x \wedge \mathrm{d}x) (\mathrm{d}y \wedge \mathrm{d}x) + 2 B E (\mathrm{d}x \wedge \mathrm{d}x) (\mathrm{d}y \wedge \mathrm{d}y) + 2 B E (\mathrm{d}x \wedge \mathrm{d}y) (\mathrm{d}y \wedge \mathrm{d}x) + 2 B F (\mathrm{d}x \wedge \mathrm{d}y) (\mathrm{d}y \wedge \mathrm{d}y) + C D (\mathrm{d}y \wedge \mathrm{d}x)^2 + 2 C E (\mathrm{d}y \wedge \mathrm{d}x) (\mathrm{d}y \wedge \mathrm{d}y) + C F (\mathrm{d}y \wedge \mathrm{d}y)^2 \\
    &= 0 + 0 + A F (\mathrm{d}x \wedge \mathrm{d}y)^2 + 0 + 0 - 2 B E (\mathrm{d}x \wedge \mathrm{d}y)^2 + 0 + C D (\mathrm{d}x \wedge \mathrm{d}y)^2 + 0 + 0 \\
    &= (A F - 2 B E + C D) (\mathrm{d}x \wedge \mathrm{d}y)^2 .
    \end{aligned}$$

    Taking square roots, $đ l_1 \wedge đ l_2 = \sqrt{A F - 2 B E + C D} \,{|\mathrm{d}x \wedge \mathrm{d}y|}$.
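
    Here is a small symbolic check of that expansion, taking the lemma $\alpha\beta \wedge \gamma\delta = \frac{1}{2}(\alpha\wedge\gamma)(\beta\wedge\delta) + \frac{1}{2}(\alpha\wedge\delta)(\beta\wedge\gamma)$ and bilinearity as given; the encoding of the metrics as symmetric matrices and the reduction to the single generator $\mathrm{d}x \wedge \mathrm{d}y$ are just bookkeeping for the sketch.

```python
import sympy as sp

A, B, C, D, E, F = sp.symbols('A B C D E F')
M1 = sp.Matrix([[A, B], [B, C]])     # g1 = A dx^2 + 2B dx dy + C dy^2
M2 = sp.Matrix([[D, E], [E, F]])     # g2 = D dx^2 + 2E dx dy + F dy^2

# eps[i][j] = coefficient of (dx ^ dy) in e_i ^ e_j, where e_0 = dx, e_1 = dy
eps = [[0, 1], [-1, 0]]

# Assumed lemma: (a b) ^ (c d) = (1/2)(a^c)(b^d) + (1/2)(a^d)(b^c).
# The coefficient of (dx ^ dy)^2 in g1 ^ g2 is then the quadruple sum below.
coeff = 0
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                coeff += M1[i, j] * M2[k, l] * sp.Rational(1, 2) * (
                    eps[i][k]*eps[j][l] + eps[i][l]*eps[j][k])

print(sp.expand(coeff))                            # A*F - 2*B*E + C*D
print(sp.expand(coeff.subs({D: A, E: B, F: C})))   # 2*A*C - 2*B**2, twice det(g) when g1 = g2
```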

    • CommentRowNumber29.
    • CommentAuthorTobyBartels
    • CommentTimeApr 17th 2014

    If $g_1 = g_2$, then the expression under the square root is twice the determinant of the matrix of the coefficients of $g$, so we have $\sqrt{2}$ times the area element (which is the same answer that I've gotten before in that case). But I'm not sure what to make of $A F - 2 B E + C D$ itself; at least it is positive, since

    $$A F - 2 B E + C D \geq A F - 2 {|B|} {|E|} + C D \gt A F - 2 \sqrt{A C} \sqrt{D F} + C D = (\sqrt{A F} - \sqrt{C D})^2 \geq 0 .$$

    (And strictly so; that is, the wedge product of two positive-definite length elements is also positive-definite; we never get zero.)

    • CommentRowNumber30.
    • CommentAuthorMike Shulman
    • CommentTimeApr 17th 2014

    Thanks for the missing square; I’ve fixed it in the original comment.

    in general, it's not even-handed enough; besides $v + w$ and $v - w$, you also ought to include $-v + w$ and $-v - w$.

    You’re absolutely right. And it seems to me that even that might not be even-handed enough, when our forms may not be either linear or quadratic. It feels like maybe we ought to include everything in the lattice generated by $v$ and $w$.

    • CommentRowNumber31.
    • CommentAuthorMike Shulman
    • CommentTimeApr 17th 2014

    Re: #28 and 29, that’s intriguing. I don’t know what to make of $A F - 2 B E + C D$ either. And I guess I don’t really know how to think about a manifold that has two different metrics.

    • CommentRowNumber32.
    • CommentAuthorMike Shulman
    • CommentTimeApr 23rd 2014

    I’ve added #12 and #23 to the page cogerm differential form. One thing that is still missing is a good theorem about which cogerm forms are integrable (with either definition).

    • CommentRowNumber33.
    • CommentAuthorMike Shulman
    • CommentTimeApr 23rd 2014

    I added a theorem about existence and parametrization-invariance of integration. I’m not completely satisfied with it, but it’s the best I’ve been able to come up with.

    • CommentRowNumber34.
    • CommentAuthorMike Shulman
    • CommentTimeApr 27th 2014

    Here’s an interesting example: $\sqrt[3]{\mathrm{d}x^3 +\mathrm{d}y^3}$. It is degree-1 homogeneous with sign, so its integral reverses with orientation. Its integral around any closed circle or parallelogram is zero, but around other closed curves (like triangles) it has a nonzero integral.

    In general, its integral along a line of angle $\theta$ to the $x$-axis is $\sqrt[3]{(\sin\theta)^3+(\cos\theta)^3}$ times the length of that line. Thus, along horizontal and vertical lines it just measures (signed) length, while along $45^\circ$ lines it measures something less than length, and along $135^\circ$ lines it measures zero. (This is similar to $\mathrm{d}x+\mathrm{d}y$, except that the latter measures more than length along $45^\circ$ lines, in such a way that its integral around any closed curve works out to zero — as it must, since the form is exact.)
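
    Here is a quick numerical illustration of those claims (the particular polygons and the crude quadrature for the circle are arbitrary choices). Since the form is degree-1 homogeneous, its integral along a straight edge is just $\sqrt[3]{\Delta x^3 + \Delta y^3}$ for that edge's displacement, so polygons reduce to finite sums.

```python
import numpy as np

def edge_integral(p, q):
    # integral of (dx^3 + dy^3)^(1/3) along the straight segment from p to q:
    # by degree-1 homogeneity it is cbrt(dx^3 + dy^3) of the displacement
    dx, dy = q[0] - p[0], q[1] - p[1]
    return np.cbrt(dx**3 + dy**3)

def polygon_integral(vertices):
    return sum(edge_integral(vertices[i], vertices[(i + 1) % len(vertices)])
               for i in range(len(vertices)))

square   = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(0, 0), (1, 0), (1, 1)]
print(polygon_integral(square))     # 0: opposite edges of a parallelogram cancel
print(polygon_integral(triangle))   # 2 - 2**(1/3), nonzero

# unit circle, parametrized by angle: dx = -sin(t) dt, dy = cos(t) dt
ts = np.linspace(0, 2*np.pi, 100000, endpoint=False)
print(np.mean(np.cbrt(np.cos(ts)**3 - np.sin(ts)**3)) * 2*np.pi)   # ~ 0
```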

    Being degree-1 homogeneous with sign means in particular that if we take a region $R$ and divide it into two regions $R_1$ and $R_2$, then the integral around the boundary of $R$ is equal to the sum of the integrals around the boundaries of $R_1$ and $R_2$. This suggests that there might be a way to define the exterior differential of this 1-form in such a way that Stokes’ theorem would hold. (Note that since the analogous subdivision property for boundary integrals fails for $đ l=\sqrt{\mathrm{d}x^2+\mathrm{d}y^2}$, it can’t have an exterior differential satisfying Stokes’ theorem.) However, I haven’t been able to think of such a definition.

    • CommentRowNumber35.
    • CommentAuthorTobyBartels
    • CommentTimeApr 29th 2014

    Working in the plane, we ought to be able to calculate this exterior derivative as an ordinary $2$-form. If every sufficiently regular oriented region $R$ is given a measure by integrating $\sqrt[3]{\mathrm{d}x^3 + \mathrm{d}y^3}$ around it, and if this is additive and reasonably continuous, then this integral must be given by an absolutely continuous Radon pseudo-measure, hence an exterior $2$-form.

    But this can't be right, because (as you say) the measure of any rectangle is zero, while the measure of some triangles is not. So it must violate continuity. Indeed, if we approximate a triangle from inside and out by rectangles, then we approximate its measure as zero, which it is not, so we don't have a Radon measure.

    Well, this is just reporting a negative result, but those don't get published as much as they should, so here it is.

    • CommentRowNumber36.
    • CommentAuthorMike Shulman
    • CommentTimeApr 29th 2014

    That’s a good observation, thanks. So maybe we should ask, is integration of cogerm 2-forms on the plane necessarily continuous? Because if so, that means we can’t define such an exterior derivative there either.

    While I’ve got you here, I have an unrelated question. When teaching multiple integration, what notation do you use for iterated integrals as in Fubini’s theorem? Always insisting on writing

    $$\int \Big(\int f(x,y)\, \mathrm{d}x \Big) \mathrm{d}y$$

    seems tedious and cumbersome, but omitting the parentheses and writing

    $$\int \int f(x,y)\, \mathrm{d}x \, \mathrm{d}y$$

    looks like we are integrating with respect to a product $\mathrm{d}x \, \mathrm{d}y$, which we are not (in particular, because $\int \int f(x,y)\, \mathrm{d}x \, \mathrm{d}y = \int \int f(x,y)\, \mathrm{d}y \, \mathrm{d}x$, both being equal to $\iint f(x,y) \,đ A$).
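
    (For what it's worth, the equality of the two orders is easy to see numerically with nested one-dimensional quadratures; the integrand and rectangle here are arbitrary choices.)

```python
import numpy as np
from scipy import integrate

f = lambda x, y: np.exp(-x*y)        # an arbitrary integrand on [0,1] x [0,2]

dx_then_dy = integrate.quad(lambda y: integrate.quad(lambda x: f(x, y), 0, 1)[0], 0, 2)[0]
dy_then_dx = integrate.quad(lambda x: integrate.quad(lambda y: f(x, y), 0, 2)[0], 0, 1)[0]
print(dx_then_dy, dy_then_dx)        # same value, as Fubini says
```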

    • CommentRowNumber37.
    • CommentAuthorTobyBartels
    • CommentTimeApr 30th 2014

    Well, I wouldn't write either of those, at least not until very late in the course when we all know what it means and I'm abbreviating. You're not specifying the region of integration, and that's key.

    I start with

    $$\int_{y=a}^b \Big(\int_{x=g(y)}^{h(y)} f(x,y) \,\mathrm{d}x\Big) \,\mathrm{d}y$$

    but abbreviate this fairly early on as

    $$\int_a^b \int_{g(y)}^{h(y)} f(x,y) \,\mathrm{d}x \,\mathrm{d}y$$

    (which is what they write in the textbook). This is an iterated integral.

    Then I introduce the separate concept

    $$\iint_{(x,y) \in R} f(x,y) \,\mathrm{d}x \,\mathrm{d}y ,$$

    which is defined (for bounded $R$) as the limit of Riemann sums, each given by a tagged rectangular mesh. This is pretty obviously equal to

    $$\iint_{(x,y) \in R} f(x,y) \,\mathrm{d}y \,\mathrm{d}x$$

    (with an obvious bijective correspondence between the relevant tagged meshes), and these can be abbreviated as

    $$\iint_R f(x,y) \,đ A .$$

    This is a double integral. (I am not yet discussing what sort of thing $đ A$ actually is, or even that the $\mathrm{d}x$ and $\mathrm{d}y$ in this expression are really ${|\mathrm{d}x|}$ and ${|\mathrm{d}y|}$. In fact, at this point I'm still writing ‘$\mathrm{d}A$’, like they do in the book. I don't really examine what kind of a thing this is until I get to change of variables. It's no secret that I'm going to discuss this, but I don't discuss it right away.)

    Then I state (without proof, we don't have that kind of time) the Fubini Theorem as the statement that the iterated integral is equal to the obvious corresponding double integral (when $f$, $g$, and $h$ are continuous). In particular, if a region can be written as an iterated integral in both ways, then these iterated integrals are equal. Then further immediate corollaries, not explicitly stated in detail, about regions that can be divided into subregions amenable to iterated integrals. So the lesson is that we really care about double integrals, while iterated integrals are the method used to evaluate them analytically.

    • CommentRowNumber38.
    • CommentAuthorMike Shulman
    • CommentTimeApr 30th 2014

    Ok, but you do write $\mathrm{d}x \,\mathrm{d}y$ in both the double integral and the iterated integral.

    • CommentRowNumber39.
    • CommentAuthorMike Shulman
    • CommentTimeApr 30th 2014
    • CommentRowNumber40.
    • CommentAuthorTobyBartels
    • CommentTimeApr 30th 2014

    Ok, but you do write $\mathrm{d}x \,\mathrm{d}y$ in both the double integral and the iterated integral.

    Yes, but I make it clear (or try to) that the iterated integral is fundamentally the expression with parentheses, so that $\mathrm{d}x \,\mathrm{d}y$ is not a thing in it. It is a thing in the double integral, but I don't examine closely what that thing is until we have a little practical experience.

    By the way, there is still a slight issue in the inner integral in the iterated integral. I do line integrals before area integrals (which the book also does but in the case of arclengths only), originally so that they'll have had some experience manipulating differential forms before I spring ${|\mathrm{d}x \wedge \mathrm{d}y|}$ on them when we get to change of variables. But another benefit is that I can now point out that

    $$\int_{g(y)}^{h(y)} f(x,y) \,\mathrm{d}x$$

    is really a line integral along a line of constant $y$-value.

    • CommentRowNumber41.
    • CommentAuthorMike Shulman
    • CommentTimeMay 8th 2014

    Have you read anything by Solomon Leader? I’m just having a look at his book The Kurzweil-Henstock integral and its differentials in which he has an interesting definition of differentials. He defines a summant to be a function on intervals tagged with an endpoint, which is basically to say tangent vectors, and then defines the integral of a summant basically just as I did for cogerm forms. Then he defines a differential to be an equivalence class of summants, where $S\sim T$ if $\int |S-T| = 0$. Then the differential of a function $f$ is the equivalence class $\mathrm{d}f$ of the summant $\Delta f$ which is defined by $\Delta f([a,b]) = f(b) - f(a)$; we could write that as $f(x+\mathrm{d}x)-f(x)$. If $f$ is differentiable, then $\mathrm{d}f = f'(x) \,\mathrm{d}x$ as differentials, i.e. their summants are equivalent – this is what contains the content of the fundamental theorem of calculus. But $\mathrm{d}f$ is defined for any function $f$, and its integrals are Riemann–Stieltjes integrals.

    • CommentRowNumber42.
    • CommentAuthorMike Shulman
    • CommentTimeMay 13th 2014

    I added some remarks about delta functions and Riemann-Stieltjes integrals to cogerm differential form.

    • CommentRowNumber43.
    • CommentAuthorTobyBartels
    • CommentTimeMay 21st 2014

    No, I've never heard of Solomon Leader. It looks like I should read him!

    • CommentRowNumber44.
    • CommentAuthorTobyBartels
    • CommentTimeMay 4th 2015

    I’m coming back to the idea of parametrization invariance, such as the parametrization invariance (or lack thereof) of the integration of higher-order¹ forms such as $\sqrt{\mathrm{d}^{2}f}$. (Recall from the page that $\int_C \sqrt{\mathrm{d}^{2}f} = \int_{t \in \mathrm{dom}\, C} \sqrt{(f \circ C)''(t)} \,{|\mathrm{d}t|}$, at least when $f$ and $C$ are each twice continuously differentiable, for the so-called ‘genuine’ integral.)

    It seems to me that parametrization invariance is a form of the Chain Rule, and we know that the Chain Rule can be tricky to apply to higher-order derivatives. I believe that the formula above for the integral of $\sqrt{\mathrm{d}^2f}$ is wrong in the same way that $f''(x) = \mathrm{d}^2f(x)/\mathrm{d}x^2$ is wrong. This formula seems to work under affine change of variables, just as the ‘genuine’ integral seems to work under affine reparametrization, but otherwise we can see that it fails.

    In fact, knowing that $\mathrm{d}^2f(x) = f''(x) \,\mathrm{d}x^2 + f'(x) \,\mathrm{d}^2x$, I would calculate as follows:

    $$\int_C \sqrt{\mathrm{d}^2f} = \int_{t \in \mathrm{dom}\, C} \sqrt{\mathrm{d}^2f(C(t))} = \int_t \sqrt{\mathrm{d}^2(f \circ C)(t)} = \int_t \sqrt{(f \circ C)''(t) \,\mathrm{d}t^2 + (f \circ C)'(t) \,\mathrm{d}^2t} .$$

    If $\mathrm{d}^2t = 0$, then this simplifies to the result for the ‘genuine’ integral, but not otherwise.

    I'm convinced that this formula is correct, but unfortunately I don't know how to interpret it!
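
    (To see the failure of the ‘genuine’ formula under a non-affine reparametrization concretely, here is a small symbolic computation; the choice $f(x) = x^2$ on $[0,1]$ is arbitrary. The affine parametrizations both give $\sqrt{2}$, while the reparametrization $C(t) = t^2$ gives $\sqrt{3}$.)

```python
import sympy as sp

t = sp.symbols('t', positive=True)
f = lambda x: x**2                      # an arbitrary twice-differentiable f

def genuine_integral(C, a, b):
    # the 'genuine' formula: integral of sqrt((f o C)''(t)) |dt| over [a, b]
    second = sp.diff(f(C), t, 2)
    return sp.integrate(sp.sqrt(second), (t, a, b))

print(genuine_integral(t,    0, 1))     # sqrt(2): affine parametrization of [0, 1]
print(genuine_integral(t/2,  0, 2))     # sqrt(2): affine reparametrization, same value
print(genuine_integral(t**2, 0, 1))     # sqrt(3): non-affine reparametrization, different
```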


    1. I have settled on the following language for order/degree/rank: The rank of a form indicates what dimension of submanifold it is integrated on; rank is increased by using the wedge product or applying the exterior differential, the cojet differential doesn't affect the rank, and the rank of a sum must agree with all addends. The degree indicates the power to which a positive scaling factor is raised when a form is applied to a small scaled (multi)-vector/curve; the degree is increased by multiplication or by applying the cojet differential, the exterior differential doesn't affect the degree, and the degree of a sum is the minimum degree of the addends. The order of a form indicates the highest order of derivatives of a curve that affect the value of the form at that curve; the order is increased by applying the cojet differential, the exterior differential doesn't affect the order, and the order of a sum or product is the maximum order of the addends. For example, $\mathrm{d} \wedge {x \,\mathrm{d}y} = \mathrm{d}x \wedge \mathrm{d}y$ has rank and degree $2$ but order $1$, $\mathrm{d}x^2$ has degree $2$ but rank and order $1$, and $\mathrm{d}^2x$ has degree and order $2$ but rank $1$. In particular, $\sqrt{\mathrm{d}^2x}$ has order $2$ but rank and degree $1$.

    • CommentRowNumber45.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2015

    Interesting — so how do we calculate $\int_t \sqrt{\mathrm{d}^2t}$?

    • CommentRowNumber46.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2015

    Hmm… can we extend the “affine integral” to act on forms of order $\gt 1$? If so, maybe that is actually the right definition?

    • CommentRowNumber47.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2015

    Shooting from the hip, what if we consider an order-2 rank-1 form to be a function $\omega:X\times V\times V\to\mathbb{R}$ and integrate it by adding up $\omega(c(t_i^*),\Delta c_i, \Delta^2 c_i)$, where $\Delta^2 c_i=c(t_{i+1}) - 2 c(t_i) + c(t_{i-1})$?

    • CommentRowNumber48.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2015

    On the other hand, according to the idea here, the cojet differential would be replaced by the symmetric part of the coflare differential, whose antisymmetric part is the exterior differential — all of which increase the rank. In that world, $\sqrt{\mathrm{d}^2 x}$ isn’t something we ought to think of integrating over a curve at all.

    • CommentRowNumber49.
    • CommentAuthorTobyBartels
    • CommentTimeMay 5th 2015

    Darn it, I knew that I was missing a thread! Now I have to read and contemplate that again. (I see that I was enthusiastic about it a year ago.)

    • CommentRowNumber50.
    • CommentAuthorTobyBartels
    • CommentTimeMay 5th 2015

    I thought of #47, but I also thought that it smacked of using only right or left endpoints (or, since it's so symmetric, midpoints) in an ordinary Riemann sum. Since a tagged partition is a partition within a partition, I thought of using a partition within a partition within a partition, but then where would it end?

    • CommentRowNumber51.
    • CommentAuthorTobyBartels
    • CommentTimeMay 5th 2015

    The affine integral is bad, because it only works in affine spaces, but it also suggests the Stieltjes integral, which makes sense in more general contexts. Unfortunately, I don’t know how to make it work for an arbitrary cogerm (or even cojet) form given as an operation on curves, rather than given as an expression in symbols like $\mathrm{d}x$.

    • CommentRowNumber52.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2015

    If we do want to integrate $\sqrt{\mathrm{d}^2f}$ along a curve, then a basic question is what the integral $\int_{x=a}^b \sqrt{\mathrm{d}^2x}$ should be, where our curve is a simple affine increasing parametrization of $[a,b]$. I can’t think of anything sensible for it to equal except $0$, with the argument that this parametrization has “no second derivative”. If that’s the case, then to make the integral parametrization-invariant with your formula, we’d have to have $\int_t \sqrt{x''(t) \,\mathrm{d}t^2 + x'(t) \,\mathrm{d}^2t} = 0$ for any such parametrization $x(t)$, so the two terms would have to exactly cancel somehow. I can sort of imagine how a second-order Taylor formula might cause that sum to reduce to something third-degree so that its square root would be greater than first-degree and hence integrate to zero. But it seems that the same argument would suggest that the integral of $\sqrt[3]{\mathrm{d}^2x}$ should also be zero, and likewise $\sqrt[4]{\mathrm{d}^2x}$ and so on, and it seems less and less likely to me that in those cases we could make the two terms in the change-of-variables formula cancel.

    • CommentRowNumber53.
    • CommentAuthorTobyBartels
    • CommentTimeMay 5th 2015
    • (edited May 5th 2015)

    Another thought that I had about affine integration is that if we really want to use it, but balk because we don't know what ‘affine’ means in general, then perhaps we can only integrate generally in the presence of an affine connection (not much there yet, so press on to Wikipedia), which tells us what is affine.

    Since I first learnt differential geometry through general relativity, I learnt about the exterior differential as the antisymmetrization of the covariant derivative, which uses the affine connection and applies to any tensor. It's then a great theorem that the action of this antisymmetrization on an antisymmetric covariant tensor (which is an exterior form) doesn't depend on which connection is used. Here, we have a notion of differential that can be applied without a connection, but maybe we still need a connection to integrate the resulting forms. But of course the result won't depend on the connection, as long as we integrate only exterior forms.

    Edit: It seems that we were making the same point a year ago in the thread that I forgot.

    • CommentRowNumber54.
    • CommentAuthorMike Shulman
    • CommentTimeMay 5th 2015

    I would have thought that the affine integral could be applied on any manifold by using local charts. Do you think that’s not the case?

    • CommentRowNumber55.
    • CommentAuthorTobyBartels
    • CommentTimeMay 6th 2015

    I thought that you were suggesting on the page that it wasn't. It comes down to whether different local charts give the same results; I haven't checked.

    • CommentRowNumber56.
    • CommentAuthorMike Shulman
    • CommentTimeMay 6th 2015

    I think it does, at least in the Lipschitz case; I tried to write a proof on the page (end of the section).

    • CommentRowNumber57.
    • CommentAuthorTobyBartels
    • CommentTimeMay 6th 2015

    OK, I'll buy it in the Lipschitz case.

    I used the affine integral to define integration on a curve in my Calc 3 class the other day, even though I'm not using it to prove any theorems beyond hand-waving. I defined the mesh of a Riemann sum to be the maximum distance between points in space rather than the maximum difference between values of the parameter, mainly because that was easier to point to in a diagram. But now I realize that you did it the other way (implicitly in the phrase ‘as before’).

    It seems to me that the slick proof that the affine integral is parametrization-dependent requires my mesh to go through, rather than yours. Of course, for Lipschitz forms, it works either way; everything is better with Lipschitz!

    • CommentRowNumber58.
    • CommentAuthorMike Shulman
    • CommentTimeMay 6th 2015

    Hmm, you may be right.

    • CommentRowNumber59.
    • CommentAuthorMike Shulman
    • CommentTimeMay 7th 2015

    Hmm $\times 2$, it seems to me that what’s needed to make the two meshes coincide is continuity of the curve, not the form.

    • CommentRowNumber60.
    • CommentAuthorTobyBartels
    • CommentTimeMay 7th 2015

    Yes, I was just thinking about that. As long as the curve is uniformly continuous, it should be all right, at least for the Riemann integral.

    • CommentRowNumber61.
    • CommentAuthorMike Shulman
    • CommentTimeMay 21st 2015

    Toby, did you ever reach a conclusion about the argument in #8 above?

    • CommentRowNumber62.
    • CommentAuthorTobyBartels
    • CommentTimeMay 21st 2015

    If $\eta$ is determined up to sign by $\eta^2$ and $\omega$ is determined up to sign by $\omega^2$, then $(\eta \wedge \omega)^2$ may still be determined by $\eta^2$ and $\omega^2$, since the signs in $\eta^2(v) \omega^2(w) + \eta^2(w) \omega^2(v) - 2 \eta(v) \omega(w) \eta(w) \omega(v)$ will cancel. However, that does seem to suggest an odd formula for the wedge product between $2$-forms.
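
    A minimal numerical sketch of that sign-cancellation (the two sample forms on $\mathbb{R}^2$ below are arbitrary placeholders, and needn't be linear): flipping the sign of $\eta$ or of $\omega$ leaves the displayed combination unchanged, so it really is determined by $\eta^2$ and $\omega^2$.

    ```python
    # Flipping eta -> -eta or omega -> -omega leaves
    #   eta^2(v) omega^2(w) + eta^2(w) omega^2(v) - 2 eta(v) omega(w) eta(w) omega(v)
    # unchanged, since each form appears an even number of times in every term.
    import itertools

    def combo(eta, omega, v, w):
        return (eta(v)**2 * omega(w)**2 + eta(w)**2 * omega(v)**2
                - 2 * eta(v) * omega(w) * eta(w) * omega(v))

    eta = lambda v: v[0] + 2 * v[1]**2          # arbitrary sample forms,
    omega = lambda v: 3 * v[0] * v[1] - v[1]    # not required to be linear
    v, w = (0.3, -1.7), (2.2, 0.5)

    base = combo(eta, omega, v, w)
    for s, t in itertools.product([1, -1], repeat=2):
        flipped = combo(lambda u, s=s: s * eta(u), lambda u, t=t: t * omega(u), v, w)
        assert abs(flipped - base) < 1e-9
    ```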

    • CommentRowNumber63.
    • CommentAuthorMike Shulman
    • CommentTimeMay 21st 2015

    Hmm. What’s the odd formula that it suggests?

    • CommentRowNumber64.
    • CommentAuthorTobyBartels
    • CommentTimeMay 21st 2015

    Something like

    $$(\alpha \wedge \beta)(v,w) = \alpha(v) \beta(w) + \alpha(w) \beta(v) - 2 \sqrt{\alpha(v) \beta(w) \alpha(w) \beta(v)} = \Big(\sqrt{\alpha(v)\beta(w)} - \sqrt{\alpha(w)\beta(v)}\Big)^2 ,$$

    which doesn't entirely make sense.

    But I don't really believe this anyway. Now that we're looking at coflare forms, this set-up isn't even relevant; we should be applying $(\eta \wedge \omega)^2$ to a $4$-flare, not to two tangent vectors derived from a parametrized surface. However, I'm finding it hard to think through all of that.

    • CommentRowNumber65.
    • CommentAuthorMike Shulman
    • CommentTimeMay 22nd 2015

    That seems right if by $\eta^2$ we mean $\eta\otimes \eta$, but if we go back to the motivating #5 with coflares, it seems that $\eta^2$ has to be the valuewise square (keeping the rank constant) since we want it to match with a square root?

    • CommentRowNumber66.
    • CommentAuthorMike Shulman
    • CommentTimeMay 22nd 2015

    Here’s a sort of exterior product for coflare forms that at least comes close to giving the right answer “$vol_g = \sqrt{\frac{g\wedge g}{2}}$” for a metric on a 2-manifold.

    Suppose $\eta$ and $\omega$ are both coflare forms of rank $n$. Then $\eta\omega$ (meaning $\eta\otimes \omega$) is a coflare form of rank $2n$. Note that the symmetric group $\Sigma_k$ acts on coflare forms of rank $k$, because it acts on $k$-flares. Let $G_{2n}$ be the subgroup of $\Sigma_{2n}$ generated by transpositions of the form $(i,i+n)$; thus for instance $G_4 = \{e, (13),(24),(13)(24) \}$. Observe that $G_{2n} \cong (\Sigma_2)^n$. Now define

    $$\eta\wedge\omega = \sum_{\sigma\in G_{2n}} (-1)^{sign(\sigma)} (\eta\omega)^\sigma .$$

    (Hmm, I suppose there should probably be a factorial coefficient in front of that.) The point is that at least if $\eta$ and $\omega$ depend only on the first-order tangent vectors in a flare, then $\sigma\in G_{2n}$ acts on $\eta\omega$ by swapping corresponding arguments of $\eta$ and $\omega$. (I find it easier to think here of $2n$ as a $2\times n$ matrix rather than $2n$ things in a row.)

    When $n=1$, this clearly gives the correct wedge product for exterior 1-forms. More generally, I think it also gives the right answer when $\eta$ and $\omega$ are exterior $n$-forms: in that case $\eta\omega$ is already antisymmetric under the two actions of $\Sigma_n$, so antisymmetrizing under $G_{2n}$ forces it to be antisymmetric under all of $\Sigma_{2n}$.

    But now if we have a metric regarded as a coflare 2-form:

    $$g = A dx^2 + B dx dy + B dy dx + C dy^2$$

    we can compute its exterior square as follows. By construction, $(\eta\wedge\omega)^\sigma = (-1)^{sign(\sigma)} (\eta\wedge\omega)$ for $\sigma\in G_{2n}$. In particular, $(dx^i dx^j)\wedge (dx^k dx^l) = 0$ if $i=k$ or $j=l$. Thus all the terms in $g\wedge g$ vanish except for

    $$A C (dx^2\wedge dy^2 + dy^2 \wedge dx^2) + B^2 (dx dy \wedge dy dx + dy dx \wedge dx dy) .$$

    Now because $(13)(24)\in G_4$ and is even, we have $dy^2\wedge dx^2 = dx^2 \wedge dy^2$ and $dy dx \wedge dx dy = dx dy \wedge dy dx$. Finally, because $(13)\in G_4$ and is odd, we have $dx dy \wedge dy dx = - dx^2 \wedge dy^2$. Thus,

    $$g\wedge g = 2(A C-B^2) (dx^2 \wedge dy^2) .$$

    Note that by definition we have

    $$dx^2 \wedge dy^2 = dx dx dy dy - dy dx dx dy - dx dy dy dx + dy dy dx dx .$$

    On the other hand, we have

    $$(dx\wedge dy)^2 = (dx dy - dy dx)^2 = dx dy dx dy - dx dy dy dx - dy dx dx dy + dy dx dy dx .$$

    These differ exactly by the action of $(23)\in \Sigma_4$, which is a “transposition” of the “$2\times 2$ matrix” representing the arguments of these coflare 4-forms. Therefore, it seems that if we

    1. Construct $\sqrt{\frac{g\wedge g}{2}}$,
    2. Transpose its arguments, and
    3. Apply a “diagonal” to make it into a 2-form rather than a 4-form, so that e.g. the rank-2 form I’ve been writing as $dx^2$ but is really $dx\otimes dx$ becomes the rank-1 form that’s more honestly written $dx^2$.

    we should get the desired $vol_g = \sqrt{det(g)}\; {| dx \wedge dy |}$.

    However, right now I don’t see any way to define that diagonal in a coordinate-invariant way. (If we could solve that problem, I expect the weird-looking transposition would get incorporated into the choice of one particular “map $2\to 4$” rather than another one.)

    Moreover, this only defines the wedge product of two forms of the same rank, which seems clearly unsatisfactory. I haven’t thought yet about whether it generalizes to, for instance, 3-manifolds, where we’d have to find some way to define $g\wedge g\wedge g$.
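
    Here is a short SymPy sketch of the computation above (my own check; order-1 rank-2 forms are represented by their coefficient arrays, and the $G_4$-antisymmetrization is exactly the sum displayed earlier): it confirms $g \wedge g = 2(A C - B^2)\,(dx^2 \wedge dy^2)$.

    ```python
    # Recompute g ∧ g for g = A dx⊗dx + B dx⊗dy + B dy⊗dx + C dy⊗dy using the
    # G_4-antisymmetrized product, and compare with 2(AC - B^2) dx^2 ∧ dy^2.
    import itertools
    import sympy as sp

    A, B, C = sp.symbols('A B C')
    g = {(0, 0): A, (0, 1): B, (1, 0): B, (1, 1): C}

    # G_4 = {e, (13), (24), (13)(24)} acting on the four argument slots
    # (0-indexed: swap slots 0<->2 and/or 1<->3), listed with signs.
    G4 = [((0, 1, 2, 3), 1), ((2, 1, 0, 3), -1), ((0, 3, 2, 1), -1), ((2, 3, 0, 1), 1)]

    def wedge(eta, omega):
        """G_4-antisymmetrization of eta ⊗ omega for order-1, rank-2 forms."""
        return {idx: sum(s * eta[(idx[p[0]], idx[p[1]])] * omega[(idx[p[2]], idx[p[3]])]
                         for p, s in G4)
                for idx in itertools.product(range(2), repeat=4)}

    dx2 = {idx: sp.Integer(1) if idx == (0, 0) else sp.Integer(0)
           for idx in itertools.product(range(2), repeat=2)}
    dy2 = {idx: sp.Integer(1) if idx == (1, 1) else sp.Integer(0)
           for idx in itertools.product(range(2), repeat=2)}

    gg, target = wedge(g, g), wedge(dx2, dy2)
    assert all(sp.expand(gg[idx] - 2 * (A * C - B**2) * target[idx]) == 0 for idx in gg)
    ```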

    • CommentRowNumber67.
    • CommentAuthorMike Shulman
    • CommentTimeMay 22nd 2015

    Hmm, well the diagonal (combined with transposition) does make sense coordinate-invariantly for order-1 forms, which are all that we have here. But it would be a bit disappointing to have to use in this construction an operation that doesn’t make sense in generality.

    Another possibility that occurs to me is that, as we noted when I first suggested coflare forms, an affine connection is a section of the projection $T^2X \to (T X)^2$, thereby giving a way to “forget” from an arbitrary rank-2 (hence order $\le 2$) coflare form down to a rank-2 order-1 one. And we know that a metric gives rise to an affine connection. Perhaps a connection can be generalized to give a way to forget the higher-order terms in a coflare form of arbitrary rank? Performing such an operation first in order to make a “diagonal” make sense seems reasonable, especially if our eventual goal is to integrate the result, since (“genuine”) integrals don’t generally notice higher-order terms.

    • CommentRowNumber68.
    • CommentAuthorTobyBartels
    • CommentTimeMay 25th 2015

    An affine connection by itself won't allow us to collapse the higher-order parts of a coflare form, which is clear from your explicit formula in the other thread for the collapsing operation:

    $$d^2x^i \mapsto \Gamma^i_{j k} d x^j \otimes d x^k .$$

    As you say there, the transformation rules for Christoffel symbols ensure that this is invariant, which they can do since those transformation rules involve second derivatives. But nothing could handle $\mathrm{d}^3x$ unless its transformation rules involve third derivatives, which Christoffel symbols' rules don’t. And an affine connection is determined by its Christoffel symbols.

    However, a metric should be plenty of information! For simplicity, let's take a Riemannian metric, so as not to worry about strange sign conventions in the semi-Riemannian case. In that case, you simply pick (at any given point) a system of coordinates that's orthonormal relative to the metric, in which case $\Gamma^i_{j k}$ is simply $\delta^i_j \delta^i_k$, so $\mathrm{d}^2x^i \mapsto \mathrm{d}x^i \otimes \mathrm{d}x^i$ (no summation). And this generalizes to any order:

    $$\mathrm{d}^{n}x^i \mapsto \bigotimes_n \mathrm{d}x^i .$$

    Now all that we have to do is to obfuscate this by identifying what this looks like in an arbitrary system of coordinates.

    • CommentRowNumber69.
    • CommentAuthorTobyBartels
    • CommentTimeMay 25th 2015

    Actually, the previous comment is simpler than it should be, because it tacitly assumes that the system of coordinates is orthonormal not just at the point in question but on an infinitesimal neighbourhood of it, which may not be possible.

    • CommentRowNumber70.
    • CommentAuthorTobyBartels
    • CommentTimeMay 25th 2015

    Having said all of that, however, I don't think that the Levi-Civita connection can be relevant. The volume element at a given point depends only on the metric at that point, whereas the Levi-Civita connection there also involves the derivatives of the metric.

    • CommentRowNumber71.
    • CommentAuthorMike Shulman
    • CommentTimeMay 26th 2015

    An affine connection by itself won't allow us to collapse the higher-order parts of a coflare form

    Right, I expected as much; I was thinking along the lines of your second paragraph, whether a metric would also give rise to a sort of “higher connection”. I wonder if this is something someone else has written down?

    I don’t think that the Levi-Civita connection can be relevant. The volume element at a given point depends only on the metric at that point, whereas the Levi-Civita connection there also involves the derivatives of the metric.

    The connection (or higher connection) isn’t going to arise in the volume element itself. I’m only proposing the connection as a way to make the “diagonal” operation make sense for arbitrary coflare forms. But in defining the volume element, we only need to take diagonals of order-1 forms, and in that case the diagonal already makes sense; the connection would only arise when deciding what the diagonal does to things like $d^2x$.

    • CommentRowNumber72.
    • CommentAuthorMike Shulman
    • CommentTimeJun 8th 2015

    The proposal in #66 does generalize to “$vol_g = \sqrt{\frac{1}{n!} g^{\wedge n}}$” on an $n$-manifold (BUT see my next comment for an important caveat).

    In general, given $m$ coflare forms $\eta_1,\dots,\eta_m$ all of rank $k$, let $G_{m,k}$ be the subgroup of $\Sigma_{m k}$ generated by transpositions of the form $(i,i+k)$, which is isomorphic to $(\Sigma_m)^k$. Define

    $$\eta_1 \wedge\cdots \wedge \eta_m = \sum_{\sigma \in G_{m,k}} (-1)^{sign(\sigma)} (\eta_1 \otimes\cdots \otimes \eta_m)^\sigma .$$

    Now let $g = \sum_{1\le i,j\le n} g_{i,j} dx^i \otimes dx^j$ be a metric regarded as a coflare form of rank 2. Each term in $g^{\wedge n}$ looks like

    $$\bigwedge_{\ell =1}^{n} g_{i_\ell, j_\ell} dx^{i_\ell} \otimes dx^{j_\ell} .$$

    By construction, this vanishes if $i_\ell = i_{\ell'}$ or $j_\ell = j_{\ell'}$ for any $\ell\neq \ell'$. Thus, $i_1,\dots,i_n$ and $j_1,\dots,j_n$ are each permutations of $1,\dots,n$. We can permute the $i$s and $j$s together without introducing any signs, so we may as well assume that $i_\ell = \ell$ for all $\ell$. Thus we have a sum over all possible permutations $j_1,\dots,j_n$, with each permutation occurring $n!$ times:

    $$g^{\wedge n} = n! \, \sum_{\sigma \in \Sigma_n} \bigwedge_{\ell=1}^{n} g_{\ell, \sigma(\ell)} dx^{\ell} \otimes dx^{\sigma(\ell)} .$$

    Now if we permute the $j$s back to the identity, we introduce the sign of that permutation:

    $$g^{\wedge n} = n! \, \sum_{\sigma \in \Sigma_n} (-1)^{sign(\sigma)} \bigwedge_{\ell=1}^{n} g_{\ell, \sigma(\ell)} dx^{\ell} \otimes dx^{\ell} .$$

    i.e.

    $$g^{\wedge n} = n! \, det(g) \bigwedge_{\ell=1}^{n} dx^{\ell} \otimes dx^{\ell} .$$

    Now there is a diagonal operation from rank-$2n$ forms to rank-$n$ forms (at least on order-1 forms like these, and presumably extendable with a metric to arbitrary ones in a way that reduces to the obvious one on order-1 forms) that takes this to

    $$n! \, det(g) \left(\bigwedge_{\ell=1}^{n} dx^{\ell}\right)^2 .$$

    If we now divide by $n!$ and take the square root, we obtain the correct volume element, as an absolute rank-$n$ coflare form.

    More generally, I bet that for $k\lt n$ a similar construction $\sqrt{\frac{g^{\wedge k}}{k!}}$ will produce the standard $k$-volume element in an $n$-manifold. For $k=1$ it of course gives the line element $\sqrt{g}$, while for $k=2$, $n=3$ and the standard metric $dx^2+dy^2+dz^2$ on $\mathbb{R}^3$ I think we do get the right answer $\sqrt{(dx\wedge dy)^2 + (dy\wedge dz)^2 + (dz\wedge dx)^2}$.
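
    A SymPy sketch of the key identity $g^{\wedge n} = n!\, det(g) \bigwedge_\ell dx^\ell \otimes dx^\ell$ (my own verification, with the $G_{m,k}$-antisymmetrization implemented directly for order-1 forms given by coefficient arrays; it is written for $n = 2$, but the same code runs, more slowly, for $n = 3$):

    ```python
    # Check g^{∧n} = n! det(g) (dx^1⊗dx^1) ∧ ... ∧ (dx^n⊗dx^n) for a symbolic
    # symmetric metric, using the G_{m,k}-antisymmetrized product defined above.
    import itertools
    import sympy as sp

    def sign(perm):
        inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
                  if perm[a] > perm[b])
        return -1 if inv % 2 else 1

    def wedge(forms, dim):
        """G_{m,k}-antisymmetrized product of m order-1 rank-k forms (coefficient dicts)."""
        m, k = len(forms), len(next(iter(forms[0])))
        out = {}
        for idx in itertools.product(range(dim), repeat=m * k):
            total = sp.Integer(0)
            # G_{m,k} ≅ (Σ_m)^k: permute the m "rows" independently in each of
            # the k "columns"; the slot for row l, column c is position l*k + c.
            for cols in itertools.product(itertools.permutations(range(m)), repeat=k):
                s = 1
                for c in cols:
                    s *= sign(c)
                term = sp.Integer(s)
                for l, form in enumerate(forms):
                    term *= form[tuple(idx[cols[c][l] * k + c] for c in range(k))]
                total += term
            out[idx] = sp.expand(total)
        return out

    n = 2
    gmat = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'g{min(i, j)}{max(i, j)}'))
    g = {(i, j): gmat[i, j] for i, j in itertools.product(range(n), repeat=2)}
    diag = [{(i, j): sp.Integer(1) if i == j == l else sp.Integer(0)
             for i, j in itertools.product(range(n), repeat=2)} for l in range(n)]

    lhs, rhs = wedge([g] * n, n), wedge(diag, n)
    assert all(sp.expand(lhs[idx] - sp.factorial(n) * gmat.det() * rhs[idx]) == 0
               for idx in lhs)
    ```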

    • CommentRowNumber73.
    • CommentAuthorMike Shulman
    • CommentTimeJun 8th 2015

    BUT I was wrong in #66 that this definition of $\wedge$ (and its generalization in #72) is correct for exterior forms! I think I thought that because $G_{2n}$ together with $\Sigma_n \times\Sigma_n$ generates $\Sigma_{2n}$, but that’s not enough. E.g. for exterior 2-forms $\eta$ and $\omega$ this definition would give

    $$(\eta\wedge\omega) (v_1,v_2,v_3,v_4) = \eta(v_1,v_2)\omega(v_3,v_4) - \eta(v_3,v_2)\omega(v_1,v_4) - \eta(v_1,v_4)\omega(v_3,v_2) + \eta(v_3,v_4) \omega(v_1,v_2)$$

    whereas the correct formula is something like

    $$(\eta\wedge\omega) (v_1,v_2,v_3,v_4) = \eta(v_1,v_2)\omega(v_3,v_4) - \eta(v_1,v_3)\omega(v_2,v_4) + \eta(v_1,v_4)\omega(v_2,v_3) + \eta(v_2,v_3)\omega(v_1,v_4) - \eta(v_2,v_4)\omega(v_1,v_3) + \eta(v_3,v_4)\omega(v_1,v_2)$$
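
    For concreteness, here is a tiny check (plain Python, my own example) that the two expressions really do disagree: with $\eta = dx^1 \wedge dx^2$ and $\omega = dx^3 \wedge dx^4$ on $\mathbb{R}^4$, evaluated on the standard basis vectors in the order $(e_1, e_3, e_2, e_4)$, the first expression gives $0$ while the second gives $-1$.

    ```python
    # eta = dx^1 ∧ dx^2 and omega = dx^3 ∧ dx^4 on R^4 (0-indexed below);
    # evaluate both displayed expressions at (e1, e3, e2, e4).
    def two_form(i, j):
        """The exterior 2-form dx^i ∧ dx^j, as a function of two vectors."""
        return lambda v, w: v[i] * w[j] - v[j] * w[i]

    eta, omega = two_form(0, 1), two_form(2, 3)
    e = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
    v1, v2, v3, v4 = e[0], e[2], e[1], e[3]

    g4_version = (eta(v1, v2) * omega(v3, v4) - eta(v3, v2) * omega(v1, v4)
                  - eta(v1, v4) * omega(v3, v2) + eta(v3, v4) * omega(v1, v2))
    correct_version = (eta(v1, v2) * omega(v3, v4) - eta(v1, v3) * omega(v2, v4)
                       + eta(v1, v4) * omega(v2, v3) + eta(v2, v3) * omega(v1, v4)
                       - eta(v2, v4) * omega(v1, v3) + eta(v3, v4) * omega(v1, v2))

    print(g4_version, correct_version)  # 0 -1
    ```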

    So now I don’t know what to do.

    • CommentRowNumber74.
    • CommentAuthorTobyBartels
    • CommentTimeMay 21st 2018

    First mention of coflare forms on the Lab rather than the Forum.

    diff, v19, current

    • CommentRowNumber75.
    • CommentAuthorTobyBartels
    • CommentTimeDec 15th 2019
    • (edited Dec 15th 2019)

    I don't know where it's written down, but somewhere somebody (probably Mike) suggested that coflare forms of rank $p$ could be seen as functions on germs of maps from $\mathbb{R}^p$. So I put this in here, in the section near the end where coflare forms are mentioned. (This gives a notion of coflare-cogerm form that includes all cogerm forms, not just cojet forms, as well as all coflare forms, thus including exterior forms, absolute forms, and even twisted forms with enough care. And why restrict the domains to only $\mathbb{R}^p$ …?)

    diff, v20, current

    • CommentRowNumber76.
    • CommentAuthorTobyBartels
    • CommentTimeNov 4th 2020

    My current opinion about how to apply $\wedge$ (and the symmetric version as well) is as follows: To multiply a $p$-form $\alpha$ and a $q$-form $\beta$, take all permutations on $p+q$ letters that keep the first $p$ in order and the last $q$ in order. So for example, if $p$ and $q$ are both $2$, then there are $6$ permitted permutations: $1234$, $1324$, $1423$, $2314$, $2413$, and $3412$. The first $p$ letters tell you which vectors to apply $\alpha$ to, while the last $q$ tell you which to apply $\beta$ to. If you're multiplying antisymmetrically, multiply by the sign of the permutation. Add up, and (if you're following this convention) divide by the number of permutations that you used. (Thus the symmetrized product is an average.)

    So, if $\eta$ and $\omega$ are exterior $2$-forms, then $\eta \wedge \omega$ is a $4$-form:

    $$(\eta \wedge \omega)(v_1,v_2,v_3,v_4) = \frac 1 6 \eta(v_1,v_2) \omega(v_3,v_4) - \frac 1 6 \eta(v_1,v_3) \omega(v_2,v_4) + \frac 1 6 \eta(v_1,v_4) \omega(v_2,v_3) + \frac 1 6 \eta(v_2,v_3) \omega(v_1,v_4) - \frac 1 6 \eta(v_2,v_4) \omega(v_1,v_3) + \frac 1 6 \eta(v_3,v_4) \omega(v_1,v_2) ,$$

    which is exactly what Mike wanted in #73. Notice that this is multilinear (since $\eta$ and $\omega$ are) and alternating (since $\eta$ and $\omega$ are). The usual textbook way to define the exterior product of exterior forms would give $24$ terms (and divide by $4! = 24$ or $2!\,2! = 4$, depending on convention), not just $6$, but if $\omega$ and $\eta$ are already alternating, then these come in groups of $4$ equal terms, so the result is the same. But some textbooks save on terms by defining things as I did (see this definition on Wikipedia for example, at least currently).

    If $\eta$ and $\omega$ are not already alternating, then neither should $\eta \wedge \omega$ be, and the definition with $(p+q)!$ terms will incorrectly make it alternating. But the definition with $(p+q)!/(p!\,q!)$ terms will not. And nothing stops this from being applied to forms that aren't multilinear or always defined either.

    Also note that this operation is associative. The generalized version says that to multiply a list of $m$ forms of ranks $p_1, \ldots, p_m$, you look at the $(\sum_i p_i)!/\prod_i p_i!$ permutations on $\sum_i p_i$ letters that keep the first $p_1$ letters, the next $p_2$ letters, etc. through the last $p_m$ letters in order, apply the forms to the vectors given by the appropriate indexes, multiply by the sign of the permutation if you're multiplying antisymmetrically, add them up, and (optionally) divide by the number of terms.

    Now to apply this to the metric $g$. There are two ways to think of $g$, as a bilinear symmetric form of rank $2$, or as a quadratic form of rank $1$. In local coordinates on a surface parametrized by $u$ and $v$ for example (Gauss's first fundamental form), the first version is $E \,\mathrm{d}u \otimes \mathrm{d}u + F \,\mathrm{d}u \otimes \mathrm{d}v + F \,\mathrm{d}v \otimes \mathrm{d}u + G \,\mathrm{d}v \otimes \mathrm{d}v$, while the second is $E \,\mathrm{d}u^2 + 2F \,\mathrm{d}u \,\mathrm{d}v + G \,\mathrm{d}v^2$.

    Taking the first version, $g \wedge g$ is a $96$-term expression that simplifies (after much cancellation) to $\frac 1 3 E^2 \,\mathrm{d}u \otimes \mathrm{d}u \otimes \mathrm{d}u \otimes \mathrm{d}u + \cdots$, and I'm going to stop writing it down after (what came from) the first $6$ terms, because these are all of the ones with $E^2$, and those should have cancelled completely (not just partially as happened here). The unbalanced nature of the signs has ruined this. Taking the second version, $g \wedge g$ is an $18$-term expression that simplifies all the way to $0$; in fact, $\alpha \wedge \alpha = 0$ whenever $\alpha$ is a $1$-form, which is well-known when $\alpha$ is linear but true regardless.

    So neither of these is working out! (For the record, the answer that we were looking for here is $E G - F^2$, possibly with a constant factor to worry about later.)
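
    A SymPy sketch of these two computations (my own check; for the bilinear version the forms are represented by their coefficient arrays on basis vectors): the averaged shuffle square of the bilinear form retains the $E^2/3$ term mentioned above, while the shuffle square of any rank-$1$ form is identically zero.

    ```python
    # The averaged alternating shuffle product of the bilinear first fundamental
    # form with itself keeps an E^2/3 term; the shuffle square of a rank-1 form
    # vanishes identically.
    import itertools
    import sympy as sp

    E, F, G = sp.symbols('E F G')
    g = {(0, 0): E, (0, 1): F, (1, 0): F, (1, 1): G}   # bilinear version

    def shuffles(p, q):
        """(p,q)-shuffles of 0..p+q-1, with their signs."""
        for first in itertools.combinations(range(p + q), p):
            rest = [i for i in range(p + q) if i not in first]
            perm = list(first) + rest
            inv = sum(1 for a in range(p + q) for b in range(a + 1, p + q)
                      if perm[a] > perm[b])
            yield perm, (-1)**inv

    def shuffle_wedge(alpha, p, beta, q, dim=2):
        """Averaged alternating shuffle product of two order-1 forms (coefficient dicts)."""
        terms = list(shuffles(p, q))
        return {idx: sp.expand(sum(s * alpha[tuple(idx[i] for i in perm[:p])]
                                     * beta[tuple(idx[i] for i in perm[p:])]
                                   for perm, s in terms) / len(terms))
                for idx in itertools.product(range(dim), repeat=p + q)}

    gg = shuffle_wedge(g, 2, g, 2)
    print(gg[(0, 0, 0, 0)])   # E**2/3 -- the term that should have cancelled

    # The quadratic (rank-1) version: (alpha ∧ alpha)(v, w) is proportional to
    # alpha(v) alpha(w) - alpha(w) alpha(v), which vanishes whatever alpha is.
    vu, vv, wu, wv = sp.symbols('vu vv wu wv')
    alpha = lambda a, b: E * a**2 + 2 * F * a * b + G * b**2
    assert sp.expand(alpha(vu, vv) * alpha(wu, wv) - alpha(wu, wv) * alpha(vu, vv)) == 0
    ```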

    [Administrative note: I have now merged a thread entitled ’Cogerm forms’ with this one. Comments 1–73 and 76 are from this old ’Cogerm forms’ thread.]

    • CommentRowNumber78.
    • CommentAuthorTobyBartels
    • CommentTimeNov 9th 2020

    Richard, why does this have to be done? Although the discussion of this topic is spread out, it is all linked from the bottom of the nLab page. One of those links (to https://nforum.ncatlab.org/discussion/5700/cogerm-forms/) is now broken, and while it can simply be removed now, how do we know whether anybody has put that link anywhere else? As much as possible, reorganization should not break external links.

    • CommentRowNumber79.
    • CommentAuthorRichard Williamson
    • CommentTimeNov 10th 2020
    • (edited Nov 10th 2020)

    We have been taking the Latest Changes pages as canonical for page discussion for a number of years, and doing this kind of merging of older threads into the Latest Changes threads where appropriate. Good that you noticed the link, yes, it would be good to remove it or update it. I’ll not enter into a debate about the possibility of breaking other links; it might happen, yes, but the probability of any significant negative consequences is tiny, and outweighed by the benefits of having a single, canonical page to find discussion in my opinion. E.g. it is very unlikely that I or many others would find the links to the nForum discussions when glancing at cogerm differential form, as indeed I did not in this case, but everybody can understand ’Discuss this page’ and know what to expect upon clicking upon it.

    • CommentRowNumber80.
    • CommentAuthorTobyBartels
    • CommentTimeDec 17th 2020

    There's a difference between a thread about a topic and a thread about a page (although that distinction is not made in the early comments in the merged thread). Is it possible to set up the server so that old links will redirect when merging? (That still breaks so-called PermaLinks to individual comments, but at least people will be on the correct page.)

    • CommentRowNumber81.
    • CommentAuthorDavidRoberts
    • CommentTimeJan 28th 2021

    Typo in a section title fixed

    diff, v22, current