I added some fine print, examples, and counterexamples to cogerm differential form.
Note to newbies: there is previous discussion of these at What is a variable? and related discussion at differentials. (Both of these began as discussions about teaching Calculus but got much more abstract near the end.)
Thanks!
I added a bit more about the disappointing “exterior differentials” of cogerm forms.
Here’s a way in which we want to be able to take exterior (wedge) products of cogerm forms. Recall that a Riemannian metric, while usually thought of as a symmetric bilinear covector form, can be equivalently thought of as a quadratic covector form, hence in particular a cogerm 1-form. I argued years ago, in the great forms thread on s.p.r with Eric Forgy, that the volume pseudo-form on an (unoriented) n-dimensional Riemannian manifold is the principal square root of the n-fold wedge product of the metric with itself. I'm not sure why I said that, exactly; you also have to divide by n!. At the time, I considered that this calculation took place in the second symmetric power of the exterior algebra, which is correct enough; but I'd also like it to take place in the exterior cogerm algebra. Even better, since the arclength element is the principal square root of the metric, I'd like the volume pseudo-form to simply be the n-fold wedge product of the arclength element (divided by n!).
Here is an example, to help you follow me. Given the usual metric on the plane,
(This last expression is traditionally called ‘’, but the absolute value is really there, as you can see because reversing the sign of one variable leaves the area unchanged.)
That's how I did it with Eric (except for the last step which I would not have allowed then); here's how I'd also like to do it:
In all of these calculations, operations that normally distribute over multiplication (square, principal square root, and their composite, absolute value) distribute over the wedge product, and it works.
It’s nice to have an example! Now let’s see whether we can make it an example of anything. I wonder whether germs of parametrized surfaces are too general?
Here’s one idea I had had for the exterior product of cogerm forms. Given a parametrized surface, for any angle θ we have a curve running through the surface in the direction θ. Now given two cogerm 1-forms, we evaluate them on the “perpendicular” curves in the directions θ and θ + π/2 respectively, multiply the results, and average over all θ:
I think for exterior 1-forms this gives the right answer, working it out in coordinates and using the facts that cos²θ and sin²θ each average to ½ over a full period while sin θ cos θ averages to 0. Does it give anything sensible for the metric or the arclength element?
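As a numerical sanity check of this averaging proposal, take the simplest situation where the surface is the flat plane, so that the curve in direction θ is just the straight line through a point and a form acts on its tangent vector. All the names here (`avg_wedge`, `eta`, `zeta`) and the choice of test coefficients are mine, a sketch under those assumptions:

```python
import math

def avg_wedge(eta, zeta, n=4000):
    """Average eta(c_theta) * zeta(c_{theta + pi/2}) over the angle theta,
    where c_theta is the unit-speed straight line in direction theta."""
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        u = (math.cos(theta), math.sin(theta))    # tangent of c_theta
        v = (-math.sin(theta), math.cos(theta))   # tangent of c_{theta + pi/2}
        total += eta(u) * zeta(v)
    return total / n

# Linear 1-forms a dx + b dy act on a tangent vector (t1, t2) as a*t1 + b*t2.
a, b, c, d = 2.0, -1.0, 3.0, 5.0
eta  = lambda t: a * t[0] + b * t[1]
zeta = lambda t: c * t[0] + d * t[1]

result = avg_wedge(eta, zeta)   # comes out to (a*d - b*c)/2
```

With the ½ coming from the averages of cos²θ and sin²θ, the result is half the usual dx∧dy coefficient ad − bc; so up to that conventional factor, the averaging does recover the exterior product on linear 1-forms.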
Suppose we had a wedge product of cogerm forms which, when restricted to exterior 1-forms and , yields a cogerm 2-form which acts like the ordinary wedge product of exterior forms:
where are the canonical tangent vectors to at , i.e. and . Then we would have
which seems to say that the cogerm 2-form doesn’t depend only on and , but also on and . This makes it seem unlikely that it could be equal to . Is this a wrong conclusion to draw?
By the way, I just had occasion to look at the book A geometric approach to differential forms by David Bachman, and noticed that although he talks mostly about ordinary exterior forms, he’s very explicit about there being other sorts which one may want to integrate:
…a more general integral [over a domain parametrized by ] would be , where is a function of points and is a function of vectors. It is not the purpose of the present work to undertake a study of integrating with respect to all possible functions, . However, as with the study of functions of real variables, a natural place to start is with linear functions. This is the study of differential forms… The strength of differential forms lies in the fact that their integrals do not depend on a choice of parametrization. (p31)
In the appendix, he considers the arc length element and the unoriented area form as particular examples of non-linear integrands that one may want to integrate, but he doesn’t give any general theory.
Those two examples share the strength of differential forms —that they don't depend on parametrization— and improve it: they don't depend on orientation either. (This is because they are absolute differential forms.)
I need to go and think about right now.
Yes, I was surprised that he didn’t point that out. In the appendix he seems to change his mind a bit and writes
The thing that makes (linear) differential forms so useful is the generalized Stokes’ Theorem. We do not have anything like this for non-linear forms…
Today I decided that I don’t like the definition of integration which I proposed in the other thread (and wrote on the page). Here’s why.
Right now I’m teaching my students to apply integrals to calculate volumes, arc lengths, etc., and I gave them the following method: slice up the quantity you want to compute into small pieces, find a differential form that approximates the value of each piece to first order, then integrate that differential form. For instance, in the basic case when finding an area, we slice it into pieces which are rectangles to first order, hence have area y dx (where y is the height and dx the width), then we integrate that. But suppose we instead used a differential form that is a better approximation. For instance, if we instead approximate that piece by a trapezoid, then its area would be (1/2)(y + (y + dy)) dx, which is about y dx + (1/2) dy dx. I’d like to say that if we integrate this differential form, we should get the same area; but with the definition of integration that I proposed before, we don’t.
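Here's the point in miniature (the names and the test function are mine): the rectangle form and the trapezoid form have Riemann sums converging to the same area, because the sum of the purely second-order correction vanishes with the mesh.

```python
def riemann_sums(f, a, b, n):
    """Compare the rectangle form y dx with the trapezoid form y dx + (1/2) dy dx."""
    dx = (b - a) / n
    rect = trap = corr = 0.0
    for i in range(n):
        x = a + i * dx
        dy = f(x + dx) - f(x)
        rect += f(x) * dx                  # first-order (rectangle) approximation
        trap += (f(x) + 0.5 * dy) * dx     # second-order (trapezoid) approximation
        corr += 0.5 * dy * dx              # the purely second-order correction term
    return rect, trap, corr

f = lambda x: x ** 2
rect, trap, corr = riemann_sums(f, 0.0, 2.0, 10000)
# rect and trap both tend to the area 8/3, because corr tends to 0 with the mesh.
```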
This argument suggests that integrating a cogerm 1-form of “degree > 1” should always result in zero. Symmetrically, one might argue that integrating a cogerm 1-form of “degree < 1” should always result in divergence.
Here’s a definition of integration which does have that property, and at least looks vaguely reasonable. As before, if is a curve, denote by the curve with . Also, denote by the curve with . Then we can say that a cogerm 1-form is “degree- oriented-homogeneous” if , and “degree- absolute-homogeneous” if .
Suppose is a cogerm 1-form and a curve on , and say we have a tagged partition , with tags . Write , and define the Riemann sum of over this partition to be
Now the integral is just the limit of these Riemann sums as the mesh goes to zero, if it exists. (We can take this limit either in the Riemann way or the Henstock way.)
If the form is degree-1 homogeneous of either sort, then this should agree with the other definition of the integral: just pull the Δt out by homogeneity and you have the ordinary integral over the curve. But this definition should also give zero if the form is degree-p homogeneous with p > 1, and diverge if it is degree-p homogeneous with p < 1, at least if it is otherwise well-behaved (e.g. continuous).
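Here's a rough numerical sketch of this Riemann-sum definition, with a cogerm 1-form represented as a function of a curve germ and the germ probed by finite differences. All the names, the probe step `H`, and the sample curve are my own devices, not anything from the page:

```python
H = 1e-6   # probe step for reading off a germ's first-order part

def d(germ):
    """First-order part of a curve germ at 0 (numerical derivative in the germ parameter)."""
    return (germ(H) - germ(-H)) / (2.0 * H)

def cogerm_integral(eta, c, a, b, n):
    """Riemann sum from the proposed definition: at each tag t_i, evaluate eta
    on the germ s |-> c(t_i + s * dt_i), and add the values (no extra dt factor)."""
    dt = (b - a) / n
    total = 0.0
    for i in range(n):
        ti = a + i * dt
        germ = lambda s, ti=ti: c(ti + s * dt)
        total += eta(germ)
    return total

c = lambda t: t ** 2                        # a sample curve in R on [0, 1]
x_dx  = lambda germ: germ(0.0) * d(germ)    # the degree-1 form  x dx
dx_sq = lambda germ: d(germ) ** 2           # the degree-2 form  dx^2

deg1 = cogerm_integral(x_dx, c, 0.0, 1.0, 2000)   # ordinary integral of x dx along c: 1/2
deg2 = cogerm_integral(dx_sq, c, 0.0, 1.0, 2000)  # shrinks to 0 with the mesh
```

The degree-1 form reproduces the ordinary integral, while the degree-2 form's sums shrink with the mesh, as claimed.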
I believe this integral also has another nice property: if exists for some , then it is invariant under orientation-preserving affine reparametrization of .
I don’t know whether this would help to solve any of the other problems. It does reinforce the idea that the exterior differential only makes sense on degree-1 forms (and not even all of those): if we define the exterior differential in the “Arnold” way as the limit of integrals of the form over shrinking loops, then it will automatically vanish if the form has degree > 1 and diverge if it has degree < 1, because the integral does so.
My new proposed definition of integration in #12 would mean that only linear 1-forms satisfy FTC. But perhaps that’s not too bothersome in light of the fact that FTC is a special case of Stokes’ theorem, and the general form of Stokes’ theorem requires the exterior differential.
I also think there’s the germ of a nice proof of the 1-variable FTC somewhere in this. Consider the fact that for any function , the difference is itself a cogerm form, and differentiability of says that this form differs from the differential by something of second order. Thus, given a general theorem like “the integral of a form depends only on its first-order part”, we could say that , and by choosing a suitable Riemann sum the latter simply telescopes to . That’s not very precise, of course, but I like that way of thinking about it.
This argument suggests that integrating a cogerm 1-form of “degree > 1” should always result in zero. Symmetrically, one might argue that integrating a cogerm 1-form of “degree < 1” should always result in divergence.
OK, I'll buy this, for integrals along curves. But then a cogerm k-form of homogeneous degree k should still be integrable over a k-dimensional submanifold.
Yes, I agree.
Re #5, can you say anything intuitive about why you expect this to be the case, other than the fact that the formal calculations work if you make certain assumptions? Having thought about it a bit more, my intuition doesn’t match. For instance, normally the exterior square of anything is zero; why would you expect the nonlinear case to be different? Also, the main intuition that I’ve been able to think of for why the area element should be the square of the arc length element is that area is length times width — but that’s only true for rectangles, not for arbitrary parallelograms, and a 2-form acts on all pairs of vectors (or cojets or cogerms), not just orthogonal ones.
normally the exterior square of anything is zero
That's not true for, for example, scalar fields (or more generally exterior forms with even rank). And scalar fields are, besides being exterior 0-forms, also cogerm 1-forms. So, we already know that the square of a cogerm 1-form need not be zero, and it shouldn't be surprising if ds and the metric are further counterexamples.
area is length times width — but that’s only true for rectangles, not for arbitrary parallelograms
That's just the sort of thing that we'd expect the wedge product to handle. Indeed, the exterior square is zero (when it is zero) because it gives us a flat parallelogram.
But you're mainly right, that my main reason for believing this is that the calculation seems to work. I mean, the original calculation definitely works, in the second symmetric power of the exterior calculus. But to do it in an exterior cogerm calculus, well, I don't know that calculus, so I can only say that it seems to work.
In particular, I don't know how to justify the factorial, other than that it has to be there to make the answer come out correctly.
Here I try to give a geometric justification of the factorial, for a low-level sense of ‘geometric’ and a very hand-waving sense of ‘justification’.
Start with the usual picture of ds as the hypotenuse of a right triangle with legs dx and dy. Now double this right triangle to a rectangle, with diagonal ds and area dx dy. In this picture, how do we express ds ∧ ds in terms of dx ∧ dy?
In one sense, we can't, of course. If the rectangle is a square, then , just as I want. But if the rectangle is flat, then even though .
But my claim is not that but that . And the exterior product only counts the part that's perpendicular. (That's why the exterior square is zero, after all, at least in the simplest case.) But how can any part of be perpendicular to itself?
We have a rectangle with a diagonal. But this has another diagonal, which is also ds. When the rectangle is a square, these diagonals are perpendicular, and we get . But when the rectangle is flat, these are parallel, and we get . In general, , where φ is the angle¹ that the two diagonals make with each other. And that's exactly the perpendicular part; it's just what the wedge product ought to give us.
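The arithmetic behind this picture can be checked directly: for a rectangle with legs a and b, half of the squared diagonal length times the sine of the angle between the two diagonals recovers the area ab exactly. A small script (names mine; the ½ is the 2! of the calculation):

```python
def half_dsq_sin(a, b):
    """Rectangle with legs a and b: both diagonals have length d = sqrt(a^2 + b^2).
    Return (1/2) * d^2 * |sin(phi)|, where phi is the angle between the diagonals."""
    d1 = (a, b)      # one diagonal, as a vector
    d2 = (-a, b)     # the other diagonal
    cross = d1[0] * d2[1] - d1[1] * d2[0]    # = 2ab
    dsq = a * a + b * b                      # = d^2 for both diagonals
    if dsq == 0.0:
        return 0.0
    sin_phi = abs(cross) / dsq               # |sin phi| via the cross product
    return 0.5 * dsq * sin_phi

# A square gives sin(phi) = 1; a flat rectangle gives sin(phi) = 0; in every
# case (1/2) * d^2 * |sin phi| is exactly the area a*b.
```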
Somehow, this should reflect something about the exterior cogerm calculus. But I don't know how. (Particularly when all of this talk of angles and orthogonality refers to the metric, or at least to the conformal structure, which we have when we talk about but which should not be available in the exterior cogerm calculus itself.)
Of course, they cross and so make four angles, which are only equal in pairs. But these angles' sines are all equal. ↩
Well, that's extremely hand-wavy when you realize that every time that I wrote in that last comment, I was really only justified in writing . And if I generalize to dimensions, it becomes an argument that instead of as my algebraic calculation gave. So I don't know what's up with that!
I haven’t digested #18 yet, but re #17, you’re right that I should have said the exterior square of any 1-form is zero. However, if we are to embed the cogerm 1-forms in some context that also contains cogerm k-forms for other k, then it might be that we will have to regard scalar functions as cogerm 0-forms separately from their incarnation as cogerm 1-forms. It would then be perfectly consistent for the exterior square of a scalar function qua cogerm 0-form to be nonzero (being another cogerm 0-form), while its square qua cogerm 1-form is zero (specifically, the zero cogerm 2-form). For instance, that’s what happens for the “cojet differential forms” which I proposed in the other thread.
A notational question: is there a deep reason that you usually write for the arc length element? Is it just that you want a symbol that looks like the traditional notation but doesn’t suggest that it is actually the differential of something?
Yes, that's it. It's a notation that's used in thermodynamics; for example in the first law of thermodynamics
(the total change in the energy of a subsystem is the sum of the heat transferred into that subsystem and the mechanical work done on that subsystem). It's an important point that (contrary to the caloric theory of heat) there is no such thing as , only .
Nice, thanks.
Here’s an improvement of my vague remarks in #12. Let’s say that a cogerm 1-form is “o(dt)” if, for any curve, we have
A degree-p homogeneous form is clearly o(dt) for any p > 1. I claim that if a form is o(dt), then it is Henstock integrable over any curve (defined on a closed interval), and its integral is zero.
Suppose ; then for any there is a such that for any . This defines a gauge on . Now suppose we have a -fine tagged partition with tags , so that . Then the corresponding Riemann sum is, by definition, . Since , each . Thus, when we sum them up, we get something less than . Thus, for any there is a gauge such that the Riemann sum over any -fine tagged partition is ; so the Henstock integral is zero.
It follows that if any such form is Riemann integrable, then its Riemann integral is zero. I suspect that there are probably forms that are o(dt) but not Riemann integrable, but I haven’t come up with any yet.
I think my suggested proof of FTC in #13 can also be made to work. Say we assume that is continuous, hence integrable (I think the usual proof that continuous functions are integrable will show that any continuous linear 1-form, such as the differential of a function, is integrable). Then by definition, differs from by something , and the sum of integrable forms is integrable; thus is integrable. But any left Riemann sum of telescopes to ; hence that must be its integral, and thus also the integral of .
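Here's the telescoping in miniature (names mine; the test function is a placeholder): presenting ΔF and the linear form F′(x) dx as functions of a tag and an increment, the left sum of ΔF telescopes exactly to F(b) − F(a), while the left sum of the linear part is an ordinary Riemann sum, and the two agree in the limit because their difference is a sum of o(dt) terms.

```python
import math

def left_sum(form, a, b, n):
    """Left Riemann sum of a cogerm form presented as form(t_i, dt_i)."""
    dt = (b - a) / n
    return sum(form(a + i * dt, dt) for i in range(n))

F = math.sin                                   # so F' = cos
dF      = lambda t, dt: F(t + dt) - F(t)       # the cogerm form "Delta F"
F_prime = lambda t, dt: math.cos(t) * dt       # the linear form F'(x) dx

telescoped = left_sum(dF, 0.0, 2.0, 4000)      # telescopes exactly to F(2) - F(0)
linear     = left_sum(F_prime, 0.0, 2.0, 4000) # ordinary Riemann sum of F' dx
```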
Here’s another way to approach the question of #5: suppose we know how to measure lengths (with some 1-form ); how can we measure (say) areas? The first thing we might do is use the polarization identity to recover the inner product:
Note that if we replace by a linear form , then this would give .
Now , whereas the area spanned by and should be . Thus, we can compute the area as
And if we replace by a linear form here, we would get zero. So at least we have some operation on “nonlinear covector” 1-forms that yields a “nonlinear covector 2-form”, which when applied to gives the area form, and when applied to a linear 1-form gives zero, so we might hope to call it the “exterior square”.
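A quick numerical check of this construction (the names are mine, and the Euclidean norm stands in for the length element): polarization followed by the square-root formula gives the parallelogram area when applied to ds, and exactly zero when applied to a linear 1-form.

```python
import math

def area_form(eta, v, w):
    """The proposed 'exterior square' of a length-like 1-form eta: recover an
    inner product by polarization, then take sqrt(|v|^2 |w|^2 - <v,w>^2)."""
    vw = tuple(x + y for x, y in zip(v, w))
    inner = 0.5 * (eta(vw) ** 2 - eta(v) ** 2 - eta(w) ** 2)
    val = eta(v) ** 2 * eta(w) ** 2 - inner ** 2
    return math.sqrt(max(val, 0.0))   # clamp tiny negatives from rounding

ds = lambda v: math.hypot(v[0], v[1])         # the Euclidean length element
linear = lambda v: 2.0 * v[0] + 3.0 * v[1]    # a linear 1-form

v, w = (2.0, 0.0), (1.0, 2.0)
area = area_form(ds, v, w)      # parallelogram area |2*2 - 0*1| = 4
zero = area_form(linear, v, w)  # identically zero on a linear form
```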
But I can’t think of a way to tease an operation on pairs of 1-forms out of this. I’m also ambivalent about it anyway because it doesn’t seem canonical; e.g. there are several forms of polarization, which are the same for or a linear form, but it seems that they might be different on other forms.
Re: #18, it doesn’t seem right to me to say “the exterior product only counts the part that’s perpendicular”. There is no notion of “perpendicular” until we have a metric, and the exterior product is defined without any metric. I feel like is measuring area in one way (sort of declaring and to “be perpendicular” for the purposes of area), while a “product” would be measuring area in a different way (obtaining a notion of “perpendicular” from the factors rather than declaring them to be perpendicular to each other).
Here’s a question: suppose a manifold has two metrics g and g′, giving rise to two length elements ds and ds′. What would you expect ds ∧ ds′ to measure?
Re #24, here’s something. Maybe I was half asleep last night, because now I think this is the “obvious” thing to try. Given two covector 1-forms and , consider the covector 2-form defined by
If , this gives the inner product as in #24. But if and are linear, then this gives the symmetrized product
Now consider the covector 2-form
If , this gives the area element as in as in #24. But if and are linear, then it gives the absolute value of their exterior product,
(with the unpopular convention rather than for the factorial factors). This raises a bunch of questions:
there are several forms of polarization, which are the same for or a linear form, but it seems that they might be different on other forms
They are the same if and only if the parallelogram identity holds. Which, of course, it typically won't. (This result is for any operation on an abelian group in which 2 is invertible; the operation doesn't have to have any properties of the square of a norm. And while I'm at it, 2 doesn't even have to be invertible if you take the polarization identities to define 2 times the inner product.)
Your version of the polarization identity is perhaps the most even-handed. But in general, it's not even-handed enough; besides v + w and v − w, you also ought to include w − v and −v − w.
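The point about the parallelogram identity is easy to illustrate numerically (a sketch, with my own names): take the squared taxicab norm, which fails the parallelogram identity, and the two- and four-term polarization formulas already disagree, while they agree for the squared Euclidean norm.

```python
def add(v, w): return tuple(x + y for x, y in zip(v, w))
def sub(v, w): return tuple(x - y for x, y in zip(v, w))

def polar_two_term(Q, v, w):
    """(Q(v + w) - Q(v) - Q(w)) / 2"""
    return (Q(add(v, w)) - Q(v) - Q(w)) / 2.0

def polar_four_term(Q, v, w):
    """(Q(v + w) - Q(v - w)) / 4"""
    return (Q(add(v, w)) - Q(sub(v, w))) / 4.0

Q_euclid  = lambda v: sum(x * x for x in v)        # satisfies the parallelogram identity
Q_taxicab = lambda v: sum(abs(x) for x in v) ** 2  # fails it

v, w = (1.0, 0.0), (0.0, 1.0)
same = (polar_two_term(Q_euclid, v, w), polar_four_term(Q_euclid, v, w))    # agree: (0, 0)
diff = (polar_two_term(Q_taxicab, v, w), polar_four_term(Q_taxicab, v, w))  # disagree: (1, 0)
```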
You mean
(squaring the dot product).
What’s up with the absolute value? Is there any way to get the non-absolute-valued exterior product?
It looks like the problem of getting the non-absolute-valued (or rather non-normed) cross product. This is fixed (up to orientation, when even that can be done) by also requiring the cross product of two vectors to be orthogonal to both vectors. But I don't see an analogue of that condition here.
it doesn’t seem right to me to say “the exterior product only counts the part that’s perpendicular”
Not in general, but in this particular case we do have a metric; you talked this way in #16 too. But I'm pretty much convinced that #18 is nonsense anyway; it gives the wrong answer, after all.
Here’s a question: suppose a manifold has two metrics g and g′, giving rise to two length elements ds and ds′. What would you expect ds ∧ ds′ to measure?
Good question! I don't know, but I think that I can calculate it.
For definiteness, let the manifold be , so we have
with , , and .
I need to establish a lemma. I have already surmised that
but what about ? (which is also a symmetric bilinear form and so may appear in a metric). Polarization lets me rewrite this using squares:
dividing by , . Note the special case (and similarly with the square on the other side.)
Then I get
Taking square roots, .
If , then the expression under the square root is twice the determinant of the matrix of the coefficients of , so we have times the area element (which is the same answer that I've gotten before in that case). But I'm not sure what to make of itself; at least it is positive, since
(And strictly so; that is, the wedge product of two positive-definite length elements is also positive-definite; we never get zero.)
Thanks for the missing square; I’ve fixed it in the original comment.
in general, it's not even-handed enough; besides v + w and v − w, you also ought to include w − v and −v − w.
You’re absolutely right. And it seems to me that even that might not be even-handed enough, when our forms may not be either linear or quadratic. It feels like maybe we ought to include everything in the lattice generated by v and w.
Re: #28 and 29, that’s intriguing. I don’t know what to make of either. And I guess I don’t really know how to think about a manifold that has two different metrics.
I’ve added #12 and #23 to the page cogerm differential form. One thing that is still missing is a good theorem about which cogerm forms are integrable (with either definition).
I added a theorem about existence and parametrization-invariance of integration. I’m not completely satisfied with it, but it’s the best I’ve been able to come up with.
Here’s an interesting example: . It is degree-1 homogeneous with sign, so its integral reverses with orientation. Its integral around any closed circle or parallelogram is zero, but around other closed curves (like triangles) it has a nonzero integral.
In general, its integral along a line of angle to the -axis is times the length of that line. Thus, along horizontal and vertical lines it just measures (signed) length, while along lines it measures something less than length, and along lines it measures zero. (This is similar to , except that the latter measures more than length along lines, in such a way that its integral around any closed curve works out to zero — as it must, since the form is exact.)
Being degree-1 homogeneous with sign means in particular that if we take a region and divide it into two regions and , then the integral around the boundary of is equal to the sum of the integrals around the boundaries of and . This suggests that there might be a way to define the exterior differential of this 1-form in such a way that Stokes’ theorem would hold. (Note that since the analogous subdivision property for boundary integrals fails for , it can’t have an exterior differential satisfying Stokes’ theorem.) However, I haven’t been able to think of such a definition.
Working in the plane, we ought to be able to calculate this exterior derivative as an ordinary 2-form. If every sufficiently regular oriented region is given a measure by integrating around it, and if this is additive and reasonably continuous, then this integral must be given by an absolutely continuous Radon pseudo-measure, hence an exterior 2-form.
But this can't be right, because (as you say) the measure of any rectangle is zero, while the measure of some triangles is not. So it must violate continuity. Indeed, if we approximate a triangle from inside and out by rectangles, then we approximate its measure as zero, which it is not, so we don't have a Radon measure.
Well, this is just reporting a negative result, but those don't get published as much as they should, so here it is.
That’s a good observation, thanks. So maybe we should ask, is integration of cogerm 2-forms on the plane necessarily continuous? Because if so, that means we can’t define such an exterior derivative there either.
While I’ve got you here, I have an unrelated question. When teaching multiple integration, what notation do you use for iterated integrals as in Fubini’s theorem? Always insisting on writing
seems tedious and cumbersome, but omitting the parentheses and writing
looks like we are integrating with respect to a product , which we are not (in particular, because , both being equal to ).
Well, I wouldn't write either of those, at least not until very late in the course when we all know what it means and I'm abbreviating. You're not specifying the region of integration, and that's key.
I start with
but abbreviate this fairly early on as
(which is what they write in the textbook). This is an iterated integral.
Then I introduce the separate concept
which is defined (for bounded ) as the limit of Riemann sums, each given by a tagged rectangular mesh. This is pretty obviously equal to
(with an obvious bijective correspondence between the relevant tagged meshes), and these can be abbreviated as
This is a double integral. (I am not yet discussing what sort of thing actually is, or even that the and in this expression are really and . In fact, at this point I'm still writing ‘’, like they do in the book. I don't really examine what kind of a thing this is until I get to change of variables. It's no secret that I'm going to discuss this, but I don't discuss it right away.)
Then I state (without proof, we don't have that kind of time) the Fubini Theorem as the statement that the iterated integral is equal to the obvious corresponding double integral (when , , and are continuous). In particular, if a region can be written as an iterated integral in both ways, then these iterated integrals are equal. Then further immediate corollaries, not explicitly stated in detail, about regions that can be divided into subregions amenable to iterated integrals. So the lesson is that we really care about double integrals, while iterated integrals are the method used to evaluate them analytically.
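For what it's worth, here's the correspondence between the two kinds of sums in miniature (my names; midpoint tags on a uniform mesh, and a placeholder integrand): the double sum over the tagged rectangular mesh and the iterated sum are literally the same numbers added in a different order.

```python
def double_sum(f, a, b, c, d, n):
    """Double integral over [a,b] x [c,d]: one Riemann sum over a tagged n x n mesh."""
    dx, dy = (b - a) / n, (d - c) / n
    return sum(f(a + (i + 0.5) * dx, c + (j + 0.5) * dy)
               for i in range(n) for j in range(n)) * dx * dy

def iterated_sum(f, a, b, c, d, n):
    """Iterated integral: an inner sum over y for each fixed x, then a sum over x."""
    dx, dy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        inner = sum(f(x, c + (j + 0.5) * dy) for j in range(n)) * dy  # inner integral at fixed x
        total += inner * dx
    return total

f = lambda x, y: x * y + 1.0
A = double_sum(f, 0.0, 1.0, 0.0, 2.0, 200)    # both approximate the same number,
B = iterated_sum(f, 0.0, 1.0, 0.0, 2.0, 200)  # here the integral 3
```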
Ok, but you do write in both the double integral and the iterated integral.
Ok, but you do write in both the double integral and the iterated integral.
Yes, but I make it clear (or try to) that the iterated integral is fundamentally the expression with parentheses, so that is not a thing in it. It is a thing in the double integral, but I don't examine closely what that thing is until we have a little practical experience.
By the way, there is still a slight issue in the inner integral in the iterated integral. I do line integrals before area integrals (which the book also does but in the case of arclengths only), originally so that they'll have had some experience manipulating differential forms before I spring on them when we get to change of variables. But another benefit is that I can now point out that
is really a line integral along a line of constant x-value.
Have you read anything by Solomon Leader? I’m just having a look at his book The Kurzweil-Henstock integral and its differentials in which he has an interesting definition of differentials. He defines a summant to be a function on intervals tagged with an endpoint, which is basically to say tangent vectors, and then defines the integral of a summant basically just as I did for cogerm forms. Then he defines a differential to be an equivalence class of summants, where if . Then the differential of a function is the equivalence class of the summant which is defined by ; we could write that as . If is differentiable, then as differentials, i.e. their summants are equivalent – this is what contains the content of the fundamental theorem of calculus. But is defined for any function , and its integrals are Riemann–Stieltjes integrals.
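Here's a sketch of Leader's setup as I understand it (all the names and the sample functions are mine): a summant is a function of a tagged interval, its integral is a Riemann-type sum, and the summant ΔF both telescopes (giving the FTC-like behavior) and produces Riemann–Stieltjes integrals.

```python
import math

def integrate_summant(S, a, b, n):
    """Sum a summant S(tag, u, v) over a tagged partition of [a, b], tag = left endpoint."""
    dt = (b - a) / n
    return sum(S(a + i * dt, a + i * dt, a + (i + 1) * dt) for i in range(n))

F = lambda x: x * x                                     # the 'integrator' function
dF   = lambda tag, u, v: F(v) - F(u)                    # Leader-style summant "Delta F"
g_dF = lambda tag, u, v: math.cos(tag) * (F(v) - F(u))  # Stieltjes-type summant g * Delta F

total     = integrate_summant(dF, 0.0, 1.0, 5000)    # telescopes to F(1) - F(0) = 1
stieltjes = integrate_summant(g_dF, 0.0, 1.0, 5000)  # Riemann-Stieltjes: int cos(x) d(x^2)
```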
I added some remarks about delta functions and Riemann-Stieltjes integrals to cogerm differential form.
No, I've never heard of Solomon Leader. It looks like I should read him!
I’m coming back to the idea of parametrization invariance, such as the parametrization invariance (or lack thereof) of the integration of higher-order¹ forms such as . (Recall from the page that , at least when and are each twice continuously differentiable, for the so-called ‘genuine’ integral.)
It seems to me that parametrization invariance is a form of the Chain Rule, and we know that the Chain Rule can be tricky to apply to higher-order derivatives. I believe that the formula above for the integral of is wrong in the same way that is wrong. This formula seems to work under affine change of variables, just as the ‘genuine’ integral seems to work under affine reparametrization, but otherwise we can see that it fails.
In fact, knowing that , I would calculate as follows:
If , then this simplifies to the result for the ‘genuine’ integral, but not otherwise.
I'm convinced that this formula is correct, but unfortunately I don't know how to interpret it!
I have settled on the following language for order/degree/rank: The rank of a form indicates what dimension of submanifold it is integrated on; rank is increased by using the wedge product or applying the exterior differential, the cojet differential doesn't affect the rank, and the rank of a sum must agree with all addends. The degree indicates the power to which a positive scaling factor is raised when a form is applied to a small scaled (multi)-vector/curve; the degree is increased by multiplication or by applying the cojet differential, the exterior differential doesn't affect the degree, and the degree of a sum is the minimum degree of the addends. The order of a form indicates the highest order of derivatives of a curve that affect the value of the form at that curve; the order is increased by applying the cojet differential, the exterior differential doesn't affect the order, and the order of a sum or product is the maximum order of the addends. For example, has rank and degree but order , has degree but rank and order , and has degree and order but rank . In particular, has order but rank and degree . ↩
Interesting — so how do we calculate ?
Hmm… can we extend the “affine integral” to act on forms of order ? If so, maybe that is actually the right definition?
Shooting from the hip, what if we consider an order-2 rank-1 form to be a function and integrate it by adding up , where ?
On the other hand, according to the idea here, the cojet differential would be replaced by the symmetric part of the coflare differential, whose antisymmetric part is the exterior differential — all of which increase the rank. In that world, isn’t something we ought to think of integrating over a curve at all.
Darn it, I knew that I was missing a thread! Now I have to read and contemplate that again. (I see that I was enthusiastic about it a year ago.)
I thought of #47, but I also thought that it smacked of using only right or left endpoints (or, since it's so symmetric, midpoints) in an ordinary Riemann sum. Since a tagged partition is a partition within a partition, I thought of using a partition within a partition within a partition, but then where would it end?
The affine integral is bad, because it only works in affine spaces, but it also suggests the Stieltjes integral, which makes sense in more general contexts. Unfortunately, I don’t know how to make it work for an arbitrary cogerm (or even cojet) form given as an operation on curves, rather than given as an expression in symbols like .
If we do want to integrate along a curve, then a basic question is what the integral should be, where our curve is a simple affine increasing parametrization of . I can’t think of anything sensible for it to equal except , with the argument that this parametrization has “no second derivative”. If that’s the case, then to make the integral parametrization-invariant with your formula, then we’d have to have for any such parametrization , so the two terms would have to exactly cancel somehow. I can sort of imagine how a second-order Taylor formula might cause that sum to reduce to something third-degree so that its square root would be greater than first-degree and hence integrate to zero. But it seems that the same argument would suggest that the integral of should also be zero, and likewise and so on, and it seems less and less likely to me that in those cases we could make the two terms in the change-of-variables formula cancel.
Another thought that I had about affine integration is that if we really want to use it, but balk because we don't know what ‘affine’ means in general, then perhaps we can only integrate generally in the presence of an affine connection (not much there yet, so press on to Wikipedia), which tells us what is affine.
Since I first learnt differential geometry through general relativity, I learnt about the exterior differential as the antisymmetrization of the covariant derivative, which uses the affine connection and applies to any tensor. It's then a great theorem that the action of this antisymmetrization on an antisymmetric contravariant tensor (which is an exterior form) doesn't depend on which connection is used. Here, we have a notion of differential that can be applied without a connection, but maybe we still need a connection to integrate the resulting forms. But of course the result won't depend on the connection, as long as we integrate only exterior forms.
Edit: It seems that we were making the same point a year ago in the thread that I forgot.
I would have thought that the affine integral could be applied on any manifold by using local charts. Do you think that’s not the case?
I thought that you were suggesting on the page that it wasn't. It comes down to whether different local charts give the same results; I haven't checked.
I think it does, at least in the Lipschitz case; I tried to write a proof on the page (end of the section)
OK, I'll buy it in the Lipschitz case.
I used the affine integral to define integration on a curve in my Calc 3 class the other day, even though I'm not using it to prove any theorems beyond hand-waving. I defined the mesh of a Riemann sum to be the maximum distance between points in space rather than the maximum difference between values of the parameter, mainly because that was easier to point to in a diagram. But now I realize that you did it the other way (implicitly in the phrase ‘as before’).
It seems to me that the slick proof that the affine integral is parametrization-independent requires my mesh to go through, rather than yours. Of course, for Lipschitz forms, it works either way; everything is better with Lipschitz!
Hmm, you may be right.
Hmm, it seems to me that what’s needed to make the two meshes coincide is continuity of the curve, not the form.
Yes, I was just thinking about that. As long as the curve is uniformly continuous, it should be all right, at least for the Riemann integral.
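A small numeric sketch of the two notions of mesh being compared here: the maximum difference between successive parameter values versus the maximum distance between successive points on the curve. The helper names (`parameter_mesh`, `spatial_mesh`) are mine, not standard; the point is just that for a Lipschitz curve the spatial mesh is controlled by the parameter mesh, so both go to zero together.

```python
import math

def parameter_mesh(ts):
    """Maximum difference between successive parameter values."""
    return max(t2 - t1 for t1, t2 in zip(ts, ts[1:]))

def spatial_mesh(c, ts):
    """Maximum distance between successive points on the curve c."""
    return max(math.dist(c(t1), c(t2)) for t1, t2 in zip(ts, ts[1:]))

# Example: c(t) = (cos t, sin t) is Lipschitz with constant 1, so
# |c(s) - c(t)| <= |s - t| and the spatial mesh is bounded by the
# parameter mesh.
c = lambda t: (math.cos(t), math.sin(t))
ts = [k * math.pi / 8 for k in range(9)]  # partition of [0, pi]

assert spatial_mesh(c, ts) <= parameter_mesh(ts) + 1e-12
```

For a merely uniformly continuous curve the inequality fails, but the spatial mesh still tends to zero as the parameter mesh does, which is what the Riemann-integral argument above needs.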
Toby, did you ever reach a conclusion about the argument in #8 above?
If is determined up to sign by and is determined up to sign by , then may still be determined by and , since the signs in will cancel. However, that does seem to suggest an odd formula for the wedge product between -forms.
Hmm. What’s the odd formula that it suggests?
Something like
which doesn't entirely make sense.
But I don't really believe this anyway. Now that we're looking at coflare forms, this set-up isn't even relevant; we should be applying to a -flare, not to two tangent vectors derived from a parametrized surface. However, I'm finding it hard to think through all of that.
That seems right if by we mean , but if we go back to the motivating #5 with coflares, it seems that has to be the valuewise square (keeping the rank constant) since we want it to match with a square root?
Here’s a sort of exterior product for coflare forms that at least comes close to giving the right answer “” for a metric on a 2-manifold.
Suppose and are both coflare forms of rank . Then (meaning ) is a coflare form of rank . Note that the symmetric group acts on coflare forms of rank , because it acts on -flares. Let be the subgroup of generated by transpositions of the form ; thus for instance . Observe that . Now define
(Hmm, I suppose there should probably be a factorial coefficient in front of that.) The point is that at least if and depend only on the first-order tangent vectors in a flare, then acts on by swapping corresponding arguments of and . (I find it easier to think here of as a matrix rather than things in a row.)
When , this clearly gives the correct wedge product for exterior 1-forms. More generally, I think it also gives the right answer when and are exterior -forms: in that case is already antisymmetric under the two actions of , so antisymmetrizing under forces it to be antisymmetric under all of .
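Here is a hedged computational sketch of this proposed product for two rank-k forms (each a function of k vectors), reading the subgroup H as generated by the k transpositions that swap the i-th argument of the first factor with the i-th argument of the second, as described above; the name `wedge_H` and that reading of H are my assumptions.

```python
from itertools import product

def wedge_H(eta, omega, k):
    """Antisymmetrize eta * omega over H = (Z/2)^k, where the i-th
    generator swaps argument i of eta with argument i of omega."""
    def result(*vs):  # vs has length 2k
        total = 0.0
        for bits in product((0, 1), repeat=k):
            ws = list(vs)
            for i, b in enumerate(bits):
                if b:
                    ws[i], ws[k + i] = ws[k + i], ws[i]
            sign = (-1) ** sum(bits)  # each generator is a transposition
            total += sign * eta(*ws[:k]) * omega(*ws[k:])
        return total
    return result

# For k = 1 this reduces to the usual wedge of 1-forms:
dx = lambda v: v[0]
dy = lambda v: v[1]
w = wedge_H(dx, dy, 1)
v1, v2 = (1.0, 2.0), (3.0, 4.0)
assert w(v1, v2) == dx(v1) * dy(v2) - dx(v2) * dy(v1)  # = -2
```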
But now if we have a metric regarded as a coflare 2-form:
we can compute its exterior square as follows. By construction, for . In particular, if or . Thus all the terms in vanish except for
Now because and is even, we have and . Finally, because and is odd, we have . Thus,
Note that by definition we have
On the other hand, we have
These differ exactly by the action of , which is a “transposition” of the “ matrix” representing the arguments of these coflare 4-forms. Therefore, it seems that if we
we should get the desired .
However, right now I don’t see any way to define that diagonal in a coordinate-invariant way. (If we could solve that problem, I expect the weird-looking transposition would get incorporated into the choice of one particular “map ” rather than another one.)
Moreover, this only defines the wedge product of two forms of the same rank, which seems clearly unsatisfactory. I haven’t thought yet about whether it generalizes to, for instance, 3-manifolds, where we’d have to find some way to define .
Hmm, well the diagonal (combined with transposition) does make sense coordinate-invariantly for order-1 forms, which are all that we have here. But it would be a bit disappointing to have to use in this construction an operation that doesn’t make sense in generality.
Another possibility that occurs to me is that as we noted when I first suggested coflare forms, an affine connection is a section of the projection , thereby giving a way to “forget” from an arbitrary rank-2 (hence order ) coflare form down to a rank-2 order-1 one. And we know that a metric gives rise to an affine connection. Perhaps a connection can be generalized to give a way to forget the higher-order terms in a coflare form of arbitrary rank? Performing such an operation first in order to make a “diagonal” make sense seems reasonable, especially if our eventual goal is to integrate the result, since (“genuine”) integrals don’t generally notice higher-order terms.
An affine connection by itself won't allow us to collapse the higher-order parts of a coflare form, which is clear from your explicit formula in the other thread for the collapsing operation:
As you say there, the transformation rules for Christoffel symbols ensure that this is invariant, which they can do since those transformation rules involve second derivatives. But nothing could handle unless its transformation rules involve third derivatives, which Christoffel symbols' rules don’t. And an affine connection is determined by its Christoffel symbols.
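For reference, the usual transformation law for Christoffel symbols under a change of coordinates, whose inhomogeneous term involves only second derivatives of the coordinate change (which is why nothing built from them alone can absorb third-derivative terms):

```latex
\Gamma'^{k}_{ij}
  = \frac{\partial x'^{k}}{\partial x^{c}}
    \frac{\partial x^{a}}{\partial x'^{i}}
    \frac{\partial x^{b}}{\partial x'^{j}}\,
    \Gamma^{c}_{ab}
  + \frac{\partial x'^{k}}{\partial x^{c}}
    \frac{\partial^{2} x^{c}}{\partial x'^{i}\,\partial x'^{j}}
```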
However, a metric should be plenty of information! For simplicity, let's take a Riemannian metric, so as not to worry about strange sign conventions in the semi-Riemannian case. In that case, you simply pick (at any given point) a system of coordinates that's orthonormal relative to the metric, in which case is simply , so (no summation). And this generalizes to any order:
Now all that we have to do is to obfuscate this by identifying what this looks like in an arbitrary system of coordinates.
Actually, the previous comment is simpler than it should be, because it tacitly assumes that the system of coordinates is orthonormal not just at the point in question but on an infinitesimal neighbourhood of it, which may not be possible.
Having said all of that, however, I don't think that the Levi-Civita connection can be relevant. The volume element at a given point depends only on the metric at that point, whereas the Levi-Civita connection there also involves the derivatives of the metric.
An affine connection by itself won't allow us to collapse the higher-order parts of a coflare form
Right, I expected as much; I was thinking along the lines of your second paragraph, whether a metric would also give rise to a sort of “higher connection”. I wonder if this is something someone else has written down?
I don’t think that the Levi-Civita connection can be relevant. The volume element at a given point depends only on the metric at that point, whereas the Levi-Civita connection there also involves the derivatives of the metric.
The connection (or higher connection) isn’t going to arise in the volume element itself. I’m only proposing the connection as a way to make the “diagonal” operation make sense for arbitrary coflare forms. But in defining the volume element, we only need to take diagonals of order-1 forms, and in that case the diagonal already makes sense; the connection would only arise when deciding what the diagonal does to things like .
The proposal in #66 does generalize to “” on an -manifold (BUT see my next comment for an important caveat).
In general, given coflare forms all of rank , let be the subgroup of generated by transpositions of the form , which is isomorphic to . Define
Now let be a metric regarded as a coflare form of rank 2. Each term in looks like
By construction, this vanishes if or for any . Thus, and are each permutations of . We can permute the s and s together without introducing any signs, so we may as well assume that for all . Thus we have a sum over all possible permutations , with each permutation occurring times:
Now if we permute the s back to the identity, we introduce the sign of that permutation:
i.e.
Now there is a diagonal operation from rank- forms to rank- forms (at least on order-1 forms like these, and presumably extendable with a metric to arbitrary ones in a way that reduces to the obvious one on order-1 forms) that takes this to
If we now divide by and take the square root, we obtain the correct volume element, as an absolute rank- coflare form.
More generally, I bet that for a similar construction will produce the standard -volume element in an -manifold. For it of course gives the line element , while for , and the standard metric on I think we do get the right answer .
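To make the n = 2 check explicit, here is my reconstruction of the computation for the standard metric on the plane, under the assumptions that the wedge antisymmetrizes over the swaps of corresponding arguments of the two copies of the metric and that the diagonal (after the transposition) evaluates at (v, v, w, w); the combinatorial factors are as I compute them, not taken from the original.

```latex
% n = 2, standard metric on R^2 as a rank-2, order-1 coflare form:
% g(v, w) = v \cdot w.
(g \wedge g)(v_1, v_2, w_1, w_2)
  = 2\bigl[\, g(v_1, v_2)\, g(w_1, w_2) - g(v_1, w_2)\, g(w_1, v_2) \,\bigr]
% Taking the diagonal after the transposition, i.e. evaluating at
% (v, v, w, w), and using Lagrange's identity
% |v|^2 |w|^2 - (v \cdot w)^2 = (v^1 w^2 - v^2 w^1)^2:
(g \wedge g)(v, v, w, w)
  = 2\bigl[\, g(v, v)\, g(w, w) - g(v, w)^2 \,\bigr]
  = 2\,\bigl( dx(v)\, dy(w) - dy(v)\, dx(w) \bigr)^2
% Dividing by 2! and taking the principal square root then gives
% |dx \wedge dy|, the standard area element.
```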
BUT I was wrong in #66 that this definition of (and its generalization in #72) is correct for exterior forms! I think I thought that because together with generates , but that’s not enough. E.g. for exterior 2-forms and this definition would give
whereas the correct formula is something like
So now I don’t know what to do.
I don't know where it's written down, but somewhere somebody (probably Mike) suggested that coflare forms of rank could be seen as functions on germs of maps from . So I put this in here, in the section near the end where coflare forms are mentioned. (This gives a notion of coflare-cogerm form that includes all cogerm forms, not just cojet forms, as well as all coflare forms, thus including exterior forms, absolute forms, and even twisted forms with enough care. And why restrict the domains to only …?)
My current opinion about how to apply (and the symmetric version as well) is as follows: To multiply a -form and a -form , take all permutations on letters that keep the first in order and the last in order. So for example, if and are both , then there are permitted permutations: , , , , , and . The first letters tell you which vectors to apply to, while the last tell you which to apply to. If you're multiplying antisymmetrically, multiply by the sign of the permutation. Add up, and (if you're following this convention) divide by the number of permutations that you used. (Thus the symmetrized product is an average.)
So, if and are exterior -forms, then is a -form:
which is exactly what Mike wanted in the previous comment. Notice that this is multilinear (since and are) and alternating (since and are). The usual textbook way to define the exterior product of exterior forms would give terms (and divide by or , depending on convention), not just , but if and are already alternating, then these come in groups of equal terms, so the result is the same. But some textbooks save on terms by defining things as I did (see this definition on Wikipedia for example, at least currently).
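Here is a runnable sketch of that convention: sum over the permutations of p + q letters whose first p entries are in order and whose last q entries are in order, with signs, applying the first form to the vectors indexed by the first p entries and the second form to the rest. The helper names (`sign`, `permitted`, `wedge`) are mine.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation, by counting inversions."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def permitted(p, q):
    """Permutations of 0..p+q-1 with the first p entries increasing and
    the last q entries increasing (for p = q = 2: the six words
    1234, 1324, 1423, 2314, 2413, 3412 in 1-indexed notation)."""
    for perm in permutations(range(p + q)):
        if all(a < b for a, b in zip(perm[:p], perm[1:p])) and \
           all(a < b for a, b in zip(perm[p:], perm[p + 1:])):
            yield perm

def wedge(alpha, p, beta, q):
    """Antisymmetrized product of a p-form and a q-form (no averaging,
    since each permitted permutation appears exactly once)."""
    def w(*vs):
        return sum(sign(perm)
                   * alpha(*(vs[i] for i in perm[:p]))
                   * beta(*(vs[i] for i in perm[p:]))
                   for perm in permitted(p, q))
    return w

# Sanity checks: C(4, 2) = 6 permitted permutations when p = q = 2, and
# for two linear 1-forms we recover the usual wedge product.
assert len(list(permitted(2, 2))) == 6
dx = lambda v: v[0]
dy = lambda v: v[1]
v, w = (1.0, 2.0), (3.0, 4.0)
assert wedge(dx, 1, dy, 1)(v, w) == dx(v) * dy(w) - dx(w) * dy(v)
```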
If and are not already alternating, then neither should be, and the definition with terms will incorrectly make it alternating. But the definition with terms will not. And nothing stops this from being applied to forms that aren't multilinear or always defined either.
Also note that this operation is associative. The generalized version says that to multiply a list of forms of ranks , you look at the permutations on letters that keep the first letters, the next letters, etc through the last letters in order, apply the forms to the vectors given by the appropriate indexes, multiply by the sign of the permutation if you're multiplying antisymmetrically, add them up, and (optionally) divide by the number of terms.
Now to apply this to the metric . There are two ways to think of , as a bilinear symmetric form of rank , or as a quadratic form of rank . In local coordinates on a surface parametrized by and for example (Gauss's first fundamental form), the first version is , while the second is .
Taking the first version, is a -term expression that simplifies (after much cancellation) to , and I'm going to stop writing it down after (what came from) the first terms, because these are all of the ones with , and those should have cancelled completely (not just partially as happened here). The unbalanced nature of the signs has ruined this. Taking the second version, is an -term expression that simplifies all the way to ; in fact, whenever is a -form, which is well-known when is linear but true regardless.
So neither of these is working out! (For the record, the answer that we were looking for here is , possibly with a constant factor to worry about later.)
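The second failure above is easy to check mechanically: under the antisymmetrized product, the square of any rank-1 form vanishes identically, linear or not, because the two permitted permutations contribute equal terms with opposite signs. A minimal sketch, using the (nonlinear) quadratic-version length element as the example; the names `ds` and `wedge11` are illustrative.

```python
import math

# ds(v) = sqrt(v1^2 + v2^2): a rank-1 form that is not linear.
ds = lambda v: math.hypot(v[0], v[1])

def wedge11(a, b):
    """Antisymmetrized product of two rank-1 forms: the two permitted
    permutations are the identity (+) and the swap (-)."""
    return lambda v, w: a(v) * b(w) - a(w) * b(v)

sq = wedge11(ds, ds)
assert sq((1.0, 2.0), (3.0, 4.0)) == 0.0  # ds ^ ds = 0 identically
```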
[Administrative note: I have now merged a thread entitled ‘Cogerm forms’ with this one. Comments 1. - 73. and 76. are from this old ‘Cogerm forms’ thread.]
Richard, why does this have to be done? Although the discussion of this topic is spread out, it is all linked from the bottom of the nLab page. One of those links (to https://nforum.ncatlab.org/discussion/5700/cogerm-forms/) is now broken, and while it can simply be removed now, how do we know whether anybody has put that link anywhere else? As much as possible, reorganization should not break external links.
We have been taking the Latest Changes pages as canonical for page discussion for a number of years, and merging older threads into the Latest Changes threads where appropriate. Good that you noticed the link; yes, it would be good to remove it or update it. I’ll not enter into a debate about the possibility of breaking other links; it might happen, yes, but the probability of any significant negative consequences is tiny, and outweighed, in my opinion, by the benefits of having a single, canonical page on which to find discussion. E.g. it is very unlikely that I or many others would find the links to the nForum discussions when glancing at cogerm differential form, as indeed I did not in this case, but everybody can understand ‘Discuss this page’ and know what to expect when clicking on it.
There's a difference between a thread about a topic and a thread about a page (although that distinction is not made in the early comments in the merged thread). Is it possible to set up the server so that old links will redirect when merging? (That still breaks so-called PermaLinks to individual comments, but at least people will be on the correct page.)