I just got around to reading Dray and Manogue’s paean to differentials. Most of it I agree with, but I am confused by their antagonism towards differentials for linear approximation.
Some of what they say seems just wrong, e.g. they claim (bottom of p95) that between a functional relationship $y = f(x)$ and its inverse $x = f^{-1}(y)$ the resulting notions of $dx$ and $dy$ are different, but I don’t see it. It seems to me what’s different is rather the relationship between $dx$ and $\Delta x$ and between $dy$ and $\Delta y$. Specifically, for $y = f(x)$ we have $dx = \Delta x$ but $dy \neq \Delta y$, while for $x = f^{-1}(y)$ we have $dy = \Delta y$ and $dx \neq \Delta x$, but in both cases the $d$’s represent changes along the tangent line to the curve.
They also say that using differentials for linear approximation obstructs their use as infinitesimals, but I don’t see that either. Quite the opposite, in fact: I would say that infinitesimals are a way of making precise exactly what a linear approximation is. The idea of a linear approximation is that when $x$ is close to $a$, then $f(x)$ is close to $f(a) + f'(a)(x - a)$, but what does that mean exactly? You can say it with epsilons and deltas, but it’s more intuitive to say it with infinitesimals: when $x$ is first-order close to $a$, then $f(x)$ is second-order close to $f(a) + f'(a)(x - a)$. Isn’t it important in applications outside of mathematics that a Taylor series approximates a function even for appreciable (non-infinitesimal) changes? Frequently it seems that in practice we use the smooth (infinitesimal change) to approximate the discrete (appreciable change). And the notation isn’t contradictory either: $dy = f'(x)\,dx$ makes sense as a relationship between $dx$ and $dy$ as they range over both infinitesimals and appreciable values; we use the infinitesimal version to define the relationship and work with it formally, but the appreciable one in applications.
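To make “second-order close” concrete in the simplest case (my toy example, not one from the paper): take $f(x) = x^2$ near $a = 3$. Then
$$f(3 + \epsilon) = 9 + 6\epsilon + \epsilon^2,$$
so the linear approximation $9 + 6\epsilon$ is off by exactly $\epsilon^2$: a first-order change $\epsilon = 0.01$ gives $f(3.01) = 9.0601$ against the approximation $9.06$, an error of $10^{-4}$, and this works the same whether $\epsilon$ is an infinitesimal or an appreciable value.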
I would say that infinitesimals are a way of making precise exactly what a linear approximation is.
Yes, indeed. The finite order in “nilpotent element” is precisely the finite order in “linear approximation to some order”.
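For instance, one step up: for $\epsilon$ with $\epsilon^3 = 0$ (but not necessarily $\epsilon^2 = 0$), the higher-order form of the Kock-Lawvere axiom gives exactly the second-order Taylor expansion
$$f(x + \epsilon) = f(x) + f'(x)\,\epsilon + \tfrac{1}{2} f''(x)\,\epsilon^2$$
with no error term: “approximation to order $n$” becomes an exact equation at nilpotency order $n + 1$.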
It is also striking that their list of formalizations of infinitesimals on p. 96 (7 of 11) omits what is probably the best way, namely Grothendieck’s way as later highlighted in its essence by Lawvere and as used all the time by all algebraic geometers (and in fact intuitively by many physicists without formal mathematical training).
It is odd that they omit nilpotent infinitesimals. FWIW, though, my current opinion is that for pedagogical purposes, and perhaps for many applied fields as well, invertible infinitesimals are preferable to nilpotent ones.
It is probably worth adding that one can formalize the Dirac distribution via invertible infinitesimals, but not via nilpotent ones.
To me, the most annoying error is right there on the front page, where they put ‘dividing’ in scare quotes. It is quite literally the operation of division! To shy away from that is exactly the kind of thinking that leads to banishing differentials in the first place.
I also object to their division between differentials of equations and differentials of functions. There's only one kind of differential (in their paper), which is the differential of an expression/quantity. As applications, these are different; the first starts with two quantities $u$ and $v$ and uses the theorem that $du = dv$ if $u = v$ (and either $du$ or $dv$ exists), while the second starts with a quantity $u$ and a function $f$ and uses the theorem that $d(f(u)) = f'(u)\,du$ (if $du$ exists and $f'$ is defined at $u$). But it's the same operation $d$.
I would also consider their calculation for the problem on pages 92&93 to be in error, in the step where they move from the last equation with differentials to the following one. The basic principle of optimization is that the maximum or minimum value of $u$ can only occur when $du$ is $0$ or undefined; so having established an equation of the form $du = g\,dx$, they should conclude that the extreme values of $u$ occur only when $g = 0$, $du = 0$, $dx = 0$, or $du$ or $dx$ is undefined. The last possibilities can only be dealt with by examining the nature of $x$ in the context of the original problem, to see that it is possible to vary $x$ smoothly (so $dx$ is defined) and without pausing (so $dx \neq 0$) except for the extreme cases where $x$ takes its least or its greatest allowed value. (After all, it would be illegitimate to write $du = 2\,dx$ and conclude that $u = 2x$ has no extreme values because $2 = 0$ has no solution. At some point you must check that you've used a differential of a quantity whose critical behaviour you already understand.) Since vanishing or undefined differentials are impossible in the interior here, this leaves us with (only) these extreme cases in addition to the one considered in the paper. As it happens, while they derive the minimum value of $u$, one of the extreme cases gives us the maximum value of $u$; both extremes occur.
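For a toy illustration of this principle (my example, not the problem from the paper): to find the extreme values of $u = xy$ given $x + y = 2$ with $x, y \geq 0$, take differentials:
$$du = y\,dx + x\,dy, \qquad dx + dy = 0,$$
so $du = (y - x)\,dx$. The extremes can occur only where $y = x$ (giving the maximum $u = 1$ at $x = y = 1$) or where $dx$ is $0$ or undefined, which here happens only in the endpoint cases $x = 0$ and $y = 0$ (giving the minimum $u = 0$). Dropping the $dx$ possibilities would lose the minimum.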
As to the point at hand … I teach that an equation involving infinitesimals leads to an approximate equation involving finitesimal (or ‘appreciable’, that's a good word) differences. So for example, from $dy = f'(x)\,dx$, we derive $\Delta y \approx f'(x)\,\Delta x$ and so $f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x$.¹ (I delay discussion of the precision of this approximation to the treatment of Taylor polynomials in the sequence on infinite series, although in principle that could be done earlier.) Dray & Manogue's discussion here seems particularly confused, especially the bit about $dy$ and $\Delta y$ agreeing on the graph of the function (which is largely moot) and their use of $\Delta y$ for the approximate change in $y$ (which I would consider unforgivable).
But I can understand their objection to the textbook treatment; if you motivate differentials as infinitesimal changes, then using $dx$ and $dy$ for appreciable quantities (whether $\Delta x$ and $\Delta y$ themselves or merely approximations thereto) seems wrong. If you instead motivate differentials as changes in a linear approximation, then this is not a problem, but this is not a motivation that I would give students when I introduce them (even though ultimately it underlies the rigorous definition).
Still, they seem to say that linear approximation requires giving a name to the function that $y$ is of $x$, and that's just not true. Particularly in applications, there is no need to do this (just as there is no need to give a name to the function that $u$ is of $x$ in the optimization problem). So, we need to use differentials of equations here, and that's what I do; but rather than identify a differential with either a difference or an approximation thereto, I give the rule (as I gave it above) that you can change differentials to differences in an equation so long as you also change equality to approximate equality.
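For example, to estimate $\sqrt{17}$ with no named function in sight: from $y^2 = x$ we get $2y\,dy = dx$, hence $2y\,\Delta y \approx \Delta x$; at $x = 16$, $y = 4$, $\Delta x = 1$, this gives $\Delta y \approx 1/8$, so
$$\sqrt{17} \approx 4 + \tfrac{1}{8} = 4.125,$$
against the true value $4.1231\ldots$; the differential of an equation, with equality weakened to approximate equality, and no name for the function ever introduced.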
¹ Here's a bug in iTeX: \Delta comes out in italics by default (which I don't mind), yet \mathrm may not be applied to it (which I do mind). ↩
I don't like to identify the differentials of Calculus with either nilpotent infinitesimals or nonstandard infinitesimals. I want them to be invertible (in appropriate contexts), so that I can write $dy/dx$, and I also want $dy/dx$ to be equal (not merely adequal) to the derivative. So to me, they are differential forms in the sense of standard differential geometry; and even though I use Lawvere's ideas to explain (or to avoid explaining) what space they are differential forms on, I'm not doing SDG.
Todd, one can formalize the Dirac distribution also without any infinitesimals at all.
Seriously, I think there is overwhelming practical evidence that nilpotent infinitesimals are the right way to do differential calculus, while there is close to no practical evidence for the use of non-nilpotent infinitesimals.
Of course, their statement on page 98 about $dx\,dy = r\,dr\,d\theta$, that it's at best shorthand for an equality of integrals, does not go far enough either. It is rather an equation between absolute differential forms which may be calculated as follows:
$$dx\,dy = |dx \wedge dy| = |d(r\cos\theta) \wedge d(r\sin\theta)| = |(\cos\theta\,dr - r\sin\theta\,d\theta) \wedge (\sin\theta\,dr + r\cos\theta\,d\theta)| = |r\,dr \wedge d\theta| = r\,dr\,d\theta,$$
using $r \geq 0$ in the last step.
I teach this in the multivariable term in place of the Jacobian determinant (although there are tricks to speed it up).
Urs #10, surely you don’t believe I don’t know that?? That’s why I used the phrase “can be formalized” – not “are formalized”.
Actually, though, I think the assertions in the second paragraph of #10 are deserving of more careful articulation and supporting evidence, since I am sure there are many mathematicians who would instinctively disagree with such dogma. To be clear: maybe you’re right, Urs, but such a sweeping dismissal of invertible infinitesimals does deserve at least some explanation by someone.
When I studied nonstandard analysis, I read in several places that there are areas of applied mathematics where several infinitesimal scales are involved which are not functionally dependent, and that there are subtle kinds of convergence suited to deal with such situations; so it is intuitively easier to indeed have infinitesimals whose smallness is not scaled as powers of a fixed infinitesimal. It does not look to me that this is straightforward to treat with nilpotent infinitesimals.
On the other hand, nonstandard analysis brings not only the infinitesimals but also the transfer principle, which makes the transfer of a whole class of theorems automatic. In SDG one needs to work in a special way with infinitesimals and prove many theorems from scratch. By no means does SDG replace nonstandard analysis in all important applications. For example, Keisler emphasises the power of the very rich function spaces of nonstandard analysis, e.g. Loeb probability spaces.
Good points, Zoran. In general, I am skeptical of claims that any one way to do something is “the right way”.
Even if one accepts not to do differentials of the nonstandard kind, there are other kinds of non-nilpotent infinitesimals: namely, it is useful also to work with the full completion rather than with finite nilpotent thickenings. For example, the theorems on formal functions around a subvariety, which are supported on the completion: they are not supported at any nilpotent level but at the colimit, where the differentials are still infinitesimal in the sense that the series does not converge in a finitary sense, and they are not nilpotent, as all powers contribute. Zariski’s algebraic geometry could not make sense of those, and Grothendieck did in his related work.
I do not know if formal functions along (= normal to) a submanifold in SDG have their status already at the axiomatic level, or whether one needs to go to a specific model?
The word “appreciable” is from nonstandard analysis literature. I’m not sure how standard (npi) it is.
I actually don’t even know how to do ordinary 1-variable integration with nilpotent infinitesimals. The only SDG treatments of integration that I’ve seen basically postulate it as an axiom, which is not really satisfying, especially pedagogically.
Re #8 and #9, my inclination would be to say that the differential of a quantity is another quantity that represents the first-order change in the first quantity. Since the meaning of “first-order” is relative to some (explicitly or implicitly) chosen scale, it could be either infinitesimal or appreciable depending on context, but in either case there is still the intuition of it being “small”. We can define precisely what differentials mean by using (invertible) infinitesimals — or using appreciables, with epsilons and deltas — but once we’ve done that then there’s nothing wrong with plugging in either kind of value. And because the differential only represents the first-order change, the quotient $dy/dx$ is always equal to the derivative.
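Concretely: for $y = x^2$ the actual change is $\Delta y = 2x\,\Delta x + (\Delta x)^2$, and the differential keeps only the first-order part, $dy = 2x\,dx$. So
$$\frac{dy}{dx} = 2x \text{ exactly}, \qquad \frac{\Delta y}{\Delta x} = 2x + \Delta x \text{ only approximately},$$
whichever scale, infinitesimal or appreciable, the values of $dx$ are drawn from.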
@Toby #9
So to me, they are differential forms in the sense of standard differential geometry; and even though I use Lawvere’s ideas to explain (or to avoid explaining) what space they are differential forms on, I’m not doing SDG.
I’m not sure if differential forms in standard differential geometry are superior to non-invertible differentials when it comes to making sense of a fraction $\frac{dy}{dx}$. In fact a single differential form is not invertible ($\frac{1}{dx}$ is not defined), just as for nilsquare infinitesimals. What we mean by $\frac{dy}{dx}$ is the ratio of differentials, which makes sense whenever there is a variable quantity $m$ such that $dy = m\,dx$. This probably makes sense for any flavour of differentials.
@Zoran 14:
When I studied nonstandard analysis, I read at several places that there are places in applied mathematics where they have several infinitesimal scales involved which are not functionally dependent, and that there are subtle kinds of convergence suited to deal with such situations;
That sounds interesting; do you remember where you read that, or what examples they were talking about?
@Mike 18:
The only SDG treatments of integration that I’ve seen basically postulate it as an axiom, which is not really satisfying, especially pedagogically.
I’ve also been wondering about this recently. If I recall correctly, SDG postulates the existence of antiderivatives by an axiom and defines the definite integral by the fundamental theorem of calculus. So actually there is no fundamental theorem of calculus in SDG (correct me if I’m wrong).
So how could one restore a fundamental theorem of calculus inside SDG? Related question: how to do things like numerical integration inside SDG? Is there a notion of “discrete approximation” of a space inside SDG? The limit (in the categorical sense) of a family of discrete spaces approximating another space?
Edit: I had a look at the 2009 book by Kock, Synthetic Geometry of Manifolds, where he says (p. 106): “Integration theory in SDG is not very well developed; in most places, like in [36], the theory depends on anti-derivatives. The present text is no exception, and the theory here is even more primitive than in [36].” Here [36] is the older book by Kock on SDG.
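If I understand correctly, the axiom in question says roughly: for every $f \colon [0,1] \to R$ there is a unique $F \colon [0,1] \to R$ with
$$F' = f, \qquad F(0) = 0,$$
and the definite integral is then defined by $\int_0^1 f := F(1)$. So the fundamental theorem of calculus holds by fiat rather than being proved from an independent construction of the integral.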
do you remember where you read that or what examples they were talking about
I think it was the first time from Hoegh-Krohn, but I came across it later as well. Once I am back from next week’s conference, I will be glad to search for his examples.
@Michael #20: The reciprocal of a nowhere-zero differential on a $1$-dimensional manifold is defined (and the reciprocal of any differential on a $1$-dimensional manifold is partially defined); this generalizes to any line bundle. (In fact, since the reciprocal line bundle is the dual line bundle, the reciprocal of $dx$ is the vector field $\partial/\partial x$ (no subscripts necessary). The notation $\partial y/\partial x$ (or $dy/dx$) is for the application of this vector field to scalar fields, which is really the combination of applying the differential $d$ and then the pairing of vector fields with covector fields (aka differential $1$-forms). So $1/dx$ is appropriate notation when we're going to multiply directly by a differential.)
However, you are correct that $dy/dx$ can mean the unique solution $m$ to $dy = m\,dx$, without any meaning given to $1/dx$. This is important, since sometimes we want $dy/dx$ without assuming that the unspecified underlying space is $1$-dimensional. So your broader point, that the notation works just fine with nilpotent infinitesimals, is correct.
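(For definiteness, the nilsquare mechanism here is the Kock-Lawvere axiom: with $D = \{\epsilon \in R : \epsilon^2 = 0\}$, every map $g \colon D \to R$ is uniquely of the form
$$g(\epsilon) = a + m\,\epsilon.$$
Applying this to $\epsilon \mapsto f(x + \epsilon)$ gives a unique $m$ with $f(x + \epsilon) = f(x) + m\,\epsilon$, i.e. a unique solution to $dy = m\,dx$, even though no individual $\epsilon$ has a reciprocal.)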
I see, thanks for clarifying.
It’s a thousand pities that the phrase “differential equation” has come to mean what should really be called a “derivative equation”. So what kind of equation is, say, $dy = 2x\,dx$?
The two notions are not so different in the one-variable, first-order case. So I would still call that a differential equation.
Yes, I agree with Zhen. Yesterday I told my students ‘A differential equation is an equation with differentials or derivatives in it.’ and gave these three examples, all essentially equivalent:
$$\frac{dy}{dx} = 2x, \qquad dy = 2x\,dx, \qquad y' = 2x.$$
A higher-order differential equation (even partial) is equivalent to a system of equations in which differentials (and no derivatives) appear. For example,
$$\frac{d^2y}{dx^2} = -y$$
is equivalent to the system
$$dy = u\,dx, \qquad du = -y\,dx.$$
I don't know that this is worth it!
Alternatively, using higher differentials, we could write
$$dx\,d^2y - dy\,d^2x = -y\,dx^3.$$
Or even
$$d^2y = -y\,dx^2$$
(taking $x$ as the independent variable, so that $d^2x = 0$); there you go, a second-order–differential equation!
Okay, if anyone comes complaining to me about calling equations-with-differentials “differential equations”, I’ll cite you guys. (-: But I’m not entirely happy with
A differential equation is an equation with differentials or derivatives in it
because it would include equations like $dy = x$.
Would you say that a vector equation is an equation with vectors in it? How about
$\vec{v} = 3$?
No, I would not. In fact, I don’t think I’ve ever had occasion to use the phrase “vector equation”.
Oh. Well, I have.
I guess that my point is that your example is ill formed.
How do you define “ill formed”?
I'm inclined to say that your example fails to be a differential equation for the same reason that
$$\vec{v} + 3 = 3$$
fails to be an equation at all. There is no universal definition of what makes something well formed or ill formed; but one must establish that one's equation has meaning before writing it down.
That said, in both the vector-equation and differential-equation case, the problem is a matter of homogeneity or dimensional analysis. Don't add vectors to scalars, don't add first-order differentials to second-order ones, don't add distances to speeds, etc. Of course, this is not a universal rule; the geometric algebraists break it all the time, but this only splits things up into several independent equations. From that perspective, my equation
$$\vec{v} + 3 = 3$$
splits into this system of equations:
$$\vec{v} = \vec{0}, \qquad 3 = 3.$$
(The unique solution is now immediate.)
So following this, your example
$$dy = x$$
splits into the following infinite system of differential equations:
$$0 = x, \qquad dy = 0, \qquad 0 = 0, \qquad 0 = 0, \qquad \ldots$$
(Thanks to the first equation, this system has no solutions.) Possibly $dy = x$ should be given some other interpretation; but that's the job of the person writing down the equation; I'm just trying to be generous by coming up with something.
That makes sense to me, but do you explain it that way to your students? I would expect that $dy$ looks just like another variable to them. How do you define “differential” for them in such a way that “don’t add first-order differentials to second-order ones” makes sense?
Now I see your point; I don't really explain that to the students.
That said, I do tell them (much earlier, when discussing how to spot errors in the calculation of the differential of an expression) that if they see an expression with two terms, one of which has a differential factor and one of which doesn't, then there has been a mistake. (Especially in the context of an equation, I can explain this by saying that something infinitely small can't be equal to something finitely small.) So if somebody did see
$$dy = x,$$
then I could explain that the problem is similar (especially since $x$ is finitesimal).
We only really deal with first-order differential equations in any of the classes that I teach.
Can’t this be tackled with traditional dimensional analysis, at least in physically meaningful cases? You can’t add an area to a volume, after all.
It's certainly the same kind of issue, but if $x$ and $y$ are dimensionless quantities, still Mike's equation is unbalanced.
And if $x$ and $y$ are lengths, then $dy = x$ is dimensionally balanced but differentially unbalanced.
Toby, I wish I could sit in on one of your classes from start to end. (-: Clearly you’ve thought all this out very carefully, and I sort of have a sense of how you do it after all of our discussions, but not, I think, enough to replicate it myself.
Well, I've thought it out enough to fake whatever I haven't thought out!
One can partly sit in on my classes by reading the notes (near the bottom) for Applied Calculus, regular Calculus, and multivariable Calculus.
I want to redo the discussion of differentials in the last to emphasize curves instead of vectors as the thing that differentials act on, to be less dependent on the precise nature of the unspecified domain.
Sorry for the belated reaction, now from my phone:
Sorry Todd, I did not mean to imply you did not know it, but I did object to the suggestion that there is a defect of nilpotent infinitesimals which is cured by nonstandard analysis.
For the sake of argument, I’ll keep insisting on that. Mike may be sceptical of general claims, but empirically by looking at what happens in practice, this is what I see.
Zoran means to give a counterexample above, but I doubt it: certainly with nilpotent infinitesimals it is not true that they are all proportional to each other. On the contrary.
I see nilpotent differentials govern large areas of maths, and deeply so. On the other hand, I see nonstandard analysis as a hack that proves that it can be done if one insists, but that does not show up naturally.
One thing that would convince me of nonstandard analysis is if it could be shown to model differential cohesion, as Toby suggested recently in another thread. That would be neat. But I don’t quite see it yet.
Concerning integration: we once had this discussion before in another thread: in 1-categorical SDG one needs an integration axiom, but not in homotopy SDG. Here integration of (Kaehler) differential forms is given by the quotient of forms modulo homotopy given by closed forms on a disk. This is described for instance in the nlab entry on Lie integration.
Actually, I see Zoran’s stronger point as being the existence of a transfer principle for nonstandard analysis on invertible infinitesimals.
To repeat what I wrote elsewhere:
I wonder if the difference between nilpotent and invertible infinitesimals has something to do with the not altogether straightforward relationship between category theory and model theory, that we once discussed. I mean, you never make any use of that very model-theoretic transfer principle with nilpotent infinitesimals, do you? Perhaps this relates to the difference between geometric and logical morphisms in toposes.
The transfer principle was used by Ngo in his proof of the fundamental lemma. Interesting that it’s appearing in such a core area of maths. I wonder if there’s something fundamental there, or merely a “hack”.
In case anyone’s antipathy to it arises from not knowing this, nonstandard analysis does have a nice category-theoretic description: it’s the filterpower construction on a topos. The canonical functor from a topos to its filterpower is logical and conservative, and that is the transfer principle in a nutshell. From this perspective you can also think of the topos of nonstandard analysis as the “germ at infinity” of the topos of infinite sequences, which is certainly a natural construction and not a hack. There are more toposes in heaven and earth, Horatio. (-:
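The simplest set-level instance of this is the ultrapower construction: fix a nonprincipal ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and set
$${}^*\mathbb{R} := \mathbb{R}^{\mathbb{N}} / \mathcal{U},$$
where two sequences are identified if they agree on a set in $\mathcal{U}$. The class of $(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots)$ is then a positive infinitesimal, and Łoś’s theorem says that a first-order statement holds in ${}^*\mathbb{R}$ iff it holds coordinatewise on a set in $\mathcal{U}$; that is exactly why the diagonal embedding $\mathbb{R} \to {}^*\mathbb{R}$ is elementary, i.e. why transfer holds.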
And I don’t think I can teach Calc I students about Kaehler differential forms and homotopy SDG. I don’t even understand myself what the stuff at Lie integration would look like in SDG – it all seems to be expressed in terms of concrete models?
Anyway, I suggest, Urs, that you not think of NSA as a competitor to SDG, even though they both contain things called “infinitesimals”, but as another tool in the mathematician’s toolbox which serves different purposes.
Colin, what would be the relation of this application of the transfer principle to differential calculus with explicit differentials?
Mike, we seem to be talking past each other. Not everything that is expressed in topos theory is therefore the natural way to do something. Maybe remember the sympathies and antipathies towards Bohr toposes for another example of just this question.
I still find that when I look around, SDG differentials play a thorough and foundational role in differential geometry, both in its basic formulation and in particular in a bunch of powerful modern refinements: all of derived algebraic geometry, all of D-geometry, and all the applications to PDE theory, variational calculus, etc. that this has.
In contrast, for nonstandard analysis the main statement is that elementary calculus can be phrased this way. Is there anything that goes further?
By the way, nilpotent differentials also have their transfer principle: that’s the statement that, for instance, the Cahiers topos is a model for differential cohesion. This means in particular that there is a certain geometric morphism from the standard smooth topos to that with synthetic infinitesimals.
But I’d think these transfer principles are part of the notion of infinitesimals themselves. Saying that nonstandard analysis is good because it has a transfer principle is a bit like saying that the natural numbers are good because they have an element called zero.
If you are really interested in applications of nonstandard analysis, you could try reading some books. Here are a few on my shelf:
I haven’t digested everything in these books, but I’ve learned a lot from them, and each of them goes way beyond elementary calculus. But perhaps the point of departure is that most of the applications are to analysis, whereas in #48 you seem to prefer applications to geometry. Analysis is an area of math that often doesn’t seem especially amenable to elegant category-theoretic formulations, but that doesn’t make it less important. Nonstandard analysis, essentially because it is a “synthetic” way to talk about orders of magnitude, does seem like it provides a more elegant way to do a lot of analysis.
In other words, there’s a reason we say “nonstandard analysis” but “synthetic differential geometry”. (-:
Thanks, Mike, for the analysis/differential geometry dichotomy. I’ll think about that.
I have added the references that you displayed to nonstandard analysis – References. Incidentally, that makes the list already available there a bit longer still; and my impression is that it would be useful if some expert organized these items a little and/or added some comments as to why one would want to track down which of them.
for nonstandard analysis the main statement is that elementary calculus can be phrased this way. Is there anything that goes further?
Surely there are advanced objects like Loeb nonstandard probability spaces, nonstandard set theory, transfer at the level of certain topoi in the picture, etc. Nonstandard analysis is not only about analysis.
We present some parts of a mathematical theory that is sometimes called Heyting-valued analysis (or nonstandard analysis in the broad sense). Sometimes this theory is considered as a part of general topos theory. One may surmise that this theory has some applications outside mathematical logic as well: in algebra and analysis, and even in a still wider context, for example, as in A. Robinson’s well-known work on the application of nonstandard analysis in quantum field theory.
In Chapter I we present the actual method of Heyting-valued (in particular, Boolean-valued) analysis. Chapters II–IV contain specific examples of applications of the method of Heyting-valued analysis. In Chapter II we primarily consider the problem of the existence of a model companion of a locally axiomatizable class of rings. In Chapter III we consider a conjecture of P. S. Novikov [cf. Selected works (Russian), see p. 127, “Nauka”, Moscow, 1979; MR0545907 (80i:01017)]. In this chapter we discuss the transfer from classical to intuitionistic validity in an arbitrary ring. Novikov’s paper established the possibility of such a transfer in the case of the ring Z. In Chapter IV, we construct, for some rings of continuous Y-valued functions (as algebras over the ring Y), a nonstandard representation Ỹ such that in a certain sense this algebra is similar to its ring of scalars Y. The appendix briefly describes examples of applications of Boolean-valued analysis in connection with problems of duality. Practically all the theorems and propositions are given complete proofs.
Just a sample set-theoretic treatise in the framework of nonstandard analysis.
Zoran, thanks for taking the time to provide more pointers. But I think your choice of examples – e.g. nonstandard probability spaces – confirms that the distinction between analysis and differential calculus which Mike amplified above is relevant. Probability spaces are not a topic involving differential calculus.
For me that suggestion of Mike’s is a good conclusion of this little debate here, and I’d tend to leave it at that for the moment, since I should be looking into other things. If I had more time I would maybe add a little paragraph to this effect to the nLab entry.
Concerning the analysis/geometry dichotomy, in view of algebra-geometry duality, this might also be seen as the analysis/algebra dichotomy. Then we have some interesting comments by Terry Tao, which I collected here. In particular, in the section ’Tao on Buzz’ he looks to characterise analysis and algebra in terms of open and closed conditions. There’s also something there on NSA.
Re #41, the new version of the notes on differentials for my Multivariable Calculus class is done: check them out. I now feel like the end (where I get to this bit) is a bit anticlimactic, and I wonder if I should redo the whole thing starting with the action of differentials on curves.
Incidentally, higher differentials such as $d^2y$ (where, remember, we are not doing the exterior differential, which would just be zero, but rather something relevant to second derivatives) cannot be understood as acting on vectors (since they act on order-$2$ jets) but can be understood as acting on curves.
With the emphasis on curves, I suppose that I'm secretly doing calculus on diffeological spaces. (I remarked in class on Thursday that there are very general notions of ‘differentiable space’ even beyond the differentiable manifolds that one is likely to meet in an advanced course, but in theory everything in this course is done on open subspaces of $\mathbb{R}^n$ for $n \leq 3$.)
It seems that I went too far with the strategy of pushing everything back to curves, since $x^2y/(x^2+y^2)$ (continuously extended to the origin) is not differentiable at the origin (by the usual definition), even though its composite with any differentiable curve is differentiable (indeed continuously so if the curve is continuously differentiable).
Boman's theorem says that you can push things back to curves for smooth maps, and this is what really matters, so I may just do that next term, leaving the fine print for merely differentiable maps to the textbook.
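(Precisely: Boman's theorem says that $f \colon \mathbb{R}^n \to \mathbb{R}$ is $C^\infty$ as soon as $f \circ c \colon \mathbb{R} \to \mathbb{R}$ is $C^\infty$ for every $C^\infty$ curve $c \colon \mathbb{R} \to \mathbb{R}^n$. So for smooth maps, curves detect everything, and the example above shows that this fails for merely differentiable maps.)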
I presume it is also not sufficient to say that $f$ is differentiable if $f \circ c$ is differentiable for all differentiable curves $c$ and moreover there exists a differential form $\omega$ such that $(f \circ c)' = \omega(c')$? That is, you also need to ensure that the limits defining each derivative happen “simultaneously in all directions”?
My working definition of ‘differential form’ (of rank $1$) in this class is a formal linear combination of differentials with coefficients from the ring of appreciable quantities (not stated like that, of course, but given by examples). Then $df$ is automatically the differential form desired (well, assuming that it exists, since I actually refuse to define $df$ until $f$ is known to be differentiable, but then your proposal is circular).
But ignoring that context, and defining a differential form abstractly as an operator on differentiable curves with appropriate properties (such as linearity), then the answer is Yes if you require $\omega$ to be a continuous differential form; and in that case, we can conclude that $f$ is continuously differentiable.
Without that, I don't know. In the example of $x^2y/(x^2+y^2)$, not only is $df$ not continuous, it's not linear (at the origin); if you apply it to a line through the origin with tangent vector $(u, v)$, then the result is $u^2v/(u^2+v^2)$. So this is not a differential form.
If $\omega$ has the properties of a differential form, does this guarantee that $f$ is differentiable in the standard sense? That would be nice! Is additivity sufficient? That would be particularly nice! I don't know.
OK, yes, your idea does work!
Specifically, define $df$ as the operation (in general partially defined) on differentiable parametrized curves (in a given Cartesian space, or more generally in a manifold) that takes $c$ to $(f \circ c)'$ (if this exists). Also define $df_p$ to be the restriction of that operation to curves with $c(0) = p$. Then $df_p$ might be defined on all such curves, and (if so) it might respect the equivalence of curves that defines a tangent vector at $p$, and (if so) it might be linear and so a cotangent vector at $p$. If so, then $f$ is differentiable at $p$, as desired.
The proof is that the definition of differentiability itself calls for nothing more than this cotangent vector.
The next step is to make this into a definition of generalized differentiable (rather than smooth) space, by not using the previously known structure of tangent vectors.
Just throwing this in the mix: http://mathoverflow.net/a/7632/4177
Toby, you might have something to contribute here: http://matheducators.stackexchange.com/questions/2246/practical-experience-with-teaching-differentials-in-freshman-calc
Thanks!
The current term's Multivariable Calculus course has a revised version of the introduction to differentials and $1$-forms, split into two parts. (Although I'm not really done rewriting the second part, it's acceptable, and I needed to hand it out in class today.) There will be more handouts.
This year features the general statement that a differential form is any expression with differentials in it, said with the confidence that I know how to define it if pressed! But in the first handout, $dx$, $dy$, etc. are treated as independent variables in a formal expression, which I never really liked. Fortunately, there is a real definition in the second handout, although what it really does is to define equality as an equivalence relation on such formal expressions.
Edit: Also, the second handout formally defines a function $f$ on a Cartesian space to be differentiable at a point if $f \circ c$ is not only differentiable wherever the value of $c$ is that point and $c$ is differentiable there, but also $(f \circ c)'$ depends only on the derivative of $c$ there (the velocity tangent vector) and depends on that linearly (stated as the existence of an appropriate row vector of coefficients). That actually came out looking simpler than I had originally anticipated!
I should probably add to differentiable function my proof that this definition of differentiability is correct. It's stronger than requiring all directional derivatives and then requiring these to depend linearly on the direction. Since the derivative of $f \circ c$, for $c$ a straight line, depends on $c$ only through the derivative of $c$, and yet there exist nondifferentiable functions with linear directional derivatives (example: $x^3y/(x^4+y^2)$, extended as $0$, at $(0,0)$), one might think that it would be insufficient to require $(f \circ c)'$ to depend on $c$ only linearly through the derivative of $c$. However, the claim that $(f \circ c)'$ depends on $c$ only through the derivative of $c$ fails for nondifferentiable functions with linear directional derivatives! (In the example, $(f \circ c)'$ is $0$ at $0$ when $c$ is a line but not, say, when $c$ runs along the parabola $y = x^2$.)
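Explicitly, with $f(x, y) = x^3y/(x^4 + y^2)$ and $f(0, 0) = 0$: along a line $c(t) = (tu, tv)$ with $v \neq 0$,
$$(f \circ c)(t) = \frac{t^2 u^3 v}{t^2 u^4 + v^2},$$
whose derivative at $t = 0$ is $0$ (and $f \circ c \equiv 0$ when $v = 0$), so every directional derivative is $0$ and in particular linear in the direction. But along $c(t) = (t, t^2)$ we get $(f \circ c)(t) = t/2$, with derivative $\tfrac{1}{2}$, even though this curve has the same velocity $(1, 0)$ at the origin as the line $t \mapsto (t, 0)$.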
Very interesting!
Yes, when I saw that example on differentiable map, I was at first worried that I'd made a mistake, and I tried to put that example through my proof, which made me realize that it didn't apply.
Here is a multivariable Calculus textbook, intended for undergraduates who have had only one-variable Calculus and no more advanced mathematics, that covers the Stokes Theorems using differential forms. http://matrixeditions.com/UnifiedApproach5thedSamples.html