Looks good.

Okay, I changed my mind again and put it at differentiable map, as you suggested. I also changed the redirect for derivative to point there rather than to differentiation.

Looking at it again, I think maybe I would rather put it at differentiation, which is currently lacking a “lowbrow” perspective.

I would like to record this observation somewhere on the nLab. However, I’m at a bit of a loss for where to put it. Any ideas?

“your argument for it a couple comments down seemed to use only tangent vectors”

You mean the argument in #63 there, currently the penultimate comment?

I agree with you now; you have a perfect example of something which has a linear differential but is still not differentiable.

I am actually thinking of bringing up Taylor series fairly early this term and saying that a function is $n$ times differentiable if it has a Taylor polynomial of degree up to $n$ with the remainder $R_n$ satisfying $R_n(\mathbf{h})/{\|\mathbf{h}\|^n} \to 0$ as $\mathbf{h} \to 0$. (Officially we don’t cover Taylor series in higher dimensions, but we do cover Taylor polynomials of degree up to $1$, only not under that name, and I’ve always made it a point to mention that the results used there are a special case of a higher-dimensional Taylor’s Theorem.) I have until late next week to decide!
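As a numerical sanity check of that remainder criterion (the test function $\exp(x+y)$ is my own choice, not from the thread), here is a sketch for $n = 2$: the degree-$2$ Taylor polynomial $T_2$ at the origin should satisfy $R_2(\mathbf{h})/\|\mathbf{h}\|^2 \to 0$.

```python
import math

# Sketch: check R_2(h)/||h||^2 -> 0 for f(x, y) = exp(x + y) at the origin,
# where R_2 = f - T_2.  (The real criterion quantifies over all h -> 0; here
# we only shrink along the single direction (1, 2) for illustration.)

def f(x, y):
    return math.exp(x + y)

def T2(x, y):
    # degree-2 Taylor polynomial of exp(x + y) at (0, 0)
    return 1 + (x + y) + (x + y) ** 2 / 2

# R_2 behaves like (x + y)^3 / 6, so the ratio shrinks like ||h||.
ratios = []
for t in [1e-1, 1e-2, 1e-3]:
    h = (t, 2 * t)
    norm = math.hypot(*h)
    ratios.append(abs(f(*h) - T2(*h)) / norm ** 2)

assert ratios == sorted(ratios, reverse=True)   # strictly shrinking
assert ratios[-1] < 1e-2                        # and already tiny
```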

There is an answer to #2 here.

That’s what I was trying to say by “simultaneously in all directions” here. Granted, the discussion there allowed us to consider arbitrary curves rather than just lines through the origin, but your argument for it a couple comments down seemed to use only tangent vectors, so now I don’t believe it. Am I confused?

“The difference between this and the first equation is that the first requires for each $\epsilon\gt 0$ a neighborhood of $\mathbf{0}\in\mathbb{R}^n$ in which the quantity is $\lt\epsilon$, whereas the second would allow a different 1-dimensional neighborhood in each direction.”

So we need a sort of uniform differentiability, but uniform in the direction rather than in the point (as is actually meant by the term ‘uniformly differentiable’).

I glanced at Wikipedia, but I couldn’t figure out what it might have to do with the question.

Wikipedia. Sorry, I’ll try to write more tomorrow.

@spitters, what is a divided difference? There’s only one appearance that I can find in the paper, which doesn’t define the term.

I think the answer to #1 is no if the function is allowed to be discontinuous in a neighborhood of the point. For instance, consider

$f(x,y) = \begin{cases} \frac{y^3}{x} &\quad x\neq 0\\ 0 &\quad x=0 \end{cases}$

at $(0,0)$. Along $y=m x$ we have $f(x,y) = m^3 x^2 = m y^2$, whose derivative at $0$ is $0$; thus all directional derivatives at $(0,0)$ are zero (hence in particular are given by a linear map). But since $f\to \infty$ as $x \to 0$ along every line $y=\text{const} \neq 0$, every neighborhood of $(0,0)$ contains points where $f$ is arbitrarily large.
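A quick numerical sanity check of this counterexample (the check itself is my addition, not part of the original comment):

```python
# f(x, y) = y^3/x for x != 0, and f = 0 on the axis x = 0.

def f(x, y):
    return y**3 / x if x != 0 else 0.0

# Directional difference quotients f(h, m*h)/h = m^3 * h -> 0 as h -> 0,
# so every directional derivative at (0, 0) is 0:
for m in [1.0, 2.0, 10.0]:
    assert abs(f(1e-6, m * 1e-6) / 1e-6) < 1e-2

# Yet f is unbounded in every neighborhood of (0, 0): along the line
# y = 0.1, f(x, 0.1) = 0.001/x grows without bound as x -> 0.
values = [f(x, 0.1) for x in (1e-2, 1e-4, 1e-6)]
assert values == sorted(values) and values[-1] > 100
```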

Quick answer: could it be connected to divided differences? We collected some references in this paper.

Actually, my calculus book has a third definition: it says that a 2-variable function $f:\mathbb{R}^2\to\mathbb{R}$ is differentiable at $(x,y)$ if there are functions $E_1(h,k)$ and $E_2(h,k)$ and a linear map $\mathrm{d}f_{(x,y)}$ (actually, it assumes that $\mathrm{d}f_{(x,y)}$ is given by the partial derivatives) such that

$f(x+h,y+k) = f(x,y) + \mathrm{d}f_{(x,y)}(h,k) + E_1(h,k)\cdot h + E_2(h,k)\cdot k$

where $\lim_{(h,k)\to (0,0)} E_1(h,k) = \lim_{(h,k)\to (0,0)} E_2(h,k) = 0$. I see how this implies the usual definition, but I don’t see how it’s equivalent to it.
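For what it’s worth, here is one way to get the converse (a sketch of my own, not from the calculus book): starting from the usual definition, split the Fréchet remainder along $h$ and $k$.

```latex
\[
  R(h,k) = f(x+h,y+k) - f(x,y) - \mathrm{d}f_{(x,y)}(h,k),
  \qquad
  E_1(h,k) = \frac{R(h,k)\, h}{h^2+k^2},
  \quad
  E_2(h,k) = \frac{R(h,k)\, k}{h^2+k^2}.
\]
% Then E_1 h + E_2 k = R(h,k) exactly, and since |h| <= sqrt(h^2+k^2),
\[
  |E_1(h,k)| \;\le\; \frac{|R(h,k)|}{\sqrt{h^2+k^2}} \;\to\; 0
  \quad\text{as } (h,k)\to(0,0),
\]
% and likewise for E_2, which recovers the book's definition.
```

So, if this sketch is right, the two definitions pick out the same class of functions.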

We discussed this in another thread a while back, but I was never quite satisfied, although I let it drop. Now I’m wondering about it again.

The usual definition of a differentiable function $f:\mathbb{R}^n \to \mathbb{R}$ of $n$ variables is that there is a linear map $\mathrm{d}f_{\mathbf{x}}$ such that

$\lim_{\mathbf{h}\to 0} \frac{f(\mathbf{x}+\mathbf{h}) - f(\mathbf{x}) - \mathrm{d}f_{\mathbf{x}}(\mathbf{h})}{\Vert\mathbf{h}\Vert} = 0.$

We know that this implies the existence of all directional derivatives, i.e. for any vector $\mathbf{v}\neq 0$ we have a number $D_{\mathbf{v}}f_{\mathbf{x}}$, namely $\mathrm{d}f_{\mathbf{x}}(\mathbf{v})$, such that

$\lim_{h\to 0} \frac{f(\mathbf{x}+h\mathbf{v}) - f(\mathbf{x}) - D_{\mathbf{v}}f_{\mathbf{x}}\cdot h}{h} = 0.$

And the converse doesn’t hold, though it does if the partial derivatives are continuous. However, suppose we assume that all the directional derivatives exist at a given point, and moreover depend linearly on $\mathbf{v}$. I.e. there is a linear map $\mathrm{d}f_{\mathbf{x}}$ such that for any vector $\mathbf{v}\neq 0$ we have

$\lim_{h\to 0} \frac{f(\mathbf{x}+h\mathbf{v}) - f(\mathbf{x}) - \mathrm{d}f_{\mathbf{x}}(\mathbf{v})\cdot h}{h} = 0.$

Does it follow that $f$ is differentiable? The difference between this and the first equation is that the first requires for each $\epsilon\gt 0$ a neighborhood of $\mathbf{0}\in\mathbb{R}^n$ in which the quantity is $\lt\epsilon$, whereas the second would allow a different 1-dimensional neighborhood in each direction.
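A standard example of the gap between the two limits (my addition, not from the thread) is $g(x,y) = x^3 y/(x^4+y^2)$: it is even continuous, all of its directional derivatives at the origin are $0$ (hence depend linearly on $\mathbf{v}$), yet the quotient in the first definition stays near $1/2$ along the parabola $y = x^2$. A numerical sketch:

```python
import math

# g(x, y) = x^3 y / (x^4 + y^2), extended by 0 at the origin.
def g(x, y):
    return x**3 * y / (x**4 + y**2) if (x, y) != (0.0, 0.0) else 0.0

# Directional quotients g(h*v)/h -> 0 for every fixed direction v:
for v1, v2 in [(1.0, 1.0), (1.0, -3.0), (0.0, 1.0), (1.0, 0.0)]:
    assert abs(g(1e-5 * v1, 1e-5 * v2) / 1e-5) < 1e-3

# But along y = x^2 we get g(x, x^2) = x/2, so the Frechet-style
# quotient g(h)/||h|| stays near 1/2 instead of going to 0:
for x in (1e-2, 1e-3, 1e-4):
    assert abs(g(x, x**2) / math.hypot(x, x**2) - 0.5) < 1e-2
```

So the shrinking 2-dimensional neighborhood really is a stronger demand than a 1-dimensional neighborhood per direction, even for continuous functions.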
