    • CommentRowNumber1.
    • CommentAuthorUrs
    • CommentTimeOct 11th 2010

    created ball

    • CommentRowNumber2.
    • CommentAuthorDavidRoberts
    • CommentTimeApr 29th 2013
    • (edited Apr 29th 2013)

    Does theorem 237 from the notes (in German) by Ferus prove that, for a star-shaped region $C \subset \mathbb{R}^n$, $C$ is diffeomorphic to $\mathbb{R}^n$? Because at ball we have two theorems:

    Theorem In dimension $d \in \mathbb{N}$, for $d \neq 4$ we have: every open subset of $\mathbb{R}^d$ which is homeomorphic to $\mathbb{B}^d$ is also diffeomorphic to it.

    Remark In dimension 4 the analogous statement fails due to the existence of exotic smooth structures on $\mathbb{R}^4$. See De Michelis-Freedman.

    Theorem Let $C \subset \mathbb{R}^n$ be a star-shaped open subset of a Cartesian space. Then $C$ is diffeomorphic to $\mathbb{R}^n$.

    An open subset homeomorphic to $\mathbb{R}^n$ is star-shaped, so at the very least, we have to modify the second statement.

    Ideally it would be good to get a theorem that told us we can get a differentiably good open cover of fake $\mathbb{R}^4$s, where the open sets are diffeomorphic to a standard $\mathbb{R}^4$. At present I don’t think we do (or perhaps I just don’t see it).

    • CommentRowNumber3.
    • CommentAuthorUrs
    • CommentTimeApr 29th 2013

    Does theorem 237 from the notes (in German) by Ferus prove that, for a star-shaped region $C \subset \mathbb{R}^n$, $C$ is diffeomorphic to $\mathbb{R}^n$?

    Yes, what is meant is $\mathbb{R}^n$ with its standard smooth structure.

    An open subset homeomorphic to $\mathbb{R}^n$ is star-shaped

    Not necessarily. Just take one that is and deform it continuously to make it have a “nose”.

    at the very least, we have to modify the second statement.

    This is a thorny subject, in particular since it seems so simple and is so subtle. So I might have to think about it harder again. But right now I am not sure what you say must be changed.
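    To make the “nose” point concrete, here is one explicit example (my own illustration, not one mentioned in the thread or on the nLab page): the open annular sector $A = \{(r\cos\theta, r\sin\theta) : 2 \lt r \lt 3,\ 0 \lt \theta \lt 3\pi/2\} \subset \mathbb{R}^2$. Polar coordinates identify $A$ with the open rectangle $(2,3)\times(0,3\pi/2)$, so $A$ is diffeomorphic, in particular homeomorphic, to $\mathbb{R}^2$. But $A$ is not star-shaped about any of its points: given $p \in A$, choose $q \in A$ at angular distance $2\pi/3$ from $p$ (possible since the angular range has length $3\pi/2$); the distance from the origin to the segment $[p,q]$ is $r_p r_q \sin(2\pi/3)/|p-q| \le \sqrt{r_p r_q}/2 \lt 3/2$, so the segment dips into the region $r \lt 2$ and leaves $A$.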

    • CommentRowNumber4.
    • CommentAuthorDavidRoberts
    • CommentTimeApr 29th 2013

    Ah, I overlooked the possibility of a ’nose’. I was thinking in terms of ball-shaped regions.

    • CommentRowNumber5.
    • CommentAuthorUrs
    • CommentTimeJul 9th 2014

    at open ball the definition for a general metric space wasn’t stated, so I added it. It seems I also created a stub metric topology.

    • CommentRowNumber6.
    • CommentAuthorColin Tan
    • CommentTimeJul 17th 2014

    Wrote down the inverse to the diffeomorphism from $n$-dimensional cartesian space to the open $n$-ball. I used the vector notation implicitly, if that is okay.

    • CommentRowNumber7.
    • CommentAuthorTobyBartels
    • CommentTimeJul 18th 2014

    That looks good to me.

    • CommentRowNumber8.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 20th 2015
    The following simple proof that star-shaped open subsets of R^n are diffeomorphic to R^n
    came up at lunch here in Göttingen.
    It uses only two elementary tools from smooth manifolds:
    smooth partitions of unity and smooth flows of smooth vector fields.
    It does seem to be shorter than the proof in Born/Ferus.

    Suppose T is a star-shaped open subset of R^n centered at the origin.
    Theorem 2.29 in Lee proves that there is a function f on R^n such that f>0 on T and f vanishes on the complement of T.
    By applying bump functions we can assume that f≤1 everywhere and f=1 in an open ϵ-neighborhood of the origin; by rescaling the ambient space we can assume ϵ=2.

    The smooth vector field V: x↦f(x)⋅x/∥x∥ is defined on the complement of the origin in T.
    Multiply V by a smooth bump function 0≤b≤1 such that b=1 for ∥x∥>1/2 and b=0 in a neighborhood of 0.
    The new vector field V extends smoothly to the origin and defines a smooth global flow F: R×T→T.
    (The domain of the flow is R and not some interval (−∞,A) because the norm of V is bounded by 1.)
    Observe that for 1/2<∥x∥<2 the vector field V equals x↦x/∥x∥.
    Also, all flow lines of V are radial rays.

    Now define the flow map p: R^n_{>1/2} → T_{>1/2} as x↦F(∥x∥−1,x/∥x∥) for ∥x∥>1/2.
    (The subscript >1/2 removes the closed ball of radius 1/2.)
    The flow map is the composition of two diffeomorphisms: R^n_{>1/2} → (−1/2,∞)×S^n → T_{>1/2},
    hence itself is a diffeomorphism.
    (The latter map is surjective by elementary properties of flow lines.)

    Finally, define the desired diffeomorphism d: R^n→T as the gluing of the identity map for ∥x∥<2 and as p for ∥x∥>1/2.
    The map d is smooth because for 1/2<∥x∥<2 both definitions give the same value.
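    As a cross-check of the final gluing step (this just spells out the computation implicit in the proof above): for 1/2 < ∥x∥ < 2 we have f = 1 and b = 1, so along each ray the flow has unit speed, i.e. F(t, u) = (1+t)·u for a unit vector u as long as the radius stays in (1/2, 2). Hence p(x) = F(∥x∥−1, x/∥x∥) = ∥x∥·x/∥x∥ = x, so the flow map agrees with the identity on the overlap 1/2 < ∥x∥ < 2, which is what makes the glued map well-defined and smooth there.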
    • CommentRowNumber9.
    • CommentAuthorUrs
    • CommentTimeMar 20th 2015

    Thanks, that’s neat. Could you add it to the entry?

    • CommentRowNumber10.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 20th 2015
    Sure, but it would be nice if a few more people read it and confirmed that they think it's correct.
    As we know, this is a rather subtle issue…
    • CommentRowNumber11.
    • CommentAuthorTodd_Trimble
    • CommentTimeMar 20th 2015

    I think it’s okay. But it’s quite an elementary argument, so I’d want to mull over it more. Maybe I’d enjoy a few more words on “surjective by elementary properties of flow lines”.

    • CommentRowNumber12.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 20th 2015
    >surjective by elementary properties of flow lines

    Every flow line is a smooth map of the form L: (A,B)→T,
    where A and B can be finite or infinite.

    If B is finite and the limit of L(t) as t→B exists,
    then the vector field V must vanish at the limit point.

    In our case V can only vanish at the boundary of T,
    which is precisely what we want for surjectivity.
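    Spelled out a little more (this is the standard escape-lemma reasoning; the phrasing is mine, not Dmitri's): since ∥V∥ ≤ 1, a maximal flow line L: (A,B) → T with B finite is 1-Lipschitz, so q = lim_{t→B} L(t) exists. If q were a point of T with V(q) ≠ 0, the local flow of V near q would extend L past B, contradicting maximality. Since V ≠ 0 on T ∩ {∥x∥ > 1/2}, the forward limit of a flow line starting on the unit sphere can only lie on the boundary of T, so each flow line sweeps out its whole radial ray inside T_{>1/2}; that is the surjectivity used above.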
    • CommentRowNumber13.
    • CommentAuthorDavidRoberts
    • CommentTimeMar 21st 2015
    • (edited Mar 22nd 2015)

    @Dmitri

    regarding your comments on MO about differentiably good open covers, it occurred to me that one might not know every point has a geodesically convex neighbourhood. I bring this up here, since I gather you are working on a proof of existence of such good open covers.

    • CommentRowNumber14.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 23rd 2015
    @DavidRoberts: The existence of geodesically convex neighborhoods is well-documented,
    for example, in Proposition 3.4.2 of do Carmo's Riemannian Geometry.
    • CommentRowNumber15.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 23rd 2015
    I incorporated the proof in the article ball.

    The nLab software seems to have problems with some standard TeX commands and characters such as \bf, <, >.
    Is there any chance that the software can be patched to fix this?
    • CommentRowNumber16.
    • CommentAuthorRodMcGuire
    • CommentTimeMar 23rd 2015

    The nLab software seems to have problems with some standard TeX commands and characters such as \bf, <, >. Is there any chance that the software can be patched to fix this?

    I think the answer to this would be “it is not broken, you are just speaking the wrong dialect”. What you need is \mathbf instead of \bb, \lt for <, \gt for >. (You can pick up the “proper” dialect by seeing what is used to render similar things in the file being edited.)

    I made these substitutions. Does this look right now?

    Proof

    Suppose $T$ is a star-shaped open subset of $\mathbb{R}^n$ centered at the origin. Theorem 2.29 in Lee proves that there is a function $f$ on $\mathbb{R}^n$ such that $f \gt 0$ on $T$ and $f$ vanishes on the complement of $T$. By applying bump functions we can assume that $f \le 1$ everywhere and $f = 1$ in an open $\epsilon$-neighborhood of the origin; by rescaling the ambient space we can assume $\epsilon = 2$.

    The smooth vector field $V\colon x\mapsto f(x)\cdot x/\|x\|$ is defined on the complement of the origin in $T$. Multiply $V$ by a smooth bump function $0\le b\le 1$ such that $b=1$ for $\|x\|\gt 1/2$ and $b=0$ in a neighborhood of $0$. The new vector field $V$ extends smoothly to the origin and defines a smooth global flow $F\colon R\times T\to T$. (The domain of the flow is $R$ and not some interval $(-\infty,A)$ because the norm of $V$ is bounded by 1.) Observe that for $1/2\lt \|x\|\lt 2$ the vector field $V$ equals $x\mapsto x/\|x\|$. Also, all flow lines of $V$ are radial rays.

    Now define the flow map $p\colon \mathbb{R}^n_{\gt 1/2}\to T_{\gt 1/2}$ as $x\mapsto F(\|x\|-1, x/\|x\|)$ for $\|x\|\gt 1/2$. (The subscript $\gt 1/2$ removes the closed ball of radius $1/2$.) The flow map is the composition of two diffeomorphisms $\mathbb{R}^n_{\gt 1/2}\to(-1/2,\infty)\times S^n\to T_{\gt 1/2}$, hence is itself a diffeomorphism. (The latter map is surjective by elementary properties of flow lines.)

    Finally, define the desired diffeomorphism $d\colon \mathbb{R}^n\to T$ as the gluing of the identity map for $\|x\|\lt 2$ and of $p$ for $\|x\|\gt 1/2$. The map $d$ is smooth because for $1/2\lt \|x\|\lt 2$ both definitions give the same value.
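    For what it's worth, here is a small numerical sanity check of this construction in the radially symmetric special case where $T$ is the open ball of radius 3 (the particular bump profile, the radius 3, and the use of scipy are my own choices for illustration, not part of the proof; since the integration starts on the unit sphere, the cutoff $b$ near the origin plays no role):

    ```python
    # Numerical sanity check of the flow-map construction above, in the
    # radially symmetric special case T = open ball of radius 3.
    # The bump profile g, the radius 3, and the use of scipy are illustrative
    # choices, not part of the proof.
    import numpy as np
    from scipy.integrate import solve_ivp

    def h(s):
        # standard smooth function: 0 for s <= 0, exp(-1/s) for s > 0
        return np.exp(-1.0 / s) if s > 0 else 0.0

    def g(r):
        # radial profile of f: g = 1 for r <= 2, g = 0 for r >= 3, smooth and
        # strictly between 0 and 1 on (2, 3); so f(x) = g(|x|) is positive
        # exactly on T and equals 1 on the 2-neighborhood of the origin.
        s = r - 2.0
        if s <= 0.0:
            return 1.0
        if s >= 1.0:
            return 0.0
        return h(1.0 - s) / (h(1.0 - s) + h(s))

    # Along a fixed ray the flow reduces to the scalar ODE r'(t) = g(r(t)),
    # started at r(0) = 1 (a point on the unit sphere).
    sol = solve_ivp(lambda t, y: [g(y[0])], (0.0, 40.0), [1.0],
                    dense_output=True, rtol=1e-9, atol=1e-12)

    def rho(s):
        # radius of the image point p(x) for |x| = s >= 1: flow for time s - 1
        return sol.sol(s - 1.0)[0]

    for s in [1.0, 1.5, 2.0, 5.0, 10.0, 40.0]:
        print(f"|x| = {s:5.1f}  ->  |p(x)| = {rho(s):.6f}")
    # Expected: rho(s) = s on [1, 2], rho strictly increasing, and the values
    # increase toward 3 without reaching it (the image fills the ball).
    ```

    On $1 \le \|x\| \le 2$ the printed values agree with $\|x\|$, matching the overlap observation in the proof, and for larger $\|x\|$ they increase toward 3 from below.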

    • CommentRowNumber17.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 23rd 2015
    >it is not broken, you are just speaking the wrong dialect

    I am not sure that not supporting “<” can even be classified as a “dialect”.
    I think it is a good idea not to introduce any weird changes to TeX
    because any mathematician who comes to nLab should be able
    to start editing right away, without having to learn any additional TeX “dialect”.

    Is there any good reason not to interpret < the way it is done in TeX?
    The other ersatz-TeXs such as MathJax do support <, >, and \bf, so why can't nLab?

    Perhaps it is possible to switch nLab to something like MathJax?

    > \mathbf instead of \bb

    I actually used \bf, the bold font, not the blackboard bold font \Bbb.
    • CommentRowNumber18.
    • CommentAuthorZhen Lin
    • CommentTimeMar 23rd 2015
    • (edited Mar 23rd 2015)

    The reason why < and > don’t work is because they have special meaning in HTML.

    • CommentRowNumber19.
    • CommentAuthorTodd_Trimble
    • CommentTimeMar 23rd 2015

    I made some further edits. Just a note that by typing $x/{\|x\|}$, with braces, one gets

    $x/{\|x\|}$

    whereas without braces, one gets

    $x/\|x\|$

    Sometimes I think readability is enhanced by using $\frac{x}{{\|x\|}}$ instead of $x/{\|x\|}$. There were other small adjustments made.

    • CommentRowNumber20.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 23rd 2015
    @ToddTrimble: This is clearly a bug in nLab's software; TeX gets the spacing right in this case.

    In general, it seems rather bad that TeX formulas cannot be imported into nLab without additional (rather nontrivial) modifications,
    e.g., not just replacing <, >, \bf with something else, but also fixing some really weird spacing bugs.
    • CommentRowNumber21.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 23rd 2015
    @ZhenLin: But nLab's internal markup language is not HTML.

    Surely it's possible to replace < by &lt; and > by &gt; when compiling to HTML?
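    For what it's worth, the kind of substitution being asked about is easy to sketch in ordinary code (this is only a toy illustration of the idea, not a description of how Instiki/iTeX actually processes pages; the function names are made up):

    ```python
    # Toy illustration of the suggested preprocessing step: escape < and >
    # (and &) only inside $...$ math segments before the page becomes HTML.
    # NOT how the actual nLab pipeline works; just a sketch of the idea.
    import re

    def escape_math_segment(match):
        body = match.group(1)
        body = body.replace("&", "&amp;")  # escape & first to avoid double-escaping
        body = body.replace("<", "&lt;").replace(">", "&gt;")
        return "$" + body + "$"

    def preprocess(source):
        # non-greedy match of $...$ segments; real TeX parsing is more involved
        return re.sub(r"\$(.+?)\$", escape_math_segment, source, flags=re.DOTALL)

    print(preprocess(r"Let $f > 0$ on $T$ and $f < 1$ elsewhere."))
    # -> Let $f &gt; 0$ on $T$ and $f &lt; 1$ elsewhere.
    ```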
    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeMar 23rd 2015

    The spacing might be a bug in MathML that iTeX (the proper name of the program that does our typesetting) has just omitted to fix, I don’t know.

    iTeX is never going to support all of (La)TeX because it doesn’t actually run TeX underneath; it just translates directly to MathML (that’s what enables it to avoid the nasty retypesetting problems that afflict mathjax). In particular, I expect TeX-isms like {\bf ...} are unlikely to be feasible; the LaTeX syntax \mathbf{...} is probably much easier for a program other than TeX to parse. However, I don’t really understand why it couldn’t support < and >, since the source goes through iTeX before it gets interpreted as HTML.

    • CommentRowNumber23.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 23rd 2015
    >iTeX is never going to support all of (La)TeX because it doesn’t actually run TeX underneath; it just translates directly to MathML (that’s what enables it to avoid the nasty retypesetting problems that afflict mathjax).

    Which makes me wonder why everybody who implements a math typesetting system for web (iTeX, MathJax)
    elects to write an ersatz-TeX instead of modifying TeX itself so that it outputs
    the resulting compiled _math list_ (an internal TeX concept; a bit similar to what MathML presents).

    Writing a new subroutine (or rather modifying the existing one that converts a math list to an hlist, another internal TeX concept)
    that outputs MathML or perhaps CSS directly certainly seems like a lot less work than writing a completely new ersatz-TeX.
    As a bonus one gets perfect support for TeX.
    But perhaps I don't understand something…

    >TeX-isms like {\bf ...} are unlikely to be feasible; the LaTeX syntax \mathbf{...} is probably much easier for a program other than TeX to parse.

    I don't see how positioning { before \bf as opposed to after can significantly alter the difficulty of parsing,
    and MathJax has no problems parsing it.
    (Calling it a “TeX-ism” is technically incorrect; \bf is a perfectly acceptable LaTeX syntax
    and LaTeX itself is just a macro package for TeX; the latter, by the way, does not define \bf as a primitive.)

    In any case, MathJax can output to MathML (the corresponding part is called NativeMML),
    so I wonder whether there is any reason for nLab to use iTeX,
    which is clearly inferior.
    • CommentRowNumber24.
    • CommentAuthorMike Shulman
    • CommentTimeMar 23rd 2015

    I meant “TeX-ism” as opposed to HTML, not as opposed to LaTeX. I suppose one could hardcode a syntax like {\bf ... }, but I think it would be a bit misleading to support that but not all of the effects of TeX’s grouping constructs.

    • CommentRowNumber25.
    • CommentAuthorUrs
    • CommentTimeMar 23rd 2015

    Dmitri, I quite share your disappointment with the software. Ultimately it is my fault: when I created the nLab a few years back I didn’t look around much to see which options there were. We have since long ago talked about migrating to a better system, such as for instance WorkingWiki, which does all we could ask for (MathML, true LaTeX including xypic and all, proper talk pages, and much more…) but nobody as yet has had the time and energy to make it happen.

    • CommentRowNumber26.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 23rd 2015
    • (edited Mar 23rd 2015)

    @Urs: I see that transition has already been discussed extensively, e.g., at http://nforum.mathforge.org/discussion/6093/changes-in-nlab-organization/.

    What is the current status of this and what remains to be done?

    Does the main problem lie in changing the syntax of nLab pages to something that can be understood by WorkingWiki?

    Myself, I want to write a small program to convert nLab pages into a properly hyperlinked TeX (the existing converter is not quite good enough, in particular the double bracket links don’t work at all) so that I can compile the nLab using TeX and view it offline when I need it. Such a parser could presumably be adapted to change the current syntax to something else.
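    As a very rough sketch of the double-bracket part of such a converter (the URL scheme and the \href output format are my assumptions, not a description of any existing tool; full URL-escaping is omitted):

    ```python
    # Sketch of a converter for nLab-style [[double bracket]] links into
    # \href commands for a hyperlinked TeX build.  The URL scheme and the
    # output format are assumptions for illustration only.
    import re

    NLAB_BASE = "https://ncatlab.org/nlab/show/"  # assumed URL scheme

    def bracket_link_to_href(match):
        target = match.group(1)
        # support the [[page name|displayed text]] form as well
        if "|" in target:
            page, text = target.split("|", 1)
        else:
            page, text = target, target
        url = NLAB_BASE + page.strip().replace(" ", "+")
        return r"\href{%s}{%s}" % (url, text.strip())

    def convert(source):
        return re.sub(r"\[\[(.+?)\]\]", bracket_link_to_href, source)

    print(convert("See [[ball]] and [[good open cover|good open covers]]."))
    # -> See \href{https://ncatlab.org/nlab/show/ball}{ball}
    #    and \href{https://ncatlab.org/nlab/show/good+open+cover}{good open covers}.
    ```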

    • CommentRowNumber27.
    • CommentAuthorDavidRoberts
    • CommentTimeMar 24th 2015

    @Dmitri - thanks. I think the nLab should then give a more modern citation than a passing remark by Milnor, and certainly not go back to Whitehead’s original paper, which is pretty scary. I will add something to this effect.

    • CommentRowNumber28.
    • CommentAuthorUrs
    • CommentTimeMar 24th 2015
    • (edited Mar 24th 2015)

    @Dmitri,

    this is the status: a few months back Adeel and myself had an extensive exchange with Lee Worden on setting up a WorkingWiki installation for the nLab. (See for instance here at Research Math Typesetting and Publishing in WorkingWiki). Lee was effectively ready to go for it, and we had some first scripts to test transporting nLab pages. But then we ran out of steam (personally I ran out of more than just steam, and I suppose Adeel is all busy himself, too).

    What it would take, I suppose, would be somebody dedicating himself to looking into the issue of writing scripts that systematically and faithfully transport the nLab entry database to that WorkingWiki platform.

    If you are interested, or if you know somebody who might be interested, let me know and we should get back into contact with Lee Worden.

    • CommentRowNumber29.
    • CommentAuthorRodMcGuire
    • CommentTimeMar 24th 2015
    • (edited Mar 24th 2015)

    @dmitri 23

    Which makes me wonder why everybody who implements a math typesetting system for web (iTeX, MathJax) elects to write an ersatz-TeX instead of modifying TeX itself so that it outputs the resulting compiled math list (an internal TeX concept; a bit similar to what MathML presents).

    Writing a new subroutine (or rather modifying the existing one that converts a math list to an hlist, another internal TeX concept)

    While TeX and derivatives like LaTeX have enormous success as a way to write academic papers, it is a fairly flawed system for representing text in a form that can be manipulated. TeX is basically a macro language; however, it has been extended to be a Turing-complete programming language. Manipulating pure macros is easy; manipulating fully general programs is impossible, and you have to deal with issues like halting.

    That is why many people are interested in subsets like iTeX that are much simpler and purer. Things like graphics languages written in TeX, while useful, are pretty much abominations when viewed from outside.

    @dmitri 26

    Myself, I want to write a small program to convert nLab pages into a properly hyperlinked TeX (the existing converter is not quite good enough, in particular the double bracket links don’t work at all) so that I can compile the nLab using TeX and view it offline when I need it.

    Why? You can save nLab pages on your computer and view them offline. If you want to send the pages to a place like FedEx Kinko’s to have them printed then it is best to convert them to PDF (Kinko’s can’t understand all formats) but there are many ways to do this.

    • CommentRowNumber30.
    • CommentAuthorTodd_Trimble
    • CommentTimeMar 24th 2015

    Getting back to the mathematics of the article ball for just a second: I’m not sure who wrote the addendum after theorem 1, but the suggested argument there doesn’t seem quite right to me. I’m not saying the statement there (that closures of open starlike regions are homeomorphic to disks if they are compact) is false, just that the suggested modified argument for it doesn’t seem to work, unless I’m missing something.

    What the proof of theorem 1 does in the third bullet point is show that no two boundary points can be positive multiples of each other. The suggested argument purports to prove that this still holds under the hypotheses of the more general statement. But it doesn’t. Consider the open region

    $U = \{(x, y) \in \mathbb{R}^2 \colon (x^2 + y^2 \lt 1 \;\wedge\; y \geq 0) \;\vee\; (x^2 + y^2 \lt 4 \;\wedge\; y \lt 0)\}$

    This $U$ is the interior of a closed region, consisting of the union $C$ of a closed northern half-disk centered at the origin, placed atop a larger closed southern half-disk. $U$ is starlike about the origin. The closure is the closed region $C$ itself. Notice that $(1, 0)$ and $(2, 0)$ are both boundary points of $C$.

    This is the type of example that had bothered me before hitting on the idea presented in the proof of the theorem (under the more restrictive convexity hypothesis).
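    To unwind this in symbols (just making the preceding paragraphs explicit): $(1,0)$ and $(2,0) = 2\cdot(1,0)$ are both boundary points, since the points $(1-\epsilon, 0)$ lie in $U$ and converge to $(1,0)$, the points $(2-\epsilon, -\epsilon^2)$ lie in $U$ (they satisfy $y \lt 0$ and $x^2 + y^2 \lt 4$) and converge to $(2,0)$, while neither limit point lies in $U$. So two boundary points of this compact closure are positive multiples of each other, which is exactly the configuration that the third bullet point in the proof of theorem 1 rules out under the convexity hypothesis.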

    • CommentRowNumber31.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 24th 2015

    While we are at it, I would like to point out that the article good open cover could be improved by amending the proof of Proposition 1 with the remark that each geodesically convex subset is mapped diffeomorphically to a star-shaped open subset of R^n via the exponential map based at some interior point.

    Indeed, the Gauss lemma shows that the tangent map of the exponential map is invertible, and the exponential map is injective by definition of geodesic convexity (two different points x,y∈T_p(M) such that exp_p(x)=exp_p(y)=q would give two different geodesics from p to q, contradicting uniqueness). Hence the exponential map is a diffeomorphism, as desired.

    With this remark one can remove Proposition 2 and its somewhat convoluted proof with injectivity and convexity estimates.
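    In slightly more detail (this just organizes the argument in the preceding two paragraphs; the phrasing is mine): for $p$ in the geodesically convex open set $U$, let $\tilde U \subset T_p M$ be the set of vectors $v$ such that the geodesic $t \mapsto \exp_p(t v)$ stays in $U$ for all $t \in [0,1]$. Then $\tilde U$ is star-shaped about the origin by construction, and geodesic convexity says exactly that $\exp_p$ maps $\tilde U$ onto $U$. The differential of $\exp_p$ is invertible there (this is the Gauss-lemma step referred to above), so $\exp_p$ is a local diffeomorphism, and it is injective because $\exp_p(x) = \exp_p(y) = q$ with $x \neq y$ would produce two distinct geodesics from $p$ to $q$, contradicting the uniqueness built into geodesic convexity. An injective local diffeomorphism onto $U$ is a diffeomorphism, so $U$ is diffeomorphic to the star-shaped set $\tilde U \subset T_p M \cong \mathbb{R}^n$.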

    • CommentRowNumber32.
    • CommentAuthorUrs
    • CommentTimeMar 24th 2015
    • (edited Mar 24th 2015)

    Which addendum? The discussion added in rev 3?

    • CommentRowNumber33.
    • CommentAuthorUrs
    • CommentTimeMar 24th 2015

    Dmitri, feel invited to add your alternative argument. Maybe instead of removing the previous one, you could keep it. Feel invited to advertize it as “convoluted”, if you wish to.

    • CommentRowNumber34.
    • CommentAuthorTodd_Trimble
    • CommentTimeMar 24th 2015
    • (edited Mar 24th 2015)

    Urs #32: By addendum, I meant the paragraph after the proof of theorem 1, where it says that by slightly modifying the proof, the argument can be extended to open star-shaped regions with compact closure. It’s not in revision 3.

    This is very odd. The offending paragraph seems to have been introduced in rev 9, and the revision indicates that it was written by me! I find it incredible that I could write that, as the argument doesn’t even make sense to me (mathematically, and even almost grammatically). I just don’t recognize myself at all in it.

    Re #33: I agree.

    • CommentRowNumber35.
    • CommentAuthorDavidRoberts
    • CommentTimeMar 25th 2015

    @Dmitri - and I even told you on MO that there is a proof in do Carmo! Memory…

    • CommentRowNumber36.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 25th 2015

    @Urs: The proposed proof is a strict subset of the existing proof of Proposition 2; I merely observe that some constructions are unnecessary.

    With this in mind, I wonder if it might look weird to have two proofs of the same proposition, where the second proof contains the first proof.

    • CommentRowNumber37.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 25th 2015

    @RodMcGuire:

    it is a fairly flawed system for representing text in a form that can be manipulated.

    Manipulated for which purpose?

    TeX is basically a macro language however it has been extended to be a Turing Complete programming language.

    True, but the same can be said about PostScript, HTML+CSS (without JavaScript), PDF, all of which are Turing-complete despite being much more primitive languages than TeX. And Markdown is simply too primitive to be compared to TeX.

    At the level of convenience provided by TeX it is impossible not to be Turing-complete.

    That is why many people are interested in subsets like iTeX that are much simpler and purer. Things like graphics languages written in TeX, while useful, are pretty much abominations when viewed from outside.

    I don’t think this is the reason for the existence of iTeX. Rather, software capable of translating TeX formulas into reasonable-looking HTML was needed; hence iTeX was created, because existing translators were incapable of producing (decent-looking) HTML.

    Why? You can save nLab pages on your computer and view them offline. If you want to send the pages to a place like FedEx Kinko’s to have them printed then it is best to convert them to PDF (Kinko’s can’t understand all formats) but there are many ways to do this.

    I already have a local copy of the entire nLab repository on my hard drive. What is desirable, however, is a way to view nLab with decent-looking formulas (i.e., not iTeX). Also, the only way to create PDF with good-looking formulas is via TeX.

    • CommentRowNumber38.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 25th 2015

    I added the new proof to good open cover. There are now two proofs of the same statement there.

    • CommentRowNumber39.
    • CommentAuthorUrs
    • CommentTimeMar 25th 2015

    Todd, of course in principle anyone could sign with your name. I only now realize the following strange behaviour of the software: on each current page it shows the IP address of the last author, after the name that the author gave (on the very bottom of the entry), but on the pages in the history it shows only the name, not the IP.

    • CommentRowNumber40.
    • CommentAuthorTodd_Trimble
    • CommentTimeMar 25th 2015

    That’s amusing. I think I lean somewhat more toward believing I wrote that crap over believing someone was impersonating me!

    Additionally, I seem to have been the one who introduced the statement (in revision 7, I think) that my doppelgänger was trying to prove. More weirdness. I suppose no one will object if I simply remove all of it.

    • CommentRowNumber41.
    • CommentAuthorMike Shulman
    • CommentTimeMar 25th 2015

    HTML+CSS is Turing complete? How does that work?

    • CommentRowNumber42.
    • CommentAuthorZhen Lin
    • CommentTimeMar 25th 2015

    Well, I suppose that depends on what you mean by “Turing complete”. Apparently you can implement rule 110 in HTML+CSS, but only on a finite space and for finitely many steps. By any strict definition of Turing completeness, this would not count – it’s obviously a finite state machine! – but the same criticism would apply to any genuine computer programming language.

    • CommentRowNumber43.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 25th 2015
    • (edited Mar 25th 2015)

    @Mike Shulman: Here is an implementation of a universal computer in CSS: http://eli.fox-epste.in/rule110-full.html.

    • CommentRowNumber44.
    • CommentAuthorRodMcGuire
    • CommentTimeMar 25th 2015
    • (edited Mar 25th 2015)

    Well, I suppose that depends on what you mean by “Turing complete”. Apparently you can implement rule 110 in HTML+CSS, but only on a finite space and for finitely many steps. By any strict definition of Turing completeness, this would not count – it’s obviously a finite state machine!

    As I understand it you can use CSS to compute the next state in a cellular automaton - not compute the entire sequence of states. It is impossible to give a browser CSS that doesn’t terminate, while a TeX program may not terminate.

    • CommentRowNumber45.
    • CommentAuthorMike Shulman
    • CommentTimeMar 25th 2015

    That’s cute, but I don’t think it’s Turing-completeness. The difference between primitive recursive functions and general recursive functions is precisely the existence of unbounded search. Computing the next state of a Turing machine is a primitive recursive function, but that doesn’t mean that primitive-recursion is Turing complete. To be Turing-complete, a language needs to be able to perform the unbounded search itself, without a user present to say “now try the next number”, “now try the next number”, “now try the next number”, etc. This might seem like a trivial distinction, but I think it’s necessary if “Turing complete” is to have any nonvacuous meaning.

    For a real-world example, consider a total functional programming language like Gallina (the vernacular language of Coq). If “Turing complete” is to have any meaning, then Gallina must not be Turing complete, since every function you can write in it is total. But you can do the same thing with it that was done there with CSS, encoding a general recursive “function” and allowing the user to say “now evaluate the next step”, “now evaluate the next step”, “now evaluate the next step”, until an answer is found or the user gets bored.

    Zhen is right that any physical implementation of a programming language must be bounded in time and space. But that just means that Turing-completeness is a property of a mathematical abstraction of a programming language rather than a physical implementation of it. Even a mathematical abstraction of CSS could not perform an unbounded search on its own.
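    To make the distinction concrete, here is a small illustration in ordinary code (my own example; it says nothing about the CSS construction linked above): computing the next state of rule 110 is a bounded, always-terminating operation, while what a Turing-complete language additionally supplies is the unbounded loop around it.

    ```python
    # Rule 110 on a finite row with fixed boundary cells set to 0.
    RULE = 110

    def step(cells):
        # the "next state" function: bounded, clearly terminating
        padded = [0] + cells + [0]
        out = []
        for i in range(1, len(padded) - 1):
            neighborhood = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
            out.append((RULE >> neighborhood) & 1)
        return out

    def run_until(cells, predicate, max_steps=None):
        # the unbounded search: keep stepping until `predicate` holds.
        # A Turing-complete language can do this with max_steps=None (and may
        # never halt); the CSS encoding needs the user to supply each step.
        n = 0
        while not predicate(cells):
            if max_steps is not None and n >= max_steps:
                return None
            cells = step(cells)
            n += 1
        return n, cells

    # example: start from a single live cell and run until the leftmost cell fires
    start = [0] * 20 + [1]
    print(run_until(start, lambda c: c[0] == 1, max_steps=1000))
    ```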

    • CommentRowNumber46.
    • CommentAuthorTodd_Trimble
    • CommentTimeJun 25th 2016

    My eyes have just run across what looks to be a nice explicit online proof of the fact that open star-shaped domains are diffeomorphic to Euclidean space. But I’m having difficulty figuring out how to incorporate it into the article ball, particularly whether this proof should supplant the one already there, or how prominently it should be mentioned.

    • CommentRowNumber47.
    • CommentAuthorDavidRoberts
    • CommentTimeJun 26th 2016

    Or just as an alternative proof in a new section. Cutting and pasting the MO answer should be ok under the Creative Commons license as long as a clear attribution is given.

    • CommentRowNumber48.
    • CommentAuthorUrs
    • CommentTimeMar 15th 2017

    I have added pointer to that MO proof to the entry, here.

    • CommentRowNumber49.
    • CommentAuthorDmitri Pavlov
    • CommentTimeFeb 13th 2018
    As pointed out on MathOverflow, Lemma IV.6.9 in Demailly's book “Complex analytic and differential geometry”
    offers a simpler (and shorter) proof of the existence of good open covers,
    which does not involve Riemannian metrics or smooth triangulations.
    (One still has to prove that convex open subsets of R^n are diffeomorphic to R^n, though.)
    • CommentRowNumber50.
    • CommentAuthorDavidRoberts
    • CommentTimeFeb 13th 2018

    @Dmitri - link to the Q?

    • CommentRowNumber51.
    • CommentAuthorDmitri Pavlov
    • CommentTimeFeb 13th 2018
    • (edited Feb 13th 2018)
    • CommentRowNumber52.
    • CommentAuthorDavidRoberts
    • CommentTimeFeb 13th 2018

    Thanks. Here are online links:

    Weil: Sur les théorèmes de de Rham. Demailly: Lemma IV.6.9 (page 214)

    • CommentRowNumber53.
    • CommentAuthorDmitri Pavlov
    • CommentTimeDec 29th 2019

    Copy-pasted the Gonnord-Tosel proof from MathOverflow.

    diff, v37, current