Thanks. Here are online links:

Weil, “Sur les théorèmes de de Rham”; Demailly, Lemma IV.6.9 (page 214).

https://mathoverflow.net/questions/292720/good-covers-of-complex-analytic-spaces

@Dmitri - link to the Q?

offers a simpler (and shorter) proof of the existence of good open covers, which does not involve Riemannian metrics or smooth triangulations.

(One still has to prove that convex open subsets of R^n are diffeomorphic to R^n, though.)

I have added a pointer to that MO proof to the entry, here.

Or just as an alternative proof in a new section. Cutting and pasting the MO answer should be OK under the Creative Commons license, as long as clear attribution is given.

My eyes have just run across what looks to be a nice explicit online proof of the fact that open star-shaped domains are diffeomorphic to Euclidean space. But I’m having difficulty figuring out how to incorporate it into the article ball, particularly whether this proof should supplant the one already there, or how prominently it should be mentioned.

That’s cute, but I don’t think it’s Turing-completeness. The difference between primitive recursive functions and general recursive functions is *precisely* the existence of unbounded search. Computing the next state of a Turing machine is a primitive recursive function, but that doesn’t mean that primitive recursion is Turing-complete. To be Turing-complete, a language needs to be able to perform the unbounded search *itself*, without a user present to say “now try the next number”, “now try the next number”, “now try the next number”, etc. This might seem like a trivial distinction, but I think it’s necessary if “Turing complete” is to have any nonvacuous meaning.
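The distinction being drawn here can be made concrete in a short sketch (a hypothetical illustration, not from the thread): a primitive-recursive-style search must be handed its bound in advance, while the μ-operator of general recursion searches with no bound at all.

```python
def bounded_search(pred, bound):
    """Primitive-recursive style: the number of steps is fixed up front.
    Returns the least n < bound with pred(n), or None if there is none."""
    for n in range(bound):
        if pred(n):
            return n
    return None  # the search gives up at the bound

def mu(pred):
    """General-recursive mu-operator: unbounded search.
    May run forever if pred is never true -- exactly what a total
    language like Gallina rules out."""
    n = 0
    while not pred(n):
        n += 1
    return n

# The user-in-the-loop workaround described in the thread: the *language*
# only ever runs bounded_search; the human keeps supplying larger bounds.
print(bounded_search(lambda n: n * n > 50, 5))   # None: bound too small
print(bounded_search(lambda n: n * n > 50, 10))  # 8
print(mu(lambda n: n * n > 50))                  # 8, found unassisted
```

The point is that only `mu` makes the language itself responsible for the unbounded part of the search.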

For a real-world example, consider a total functional programming language like Gallina (the vernacular language of Coq). If “Turing complete” is to have any meaning, then Gallina must not be Turing complete, since every function you can write in it is total. But you can do the same thing with it that was done there with CSS, encoding a general recursive “function” and allowing the user to say “now evaluate the next step”, “now evaluate the next step”, “now evaluate the next step”, until an answer is found or the user gets bored.

Zhen is right that any physical implementation of a programming language must be bounded in time and space. But that just means that Turing-completeness is a property of a mathematical abstraction of a programming language rather than a physical implementation of it. Even a mathematical abstraction of CSS could not perform an unbounded search on its own.

> Well, I suppose that depends on what you mean by “Turing complete”. Apparently you can implement rule 110 in HTML+CSS, but only on a finite space and for finitely many steps. By any strict definition of Turing completeness, this would not count – it’s obviously a finite state machine!

As I understand it, you can use CSS to compute the next state in a cellular automaton - not the entire sequence of states. It is impossible to give a browser CSS that doesn’t terminate, while a TeX program may not terminate.
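A single Rule 110 update of the kind described - next state only, no unbounded iteration - is easy to write down explicitly (a plain Python sketch, not the CSS encoding):

```python
# Rule 110 lookup: 110 = 0b01101110; bit k of the rule number gives the
# next value of a cell whose (left, center, right) neighborhood encodes k.
RULE = 110

def rule110_step(cells):
    """Compute one step of Rule 110 on a finite row with fixed 0 boundaries."""
    padded = [0] + list(cells) + [0]
    out = []
    for i in range(1, len(padded) - 1):
        k = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        out.append((RULE >> k) & 1)
    return out

row = [0, 0, 0, 1, 0]          # a single live cell
print(rule110_step(row))       # [0, 0, 1, 1, 0]
```

Iterating this to an *unbounded* depth is exactly the part a terminating stylesheet cannot do on its own.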

@Mike Shulman: Here is an implementation of a universal computer in CSS: http://eli.fox-epste.in/rule110-full.html.

Well, I suppose that depends on what you mean by “Turing complete”. Apparently you can implement rule 110 in HTML+CSS, but only on a finite space and for finitely many steps. By any strict definition of Turing completeness, this would not count – it’s obviously a finite state machine! – but the same criticism would apply to any genuine computer programming language.

HTML+CSS is Turing complete? How does *that* work?

That’s amusing. I think I lean somewhat more toward believing I wrote that crap over believing someone was impersonating me!

Additionally, I seem to have been the one who introduced the statement (in revision 7, I think) that my doppelgänger was trying to prove. More weirdness. I suppose no one will object if I simply remove all of it.

Todd, of course in principle anyone could sign with your name. I only now realize the following strange behaviour of the software: on each current page it shows the IP address of the last author, after the name that author gave (at the very bottom of the entry), but on the pages in the history it shows only the name, not the IP.

I added the new proof to good open cover. There are now two proofs of the same statement there.

@RodMcGuire:

> it is a fairly flawed system for representing text in a form that can be manipulated.

Manipulated for which purpose?

> TeX is basically a macro language; however, it has been extended to be a Turing-complete programming language.

True, but the same can be said about PostScript, HTML+CSS (without JavaScript), and PDF, all of which are Turing-complete despite being much more primitive languages than TeX. And Markdown is simply too primitive to be compared to TeX.

At the level of convenience provided by TeX it is impossible not to be Turing-complete.

> That is why many people are interested in subsets like iTeX that are much simpler and purer. Things like graphics languages written in TeX, while useful, are pretty much abominations when viewed from outside.

I don’t think this is the reason for the existence of iTeX. Rather, software capable of translating TeX formulas into reasonable-looking HTML was needed; iTeX was created because existing translators were incapable of producing (decent-looking) HTML.

> Why? You can save nLab pages on your computer and view them offline. If you want to send the pages to a place like FedEx Kinko’s to have them printed then it is best to convert them to PDF (Kinko’s can’t understand all formats) but there are many ways to do this.

I already have a local copy of the entire nLab repository on my hard drive. What is desirable, however, is a way to view nLab with decent-looking formulas (i.e., not iTeX). Also, the only way to create PDF with good-looking formulas is via TeX.

@Urs: The proposed proof is a strict subset of the existing proof of Proposition 2; I merely observe that some constructions are unnecessary.

With this in mind, I wonder if it might look weird to have two proofs of the same proposition, where the second proof contains the first proof.

@Dmitri - and I even told you on MO that there is a proof in do Carmo! Memory…

Urs #32: By addendum, I meant the paragraph after the proof of theorem 1, where it says that by slightly modifying the proof, the argument can be extended to open star-shaped regions with compact closure. It’s not in revision 3.

This is very odd. The offending paragraph seems to have been introduced in rev 9, and the revision indicates that it was written by me! I find it incredible that I could write that, as the argument doesn’t even make sense to me (mathematically, and even almost grammatically). I just don’t recognize myself at all in it.

Re #33: I agree.

Dmitri, feel invited to add your alternative argument. Maybe instead of removing the previous one, you could keep it. Feel invited to advertise it as “convoluted”, if you wish to.

Which addendum? The discussion added in rev 3?

While we are at it, I would like to point out that the article good open cover could be improved by amending the proof of Proposition 1 with the remark that a geodesically convex subset is mapped diffeomorphically onto a star-shaped open subset of R^n by the exponential map based at some interior point.

Indeed, the Gauss lemma shows that the tangent map of the exponential map is invertible, and the exponential map is injective by the definition of geodesic convexity (two distinct points x, y ∈ T_p(M) with exp_p(x) = exp_p(y) = q would give two distinct geodesics from p to q). Hence the exponential map is a diffeomorphism, as desired.
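In symbols, the two facts being combined here are (a restatement of the argument above, in the same notation):

```latex
% Gauss lemma: the differential of the exponential map is invertible,
% so exp_p is a local diffeomorphism on exp_p^{-1}(U):
d(\exp_p)_x \colon T_x(T_p M) \to T_{\exp_p(x)} M
  \quad \text{is invertible for all } x \in \exp_p^{-1}(U).

% Geodesic convexity gives injectivity: distinct preimages x, y with the
% same image q would yield two distinct geodesics from p to q:
\exp_p(x) = \exp_p(y) \;\Longrightarrow\; x = y.

% An injective local diffeomorphism onto its image is a diffeomorphism,
% so exp_p maps the open star-shaped set exp_p^{-1}(U) \subseteq T_p M
% diffeomorphically onto U.
```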

With this remark one can remove Proposition 2 and its somewhat convoluted proof with injectivity and convexity estimates.

Getting back to the mathematics of the article ball for just a second: I’m not sure who wrote the addendum after theorem 1, but the suggested argument there doesn’t seem quite right to me. I’m not saying the *statement* there (that closures of open starlike regions are homeomorphic to disks if they are compact) is false, just that the suggested modified argument for it doesn’t seem to work, unless I’m missing something.

What the proof of theorem 1 does in the third bullet point is show that no two boundary points can be positive multiples of each other. The suggested argument purports to prove that this still holds under the hypotheses of the more general statement. But it doesn’t. Consider the open region

$U = \{(x, y) \in \mathbb{R}^2 \;:\; (x^2 + y^2 \lt 1 \;\wedge\; y \geq 0) \;\vee\; (x^2 + y^2 \lt 4 \;\wedge\; y \lt 0)\}$

This $U$ is the interior of a closed region, consisting of the union $C$ of a closed northern half-disk centered at the origin, placed atop a larger closed southern half-disk. $U$ is starlike about the origin. The closure is the closed region $C$ itself. Notice that $(1, 0)$ and $(2, 0)$ are both boundary points of $C$.

This is the type of example that had bothered me before hitting on the idea presented in the proof of the theorem (under the more restrictive convexity hypothesis).
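The claim that $(1, 0)$ and $(2, 0)$ are both boundary points can be sanity-checked numerically (a throwaway Python sketch, not part of the original argument; `in_U` transcribes the defining formula for $U$):

```python
def in_U(x, y):
    """Membership test for U, transcribed from the defining formula."""
    return (x*x + y*y < 1 and y >= 0) or (x*x + y*y < 4 and y < 0)

def on_boundary(x, y, eps=1e-6):
    """Crude boundary test: a small neighborhood of a boundary point
    meets both U and its complement; probe a grid of nearby points."""
    probes = [(x + dx, y + dy) for dx in (-eps, 0, eps) for dy in (-eps, 0, eps)]
    inside = any(in_U(px, py) for px, py in probes)
    outside = any(not in_U(px, py) for px, py in probes)
    return inside and outside

# (2, 0) = 2 * (1, 0): two boundary points of C that are positive
# multiples of each other, as claimed.
print(on_boundary(1, 0), on_boundary(2, 0))  # True True
```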

@dmitri 23

> Which makes me wonder why everybody who implements a math typesetting system for the web (iTeX, MathJax) elects to write an ersatz-TeX instead of modifying TeX itself so that it outputs the resulting compiled math list (an internal TeX concept; a bit similar to what MathML presents). Writing a new subroutine (or rather modifying the existing one that converts a math list to an hlist, another internal TeX concept) […]

While TeX and derivatives like LaTeX have had enormous success as a way to write academic papers, it is a fairly flawed system for representing text in a form that can be manipulated. TeX is basically a macro language; however, it has been extended to be a Turing-complete programming language. Manipulating pure macros is easy; manipulating fully general programs is impossible, and you have to deal with issues like halting.

That is why many people are interested in subsets like iTeX that are much simpler and purer. Things like graphics languages written in TeX, while useful, are pretty much abominations when viewed from outside.

@dmitri 26

> Myself, I want to write a small program to convert nLab pages into a properly hyperlinked TeX (the existing converter is not quite good enough; in particular, the double bracket links don’t work at all) so that I can compile the nLab using TeX and view it offline when I need it.

Why? You can save nLab pages on your computer and view them offline. If you want to send the pages to a place like FedEx Kinko’s to have them printed then it is best to convert them to PDF (Kinko’s can’t understand all formats) but there are many ways to do this.
