Re #56: “Theorem 5.3.2. Every manifold admits a good cover.”

So this reference should be at good open cover instead.

But now it reads “Exercises … in Theorem …”, which seems odd.

If the previous (and still remaining) pointer to Theorem 5.3.2 seems unsatisfactory, how about “Theorem 5.3.2 and below”?

Corrected the citation to Guillemin–Haine:

“A series of exercises covering the statement appears as Exercises 5.3.i–5.3.iv in…”

The article currently claims that Guillemin–Haine prove that open star-shaped regions are diffeomorphic to a ball.

In fact, Guillemin–Haine never state this fact. They only state a weaker one, namely that convex open subsets of R^n are diffeomorphic to R^n, as Proposition 5.3.3, which is left without proof.

They do prove, modulo Proposition 5.3.3, that any smooth manifold admits a differentiable good open cover.

Copy-pasted the Gonnord–Tosel proof from MathOverflow.

Thanks. Here are online links:

Weil: Sur les théorèmes de de Rham. Demailly: Lemma IV.6.9 (page 214)

https://mathoverflow.net/questions/292720/good-covers-of-complex-analytic-spaces

@Dmitri - link to the Q?

An answer on MathOverflow offers a simpler (and shorter) proof of the existence of good open covers,

which does not involve Riemannian metrics or smooth triangulations.

(One still has to prove that convex open subsets of R^n are diffeomorphic to R^n, though.)
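For the record, a sketch of the simplest instance of that missing lemma (this is a standard fact, not a formula taken from Guillemin–Haine): the open unit ball $B^n$ is diffeomorphic to $\mathbb{R}^n$ via an explicit radial rescaling, and one can even write the inverse in closed form:

```latex
f \colon B^n \to \mathbb{R}^n,
\qquad
f(x) = \frac{x}{1 - \lvert x \rvert^2},
\qquad
f^{-1}(y) = \frac{2y}{1 + \sqrt{1 + 4\lvert y \rvert^2}}.
% Smoothness: f(x) = x \, g(\lvert x \rvert^2) with g(s) = (1-s)^{-1} smooth on [0,1).
% Bijectivity: r \mapsto r/(1-r^2) is a strictly increasing bijection
% [0,1) \to [0,\infty), since its derivative (1+r^2)/(1-r^2)^2 is positive.
```

The general convex case needs a genuinely harder argument, since the radial “distance to the boundary” function of an arbitrary convex open set need not be smooth.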

I have added a pointer to that MO proof to the entry, here.

Or just as an alternative proof in a new section. Cutting and pasting the MO answer should be OK under the Creative Commons license as long as clear attribution is given.

My eyes have just run across what looks to be a nice explicit online proof of the fact that open star-shaped domains are diffeomorphic to Euclidean space. But I’m having difficulty figuring out how to incorporate it into the article ball, particularly whether this proof should supplant the one already there, or how prominently it should be mentioned.

That’s cute, but I don’t think it’s Turing completeness. The difference between primitive recursive functions and general recursive functions is *precisely* the existence of unbounded search. Computing the next state of a Turing machine is a primitive recursive function, but that doesn’t mean that primitive recursion is Turing complete. To be Turing-complete, a language needs to be able to perform the unbounded search *itself*, without a user present to say “now try the next number”, “now try the next number”, “now try the next number”, etc. This might seem like a trivial distinction, but I think it’s necessary if “Turing complete” is to have any nonvacuous meaning.
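To illustrate the distinction, here is a minimal Python sketch (the function names are mine, purely illustrative): a primitive-recursive-style search must be bounded in advance, while general recursion is exactly the ability to run the unbounded μ-operator search itself.

```python
def bounded_search(p, bound):
    """Primitive-recursive-style search: always terminates,
    but only finds witnesses below a caller-supplied bound."""
    for n in range(bound):
        if p(n):
            return n
    return None  # no witness below the bound

def mu(p):
    """Unbounded (mu-operator) search, the hallmark of general
    recursion: may loop forever if p is never satisfied."""
    n = 0
    while not p(n):
        n += 1
    return n

print(bounded_search(lambda n: n * n >= 50, bound=5))  # None
print(mu(lambda n: n * n >= 50))                       # 8
```

The `while` loop in `mu` is the step a primitive recursive formalism cannot express on its own; a user re-invoking `bounded_search` with ever larger bounds is playing exactly the role described above.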

For a real-world example, consider a total functional programming language like Gallina (the specification language of Coq). If “Turing complete” is to have any meaning, then Gallina must not be Turing complete, since every function you can write in it is total. But you can do the same thing with it that was done there with CSS: encode a general recursive “function” and let the user say “now evaluate the next step”, “now evaluate the next step”, “now evaluate the next step”, until an answer is found or the user gets bored.
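That encoding pattern can be sketched in Python (the Collatz map stands in for an arbitrary general recursive computation; the names and the fuel convention are illustrative, not anyone’s official API): each step is total, and the unbounded iteration must be supplied from outside via a caller-chosen budget.

```python
def step(n):
    """One total step of the Collatz map; trivially terminating."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def run(n, fuel):
    """Total runner: iterate `step` at most `fuel` times.
    Returns 1 on convergence, or None if the budget runs out.
    The *unbounded* loop is exactly what a total language
    cannot express; the user must keep raising `fuel`."""
    for _ in range(fuel):
        if n == 1:
            return 1
        n = step(n)
    return None

print(run(6, fuel=20))  # 1  (6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1)
print(run(6, fuel=3))   # None (budget exhausted)
```

This “fuel” idiom is in fact how general recursion is commonly encoded in total languages: the function is total in the extra budget argument, and only an external agent can keep retrying with more fuel.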

Zhen is right that any physical implementation of a programming language must be bounded in time and space. But that just means that Turing-completeness is a property of a mathematical abstraction of a programming language rather than a physical implementation of it. Even a mathematical abstraction of CSS could not perform an unbounded search on its own.

> Well, I suppose that depends on what you mean by “Turing complete”. Apparently you can implement rule 110 in HTML+CSS, but only on a finite space and for finitely many steps. By any strict definition of Turing completeness, this would not count – it’s obviously a finite state machine!

As I understand it, you can use CSS to compute the next state of a cellular automaton, not the entire sequence of states. It is impossible to give a browser CSS that doesn’t terminate, while a TeX program may not terminate.

@Mike Shulman: Here is an implementation of a universal computer in CSS: http://eli.fox-epste.in/rule110-full.html.

Well, I suppose that depends on what you mean by “Turing complete”. Apparently you can implement rule 110 in HTML+CSS, but only on a finite space and for finitely many steps. By any strict definition of Turing completeness, this would not count – it’s obviously a finite state machine! – but the same criticism would apply to any genuine computer programming language.

HTML+CSS is Turing complete? How does *that* work?

That’s amusing. I think I lean somewhat more toward believing I wrote that crap over believing someone was impersonating me!

Additionally, I seem to have been the one who introduced the statement (in revision 7, I think) that my doppelgänger was trying to prove. More weirdness. I suppose no one will object if I simply remove all of it.

Todd, of course in principle anyone could sign with your name. Only now do I notice the following strange behaviour of the software: on each current page it shows the IP address of the last author, after the name that the author gave (at the very bottom of the entry), but on the pages in the history it shows only the name, not the IP.

I added the new proof to good open cover. There are now two proofs of the same statement there.

@RodMcGuire:

> it is a fairly flawed system for representing text in a form that can be manipulated.

Manipulated for which purpose?

> TeX is basically a macro language however it has been extended to be a Turing Complete programming language.

True, but the same can be said about PostScript, HTML+CSS (without JavaScript), and PDF, all of which are Turing-complete despite being much more primitive languages than TeX. And Markdown is simply too primitive to be compared to TeX.

> At the level of convenience provided by TeX it is impossible not to be Turing-complete.

> That is why many people are interested in subsets like iTeX that are much simpler and purer. Things like graphics languages written in TeX, while useful, are pretty much abominations when viewed from outside.

I don’t think this is the reason for the existence of iTeX. Rather, software capable of translating TeX formulas into reasonable-looking HTML was needed, and iTeX was created because existing translators were incapable of producing decent-looking HTML.

> Why? You can save nLab pages on your computer and view them offline. If you want to send the pages to a place like FedEx Kinko’s to have them printed then it is best to convert them to PDF (Kinko’s can’t understand all formats) but there are many ways to do this.

I already have a local copy of the entire nLab repository on my hard drive. What is desirable, however, is a way to view nLab with decent-looking formulas (i.e., not iTeX). Also, the only way to create PDF with good-looking formulas is via TeX.

@Urs: The proposed proof is a strict subset of the existing proof of Proposition 2; I merely observe that some constructions are unnecessary.

With this in mind, I wonder if it might look weird to have two proofs of the same proposition, where the second proof contains the first proof.

@Dmitri - and I even told you on MO that there is a proof in do Carmo! Memory…

Urs #32: By “addendum”, I meant the paragraph after the proof of Theorem 1, where it says that by slightly modifying the proof, the argument can be extended to open star-shaped regions with compact closure. It’s not in revision 3.

This is very odd. The offending paragraph seems to have been introduced in rev 9, and the revision indicates that it was written by me! I find it incredible that I could write that, as the argument doesn’t even make sense to me (mathematically, and even almost grammatically). I just don’t recognize myself at all in it.

Re #33: I agree.
