Suddenly, it works again now.

Hmm, nothing has changed on the server side; as I wrote, the server has been running since July. There doesn’t seem to be much wrong with your script. Could you try running the two curl commands separately? And could you also share the command you are using to create the Base64-encoded credentials? (Basic auth uses Base64 encoding, not a hash.) Alternatively, you could use curl -u instead of constructing the Authorization header yourself.
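For reference, the Basic auth value is just the Base64 encoding of user:password; a minimal sketch, using made-up placeholder credentials:

```shell
# Basic auth sends Base64("user:password"); it is an encoding, not a hash.
# "alice" / "secret" are placeholder credentials for illustration only.
token=$(printf '%s' 'alice:secret' | base64)
echo "Authorization: Basic $token"
# Equivalently, let curl build the header itself:
#   curl -u 'alice:secret' https://ncatlab.org/sqldump
```

If the value produced this way differs from the one in the script, that would explain the authentication failure.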

Here is my script; it was working just fine for a while, then it suddenly stopped working, so something must have changed on the server side. I have replaced the authorization token with ???.
#!/usr/bin/bash
# Request a new SQL dump; the server responds with the dump's ID.
SQL_DUMP_ID="$(curl -X POST -H "Authorization: Basic ???==" https://ncatlab.org/sqldump)"
# Download the dump by ID into a timestamped .sql file.
curl -H "Authorization: Basic ???==" "https://ncatlab.org/sqldump/$SQL_DUMP_ID" > "$(date +"%Y-%m-%dT%H:%M:%S").sql"

Hi Dmitri, I checked just now, and the sqldump server has been up since July, and my automated job has been able to make backups over the last few days. From the logs, it looks like something may have been wrong in your second API call. Recall that there are two: first a POST, then a GET. The GET request seemed to be lacking authentication details.

It was working a couple of days ago. Probably the nLab server restarted, which currently takes the sqldump server down until I notice! I’ll bring it back up when I am next at my computer.

What happened to https://ncatlab.org/sqldump? It appears to be unresponsive currently.

Hi Dmitri, apologies for omitting to reply to this at the time. You have access to make an SQL dump, and you can obtain the source code that way. E.g. use the 'webs' table to find the web_id of the CatLab, and then just do a SELECT on the pages and revisions tables.
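In case it helps, here is a sketch of those two steps against a throwaway in-memory database. The schema below (an Instiki-like webs/pages/revisions layout) and every value in it are assumptions for illustration, not the nLab's actual tables:

```shell
# Sketch only: the table and column names below are guesses at an
# Instiki-like schema, and the rows are made-up sample data.
source=$(python3 - <<'PY'
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE webs (id INTEGER, address TEXT);
CREATE TABLE pages (id INTEGER, web_id INTEGER, name TEXT);
CREATE TABLE revisions (id INTEGER, page_id INTEGER, content TEXT);
INSERT INTO webs VALUES (7, 'joyalscatlab');
INSERT INTO pages VALUES (1, 7, 'Weak factorisation systems');
INSERT INTO revisions VALUES (1, 1, '...page source...');
""")

# Step 1: find the web_id of the CatLab in the 'webs' table.
wid = db.execute(
    "SELECT id FROM webs WHERE address = 'joyalscatlab'").fetchone()[0]

# Step 2: SELECT on the pages and revisions tables to pull the latest
# revision of the page's source.
row = db.execute("""
    SELECT r.content FROM revisions r
    JOIN pages p ON r.page_id = p.id
    WHERE p.web_id = ? AND p.name = 'Weak factorisation systems'
    ORDER BY r.id DESC LIMIT 1
""", (wid,)).fetchone()
print(row[0])
PY
)
echo "$source"
```

The same two queries, adapted to the real column names, should work directly against the SQL dump once it is loaded into a local database.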

How can I retrieve the source code of a page from Joyal’s Cat Lab, like https://ncatlab.org/joyalscatlab/published/Weak+factorisation+systems?
It’s protected, which seems to make it impossible to retrieve the source code in the usual way.
I now have a candidate script for fixing the codecogs stuff, but have no way of testing it.

Yes, search and replace should be possible, good idea! I don’t have time at the moment, but will give it a go when I get the chance. If you write a script, Dmitri, I am happy to try to use it (though I will in any case need to write some more logic around it).

Which is the standard tool for this decoding?

Re #35: Of course, the URLs must be properly URL-decoded before search-replace.
There are standard tools to accomplish this; I could write a simple script if somebody is willing to run it on the source files.
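For instance, Python's urllib is one such standard tool; the percent-encoded string below is a made-up sample of the general shape, not taken from an actual page:

```shell
# URL-decode a percent-encoded LaTeX fragment (sample input, for illustration).
decoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))' \
  'a%20%5Cto%20b')
echo "$decoded"
```

Any script doing the search-replace would apply the same decoding to each extracted URL query string before substituting it back into the page source.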

The correct version would be
\begin{xymatrix} c\ar[r] & a \ar@<.5ex>[r]\ar@<-.5ex>[r]& a' \end{xymatrix}
See for example Beck’s theorem (zoranskoda) and compare the corrected source code version 4, https://ncatlab.org/zoranskoda/source/Beck%27s+theorem/4, with the codecogs version 3, https://ncatlab.org/zoranskoda/source/Beck%27s+theorem/3.

Excellent! Can we search-replace all links to codecogs so that they are rendered natively?
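A minimal sketch of such a search-replace; the input line and the codecogs URL pattern are made-up examples of the general shape being discussed, not actual nLab source:

```shell
# Hypothetical sketch: rewrite an (assumed) codecogs <img> into native math.
# The sample line and the URL pattern are illustrative assumptions.
line='<img src="https://latex.codecogs.com/gif.latex?c%20%5Cto%20a" />'
converted=$(printf '%s' "$line" | python3 -c '
import re, sys, urllib.parse
text = sys.stdin.read()
pattern = r"<img src=\"https://latex\.codecogs\.com/gif\.latex\?([^\"]*)\" */>"
# Replace each image with its URL-decoded LaTeX payload wrapped in $...$.
print(re.sub(pattern, lambda m: "$" + urllib.parse.unquote(m.group(1)) + "$", text), end="")
')
echo "$converted"
```

A real script would need to handle whatever markup variants actually occur in the page sources, but the decode-and-substitute step would look like this.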

Both are available, yes. See the HowTo. But the CatLab is, I think, currently closed for editing; I believe Todd would be the person to talk to about that.

What is the current status of commutative diagram packages on the nLab? It appears that TikZ is now operational; what about xy? It would be very helpful if we could finally repair Joyal’s texts, since they are quite useful in teaching.

Great that both of you are passionate about this! If it’s me who implements something here, I’ll try to find some way to do it which balances the various issues; maybe it will be more constructive to wait until we’re slightly further along the way and ready to experiment :-).

I guess I have to say this over and over again: on a wiki where everyone edits everyone else’s text, no syntax is “optional”: you have to at least read the syntax that other people write. I disagree that the meaning of such characters would be “instantly obvious”, and I disagree that they would be a “relatively minor concern”. Also, in my experience it’s mainly papers that are already of questionable mathematical quality whose authors are too lazy to write \mathrm or define macros to do it for them. I agree that the existing syntax for definitions and theorems is problematic, but that doesn’t mean we should introduce new similar problems; it means we should fix the ones we have.

I am sure that once Richard (or somebody lending a hand) implements a minimum of xymatrix or similar here, all kinds of refinements can then be added incrementally.

> We should keep our math format readable (and hence editable) by as wide a subset of mathematicians as possible, and that means sticking as closely as possible to vanilla (La)TeX.
I think a couple of active characters, which are optional to use and whose meaning is instantly obvious from their rendering, is a relatively minor concern compared to many other subtleties of nLab’s syntax, e.g., the syntax for definitions/theorems, etc., which seem to be a much larger hindrance currently.
I am not sure that the alternatives are better. In practice, many mathematicians cannot be bothered with writing things like \mathrm{…}, which is rather tiresome, and instead just leave the text as is, i.e., math italic. If one only needs to insert a single additional character to get it right, this might induce more people to use it.
> but at least its failure mode is fairly innocuous: getting ab when you wanted ab or foo when you wanted foo is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics.
It is hardly always innocuous: $x^ab$, for instance, produces $x^{ab}$ in iTeX instead of $x^a b$. It is rather nontrivial to guess that a space must be inserted.
> Pages containing linear logic would have to be converted carefully, if ! and ? were turned into active characters…
The characters themselves can be arbitrary. Even having a single character for the roman font would be beneficial; we could pick something that is never used in formulas, e.g., @.

Pages containing linear logic would have to be converted carefully, if ! and ? were turned into active characters…

It’s not trivial not to use something when you are editing a page that was written by other people. We should keep our math format readable (and hence editable) by as wide a subset of mathematicians as possible, and that means sticking as closely as possible to vanilla (La)TeX.
The worst-case failure mode with special characters is that you get output with lots of weird characters sprinkled throughout your math.

> I think that making a bunch of ordinary characters active in nonstandard ways is the antithesis of “allowing TeX text to travel freely to and from the nLab” (which I am much in favor of).
Exporting to TeX requires all sorts of macros to implement the other nLab functionality anyway, so adding another macro to this set will not make it less portable.
> It seems to me that seeing these weird characters in the middle of existing formulas, and not knowing what on earth they are there for or how to edit formulas containing them, would indeed create serious difficulties for newcomers.
The nLab already has a help bar on the right (“Markdown+itex2MML formatting tips”), we can easily add a couple of lines there.
> (And for me! I certainly would take quite some while to learn the meanings of each of those symbols, and would probably forget them quite quickly when I hadn’t edited the nLab for a while.)
The advantage of these symbols is that it’s trivial not to use them if one doesn’t want to. One can certainly stick to using \rm, \bf, \it, etc.
> iTeX’s approach is indeed nonstandard and nonobvious, but at least its failure mode is fairly innocuous: getting ab when you wanted ab or foo when you wanted foo is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics.
This also applies to the special characters: in the worst case one gets a wrong font in the output. By the way, the way these special characters work with sub/superscripts is the same as on the nLab: @sSet^@op produces {\sf sSet}^{\sf op}, for example. In fact, the design of these macros was inspired by the nLab syntax, with the additional constraint of using only standard TeX.
> I’m not saying we should stick with it, though — I’d probably be inclined to just go back to writing \mathrm and \mathbb as needed. With a sensible text editor one can set up key shortcuts to make typing these commands less onerous.
I think most of the people who want to edit the nLab don’t have a “sensible editor”, or don’t know or don’t want to use their editor “sensibly”.

I think that making a bunch of ordinary characters active in nonstandard ways is the antithesis of “allowing TeX text to travel freely to and from the nLab” (which I am much in favor of). It seems to me that seeing these weird characters in the middle of existing formulas, and not knowing what on earth they are there for or how to edit formulas containing them, would indeed create serious difficulties for newcomers. (And for me! I certainly would take quite some while to learn the meanings of each of those symbols, and would probably forget them quite quickly when I hadn’t edited the nLab for a while.)
iTeX’s approach is indeed nonstandard and nonobvious, but at least its failure mode is fairly innocuous: getting ab when you wanted ab or foo when you wanted foo is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics. I’m not saying we should stick with it, though — I’d probably be inclined to just go back to writing \mathrm and \mathbb as needed. With a sensible text editor one can set up key shortcuts to make typing these commands less onerous.