For the five topics listed at HomePage (joyalscatlab) I added references to the corresponding nLab entries; for instance, for model categories here.
Joyal’s Catlab formulas are currently broken. For instance, see https://ncatlab.org/joyalscatlab/published/Weak+factorisation+systems: all commutative diagrams render as “Invalid Equation”.
Further investigation reveals that commutative diagrams are typeset as images of the form http://latex.codecogs.com/gif.latex?\xymatrix{%20A%20%20\ar[d]_u%20\ar[r]^{x}%20%20%26%20X%20\ar[d]^{f}%20%20\%20B%20\ar[r]_{y}%20%20%20%20%20%20%20%20%20%26%20Y%20%20%20%20%20}
In the decoded form the above URL is
http://latex.codecogs.com/gif.latex?\xymatrix{ A \ar[d]_u \ar[r]^{x} & X \ar[d]^{f} \\ B \ar[r]_{y} & Y }
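The encoding is ordinary percent-encoding, so any standard URL-decoder recovers the latex source. For instance, in Python (illustrative only):

```python
# %20 decodes to a space, %26 to '&', %5C to a backslash
from urllib.parse import unquote

print(unquote("A%20%5Car[r]%20%26%20B"))  # prints: A \ar[r] & B
```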
It appears that latex.codecogs.com no longer allows \xymatrix.
I find Catlab quite useful; can we repair this problem?
Do we really need to typeset such formulas as images? Doesn’t Instiki support \xymatrix?
It would make a world of difference if Instiki supported xy, but it does not. That’s why we used that codecogs service some years back.
At least the latex source is recorded for these now-broken codecogs diagrams. So in principle it should not be too hard to write a script that fixes all these diagrams by replacing them with images of latex output.
Yes, please add this to the TODO list if you find the time. We should do this, I suggest, not by editing existing database entries, as was done with the unicode character issues, but by making a new entry, as if we edited by hand. Ideally also add a column to the database to indicate a scripted edit (or achieve the same through the author field, or similar).
It seems to me that from a long view, replacing these diagrams by images of latex output would be a step in the wrong direction because it would lose the latex source. The “right” thing to do (though it would be more work) would be something like implementing our own replacement for codecogs that does support xy and tikz.
We could use XyJax: http://sonoisa.github.io/xyjax/xyjax.html It seems to be pretty stable now. This would provide support for \xymatrix right out of the box.
I definitely agree with #5. As in the other recent thread, I plan to add this to the reworked rendering. My “Yes” was just to the fact that we need a script to do some rewriting.
I also agree with #5. If I interpret Richard in #7 correctly, then he has a vision of what to do. :-) But just in case somebody wants to replace the broken URLs with images, please drop me a line first so I can hack together a codecogs replacement. The only obstacles to creating such a service are the manifold ways LaTeX can be tricked into executing arbitrary commands; otherwise it’s a matter of minutes.
please drop me a line first so I can hack together a codecogs replacement.
Please do!
it’s a matter of minutes.
That would be excellent.
I’d be very interested to discuss this with you, Ingo! I can imagine a nice little REST API which one could call. We could host it on the nLab server, at least to begin with. I do not have a computer to hand at the moment, but should be able to discuss/begin implementing something with you later, if it would be convenient.
While replacing codecogs with another server will certainly work, would it not be better (and much easier) to simply invoke XyJax script in addition to MathJax? This would allow us to have \xymatrix everywhere without any additional effort.
We should definitely consider this. My main reason for hesitation is that I’d like not to be tied to xymatrix; I’d like a design which allows tikz as well, for instance. For common shapes, I’d like to consider syntax, or at least an API, that is implementation independent. For a commutative square, for instance, one could have the API take a JSON which specifies a ’top left’ vertex, etc. (see the sketch below). So probably I’d suggest to switch XyJax on (if it works well) only when we have made some progress on the general question of diagrams.
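To illustrate (purely hypothetical, nothing like this exists yet), such a JSON description of the commutative square from the Catlab example above might look like:

```json
{
  "shape": "square",
  "vertices": {
    "top_left": "A", "top_right": "X",
    "bottom_left": "B", "bottom_right": "Y"
  },
  "arrows": [
    { "from": "top_left",    "to": "top_right",    "label": "x" },
    { "from": "top_left",    "to": "bottom_left",  "label": "u" },
    { "from": "top_right",   "to": "bottom_right", "label": "f" },
    { "from": "bottom_left", "to": "bottom_right", "label": "y" }
  ]
}
```

A renderer could then translate this into \xymatrix, tikz, or SVG as it pleases, which is exactly the implementation independence I have in mind.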
Before I get to diagrams, I think I will take a look at the question of speeding things up. I believe that the main thing that slows rendering down hugely at the moment may be Instiki’s method for resolving links. It does things like checking whether a page exists, and this seems to be slow. It should be possible to rework things a bit to make this work better.
Does XyJax work with MathML display, or only with the javascript version of MathJax that inserts images after a page has loaded (slowly and messing up anchor-links by changing the heights of lines)?
Re #13: MathML has no support for commutative diagrams whatsoever, so the answer is no.
MathJax does not insert images, it compiles formulas and displays them using CSS, not images.
slowly and messing up anchor-links by changing the heights of lines
This is an issue of client-side vs server-side MathJax.
Server-side MathJax (MathJax-node, https://github.com/mathjax/MathJax-node) is now stable (and is used by Wikipedia, for example), so perhaps we should consider using server-side MathJax on nLab.
My main reason for hesitation is that I’d like not to be tied to xymatrix; I’d like a design which allows tikz as well, for instance.
To support multiple commutative diagram packages, such as xy, tikz, METAPOST, and others, one would presumably compile the subset of PDF/PostScript that these packages output into a language like SVG that the browser can display. Such compilers already exist, so it may be a simple question of invoking them…
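One existing such compiler is dvisvgm, which converts DVI output to SVG. A rough sketch of what invoking it server-side could look like (nothing like this is implemented on the nLab; it assumes latex and dvisvgm are installed on the server, and, as noted above, raw LaTeX input would need sanitising before exposing this as a service):

```python
import pathlib
import subprocess
import tempfile

TEMPLATE = r"""\documentclass{article}
\usepackage[all]{xy}
\pagestyle{empty}
\begin{document}
$$%s$$
\end{document}
"""

def render_xy_to_svg(source: str) -> bytes:
    r"""Compile an \xymatrix{...} fragment to SVG and return the bytes."""
    with tempfile.TemporaryDirectory() as tmp:
        pathlib.Path(tmp, "diagram.tex").write_text(TEMPLATE % source)
        subprocess.run(["latex", "-interaction=nonstopmode", "diagram.tex"],
                       cwd=tmp, check=True)
        # --bbox=min crops to the diagram; --no-fonts draws glyphs as paths
        subprocess.run(["dvisvgm", "--bbox=min", "--no-fonts", "diagram.dvi"],
                       cwd=tmp, check=True)
        return pathlib.Path(tmp, "diagram.svg").read_bytes()
```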
Re #14, I have indeed been experimenting with server side MathJax. It can output MathML as well, so ideally we can just drop Itex2MML completely. It is not completely trivial, though, to create something which works for us. If you or anybody else would like to have a go at writing a little command line tool/API, feel free to send it to me, I will be happy to build upon it. I plan to get to it myself eventually, but things will go much faster if there are several involved :-).
If we do drop itex2mml, then a lot of our code would need to be modified. The differences between itex2mml and ordinary LaTeX are small, but not trivial. Notably, $foo$ produces italic $f o o$ in LaTeX but upright $\mathrm{foo}$ in itex2mml.
Let’s see how it goes. Once we have tried it, we will be better informed to see whether it is feasible. From a purely software point of view, it would be good to drop Itex2MML, because it is hardly used and has few active contributors compared to MathJax, and also the code around it in Instiki is pretty incomprehensible; there are several layers, including an automated translation.
Yeah, I see the argument for dropping itex2mml, I’m just saying it won’t be a drop-in switch. (And I’ll be a little sad to have to start writing $\mathrm{foo}$ all the time instead of just $foo$.)
FWIW, the “hardly used and few active contributors” point you make here in favor of mathjax over itex2mml is, I think, the same argument I’ve made before in favor of switching to a more widely used and maintained wiki software over instiki or anything homegrown.
I do something similar in my own tex files, but with macros rather than active characters; here’s my code. I automatically create all the macros for single letters in different typefaces, and have a shortcut \autodefs command for making multi-letter ones.
I do think that for the nLab we should stick close to standard TeX, to make it easier for newcomers and drive-by editors. It would probably be okay to define some standard nLab-wide macros, but I think introducing a bunch of active characters would be too unfamiliar.
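For instance (names purely illustrative), such standard nLab-wide macros could be ordinary LaTeX definitions, so that pages remain portable to and from TeX files:

```latex
% hypothetical nLab-wide macros, written in plain LaTeX
\newcommand{\Set}{\mathrm{Set}}
\newcommand{\sSet}{\mathsf{sSet}}
\newcommand{\op}{\mathrm{op}}
```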
I’m just saying it won’t be a drop-in switch.
Absolutely! Great that you raised this point. We also have work to do for instance with the syntax for definitions, theorems, etc.
FWIW, the “hardly used and few active contributors” point you make here in favor of mathjax over itex2mml is, I think, the same argument I’ve made before in favor of switching to a more widely used and maintained wiki software over instiki or anything homegrown.
Indeed! Definitely that argument has a lot of merit too :-).
I see a slight difference in that Instiki was written originally, even if only as a prototype, by a highly competent and respected programmer, and I think the original design is actually not too bad (I have changed my opinion on this a bit since working with the codebase!). I feel that a wiki is in essence not very complicated, and Instiki’s codebase, once one becomes used to Ruby on Rails, does a reasonable job of expressing the kind of simple thing I have in mind, even if it is far from perfect. The code is quite understandable and readable. This is where I see a difference with regard to itex2mml: it has been sort of tacked onto the original Instiki (not by the same programmer, though I do not mean to implicitly criticise the programmers who made the additions; it is just my feeling), and the code is, for me at least, not understandable or readable. At its heart, itex2mml is just a quite simple parser, but the ruby version of it is wrapped up, as I mentioned, in several layers of difficult-to-follow code.
For me, if the code is understandable and reasonably simple and short, it is not so much of a concern if it is not widely used or old, as long as it performs well. Instiki does not at the moment perform as well as we’d like, but it’s not too bad, and I am relatively optimistic that we can end up with something that does perform pretty well. I also think the nLab has a certain identity in its current form, which one might not wish to lose.
Where I do see a significant risk is that few people in the nLab community have worked with the Instiki codebase; basically Adeel and I are the only ones who are currently active. I really think that we need others on board, at least to try to understand parts of it. Students will help with getting particular things done, but this probably wouldn’t help too much with the long-term maintenance. There are times when I do not have time to work on the nLab, and one never knows what is around the corner!
I do think that for the nLab we should stick close to standard TeX, to make it easier for newcomers and drive-by editors. It would probably be okay to define some standard nLab-wide macros, but I think introducing a bunch of active characters would be too unfamiliar.
Probably not more unfamiliar than the current iTeX idiosyncrasies, e.g., that $foo$ means $\rm foo$.
These characters (@, ", ', !, ?) are not normally used in formulas, so it’s unlikely they will present difficulties to newcomers; i.e., one can easily edit the pages without any previous awareness of these special characters.
What kind of alternative do you have in mind? Defining a new macro for each occurrence of {\rm foo} is tiresome, as is spelling it explicitly every time it occurs. The current solution is convenient, but (1) nonstandard and nonobvious to newcomers; (2) limited in functionality to the roman font in a nonfixable way; (3) cannot be transplanted to standard TeX.
I think allowing TeX text to travel freely to and from the nLab is very important. This would allow: (1) easily transplanting existing notes written in TeX to the nLab; (2) easily transplanting existing notes on the nLab to TeX (e.g., if one wants to put them on arXiv).
In particular, (2) encourages people to write notes on the nLab right away, whereas in the current setting one is stuck in the rather ugly-looking nLab format that cannot be easily turned into a TeX file (e.g., see how Urs is forced to produce ugly-looking PDFs from HTML, instead of just compiling a TeX file).
I think that making a bunch of ordinary characters active in nonstandard ways is the antithesis of “allowing TeX text to travel freely to and from the nLab” (which I am much in favor of). It seems to me that seeing these weird characters in the middle of existing formulas, and not knowing what on earth they are there for or how to edit formulas containing them, would indeed create serious difficulties for newcomers. (And for me! I certainly would take quite some while to learn the meanings of each of those symbols, and would probably forget them quite quickly when I hadn’t edited the nLab for a while.)
iTeX’s approach is indeed nonstandard and nonobvious, but at least its failure mode is fairly innocuous: getting upright $\mathrm{ab}$ when you wanted italic $a b$, or italic $f o o$ when you wanted upright $\mathrm{foo}$, is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics. I’m not saying we should stick with it, though — I’d probably be inclined to just go back to writing \mathrm and \mathbb as needed. With a sensible text editor one can set up key shortcuts to make typing these commands less onerous.
I think that making a bunch of ordinary characters active in nonstandard ways is the antithesis of “allowing TeX text to travel freely to and from the nLab” (which I am much in favor of).
Exporting to TeX requires all sorts of macros to implement the other nLab functionality anyway, so adding another macro to this set will not make it less portable.
It seems to me that seeing these weird characters in the middle of existing formulas, and not knowing what on earth they are there for or how to edit formulas containing them, would indeed create serious difficulties for newcomers.
The nLab already has a help bar on the right (“Markdown+itex2MML formatting tips”), we can easily add a couple of lines there.
(And for me! I certainly would take quite some while to learn the meanings of each of those symbols, and would probably forget them quite quickly when I hadn’t edited the nLab for a while.)
The advantage of these symbols is that it’s trivial not to use them if one doesn’t want to. One can certainly stick to using \rm, \bf, \it, etc.
iTeX’s approach is indeed nonstandard and nonobvious, but at least its failure mode is fairly innocuous: getting upright $\mathrm{ab}$ when you wanted italic $a b$, or italic $f o o$ when you wanted upright $\mathrm{foo}$, is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics.
This also applies to the special characters: in the worst case one gets a wrong font in the output. By the way, these special characters interact with sub/superscripts in the same way as on the nLab: @sSet^@op produces {\sf sSet}^{\sf op}, for example. In fact, the design of these macros was inspired by the nLab syntax, with the additional constraint of using only standard TeX.
I’m not saying we should stick with it, though — I’d probably be inclined to just go back to writing \mathrm and \mathbb as needed. With a sensible text editor one can set up key shortcuts to make typing these commands less onerous.
I think most of the people who want to edit the nLab don’t have a “sensible editor”, or don’t know or don’t want to use their editor “sensibly”.
It’s not trivial not to use something when you are editing a page that was written by other people. We should keep our math format readable (and hence editable) by as wide a subset of mathematicians as possible, and that means sticking as closely as possible to vanilla (La)TeX.
The worst-case failure mode with special characters is that you get output with lots of weird characters sprinkled throughout your math.
Pages containing linear logic would have to be converted carefully, if ! and ? were turned into active characters…
We should keep our math format readable (and hence editable) by as wide a subset of mathematicians as possible, and that means sticking as closely as possible to vanilla (La)TeX.
I think a couple of active characters, which are optional to use and whose meaning is instantly obvious from their rendering, is a relatively minor concern compared to many other subtleties of nLab’s syntax, e.g., the syntax for definitions/theorems, etc., which seem to be a much larger hindrance currently.
I am not sure that the alternatives are better. In practice, many mathematicians cannot be bothered with writing things like \mathrm{…}, which is rather tiresome, and instead just leave the text as is, i.e., math italic. If one only needs to insert a single additional character to get it right, this might induce more people to use it.
but at least its failure mode is fairly innocuous: getting upright $\mathrm{ab}$ when you wanted italic $a b$, or italic $f o o$ when you wanted upright $\mathrm{foo}$, is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics.
It is hardly always innocuous: $x^ab$, for instance, produces $x^{ab}$ in iTeX instead of $x^a b$. It is rather nontrivial to guess that a space must be inserted.
Pages containing linear logic would have to be converted carefully, if ! and ? were turned into active characters…
The characters themselves can be arbitrary. Even having a single character for the roman font would be beneficial, we could pick something that is never used in formulas, e.g., @.
I am sure that once Richard (or somebody lending a hand) implements a minimum of xymatrix or similar here, all kinds of refinements can then be added incrementally.
I guess I have to say this over and over again: on a wiki where everyone edits everyone else’s text, no syntax is “optional”: you have to at least read the syntax that other people write. I disagree that the meaning of such characters would be “instantly obvious”, and I disagree that they would be a “relatively minor concern”. Also, in my experience it’s mainly papers that are already of questionable mathematical quality whose authors are too lazy to write \mathrm or define macros to do it for them. I agree that the existing syntax for definitions and theorems is problematic, but that doesn’t mean we should introduce new similar problems; it means we should fix the ones we have.
Great that both of you are passionate about this! If it’s me who implements something here, I’ll try to find some way to do it which balances the various issues; maybe it will be more constructive to wait until we’re slightly further along the way and ready to experiment :-).
What is the current status of commutative diagram packages on the nLab? It appears that TikZ is now operational; what about xy? It would be very helpful if we could finally repair Joyal’s texts, since they are quite useful in teaching.
Both are available, yes; see the HowTo. But the CatLab is, I think, currently closed for editing; I believe Todd would be the person to talk to about that.
Excellent! Can we search-replace all links to codecogs so that they are rendered natively?
The correct version would be
\begin{xymatrix} c\ar[r] & a \ar@<.5ex>[r]\ar@<-.5ex>[r]& a' \end{xymatrix}
See for example Beck’s theorem (zoranskoda) and compare the corrected source code version 4, https://ncatlab.org/zoranskoda/source/Beck%27s+theorem/4, with the codecogs version 3, https://ncatlab.org/zoranskoda/source/Beck%27s+theorem/3.
Re #35: Of course, the URLs must be properly URL-decoded before search-replace.
There are standard tools to accomplish this, I could write a simple script to do this if somebody is willing to run it on the source files.
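For concreteness, a rough sketch of such a script in Python (the regular expression, and the markup surrounding the URLs in the page sources, are guesses that would need checking against the actual database):

```python
import re
from urllib.parse import unquote

# Matches the codecogs image URLs discussed above.
CODECOGS = re.compile(r"https?://latex\.codecogs\.com/gif\.latex\?(\S+)")

def fix_codecogs(page_source: str) -> str:
    def replace(match):
        latex = unquote(match.group(1))  # undo the %20/%26 percent-encoding
        body = re.fullmatch(r"\s*\\xymatrix\{(.*)\}\s*", latex, re.S)
        if body:
            # re-emit in the native form shown in #35 above
            return "\\begin{xymatrix}%s\\end{xymatrix}" % body.group(1)
        return match.group(0)  # leave anything unexpected untouched
    return CODECOGS.sub(replace, page_source)
```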
Which is the standard tool for this decoding?
Yes, search and replace should be possible, good idea! I don’t have time at the moment, but will give it a go when I get the chance. If you write a script, Dmitri, I am happy to try to use it (though I will need in any case to write some more logic around it).
How can I retrieve the source code of a page from Joyal’s CatLab, like https://ncatlab.org/joyalscatlab/published/Weak+factorisation+systems?
It’s protected, which seems to make it impossible to retrieve the source code in the usual way.
I now have a candidate script for fixing the codecogs stuff, but have no way of testing it.
Hi Dmitri, apologies for omitting to reply to this at the time. You have access to make an SQL dump; you can obtain the source code that way, for example. E.g. use the ’webs’ table to find the web_id of the CatLab, and then just do a SELECT on the pages and revisions tables.
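Concretely, something along these lines should work (table and column names from the Instiki schema as I recall it; worth double-checking against the dump):

```sql
-- find the CatLab's web id
SELECT id FROM webs WHERE address = 'joyalscatlab';

-- latest source of every page in that web (replace 42 by the id found above)
SELECT pages.name, revisions.content
FROM pages
JOIN revisions ON revisions.page_id = pages.id
WHERE pages.web_id = 42
  AND revisions.id = (SELECT MAX(r.id) FROM revisions r WHERE r.page_id = pages.id);
```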
What happened to https://ncatlab.org/sqldump? It appears to be unresponsive currently.
It was working a couple of days ago; probably the nLab server restarted, which currently takes the dump service down until I notice! I’ll bring it back up when I’m next at my computer.
Hi Dmitri, I checked just now, and the sqldump server has been up since July, and my automated job was able to make backups over the last few days. From the logs, it looks like maybe something was wrong in your second API call. Recall that there are two: first POST, and then GET. The GET one seemed to be lacking authentication details.
Here is my script; it was working just fine for a while, then it suddenly stopped working. So something must have changed on the server side. I have replaced the authorization token with ???.
#!/usr/bin/bash
# First call: POST creates a new SQL dump and returns its ID.
SQL_DUMP_ID="$(curl -X POST -H "Authorization: Basic ???==" https://ncatlab.org/sqldump)"
# Second call: GET retrieves the dump by ID into a timestamped file.
curl -H "Authorization: Basic ???==" "https://ncatlab.org/sqldump/$SQL_DUMP_ID" >"$(date +"%Y-%m-%dT%H:%M:%S").sql"
Hmm, nothing has changed on the server side; as I wrote, the server has been running since July. There doesn’t seem to be much wrong with your script. Could you try running the two curl commands separately? And could you also provide the command you are using to create the base-64 credentials? You could use curl -u instead of the Authorization header as an alternative.
Suddenly, it works again now.