    • CommentRowNumber1.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2010

    For the five topics listed at HomePage (joyalscatlab), I added references to the corresponding nLab entries;

    for instance, for model categories here.

    • CommentRowNumber2.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 12th 2018
    • (edited Jul 12th 2018)

    Joyal’s Catlab formulas are currently broken. For instance, see https://ncatlab.org/joyalscatlab/published/Weak+factorisation+systems: all commutative diagrams render as “Invalid Equation”.

    Further investigation reveals that commutative diagrams are typeset as images of the form http://latex.codecogs.com/gif.latex?\xymatrix{%20A%20%20\ar[d]_u%20\ar[r]^{x}%20%20%26%20X%20\ar[d]^{f}%20%20\%20B%20\ar[r]_{y}%20%20%20%20%20%20%20%20%20%26%20Y%20%20%20%20%20}

    In decoded form, the above URL is

    http://latex.codecogs.com/gif.latex?\xymatrix{ A \ar[d]_u \ar[r]^{x} & X \ar[d]^{f} \\ B \ar[r]_{y} & Y }

    It appears that latex.codecogs.com no longer allows \xymatrix.

    I find Catlab quite useful; can we repair this problem?

    Do we really need to typeset such formulas as images? Doesn’t Instiki support \xymatrix?

    • CommentRowNumber3.
    • CommentAuthorUrs
    • CommentTimeJul 12th 2018

    That would make a world of difference if Instiki supported xy. But it does not. That’s why we used that codecogs service some years back.

    At least the latex source is recorded for these now-broken codecogs diagrams. So in principle it should not be too hard to write a script that fixes all these diagrams by replacing each of them with an image of the latex output.

  1. Yes, please add this to the TODO list if you find the time. We should do this, I suggest, not by editing existing database entries, as was done with the Unicode character issues, but by making a new entry, as if we had edited it by hand. Ideally, also add a column to the database to indicate a scripted edit (or achieve the same through the author field, or similar).

    • CommentRowNumber5.
    • CommentAuthorMike Shulman
    • CommentTimeJul 13th 2018

    It seems to me that, taking the long view, replacing these diagrams by images of LaTeX output would be a step in the wrong direction, because it would lose the LaTeX source. The “right” thing to do (though it would be more work) would be something like implementing our own replacement for codecogs that does support xy and tikz.

    • CommentRowNumber6.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 13th 2018
    • (edited Jul 13th 2018)

    We could use XyJax (http://sonoisa.github.io/xyjax/xyjax.html). It seems to be pretty stable now, and it would provide support for \xymatrix right out of the box.

  2. I definitely agree with #5. As in the other recent thread, I plan to add this to the reworked rendering. My “Yes” was just to the fact that we need a script to do some rewriting.

  3. I also agree with #5. If I interpret Richard in #7 correctly, then he has a vision of what to do. :-) But just in case somebody wants to replace the broken URLs by images, please drop me a line first so I can hack together a codecogs replacement. The only obstacles to creating such a service are the manifold ways LaTeX can be tricked into executing arbitrary commands; otherwise it’s a matter of minutes.

    • CommentRowNumber9.
    • CommentAuthorUrs
    • CommentTimeJul 13th 2018

    please drop me a line first so I can hack together a codecogs replacement.

    Please do!

    it’s a matter of minutes.

    That would be excellent.

  4. I’d be very interested to discuss this with you, Ingo! I can imagine a nice little REST API which one could call. We could host it on the nLab server, at least to begin with. I do not have a computer to hand at the moment, but I should be able to discuss/begin implementing something with you once I do, if that would be convenient.

    • CommentRowNumber11.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 16th 2018

    While replacing codecogs with another server will certainly work, would it not be better (and much easier) to simply invoke the XyJax script in addition to MathJax? This would allow us to have \xymatrix everywhere without any additional effort.

    • CommentRowNumber12.
    • CommentAuthorRichard Williamson
    • CommentTimeJul 16th 2018
    • (edited Jul 16th 2018)

    We should definitely consider this. My main reason for hesitation is that I’d like not to be tied to xymatrix; I’d like a design which allows tikz as well, for instance. For common shapes, I’d like to consider a syntax, or at least an API, that is implementation-independent. For a commutative square, for instance, one could have the API take a JSON object which specifies a ’top left’ vertex, and so on. So I’d probably suggest switching XyJax on (if it works well) only once we have made some progress on the general question of diagrams.
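
    Purely as an illustration of what I mean by implementation-independent (the field names here are invented on the spot, nothing is decided), the square from #2 might be submitted to such an API as something like:

    {
      "shape": "commutative-square",
      "vertices": { "top-left": "A", "top-right": "X",
                    "bottom-left": "B", "bottom-right": "Y" },
      "arrows": [
        { "from": "top-left",    "to": "top-right",    "label": "x" },
        { "from": "top-left",    "to": "bottom-left",  "label": "u" },
        { "from": "top-right",   "to": "bottom-right", "label": "f" },
        { "from": "bottom-left", "to": "bottom-right", "label": "y" }
      ]
    }

    A renderer behind the API could then translate such a description into xypic, tikz, or whatever else we settle on, without the page source being tied to any one of them.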

    Before I get to diagrams, I think I will take a look at the question of speeding things up. I believe that the main thing that slows rendering down hugely at the moment may be Instiki’s method for resolving links. It does things like checking whether a page exists, and this seems to be slow. It should be possible to rework things a bit to make this work better.

    • CommentRowNumber13.
    • CommentAuthorMike Shulman
    • CommentTimeJul 16th 2018

    Does XyJax work with MathML display, or only with the javascript version of MathJax that inserts images after a page has loaded (slowly and messing up anchor-links by changing the heights of lines)?

    • CommentRowNumber14.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 17th 2018

    Re #13: MathML has no support for commutative diagrams whatsoever, so the answer is no.

    MathJax does not insert images; it compiles formulas and displays them using CSS.

    slowly and messing up anchor-links by changing the heights of lines

    This is an issue of client-side vs server-side MathJax.

    Server-side MathJax (MathJax-node, https://github.com/mathjax/MathJax-node) is now stable (and is used by Wikipedia, for example), so perhaps we should consider using server-side MathJax on nLab.

    • CommentRowNumber15.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 17th 2018

    My main reason for hesitation is that I’d like not to be tied to xymatrix; I’d like a design which allows tikz as well, for instance.

    To support multiple commutative diagram packages, such as xy, tikz, METAPOST, and others, one would presumably compile the subset of PDF/PostScript that these packages output into a language like SVG that the browser can display. Such compilers already exist, so it may be a simple question of invoking them…
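
    For instance, just as an illustration of the kind of tool chain I mean (a real service would of course also need the sandboxing raised in #8), one could compile a standalone xy diagram and convert it to SVG roughly like this, assuming latex and dvisvgm are installed; for tikz the appropriate pgf driver would be needed in addition:

    #!/usr/bin/env python3
    # Sketch only: render one xypic diagram to SVG via latex + dvisvgm.
    import pathlib
    import subprocess

    TEMPLATE = r"""\documentclass{article}
    \usepackage[all]{xy}
    \pagestyle{empty}
    \begin{document}
    $\xymatrix{ A \ar[d]_u \ar[r]^{x} & X \ar[d]^{f} \\ B \ar[r]_{y} & Y }$
    \end{document}
    """

    pathlib.Path("diagram.tex").write_text(TEMPLATE)
    # compile to DVI ...
    subprocess.run(["latex", "-interaction=nonstopmode", "diagram.tex"], check=True)
    # ... and convert the DVI to an SVG, with the glyphs drawn as paths
    subprocess.run(["dvisvgm", "--no-fonts", "diagram.dvi", "-o", "diagram.svg"],
                   check=True)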

  5. Re #14, I have indeed been experimenting with server-side MathJax. It can output MathML as well, so ideally we can just drop itex2MML completely. It is not completely trivial, though, to create something which works for us. If you or anybody else would like to have a go at writing a little command-line tool/API, feel free to send it to me; I will be happy to build upon it. I plan to get to it myself eventually, but things will go much faster if several people are involved :-).

    • CommentRowNumber17.
    • CommentAuthorMike Shulman
    • CommentTimeJul 17th 2018

    If we do drop itex2mml, then a lot of our code would need to be modified. The differences between itex2mml and ordinary LaTeX are small, but not trivial. Notably, $foo$ produces the product of italic letters $f o o$ in LaTeX, but the upright word $\mathrm{foo}$ in itex2mml.

  6. Let’s see how it goes. Once we have tried it, we will be better placed to judge whether it is feasible. From a purely software point of view, it would be good to drop itex2MML, because it is hardly used and has few active contributors compared to MathJax, and also because the code around it in Instiki is pretty incomprehensible; there are several layers, including an automated translation.

    • CommentRowNumber19.
    • CommentAuthorMike Shulman
    • CommentTimeJul 17th 2018

    Yeah, I see the argument for dropping itex2mml; I’m just saying it won’t be a drop-in switch. (And I’ll be a little sad to have to start writing $\mathrm{foo}$ all the time instead of just $foo$.)

    FWIW, the “hardly used and few active contributors” point you make here in favor of MathJax over itex2mml is, I think, the same argument I’ve made before in favor of switching to more widely used and maintained wiki software over Instiki or anything homegrown.

    • CommentRowNumber20.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 17th 2018
    • (edited Jul 17th 2018)

    Re #19:

    >And I’ll be a little sad to have to start writing $\mathrm{foo}$ all the time instead of just $foo$.

    I resolved this problem for myself in the following way:
    In my TeX files, the characters @, ", !, ?, and ` are made active in mathematical formulas.
    Then I can type formulas like
    "Fun(@Simp^@op,@Set)→@sSet
    or
    !B G=!N(!pt//G)
    or
    ?id_X:X→X
    or
    X→?Ex^∞((Δ^1×X) ⊔_X Y)×_{?Ex^∞ Y}(?Ex^∞ Y)^{Δ^1}×_{?Ex^∞ Y}Y→Y

    Here @Simp, @op, @Set, @sSet will be typeset in the sans serif font, used to denote categories,
    "Fun will be typeset in the Euler font, used for functors,
    !B, !N, !pt will be typeset in roman font,
    ?id and ?Ex^∞ will also be typeset in roman font, but will be an operator (like \mathop), so the spacing is different (e.g., $?log x$ produces a space between log and x),
    `Z, `N, `Q, `R will be typeset in bold font.

    Notice that only a single additional character is needed, no braces {...} are necessary.
    (The macro reads the longest string of letters that follows the special character; a simplified sketch of the idea is below.)
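
    For illustration, here is a simplified sketch of the mechanism for a single character (just @ for the sans serif font). It is not my actual code, which also handles the other characters, multiple fonts, and details such as superscripts without braces, and the macro names here are made up for the sketch; it just shows the idea in plain LaTeX:

    % make @ act as an active character, but only inside math mode
    \mathcode`\@="8000
    \begingroup
      \catcode`\@=13   % 13 = active, so that we can define the character itself
      \gdef@{\def\collected{}\scanNext}
    \endgroup
    \def\scanNext{\futurelet\next\checkLetter}
    \def\checkLetter{%
      \ifcat\noexpand\next a%   next token is a letter: grab it and continue
        \expandafter\grabLetter
      \else                     % anything else: typeset what has been collected
        \expandafter\emitCollected
      \fi}
    \def\grabLetter#1{\edef\collected{\collected#1}\scanNext}
    \def\emitCollected{\mathsf{\collected}}
    % usage: $F : @Set \to @sSet$ typesets Set and sSet in sans serif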

    This eliminates the necessity of creating long chains of macros or manually inputting the font commands in formulas.

    Unlike the nLab solution, it allows for multiple fonts (not just the roman font)
    and has the advantage of being perfectly standard TeX that compiles everywhere.

    Perhaps we could adopt something like this for the nLab if we decide to drop iTeX.
    This would be minimally disruptive, and dropping the iTeX idiosyncrasies would allow for much easier import of TeX files into the nLab.

    • CommentRowNumber21.
    • CommentAuthorMike Shulman
    • CommentTimeJul 17th 2018

    I do something similar in my own tex files, but with macros rather than active characters; here’s my code. I automatically create all the macros for single letters in different typefaces, and have a shortcut \autodefs command for making multi-letter ones.

    I do think that for the nLab we should stick close to standard TeX, to make it easier for newcomers and drive-by editors. It would probably be okay to define some standard nLab-wide macros, but I think introducing a bunch of active characters would be too unfamiliar.

    • CommentRowNumber22.
    • CommentAuthorRichard Williamson
    • CommentTimeJul 17th 2018
    • (edited Jul 17th 2018)

    I’m just saying it won’t be a drop-in switch.

    Absolutely! Great that you raised this point. We also have work to do, for instance, with the syntax for definitions, theorems, etc.

    FWIW, the “hardly used and few active contributors” point you make here in favor of mathjax over itex2mml is, I think, the same argument I’ve made before in favor of switching to a more widely used and maintained wiki software over instiki or anything homegrown.

    Indeed! Definitely that argument has a lot of merit too :-).

    I see a slight difference in that Instiki was written originally, even if only as a prototype, by a highly competent and respected programmer, and I think the original design is actually not too bad (I have changed my opinion on this a bit since working with the codebase!). I feel that a wiki is in essence not very complicated, and Instiki’s codebase does actually, I feel, once one becomes used to Ruby on Rails, do a reasonable job of expressing the kind of simple thing I have in mind, even if it is far from perfect. The code is quite understandable and readable. This is where I see a difference with regard to itex2mml: it has been sort of tacked onto the original Instiki (not by the same programmer, though I do not mean by this to implicitly criticise the programmers who made the additions; it is just my feeling), and the code is, for me at least, not understandable or readable. At its heart, itex2mml is just a quite simple parser, but the Ruby version of it is wrapped up, as I mentioned, in several layers of difficult-to-follow code.

    For me, if the code is understandable and reasonably simple and short, it is not so much of a concern if it is not widely used or old, as long as it performs well. Instiki does not at the moment perform as well as we’d like, but it’s not too bad, and I am relatively optimistic that we can end up with something that does perform pretty well. I also think the nLab has a certain identity in its current form, which one might not wish to lose.

    Where I do see a significant risk is that few people in the nLab community have worked with the Instiki codebase; basically Adeel and myself are the only ones who are currently active. I really think that we need others onboard, at least to try to understand something. Students will help with getting particular things done, but this probably wouldn’t help too much with the long-term maintenance. There are times when I do not have time to work on the nLab, and one never knows what is around the corner!

    • CommentRowNumber23.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 17th 2018
    • (edited Jul 17th 2018)

    I do think that for the nLab we should stick close to standard TeX, to make it easier for newcomers and drive-by editors. It would probably be okay to define some standard nLab-wide macros, but I think introducing a bunch of active characters would be too unfamiliar.

    Probably not more unfamiliar than the current iTeX idiosyncrasies, e.g., that $foo$ means $\rm foo$. These characters (@, ", `, !, ?) are not normally used in formulas, so it’s unlikely they will present difficulties to newcomers, i.e., one can easily edit the pages without any previous awareness of these special characters.

    What kind of alternative do you have in mind? Defining a new macro for each occurrence of {\rm foo} is tiresome, as is spelling it explicitly every time it occurs. The current solution is convenient, but (1) nonstandard and nonobvious to newcomers; (2) limited in functionality to the roman font in a nonfixable way; (3) cannot be transplanted to standard TeX.

    I think allowing TeX text to travel freely to and from the nLab is very important. This would allow: (1) easily transplanting existing notes written in TeX to the nLab; (2) easily transplanting existing notes on the nLab to TeX (e.g., if one wants to put them on arXiv).

    In particular, (2) encourages people to write notes on the nLab right away, whereas in the current setting one is stuck in the rather ugly-looking nLab format that cannot be easily turned into a TeX file (e.g., see how Urs is forced to produce ugly-looking PDFs from HTML, instead of just compiling a TeX file).

    • CommentRowNumber24.
    • CommentAuthorMike Shulman
    • CommentTimeJul 17th 2018

    I think that making a bunch of ordinary characters active in nonstandard ways is the antithesis of “allowing TeX text to travel freely to and from the nLab” (which I am much in favor of). It seems to me that seeing these weird characters in the middle of existing formulas, and not knowing what on earth they are there for or how to edit formulas containing them, would indeed create serious difficulties for newcomers. (And for me! I certainly would take quite some while to learn the meanings of each of those symbols, and would proably forget them quite quickly when I hadn’t edited the nLab for a while.)

    iTeX’s approach is indeed nonstandard and nonobvious, but at least its failure mode is fairly innocuous: getting the word $\mathrm{ab}$ when you wanted the product $a b$, or the product $f o o$ when you wanted the word $\mathrm{foo}$, is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics. I’m not saying we should stick with it, though; I’d probably be inclined to just go back to writing \mathrm and \mathbb as needed. With a sensible text editor one can set up key shortcuts to make typing these commands less onerous.

    • CommentRowNumber25.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 18th 2018
    • (edited Jul 18th 2018)

    I think that making a bunch of ordinary characters active in nonstandard ways is the antithesis of “allowing TeX text to travel freely to and from the nLab” (which I am much in favor of).

    Exporting to TeX requires all sorts of macros to implement the other nLab functionality anyway, so adding another macro to this set will not make it less portable.

    It seems to me that seeing these weird characters in the middle of existing formulas, and not knowing what on earth they are there for or how to edit formulas containing them, would indeed create serious difficulties for newcomers.

    The nLab already has a help bar on the right (“Markdown+itex2MML formatting tips”), we can easily add a couple of lines there.

    (And for me! I certainly would take quite a while to learn the meanings of each of those symbols, and would probably forget them quite quickly when I hadn’t edited the nLab for a while.)

    The advantage of these symbols is that it’s trivial not to use them if one doesn’t want to. One can certainly stick to using \rm, \bf, \it, etc.

    iTeX’s approach is indeed nonstandard and nonobvious, but at least its failure mode is fairly innocuous: getting the word $\mathrm{ab}$ when you wanted the product $a b$, or the product $f o o$ when you wanted the word $\mathrm{foo}$, is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics.

    This also applies to the special characters: in the worst case one gets a wrong font in the output. By the way, the way these special characters work with sub/superscripts is the same as on the nLab: @sSet^@op produces {\sf sSet}^{\sf op}, for example. In fact, the design of these macros was inspired by the nLab syntax, with the additional constraint of using only standard TeX.

    I’m not saying we should stick with it, though; I’d probably be inclined to just go back to writing \mathrm and \mathbb as needed. With a sensible text editor one can set up key shortcuts to make typing these commands less onerous.

    I think most of the people who want to edit the nLab don’t have a “sensible editor”, or don’t know or don’t want to use their editor “sensibly”.

    • CommentRowNumber26.
    • CommentAuthorMike Shulman
    • CommentTimeJul 18th 2018

    It’s not trivial not to use something when you are editing a page that was written by other people. We should keep our math format readable (and hence editable) by as wide a subset of mathematicians as possible, and that means sticking as closely as possible to vanilla (La)TeX.

    The worst-case failure mode with special characters is that you get output with lots of weird characters sprinkled throughout your math.

    • CommentRowNumber27.
    • CommentAuthorDavidRoberts
    • CommentTimeJul 18th 2018

    Pages containing linear logic would have to be converted carefully, if ! and ? were turned into active characters…

    • CommentRowNumber28.
    • CommentAuthorDmitri Pavlov
    • CommentTimeJul 18th 2018

    We should keep our math format readable (and hence editable) by as wide a subset of mathematicians as possible, and that means sticking as closely as possible to vanilla (La)TeX.

    I think a couple of active characters, which are optional to use and whose meaning is instantly obvious from their rendering, are a relatively minor concern compared to many other subtleties of nLab’s syntax, e.g., the syntax for definitions/theorems, etc., which seem to be a much larger hindrance currently.

    I am not sure that the alternatives are better. In practice, many mathematicians cannot be bothered with writing things like \mathrm{…}, which is rather tiresome, and instead just leave the text as is, i.e., math italic. If one only needs to insert a single additional character to get it right, this might induce more people to use it.

    but at least its failure mode is fairly innocuous: getting the word $\mathrm{ab}$ when you wanted the product $a b$, or the product $f o o$ when you wanted the word $\mathrm{foo}$, is not the end of the world, doesn’t affect the editability of formula source, and rarely impacts the readability of the mathematics.

    It is hardly always innocuous: $x^ab$, for instance, produces $x^{ab}$ in iTeX instead of $x^a b$. It is rather nontrivial to guess that a space must be inserted.

    Pages containing linear logic would have to be converted carefully, if ! and ? were turned into active characters…

    The characters themselves can be arbitrary. Even having a single character for the roman font would be beneficial; we could pick something that is never used in formulas, e.g., @.

    • CommentRowNumber29.
    • CommentAuthorUrs
    • CommentTimeJul 18th 2018

    I am sure that once Richard (or somebody lending a hand) implements a minimum of xymatrix or similar here, all kinds of refinements can then be added incrementally.

    • CommentRowNumber30.
    • CommentAuthorMike Shulman
    • CommentTimeJul 18th 2018

    I guess I have to say this over and over again: on a wiki where everyone edits everyone else’s text, no syntax is “optional”: you have to at least read the syntax that other people write. I disagree that the meaning of such characters would be “instantly obvious”, and I disagree that they would be a “relatively minor concern”. Also, in my experience it’s mainly papers that are already of questionable mathematical quality whose authors are too lazy to write \mathrm or define macros to do it for them. I agree that the existing syntax for definitions and theorems is problematic, but that doesn’t mean we should introduce new similar problems; it means we should fix the ones we have.

  7. Great that both of you are passionate about this! If it’s me who implements something here, I’ll try to find some way to do it which balances the various issues; maybe it will be more constructive to wait until we’re slightly further along the way and ready to experiment :-).

    • CommentRowNumber32.
    • CommentAuthorDmitri Pavlov
    • CommentTimeSep 2nd 2019
    • (edited Sep 2nd 2019)

    What is the current status of commutative diagram packages on the nLab? It appears that TikZ is now operational; what about xy? It would be very helpful if we could finally repair Joyal’s texts, since they are quite useful in teaching.

  8. Both are available, yes. See the HowTo. But the CatLab is, I think, currently closed for editing; I believe that Todd would be the person to talk to about that.

    • CommentRowNumber34.
    • CommentAuthorDmitri Pavlov
    • CommentTimeSep 2nd 2019

    Excellent! Can we search-replace all links to codecogs so that they are rendered natively?

    • CommentRowNumber35.
    • CommentAuthorzskoda
    • CommentTimeSep 9th 2019
    • (edited Sep 9th 2019)

    In that case, the search should also convert some of the special characters, which are not written exactly as in plain xy source. For example,

    <center><img src="http://latex.codecogs.com/gif.latex?\xymatrix{c\ar[r] & a \ar@%3C.5ex%3E[r]\ar@%3C-.5ex%3E[r]& a'}
    " /></center>

    has code

    c\ar[r] & a \ar@%3C.5ex%3E[r]\ar@%3C-.5ex%3E[r]& a'

    where you can see replacements like %3C and %3E, which were needed to pass the angle brackets used in the arrow commands through codecogs (%26 was also often used for &). See https://ascii.cl for the ASCII codes, when needed.

    • CommentRowNumber36.
    • CommentAuthorzskoda
    • CommentTimeSep 9th 2019
    • (edited Sep 9th 2019)

    The correct version would be

    \begin{xymatrix} c\ar[r] & a \ar@<.5ex>[r]\ar@<-.5ex>[r]& a' \end{xymatrix}

    See for example Beck’s theorem (zoranskoda) and compare the corrected source code version 4, https://ncatlab.org/zoranskoda/source/Beck%27s+theorem/4, with the codecogs version 3, https://ncatlab.org/zoranskoda/source/Beck%27s+theorem/3.

    • CommentRowNumber37.
    • CommentAuthorDmitri Pavlov
    • CommentTimeSep 9th 2019

    Re #35: Of course, the URLs must be properly URL-decoded before search-replace.

    There are standard tools to accomplish this; I could write a simple script to do it if somebody is willing to run it on the source files.
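
    For concreteness, here is a rough sketch of the kind of rewrite I have in mind, using Python’s standard urllib.parse.unquote for the URL decoding. It is only a sketch: the regular expressions are simplistic and it has not been run against the real data.

    #!/usr/bin/env python3
    # Read page source on stdin, rewrite codecogs <img> links whose payload is an
    # \xymatrix{...} into \begin{xymatrix}...\end{xymatrix} (as in #36), and write
    # the result to stdout. Anything that is not an xymatrix is left untouched.
    import re
    import sys
    from urllib.parse import unquote

    IMG_RE = re.compile(
        r'<img\s+src="http://latex\.codecogs\.com/gif\.latex\?([^"]*)"\s*/?>',
        re.IGNORECASE)

    def rewrite(match):
        code = unquote(match.group(1)).strip()   # %3C -> <, %3E -> >, %26 -> &, ...
        m = re.match(r'\\xymatrix\{(.*)\}\s*$', code, flags=re.DOTALL)
        if m is None:
            return match.group(0)                # not an xymatrix: leave the image alone
        return '\\begin{xymatrix} ' + m.group(1).strip() + ' \\end{xymatrix}'

    sys.stdout.write(IMG_RE.sub(rewrite, sys.stdin.read()))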

    • CommentRowNumber38.
    • CommentAuthorzskoda
    • CommentTimeSep 9th 2019

    Which is the standard tool for this decoding?

  9. Yes, search and replace should be possible, good idea! I don’t have time at the moment, but will give it a go when I get the chance. If you write a script, Dmitri, I am happy to try to use it (though I will need in any case to write some more logic around it).

    • CommentRowNumber40.
    • CommentAuthorDmitri Pavlov
    • CommentTimeOct 14th 2019

    How can I retrieve the source code of a page from Joyal’s Cat Lab, like https://ncatlab.org/joyalscatlab/published/Weak+factorisation+systems?

    It’s protected, which seems to make it impossible to retrieve the source code in the usual way.

    I now have a candidate script for fixing the codecogs stuff, but have no way of testing it.

  10. Hi Dmitri, apologies for not replying to this at the time. You have access to make an SQL dump, and you can obtain the source code that way: for example, use the 'webs' table to find the web_id of the CatLab, and then just do a SELECT on the pages and revisions tables.
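
      Something along the following lines should pull out the CatLab sources. I am writing the table and column names partly from memory here, so do check them against the actual schema in the dump before relying on it.

      -- find the web_id of the CatLab (the webs table has one row per web)
      SELECT id, name FROM webs;
      -- then pull that web's pages together with the text of their revisions
      SELECT pages.name, revisions.content
      FROM pages JOIN revisions ON revisions.page_id = pages.id
      WHERE pages.web_id = 42          -- replace 42 with the id found above
      ORDER BY pages.name, revisions.id;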

    • CommentRowNumber42.
    • CommentAuthorDmitri Pavlov
    • CommentTimeNov 11th 2020

    What happened to https://ncatlab.org/sqldump? It appears to be unresponsive currently.

  11. It was working a couple of days ago; probably the nLab server restarted, which currently takes it down until I notice! I’ll bring it back up when I am next at my computer.

  12. Hi Dmitri, I checked just now: the sqldump server has been up since July, and my automated job was able to make backups over the last few days. From the logs, it looks like maybe something was wrong in your second API call. Recall that there are two: first a POST, and then a GET. The GET one seemed to be lacking authentication details.

    • CommentRowNumber45.
    • CommentAuthorDmitri Pavlov
    • CommentTimeNov 12th 2020

    Here is my script; it was working just fine for a while, then it suddenly stopped working, so something must have changed on the server side. I replaced the authorization token with ???.

    #!/usr/bin/bash
    # First call: POST to /sqldump to request a new dump; the response body is the dump id.
    SQL_DUMP_ID="$(curl -X POST -H "Authorization: Basic ???==" https://ncatlab.org/sqldump)"
    # Second call: GET the dump by its id and save it under a timestamped filename.
    curl -H "Authorization: Basic ???==" "https://ncatlab.org/sqldump/$SQL_DUMP_ID" >"$(date +"%Y-%m-%dT%H:%M:%S").sql"
  13. Hmm, nothing has changed on the server side; as I wrote, the server has been running since July. There doesn’t seem to be much wrong with your script. Could you try running the two curl commands separately? And could you also provide the command you are using to create the base64 value? You could use curl -u instead of the Authorization header as an alternative.

    • CommentRowNumber47.
    • CommentAuthorDmitri Pavlov
    • CommentTimeNov 13th 2020

    Suddenly, it works again now.