I have not seen any kind of 'To Do' list. Let me qualify that: for the content parts of the nLab, this is largely self-documenting, although having a 'wish list' of high-priority pages to be added/edited/cleaned up could be useful. It is for the infrastructure aspects of nLab (and n-Forum) where such a list would be rather helpful.
A bit of background: back in August 2008 there was a discussion on the categories list about a renewed Bourbaki project, done using modern ideas and technology. I volunteered to be the 'infrastructure person' for that project. A group of us (including André Joyal) took this offline and had a number of discussions on that topic. Unfortunately, all of us got rather busy, and things were put on hold. Then the nLab evolved, and more and more it looks like it is evolving into a good part of what we had discussed. So the sane thing to do, from my perspective, is to join you and offer to help where I can [infrastructure rather than content, at least right now].
More background: my own research is in 'mechanized mathematics'. The snappy description: I teach computers to do university-level mathematics. I did that in industry for 11 years (computer algebra), and now do so as an academic. Although educated as a mathematician, I am now best described as a computer scientist and software engineer. I have been very active in the Calculemus community, whose aim is to reconcile computation and deduction (as per Leibniz's dream).
So, what can I do to help?
Jacques
Dear Jacques,
many thanks indeed for your message and for coming here.
As for what you can do, the quick answer is: we all more or less add material to the nLab to the extent that it serves our personal needs. For instance when I teach some seminar and look up and collect material for that, I put it into the appropriate nLab entries. When I collect facts from the literature for my research, I put that collection into the relevant nLab entries. And so on.
So there is not so much a master plan as to how the nLab should grow, but it grows the way its contributors happen to feel the need. At least that’s how it used to be, with all contributors being busy otherwise and their activity here being a side effect of their “genuine occupation”.
So I would think you might look around the nLab and try to see where you personally feel you would want to add or edit in order to improve the Lab.
Apart from these general remarks, you should know, if you don’t already, that André Joyal himself is busy expanding his personal area of the nLab: Joyal’s CatLab.
Dear Urs,
I have 'lurked' here for a while before posting, so I understand the dynamics of the development of content on nLab. I have earlier today done some edits (for grammar, mostly). What I am wondering about is the development of the nLab infrastructure, which is where I may be able to help, more so than with the core content, at least right now. It is exactly because nLab grows organically as a side effect of people's "genuine occupation" that I was searching for a ToDo list on the one component which is hardest to grow in that way!
And I am extremely glad that André has chosen to join nLab (in his own way) [and I did know he had, but you were right to point it out to me].
I would be extremely pleased if the basic answer was: we're pretty happy with the current infrastructure, it doesn't seem to need much work right now. I also realize that if the answer is that the infrastructure does need some work, why should any newcomer be trusted to touch the one component which is a single point of failure of the nLab? That is a good question that would also need to be answered (but this is perhaps not the best forum on which to dwell on that point).
Ah, I see. Thanks for the clarification.
One technological aspect of the wiki that I and I think others are missing is the ability to encode diagrams on nLab pages in a way that one is used to from one’s LaTeX documents. In particular it would be nice if somehow we could enter xypic code into an nLab entry and have the corresponding diagram displayed. There are workarounds to achieve this, but they are still pretty suboptimal.
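(For concreteness, the kind of source one would like to paste straight from a paper into an entry is a plain xymatrix block, e.g. a commutative square:

    \xymatrix{
      A \ar[r]^{f} \ar[d]_{g} & B \ar[d]^{g'} \\
      C \ar[r]_{f'} & D
    }

This is standard xypic, shown here only for illustration.)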
Generally, I think it would be a big boost for the nLab if porting nLab pages to and from LaTeX would be more automated. Currently there is a functionality that produces for a given nLab page a file that contains LaTeX code which is supposed to represent the page’s content, but when I tried this once it was so far from working well that it was effectively useless.
I don’t know if these are things you might have any interest in looking into, but this is what comes to my mind.
That seems like the kind of coding which I might enjoy doing when I need a distraction from my "genuine occupation". If the site administrators can detail to me the infrastructure used, I can create a duplicate of that in a 'sandbox' on one of my own machines, prototype some ideas, and then offer them back when I think I have something that works. The porting to-and-fro of LaTeX in particular seems like an interesting aspect to work on.
Hi Jacques,
I second Urs’ call for a LaTeX export facility, incl. diagrams. One reason I am hesitant to write all my stuff up in the nLab is that I then have to write it up again (I make heavy use of diagrams) for a paper. If there were a seamless nLab-LaTeX transfer then I reckon that (slightly) more people would be attracted to use the nLab as a current research tool: the promise of everything you write up being instantly transformable into a paper is enticing - as is the typo-fixing facility the interested onlookers provide :) I certainly would just write everything on the nLab (modulo spam-filter problems) and then turn the handle to get a paper out. Or vice versa: write a paper, and just upload it as an nLab page, whereupon a smattering of links will finish the task.
Firstly, coding help is greatly welcomed! It would help if you could say a little about where your expertise lies.
Secondly, there is an Instiki “TODO” list on the Instiki Instiki: http://golem.ph.utexas.edu/wiki/instiki/show/TODO (see also the “Known Bugs” page there). Anything that improves Instiki, improves the nLab.
Thirdly, it is already possible to export nLab pages to LaTeX. It is even possible to do so with diagrams. The problem is that the best format for diagrams on the web (SVG) is not (as far as I know) directly importable into LaTeX and so needs converting to something like PDF first. This could be automated without too much difficulty, though. It might even be possible to write a LaTeX macro that did it automatically (or semi-automatically, a bit like how TikZ can deal with gnuplot functions).
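(A minimal sketch of the semi-automatic route, assuming rsvg-convert is installed and shell-escape is enabled; the file names are placeholders:

    % compile with: pdflatex -shell-escape paper.tex
    % convert the wiki's SVG to PDF on the fly, then include it
    \immediate\write18{rsvg-convert --format=pdf --output=diagram.pdf diagram.svg}
    \includegraphics{diagram.pdf}

This is just the standard \write18 trick, not anything Instiki currently provides.)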
Fourthly, it is going to be almost impossible to write a full LaTeX->nLab converter since LaTeX is a combination processor-and-renderer, but the nLab separates the processing from the rendering.
So when people ask for a ’LaTeX<->nLab’ converter, they need to be a little more precise about what they actually want to be able to do.
Thirdly, it is already possible to export nLab pages to LaTeX.
Yes, but it seems to be pretty useless. For one of my previous articles I wanted to take large chunks of material from my personal web and convert it into a LaTeX document. I don’t recall all the things that went wrong, but the automated export functionality turned out to be not helpful at all for this. I ended up copy-and-pasting the nLab source code into my LaTeX file and then modifying the syntax by hand, as needed.
So when people ask for a ’LaTeX<->nLab’ converter, they need to be a little more precise about what they actually want to be able to do
I understand that this is a tall order, and I don’t expect this easily done, much less request it, but just mention it for the record: the more seamless it is to go from LaTeX documents to nLab pages and back, the better.
To create lectCatCh0 (zoranskoda) from a LaTeX file, most of the time I spent was on two things:
changing {\bf text}, {\it text} etc. into the corresponding Markdown (**text**, *text*) and the like
changing LaTeX escape sequences for accented characters like \'c into the characters they represent, which are directly encodable in the nLab
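(A sketch of the kind of mechanical filter these two items call for — a hypothetical Ruby one-purpose hack, illustrative only and not part of any existing tool; Ruby since that is what Instiki itself is written in:

    # hypothetical one-purpose filter: LaTeX body text -> nLab-flavoured markup
    ACCENTS = { "\\'c" => "ć", "\\'e" => "é", "\\\"o" => "ö" }  # extend as needed

    def latex_to_nlab(src)
      out = src.gsub(/\{\\bf\s+([^}]*)\}/, '**\1**')  # {\bf text} -> **text**
      out = out.gsub(/\{\\it\s+([^}]*)\}/, '*\1*')    # {\it text} -> *text*
      ACCENTS.each { |tex, utf| out = out.gsub(tex, utf) }
      out
    end

    puts latex_to_nlab("Simple test: {\\bf bold}, {\\it italic}, \\'c")
    # => Simple test: **bold**, *italic*, ć

Real LaTeX has many more cases, of course; this only shows that the bulk of such chores are mechanical.)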
To go the other way around, from the nLab to a LaTeX file, one again needs to scrape off various brackets and stars when cutting and pasting. The automatic TeX output gives mainly unusable quasi-LaTeX code which mostly does not parse, and adapting it takes more time than taking the raw source mixture and adapting that by hand.
Programming expertise: once upon a time, I was a professional programmer (then designer, ..., architect), for many years. I have written programs as part of my job(s) in more than 25 languages. I have fixed bugs in code written in programming languages which I had never seen before. I can learn a new one in a day or so, but it takes me at least 2 weeks to become idiomatic [it took me much longer than that for Haskell, and longer still for metaocaml, but there I invented some of the techniques which are now considered idiomatic]. (Apologies if this comes off as bragging - but I was asked, and all the above is true!)
My own research, as I said above, is 'mechanized mathematics' (expertise in computer algebra and increasingly in theorem provers). But I've done my share of web work too [I led a dot-com-inspired project to put Maple as a computational engine on the Web in 2001 - it failed because the business model was idiotic], as well as math-on-the-web work [I'm active in the MKM - Mathematical Knowledge Management community; long ago, I also designed parts of Maple's MathML support, as well as making sure that Maple's LaTeX export got some maintenance time]. I'm also Chair of the Electronic Services Committee of the Canadian Mathematical Society [for the last 3 years, renewed for another 2], providing policy advice for the web offerings of the CMS.
Regarding LaTeX converters: I have no special expertise there, but I do know several of the world experts quite well. My plan was to borrow as much of their open source software as I can and leverage their expertise when I run into a problem. Some of them have used all the papers of the arXiv as their 'test cases' for LaTeX tools (and wrote up the results, published in the last 3 years' worth of CICM conferences). Plus I myself do a lot of work around translation of Domain Specific Languages (DSLs), so this looks like a fun problem to tackle.
And while it is indeed impossible to write a full LaTeX converter, experience (see above) shows that most people do not use LaTeX in such sophisticated ways. In other words, a very good converter should be feasible, although people will always want more. And people are already working hard on various LaTeX converters. Once I understand the requirements of the nLab better, I know who to ask to find out what is the state of the art.
I will look at the Instiki link. But here I am less convinced that I can bring any particular expertise to the problems, so I may not be as much help. We'll see.
@zskoda: a priori, these kinds of problems all seem relatively easy to tackle. More precisely: it should be possible to have import/export facilities which deal with at least 90% of such issues automatically. It's not entirely trivial, but definitely doable.
Apologies if this comes off as bragging
:| :O :D
Of course, I know I can make my own filters to do several conversions like this, but it would be a waste of time to do a one-purpose hack.
Fourthly, it is going to be almost impossible to write a full LaTeX->nLab converter since LaTeX is a combination processor-and-renderer, but the nLab separates the processing from the rendering.
Unless some parameters are taken from outside, the whole process is predictable at compile time, hence this is not a matter of principle. Of course, I understand the difficulties.
I totally agree that having every user write their own filters and 1-time hacks is a complete waste of time. [Some of] These filters ought to be part of the standard conversion process.
Zoran wrote:
Unless some parameters are taken from outside, the whole process is predictable at compile time, hence this is not a matter of principle.
That’s precisely why it is almost impossible. It may even be actually impossible. There are parameters taken from outside. LaTeX knows exactly what font it is going to use, for example, and so can hack things together according to the precise dimensions of characters (\settowidth springs to mind). Without a vast amount of javascript, Instiki does not know that. It can suggest a font, but if I decide that I want to view the nLab with an elaborate copperplate font, Instiki has no way of knowing that and so no way of implementing such commands.
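(A concrete instance of the sort of thing that cannot be replicated — standard LaTeX, quoted only to illustrate the point:

    \newlength{\mywd}
    \settowidth{\mywd}{natural transformation}  % width in the current font
    \makebox[\mywd][c]{$\alpha$}                % a box of exactly that width

The measured width depends on the font metrics known at compile time, which a browser choosing its own font cannot supply.)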
Jacques wrote:
experience shows that most people do not use LaTeX in such sophisticated ways
Obviously never looked at my papers! But I take your point.
For export, I would much rather fix the intrinsic exporter than write a new one. For import, I agree that it wouldn’t be hard to write a fairly simple script that could cope with the majority of LaTeX.
Is the 'intrinsic exporter' part of the standard distribution of Instiki? And I agree, fixing the intrinsic one is the best route (if it is possible).
Yes, the ’intrinsic exporter’ is part of the standard distribution of Instiki. I’ve only made a very minor change to the nLab version and that’s to do with logging, so anything that you can see on the nLab can be replicated in a generic installation. (The nForum is a little more hacked, but that’s another story.)
The exporter itself, though, could be thought of as being a “module” in that it is based on ’maruku’, a ruby version of Markdown. I think that Jacques has made one or two changes to it, though, so it is best to get it via Instiki rather than from the source. If memory serves me right, one of the main issues with exporting was that wikilinks don’t get converted to anything in particular (well, they didn’t last time I looked which was back in December - they may have changed by now).
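(For instance, a wikilink such as [[adjoint functor]] could plausibly be exported as \href{http://ncatlab.org/nlab/show/adjoint+functor}{adjoint functor} using hyperref — just a suggestion of what “converted to something in particular” might look like, not current behaviour.)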
Ok, I'll start looking. I've only ever marked student assignments written in Ruby, never had to write any of it myself (yet) - this will be fun! I've started reading the Instiki descriptions on the Wiki, they are very informative indeed.
I should mention that Jacques Distler (gosh, am I now going to have to figure out which Jacques is which?) keeps a close eye on the nLab and is very open to suggestions and help.
Jacques wrote:
I will look at the Instiki link. But here I am less convinced that I can bring any particular expertise to the problems, so I may not be as much help.
I doubt that. Instiki is open-source software written in Ruby, using Ruby on Rails as a web framework. Judging from your expertise, it should take you a couple of weeks at most to learn Ruby, Ruby on Rails, and the Instiki software itself.
Andrew wrote:
The problem is that the best format for diagrams on the web (SVG) is not (as far as I know) directly importable into LaTeX and so needs converting to something like PDF first.
If we were talking about Java, I would point out the Apache project Batik, which lets you convert SVG to several other formats quite easily, but I’m a newcomer to Ruby and do not know if something similar already exists. Maybe I don’t even understand the problem properly…
Regarding SVG and round-tripping in general: my favoured solution would be to support as much of LaTeX (aka iTex) as possible (including TikZ) as the main 'source' language. One should then fully embed the source markup inside the xhtml markup (as MathML already allows for math), so that 'round tripping' is very easy, since it is mostly 'extraction' rather than 'translation'. I believe this basic idea is also possible for a lot of other markup (XML and namespaces are specifically designed to solve that problem).
Since the vast majority of pages on nLab will have originated that way, this would in practice be quite effective.
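(If I understand the mechanism correctly, this is already how formulas work: itex2MML keeps the TeX source inside the generated MathML via the semantics/annotation elements, roughly like this — reconstructed from memory, illustrative only:

    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <semantics>
        <mrow><mi>f</mi><mo>:</mo><mi>A</mi><mo>→</mo><mi>B</mi></mrow>
        <annotation encoding="application/x-tex">f : A \to B</annotation>
      </semantics>
    </math>

Round-tripping then amounts to extracting the annotation rather than reverse-engineering the presentation markup; the proposal is to do the same for diagram source and SVG.)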
I definitely agree that the best solution would be to allow an xypic- or tikz-like syntax in itex which would be automatically parsed into SVG by the itex2mml processor. That would make it much easier to write diagrams in nLab pages, and also much easier to export them to LaTeX. I also agree that this would be an awesome and very helpful thing to have. We had some discussion about the possibility almost a year ago in several discussions in the “Diagrams” category, which you have probably seen already, e.g. here and here.
If you do choose TikZ, or something TikZ-alike, as the input syntax (which I would support, as it seems more flexible and easier to use than xypic, although I am not as familiar with it yet), then can I request that you please include early on something like double, -implies for drawing double-shafted arrows? We use a lot of those.
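(For reference, the standard TikZ recipe for such a double-shafted 2-cell arrow — plain TikZ with the arrows library, quoted only to pin down the request:

    % preamble: \usepackage{tikz} \usetikzlibrary{arrows}
    \begin{tikzpicture}
      \node (F) at (0,0) {$F$};
      \node (G) at (2,0) {$G$};
      \draw[double equal sign distance, -implies] (F) -- (G)
            node[midway, above] {$\alpha$};
    \end{tikzpicture}
)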
Extending itex2mml to support the drawing of diagrams is likely a larger project -- something that might be best done as a student project, closely supervised by a faculty mentor.
Get one of your graduate students to do it =D!
I would love to - but I don't have any students working on a topic even close to this at the moment. And I am not one of those supervisors who dumps distracting tasks on his students (for one, I need them to finish in a reasonable amount of time). But I may well have some future student work on related items, at which time this would make sense.
Okay, I misunderstood what you meant by
my favoured solution would be to support as much of LaTeX (aka iTex) as possible (including TikZ) as the main ’source’ language.
What exactly is being proposed for implementation?
But isn't it that the essential part of the software which the codecogs site uses to convert xypic to SVG is available standalone? So our server could run it just for those xypic blocks of code directly, without using SVG. No?
@Mike: I would like to extend the tool chain maruku+itex2mml with new features that allow TikZ in the input. I would also like to extend that tool chain to embed the 'input' (LaTeX) into the output XML (xhtml + svg + mathml + ...), to make round-tripping as easy as possible.
I am just learning some of the details of the technology now, so the important part is "extend the current tool chain" and "embed the input in the output", I might be making some mistakes in the technological details.
@Zoran, codecogs converts xy to PNG, not SVG, as far as I know. And so there’s no gain beyond just generating the picture yourself and including it as a regular picture.
@Jacques, I was about to say that I thought that the whole issue of diagrams is a little large to start with, but from what you write I don’t think that I need to say that any more! (That’s not a comment about your abilities, rather that I don’t think that the problem is well-defined yet. I’ll make one comment though: Jacques Distler is very much in favour of TikZ (or a subset thereof) as input language.) There are several “bite-sized” things that could be done. Apart from Jacques Distler’s TODO list and what’s talked about above, here are some others:
Clearly I’m being dense, but I still don’t understand why #23 and #28 are not contradictory. Isn’t extending the tool chain to allow tikz in the input a way of extending itex2mml to draw diagrams?
Another feature that it would be really nice for instiki to have is for links that go through a redirect to show up in the “Linked from” list at the bottom of the page.
codecogs converts xy to PNG, not SVG, as far as I know. And so there’s no gain beyond just generating the picture yourself and including it as a regular picture.
I think there is a gain in using codecogs, already: it’s that the source code for the diagram is entered into the nLab entry, instead of just a link. That means the code is available for modification, notably. But it also means that later on it is easier to have that same code be interpreted by another engine.
One problem with using codecogs currently, apart from the fact that we don’t have control over their server and hence risk that nLab pages fail to display correctly if they decide to close their service down or something, is that one cannot pass the xypic code in the way that one would want to pass it, but has to replace all whitespace and all ampersands by escape codes.
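(To make the escaping issue concrete: instead of the plain source

    \xymatrix{ A \ar[r] & B }

one has to embed something shaped like

    http://latex.codecogs.com/gif.latex?\xymatrix{A%20\ar[r]%20%26%20B}

with %20 for every space and %26 for every ampersand — the exact URL shape is from memory and only illustrative.)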
Re #29: the homepage for codecogs does list SVG as one of the possible outputs (there are many others). Truthfully, though, I could not get SVG output to work, while several other formats they list worked fine.
At some point somebody (not Jacques Distler?) started on some xypic tool (there was a trial version; I cannot find the link now). Now you say that Jacques prefers TikZ. I should note that a large percentage of the LaTeX papers in the category theory community use xypic, including those of most of the nLab contributors. I myself would rather use my existing xypic files along with a hack like codecogs than write new TikZ code.
It was me who had a go at an xypic->SVG converter. It’s still available and it sort-of works.
The basic problem, as I see it, is the following: everyone wants an automatic converter so that they can enter their code, be it tikz or xypic, and have it look the same both in LaTeX and on the web. That isn’t possible. The next best thing would be to have a basic converter (a bit like the one I wrote) which can convert xypic into a rough form of SVG ready to be tweaked by the SVG-Editor. My program would need modification to do this, by the way. Then we could include both the xypic code and the SVG, and do it in such a way that when viewed via the web you get the SVG, and in the LaTeX export you get the xypic (or tikz or whatever). That is certainly possible. Indeed, if you replace “xypic” with “external graphic” then it is there already, so it wouldn’t take a vast amount of tweaking of the code to produce this.
be it tikz or xypic, and have it look the same both in LaTeX and on the web
It is not possible for the web document. But the true solution for big formulas (not inline formulas) would be not to use the various local fonts but to have a true dvi plugin for web browsers, differing from usual dvi in two things: it would allow multiple dvi mini-windows of any size (not letter size or A4, but the size of one formula), and the plugin would carry a small supply of fonts for dvi and download from the server any special fonts that are used. There was a similar project over 10 years ago, I think idvi, and there was still some hope of some form of LaTeX in some xhtml standard, but instead the stupid MathML replaced it. (I say stupid because of the idiotic syntax of MathML, which is not for human writing; the creators said "no problem, we will write tools to help write MathML", and John Wheeler answered in one discussion that having a tool is an indication of a problem, not a solution of the problem.) idvi also had the idea of inline expansions: they planned to be able to expand buttons with idvi formulas within the browser window. This kind of expandable hierarchy, or content zoom-in feature, is highly desirable in long structured texts.
I mean, nobody expects videos to use local fonts, and one has all those Flash etc. plugins. So why would big math diagrams not use plugins in some future technology? MathML or any such stupid technique can solve the problems with inline formulas, with a bit of ugliness in font size, but for true diagrams the original dvi-like techniques are superb (in bandwidth/memory as well as in look) in comparison both to the various standard picture formats and to MathML hacks; and in my opinion they look nicer, in finely tuned proportions, than SVG.
which can convert xypic into a rough form of SVG ready to be tweaked by the SVG-Editor. My program would need modification to do this, by the way.
I do not see much need for this. Either one uses xypic with the result as it is, or one writes in some other software. Fiddling in the middle is work which will in general get better results done from scratch.
Zoran, I think one of the reasons MathML is preferred over images or dvi is that it contains some content information, rather than just display information, and therefore is more accessible to e.g. blind people who have to have the content translated into some other form of information, rather than merely looking at a picture.
I don’t see why the use of tools to write MathML is any worse than the use of tools to convert latex to dvi. Just because a format is built out of text symbols like XML, one shouldn’t necessarily expect it to be human-readable and -writeable.
Mike, you misunderstood my point, as above I discuss several things at once. It was a discussion on a baby form of LaTeX as opposed to MathML as an emerging standard for CONTENT. It was NOT a discussion on idvi; that is another issue from about the same time (the discussions were maybe even in the same forum/mailing list), but the issues are separate. The argument from the MathML people was that parsing MathML is easy and robust, e.g. if part of the code does not arrive it still parses. Somebody said to this that if the w3 consortium cannot find good parser writers then this is really gross. Then somebody said: but why do you care, the difference is not important, as we are writing tools which will help writing the complicated code in MathML describing simple equations (somebody gave a very simple equation as an example, with one page of code, which would take several characters in LaTeX). At the end John Wheeler of Princeton said: but having tools to help write the code is not a solution of the problem but an indication of the problem.
So historically, having MathML and not some baby LaTeX for formulas was of that nature. If somebody is blind and a machine reads LaTeX to him (there are tools for that), it is easier to understand than long nested sequences of MathML-style text. My boss is 95% blind, and I am pretty well aware of the difficulties for blind people.
Ah ok, sorry.
I still think there is a valid point that LaTeX, like dvi, is all about the way something looks, rather than what it means. At root, TeX is a formatting language, not a language to describe mathematics. MathML is definitely ugly, but I believe that in at least some of its versions, it’s trying to include “content” information as well (though perhaps poorly).
Maybe they try, but the massiveness of its NESTED syntax (with many levels of nesting) makes it hard to comprehend, definitely harder than LaTeX. Whether in principle MathML can carry more information on FORMULA structure than LaTeX, I do not know (LaTeX is in principle concerned with various graphical details, but simple LaTeX mainly with the structure of formulas, which are presented far more simply than in MathML); if so, some tools could make this useful for the blind. There is a LaTeX audio reading system devised by a blind person in the USA, and some have good experience with it, I was told (my boss does not use it yet).
Edit: By the way, it is a bit annoying that the edit/permalink etc. bar on the RHS is lower in the new layout than the top row with the author and the number of the comment, where it belongs. This way, if there is a quotation block it overlaps with the ribbon of the block, and in general it makes the first line of the text shorter and uglier. Does anybody share my feeling?
Another wishlist item: it would be nice if you could redirect to an anchor in the middle of a page. So some syntax like
[[!redirects other page {#anchor}]]
placed on page “this page” would make all links [[other page]] point instead to “this page#anchor”.
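(Since Maruku already lets one attach an anchor to a heading with the same brace syntax, the ingredients seem to exist. A hypothetical worked example, with invented page names: on the page “adjoint functor” one would write

    ## Unit and counit {#unitcounit}
    [[!redirects unit of an adjunction {#unitcounit}]]

and then a link [[unit of an adjunction]] anywhere on the Lab would land at “adjoint functor#unitcounit”.)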