Yes, I can do a 301. That’s not a problem. I had hoped that a CNAME would work since then the requests get diverted at the DNS level and never reach my server. But I guess that the number of such requests is so small (being just a few links here and there) that it’s not an issue.
I’ll get that set up in the next few days.
So do I; however, a little bit of digging gets me the following:
~% host nforum.mathforge.org
nforum.mathforge.org is an alias for nforum.ncatlab.org.
nforum.ncatlab.org has address 104.27.170.19
nforum.ncatlab.org has address 104.27.171.19
nforum.ncatlab.org has IPv6 address 2400:cb00:2048:1::681b:ab13
nforum.ncatlab.org has IPv6 address 2400:cb00:2048:1::681b:aa13
~% host nforum.ncatlab.org
nforum.ncatlab.org has address 104.27.171.19
nforum.ncatlab.org has address 104.27.170.19
nforum.ncatlab.org has IPv6 address 2400:cb00:2048:1::681b:aa13
nforum.ncatlab.org has IPv6 address 2400:cb00:2048:1::681b:ab13
So the alias is pointing to the right place. Doing `whois 104.27.171.19`, I get that it is registered to Cloudflare.
So Cloudflare is at your end, not mine. It may well be that the type of redirect I’ve set up doesn’t work with Cloudflare’s system, in which case I can set it back to the other type of redirect, but then I need to keep an eye on your IP address to ensure that the redirect tracks it. Or it might be that the alternative hostnames need registering with Cloudflare so that it knows to expect them.
Older nLabians may or may not recall that the original URL of the nForum was nforum.mathforge.org, and that at times it was useful to have a back-up hostname for the nLab itself, which was nlab.mathforge.org. As I still rent the mathforge.org domain, I’ve kept these pointing at the nForum and nLab to avoid dead links.
I’m in the process of doing a bit of housekeeping at mathforge, and in doing so I’m updating the nforum.mathforge.org and nlab.mathforge.org DNS entries so that they are CNAME rather than A records. (If that means nothing to you, you probably should have stopped reading a while ago.) This is what they should have been originally: a CNAME is like a symbolic link, so if ncatlab.org moves then these will follow it.
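In zone-file terms, the switch described above looks roughly like this. The record syntax is standard, but the IP shown is just one of the addresses from the `host` output above, and TTLs are omitted:

```
; Before: an A record pins the name to a specific IP,
; which must be updated by hand whenever ncatlab.org moves.
nforum.mathforge.org.   IN  A      104.27.171.19

; After: CNAME records follow the target wherever it goes.
nforum.mathforge.org.   IN  CNAME  nforum.ncatlab.org.
nlab.mathforge.org.     IN  CNAME  ncatlab.org.
```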
For the record, once the DNS hosts are updated, they will alias as follows:

- nforum.mathforge.org to nforum.ncatlab.org
- nlab.mathforge.org to ncatlab.org
If they should be otherwise, let me know.
I’m pleased to see that the nLab/nForum are still going strong!
(Just driving by …)
I got the definition from (where else?) The Convenient Setting of Global Analysis, where it is given on p585 after Result 52.24 (that result is referenced from Jarchow, 1981, 10.4.3, p202 and Horvath, 1966, p277). It wouldn’t surprise me if it went back to Grothendieck, nor even if I knew of that at the time. I just had a quick look in Pietsch’s book on Nuclear Spaces; there was material on s-nuclear spaces but not on Schwartz spaces. However, I suspect that the two may be the same.
Certainly the Schwartz space is a Schwartz space. The reason for this definition is to figure out what characterises the Schwartz space in terms of functional analytic properties.
Oh, and I replied to the Math.SE question for the sake of completeness.
Oh botheration.
Got it. In fixing an issue with `&`s getting into the wrong places, I put an `urlencode` function in the wrong place. I think I have the right place now.
Ah, no. That shouldn’t be there. There appears to be a stray `/` at the start of the URL. Not sure where that’s coming from. I’ll dig it out next chance I get. Thanks for pointing it out.
There’s been no change on this side. That link looks right because in actual fact the feeds are handled via the search facility internally. Has your reader updated recently?
Does KaTeX convert from MathML or only from its LaTeX subset? If it doesn’t convert from MathML then it is irrelevant for the nGroup webpages. And that’s the conversion that you’d need to look at, not the conversion from its LaTeX subset. Moreover, I wonder how much of that performance gain is due to its limitations.
So keep an eye on it, but a wary one.
Unfortunately, your question exposed a deeply buried bug in the forum software (you, presumably inadvertently, made `&` a tag and the system didn’t expect that). I’ve now fixed the bug and deleted the extra discussions you started.
What I really wanted to do was split the server resources so that different types of request went into different queues. In particular, I wanted to put `show` requests in one queue, `edit` and similar in another, and then the rest in a third. That way, they wouldn’t block each other at the server level.
I think that to do that one would need a family of alternative URLs, such as `show.ncatlab.org`, which all pointed to the same location (i.e. the current nLab server). Then a request to `ncatlab.org` would get diverted to `ABC.ncatlab.org` by Apache’s rewrite module. Each `ABC.ncatlab.org` would then run a separate instance of Passenger, but the Passengers would all point to the same Instiki installation. This, I think, would relieve the main bottleneck, which is the number of concurrent requests and the fact that a few big ones can block the others.
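A rough sketch of the Apache side of that idea, assuming `mod_rewrite` and `mod_proxy` are enabled; the `show.ncatlab.org` and `edit.ncatlab.org` hostnames are the hypothetical ones from above, and the exact path names are illustrative:

```apache
# In the main ncatlab.org virtual host: divert requests by type so that
# each kind gets its own Passenger instance (and hence its own queue).
RewriteEngine On
# Page views go to the "show" instance ...
RewriteRule ^/show/(.*)$ http://show.ncatlab.org/show/$1 [P,L]
# ... edits and similar go to the "edit" instance ...
RewriteRule ^/(edit|save)/(.*)$ http://edit.ncatlab.org/$1/$2 [P,L]
# ... and anything else falls through to this host's own instance.
```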
Using a multi-line grep on the bzr repository should pick these up. Experimenting with fan theorem, I think that the following expression should pick them up:

pcregrep -M '^ *[^\*\s][^\r\n]+ *\r\n *\*'

I’m on a Mac, so my line endings are `\r\n`. Other systems may differ.
The regexp is: match the beginning of the line, then arbitrary spaces, then a non-asterisk and non-space character, then any non-line-ending characters, then maybe spaces (actually, could skip this), then a newline, then maybe spaces, and finally an asterisk.
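To see the regexp in action outside of `pcregrep`: Python’s `re` module accepts the same expression, so here it is run against a made-up two-line sample (the sample text is mine, not from any nLab page):

```python
import re

# A fabricated sample: a line of ordinary text followed directly by a
# list item, with the \r\n line endings discussed above.
sample = "Some preceding text\r\n * first item\r\n"

# The expression from above, unchanged; re.M makes ^ match at line starts.
pattern = re.compile(r'^ *[^\*\s][^\r\n]+ *\r\n *\*', re.M)
match = pattern.search(sample)
# match is not None here, so this sample would be flagged.
```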
Reading the `pcresyntax` man page, and experimenting a bit with what’s allowed before a list, I can condense it a little to:

pcregrep -M '^ *\w[^\r\n]+\R *\* '
but this still produces a lot of false positives, mainly where we’re in a list and the previous item goes over two lines so the naive search thinks the previous line is part of normal text instead of a list item.
With the above regexp, I got 240 pages.
Here’s the full “script” that I ran on my command line:
for f in *.meta; do
  g=$(pcregrep -M '^ *\w[^\r\n]+\R *\* ' $f:r)
  if [[ $g != '' ]]; then
    grep -m 1 '^name:' $f
    print $g
  fi
done
That prints the page name (might get it wrong if the page has been renamed) and the matching part. Then you can check if the match is genuine or warrants further investigation.
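For anyone without `pcregrep` to hand, here is a sketch of the same scan in Python. The file layout assumed is the one described above, namely a `foo.meta` file sitting next to a page file `foo`; the function name is my own:

```python
import glob
import re

# The condensed expression from above, translated for Python's re
# (PCRE's \R becomes \r?\n here).
PATTERN = re.compile(r'^ *\w[^\r\n]+\r?\n *\* ', re.M)

def scan(directory="."):
    """Return (page name, matching snippet) pairs for suspect pages."""
    hits = []
    for meta in sorted(glob.glob(directory + "/*.meta")):
        page_file = meta[:-len(".meta")]  # page source sits next to its .meta
        try:
            # newline="" stops Python translating the \r\n line endings.
            with open(page_file, newline="") as f:
                text = f.read()
        except OSError:
            continue
        m = PATTERN.search(text)
        if m:
            with open(meta) as f:
                name = next((l.strip() for l in f if l.startswith("name:")), "")
            hits.append((name, m.group(0)))
    return hits
```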
Fixed now.
(I didn’t consider this a major issue since search item number three is helpfully labelled “Daring Fireball: Markdown Syntax Documentation”, which contains full instructions on how to create link texts.)
It’s `bzr` only for historical reasons. It wouldn’t be impossible to convert it completely to `git` and then to push to GitHub. For security, the nLab should have its own GitHub account (since it would need an SSH key with an empty passphrase to do the push). Alternatively, there’s Launchpad, which is the Bazaar version of GitHub.

The repository contains as much of the information as I could extract from the database without compromising personal data (i.e. web passwords). It is possible to reconstruct the nLab from the repository (and I have a script that does exactly that). Hmm, if the nLab had a GitHub account then I could dump all the scripts that I’ve written over the years there as well, which would be as good a way of handing them on as any.
Urs, click on the ’cohomolohy’ tag in the list, or click here.
Ah, I guess that needs a new place to reside. I’ll find somewhere suitable.
The latest Instiki update hopefully fixes the cookie issue once and for all, so this ought to go away once that update has been put in place (no longer my responsibility).
Minor additions:
I did something like this for matrices (I wanted to modify the way Sage output matrices for LaTeX). What I did was to find the export function in the Sage code and modify it to my needs. I suspect that it wouldn’t be hard to do the same thing for tables. Basically, it concatenates a variety of arrays, so you simply need to modify the joining strings.
You’d need to make it produce Markdown table syntax, and if the entries are maths then you’ll need to add the dollars before and after each cell. Nevertheless, it’s a simple enough join. I’d experiment a little in the Sandbox to get the format right, then take a look at the Sage code (which Rod has linked to) to find the function to modify.
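As a sketch of that join (nothing here is Sage API; the function name, the plain list-of-lists input, and the decision to treat the first row as the header are all my own assumptions):

```python
def markdown_table(rows):
    """Join a list of lists of entries into a Markdown table,
    wrapping each cell in dollars so itex picks it up as maths."""
    header = rows[0]
    lines = ["| " + " | ".join("$" + str(e) + "$" for e in header) + " |"]
    # Markdown tables need a separator row after the header.
    lines.append("|" + "---|" * len(header))
    for row in rows[1:]:
        lines.append("| " + " | ".join("$" + str(e) + "$" for e in row) + " |")
    return "\n".join(lines)
```

On `[["a", "b"], [1, 2]]` this produces the three lines `| $a$ | $b$ |`, `|---|---|`, and `| $1$ | $2$ |`.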
Oh, and I’d start by simply writing it as a function rather than a class method. As an example, here’s my matrix output code:
def print_matrix(m):
    latex = sage.misc.latex.latex
    LatexExpr = sage.misc.latex.LatexExpr
    nr = m.nrows()
    nc = m.ncols()
    if nr == 0 or nc == 0:
        return ""
    S = m.list()
    rows = []
    for r in range(0, nr):
        s = ""
        for c in range(0, nc):
            if c == nc - 1:
                sep = ""
            else:
                sep = " & "
            entry = latex(S[r*nc + c])
            s = s + entry + sep
        rows.append(s)
    tmp = []
    for row in rows:
        tmp.append(str(row))
    s = " \\\\\n".join(tmp)
    return LatexExpr("\\begin{bmatrix}\n" + s + "\n\\end{bmatrix}")

def print_matrix_columns(m):
    latex = sage.misc.latex.latex
    LatexExpr = sage.misc.latex.LatexExpr
    nr = m.nrows()
    nc = m.ncols()
    if nr == 0 or nc == 0:
        return ""
    S = m.list()
    rows = []
    for c in range(0, nc):
        s = ""
        for r in range(0, nr):
            if r == nr - 1:
                sep = ""
            else:
                sep = " \\\\ "
            entry = latex(S[r*nc + c])
            s = s + entry + sep
        rows.append(s)
    tmp = []
    for row in rows:
        tmp.append(str(row))
    s = "\n\\end{bmatrix},\n\\begin{bmatrix}\n".join(tmp)
    return LatexExpr("\\left\\{\\begin{bmatrix}\n" + s + "\n\\end{bmatrix}\\right\\}")

def print_vector(v):
    latex = sage.misc.latex.latex
    LatexExpr = sage.misc.latex.LatexExpr
    nr = len(v)
    if nr == 0:
        return ""
    S = v.list()
    s = ""
    for r in range(0, nr):
        if r == nr - 1:
            sep = ""
        else:
            sep = " \\\\ "
        entry = latex(S[r])
        s = s + entry + sep
    return LatexExpr("\\begin{bmatrix}\n" + s + "\n\\end{bmatrix}")
I’ve written almost no Python code before; this was all figured out by looking at the existing code and changing little bits.
You could write a new export routine to output Markdown+itex.
Oh, except that he hasn’t yet created any links from the home page to the rest of the material, so that doesn’t help much …
I’m glad this got caught so early. It was a hassle to fix, but it would have been worse if it had gone on longer.
It was my fault. It was part of the process of converting Andres’ notes into his new web. There were a few stages, and it transpires that in one of them I inadvertently changed a load of page names to `Empty N`. I’ve changed back as many as I could figure out, which includes all of the nLab ones. There are a couple on Joyal’s cat lab and on Stephen Spahn’s web whose original names I couldn’t work out (there’s also one on Doriath, but as that’s one great big sandbox it doesn’t matter).
Sorry about that! I’ll try to avoid being helpful in future!
Incidentally, if when trying to edit a page you get sent back to the HomePage then that’s a sign that the page doesn’t really exist and that you’re only seeing a ghost in the cache.
(Hey, who turned out the lights?)
Ha! And there was me using `xhost`. Should’ve just googled it.