That’s right, and right again.
For many years I paid for the nLab out of my own pocket, for some of those years jointly with Andrew Stacey. Right when worst came to worst, Steve Awodey won a big grant and offered to host the nLab server using a little bit of that money.
Ever since, as our home page says here:
The nLab runs on a server at Carnegie Mellon University that is supported by MURI grant FA9550-15-1-0053 from the Air Force Office of Scientific Research. Any opinions, findings and conclusions or recommendations expressed on the nLab are those of the authors and do not necessarily reflect the views of the AFOSR.
If you google for MURI grants, with a little effort you eventually find them stating what past MURI grants have done for them, and it is, unsurprisingly, about increasing the laser-targeting precision of bombs and such things.
We have had discussion of other funding models recently. I think Mike is optimistic about winning a grant, but, as others have suggested, too, it might be easiest if we simply collect some money from the pocket of a few regulars and be done with it. I forget in which thread we last discussed this. But we could easily buy a server and have it shipped to Richard’s home.
And then Grothendieck would no longer have to be ashamed of us. As he surely would be right now, I agree.
Here is an email from when we made the migration to CMU happen:
From: Urs Schreiber urs.schreiber@googlemail.com
Date: Sat, 16 May 2015 19:03:01 +0200
Subject: Re: nlab at CMU
To: steve awodey steveawodey@icloud.com
Cc: Bas Spitters b.a.w.spitters@gmail.com, Adeel Khan kadeel@gmail.com, Michael Shulman shulman@sandiego.edu, Joseph Ramsey jsph.ramsey@gmail.com, Urs Schreiber urs.schreiber@gmail.com
Hi,
since it seems that Adeel is about to do the migration, and since I
will have the need to make public announcements about that, I should
know who is to be credited for what.
Here is my understanding, please correct and expand:
Mike for initiating it all;
Steve for providing part of his grant resources;
Bas for catalyzing the process;
Joseph for physically handling the server;
Adeel for remotely administrating the installation;
the US Department of Defense for giving Steve the grant.
Is that right?
Looking at
http://www.defense.gov/releases/release.aspx?releaseid=16641
I suppose Grothendieck wouldn’t have accepted such grant resources,
but luckily we are not Grothendieck, I suppose ;-)
Best wishes,
Urs
I’d be very happy to contribute as a regular.
With due respect to the mathematical genius of Grothendieck, his political views were somewhat odd, so I am not bothered about his shame, nor do I have any problem with accepting funds from the US military. In fact, I don’t even understand why people who dislike the military object to taking their money and doing something nonviolent with it.
In fact, I don’t even understand why people who dislike the military object to taking their money and doing something nonviolent with it.
I’ve heard the very same argument made by leftist thinkers as well, e.g., Noam Chomsky who is employed by MIT. To me it carries some force.
(I’d rather not talk about politics here at the nLab. I have enough of that outside the nLab, being reasonably active on a variety of fronts.)
I certainly don’t want to talk about politics here either; I just wanted to make sure everyone is aware that opinions about politics may differ. However, it may not be a wholly “academic” question, even going forward. For instance, with participation in the MURI as an “in”, there’s a chance that I may personally be in a position to secure further smaller personal grants from AFOSR, and I would be happy to write nLab support into such grants as well (as I already did for my CAREER proposal to the NSF).
For myself, I’m not opposed to our reliance on this funding source. More important for me is that the PI or PIs for the grant deem the nLab worthy of support.
It’s not politics if one reflects on one’s whereabouts.
Speaking of reflection: I can understand the attitudes “I don’t care” or “I am actually fond of this state of affairs” (somebody should say that; is nobody fond of the small chance that their academic work might actually contribute to national security?), but the suggestion that academics financed by the military are secretly outsmarting the system by using up money without “doing something violent” seems intellectually untenable to me.
But irrespective of such philosophizing, we should use this occasion to pick up the thread we had left dangling elsewhere: no matter where the nLab’s funding comes from, it is desirable for us to get back physical control over our server.
I suggest we get serious about setting up our own server, maybe at Richard’s place, or maybe in the cloud, but under our control.
From messages we have seen here and elsewhere, we easily have enough volunteers that collecting the required sum should be straightforward. Everyone who contributes is then, of course, free to decide where to take his or her contribution from, whether from salary or from a grant or from robbing people in the neighbourhood.
I suggest Richard briefly recalls for us what sum he thinks it takes to install and run a (virtual or real) server in a way he finds appropriate. Then we have an official call for names of volunteers who would offer support, so that everyone gets a rough idea of what the required individual share is. Then Richard gives us his PayPal account number or something like this, and we get going.
I will reply to #10 properly later. I just wanted to point to this thread. As there, the main sticking point for me if we use the cloud is that I am uncomfortable with using a personal credit card, and think that we need to set up an nLab organisation to which we can attach a credit card. David C mentioned there that he might possibly be willing to look into it?
From what I could see at the time, the bureaucracy is horrific in Germany, but not too bad in the UK or in Norway; the problem with doing it in Norway is that the documents will be in Norwegian, which makes things a bit too dependent on me, i.e. not robust enough.
As mentioned in a different thread, if we do not use the cloud, then I cannot have the server in my actual home, but I could try to find somewhere to put it, perhaps at the local university.
But I completely agree, purely for reasons of stability and robustness, that we really need to look into financing a new server, or ideally more than one.
Thanks. Remind me, what’s the issue with the credit card? Anything beyond the general issue with credit cards? We could use mine (as we did for many years before). I’d rather not “set up an organization” if we can handle it more informally.
If it just involves a small fee, I’m sure we could arrange payment on an informal basis.
But what are we talking about here? Can someone provide figures for annual fees for these options just mentioned, and any others - virtual server, elastic IPs, physical server, …
What if nlab became part of the Wikimedia foundation?
I don’t know much about Wikimedia, but if it would mean that the nLab becomes beholden to the whims of a corporate entity, then I’d be leery of that. I see enough of that with MathOverflow as part of Stack Exchange, Inc., and all their wonderful “improvements”.
I would like to point out that it is not that expensive to set up one’s own server; e.g., if you look at https://www.ramnode.com/vps.php or any similar provider, a KVM NVMe virtual server (which gives us full control over the system, and with an NVMe SSD is about as fast as one can get with nonvolatile storage) costs only 144 USD per year, which we can easily fund (I will be happy to contribute my part).
The resources provided (3 TB bandwidth and 25 GB of storage) will be more than sufficient to cover nLab’s needs.
For what it’s worth, I would like to register my very strong opposition to Wikimedia (and it’s not like they want us anyway). Wikimedia is now a typical nonprofit corporation that doesn’t seem concerned that much with Wikipedia’s quality. I doubt they will care about nLab at all.
I would be happy to try out Dmitri’s suggestion in #18. I do not have experience with ramnode, and in particular don’t know anything about how ethical they are or how high quality they are, but I believe they are widely used. As Dmitri suggests, the cost he mentions is such that we will easily be able to raise the funds for it, so we could just go ahead and try.
We are actually using much more than 25GB on the server currently, more like 150GB. But I suppose there exist solutions of that size too.
An infrastructural question to consider with this setup is the one I mentioned about elastic IPs. Namely, if we wish to avoid downtime, we need to be able to switch the IP from one server to another. The only reasonable way I know to do this (without manually changing the DNS entry) is to basically offload the problem to a cloud provider such as Amazon, who have purchased a number of IP addresses, so that they have a kind of pool. And usually one is then roped into using their servers as well. I believe it is more or less impossible to purchase IP addresses as an individual.
So we basically need to decide whether or not some downtime now and then is acceptable.
I agree that Wikimedia would be a bad idea, and probably not even feasible.
I think we have agreed that Wikimedia is not such a great idea. However, one thing they are good at is asking for money. It wouldn’t be too harmful to ask for donations on the nLab now and again. This would of course require some legal considerations with non-profit organisations etc. It’s no secret that many researchers use nlab, so it wouldn’t be so bad to ask Universities to provide donations? Like the way the arXiv does it? Unless I am utterly clueless about the whole thing.
This would of course require some legal considerations with non-profit organisations etc.
Yes, this is something I personally feel we are going to have to do at some point, I was asking about it above. I think everyone agrees that there are many people who are willing to donate money, but we need some way of handling it.
Though I am not so familiar with the possibilities here. Say Urs has a credit card he is willing to use to pay for nLab stuff with. If we just asked that people paid him directly, would this work (if we simply trust Urs not to misuse the money)? Would Urs run into tax problems, for instance?
It’s no secret that many researchers use nlab, so it wouldn’t be so bad to ask Universities to provide donations?
Absolutely, even things like the Simons Foundation if people are ethically OK with it.
I had mentioned the idea of setting up a non-profit here, but at the time it was greeted with no enthusiasm (and some pushback from Mike). It may be true that setting up a non-profit is labor-intensive, but I can ask.
Would be great to find out how difficult it is in the US. As mentioned earlier, I tried to have a look in Europe: in the UK and Norway at least, the bureaucracy does not look too bad. We probably just need someone/a group of people who is/are willing to go through with it; I can’t imagine anyone will object to it being done by somebody else.
The Free Software Foundation would have lots of experience with non-profit organisations in the US (and possibly worldwide too?). They have lots of members who wouldn’t mind explaining how things work.
Hi. Regarding #20, it would be good to know how much money we are talking about for a virtual server (Amazon or whatever). I’m also slightly surprised that we use as much as 150GB of storage.
Re #29: It’s a little difficult for me to estimate precisely. My experience with Amazon was at a much larger scale (tens of thousands per month), so it is not so easy for me to judge our needs. I think earlier I made a very rough guess of several hundred dollars per year, perhaps up to 1000, if we run in Amazon. But it might be much less than this. Amazon is typically pay-per-use, which is very flexible, but also not all that easy to predict beforehand.
One thing I have thought about is that we could perhaps run something very lightweight in Amazon, say just a server running nginx (functioning as a load balancer). We make sure that we set things up so that this server essentially never goes down (as is possible with this kind of cloud provider). Then we just use nginx to redirect traffic outside of Amazon to however many static servers we wish to use around the world. This should keep the costs in the cloud as low as possible (and also avoid us being too locked into any one provider), whilst retaining the benefits of having quite large and powerful servers living here and there (e.g. purchasing a server every now and then might fit better with funding via grants, as with the current one).
But we should be aware that this kind of setup, indeed any kind of setup where we have more than one server, is going to increase the infrastructural complexity significantly. Without further people able to help out with the software, this may be challenging.
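To make the lightweight front end just described a bit more concrete, here is a rough sketch of what the nginx configuration on a small cloud instance might look like. All hostnames here are placeholders invented for illustration, and the failover parameters would need tuning; this is a sketch of the idea, not a tested configuration:

```nginx
# Sketch: a minimal always-on front end in the cloud that forwards
# traffic to servers we run elsewhere. nginx stops routing to a
# backend that repeatedly fails, giving us basic failover.
upstream nlab_backends {
    server nlab-eu.example.org:443 max_fails=3 fail_timeout=30s;
    server nlab-us.example.org:443 backup;  # used only if the first is down
}

server {
    listen 80;
    server_name ncatlab.org;

    location / {
        proxy_pass https://nlab_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The point of the design is that only this tiny proxy needs to live with the cloud provider; the heavier static servers can live anywhere and be swapped in and out behind it.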
Regarding the 150GB, Instiki logs profusely; I just removed 80GB or so of its logs. But we have 1.7TB or so in total on the server, so we have plenty of room to spare. We are using about 43GB now, after the removal. There are further logs and things like the ElasticSearch server which could be stripped down if absolutely necessary. The database itself is between 2GB and 3GB.
A completely crazy idea I have is that one day we might not need a server at all, and instead use some kind of peer-to-peer setup; this would be ultra-robust and have no central costs, if done correctly. But this is for maybe 20 years time :-).
I think much less than 20 years, probably 10-12 years. Many leading projects point in this direction (as you, being an active IT specialist, probably know better than I). Say, for static files, projects like IPFS, but there is much more. With blockchain technology (see blockchain (zoranskoda)) and, more generally, Merkle-tree-involving algorithms gaining momentum, the economy of giving incentives to participants in distributed computation, including storage, is being critically studied and tested, and innovative technological algorithms are springing up. We shall simply use some system like that, 2-3 generations of platforms beyond what IPFS is now.
These things are also interesting from a mathematical point of view, and I am surprised to see almost vanishing interest in the nLab community in these issues.
Zoran: you say:
I am surprised to see almost vanishing interest in the nLab community in these issues.
There is interest, but not that essential quantity, ‘enough time’, nor, as you know only too well, enough backing from a mainstream institution for several of us.
John B is doing some interesting stuff using categorical methods, of course, as is evident from Azimuth, but one hits the problem of funding. I recall that I started on several CS- and AI-related projects about 15 years ago but got no support from my then institution; in fact they shut us down, so no more students to interact with, no more conferences, etc. There are lots of young (much younger than me!!!) good researchers in categorical areas who have had to change track because of no funding within ‘the System’, and what little funding there is (in the US) seems to be linked to DARPA or to highly commercial units such as Google or Microsoft. Perhaps we are not advertising the ideas enough.
I agree the ideas from CS and AI are a very rich and fruitful area for n-lab type research. There are however problems of interpreting some of the ideas above dimension one e.g. higher dimensional rewriting is hard. Again probabilistic category theory requires a lot of knowledge of other deep areas of mathematics. I used to teach Petri nets, discrete event systems, Max+/tropical algebra, etc. to our students and was actively researching the categorical underpinnings, back in 2003.
As a suggestion for another aspect of this lab-book: someone could start another TODO-type entry, but as a list of ideas to follow up (possibly jointly) in applications of nLab contexts, outside the central one of physics, which is already well represented. Lots of lab-books have such pages, so it might be feasible. This would go ‘off topic’ for this thread here in the nForum, however, so I will stop.
Re #32: interesting! I have started a page blockchain now, as you may see in its thread.
I think that it would be possible to create something today which would work for the nLab technologically. I have two main points of doubt: how people not logged into the network would find out about nLab pages (because search engines could not interact with it directly); and what would happen if there are few/no users of the nLab online at a given time.
However, both of these might be solved by using one server as a kind of gateway from the traditional web to the peer-to-peer network, which is always on as a node on the peer-to-peer network, at the same time as statically serving the content, but with a notice encouraging people to switch to the peer-to-peer network.
The fact that the nLab is a relatively small and specialised, but not too much so, and highly educated community might make the chances higher than average that it would succeed.
Higher than average, maybe, but not I think high enough to be worth pursuing. Our primary purpose is to serve ourselves and the community as a mathematics wiki, not experiment with emerging technologies.
Re #35: I suppose you are negative only about the realization, not the mathematics behind it. There are now more papers on the arXiv per day on machine learning than on all of string theory and related mathematical physics. If we touch on areas like AI, machine learning, distributed computing and blockchain, we may attract many theoretical contributors who currently scatter their knowledge in forums instead of wikis. On the other hand, all applications of category theory which I have seen there are rather weak or inessential so far. Is category theory really so unimportant there? Of course there is much usage there of both graph theory and type theory, including dependent types.
I think Mike was purely referring to my suggestion in #34 that it would be technically feasible to write some kind of peer-to-peer version of the nLab software.
Regarding #35, I certainly agree that it would be wrong to put all our eggs in some very unproven basket. However, I think it is worth noting that for those who are troubled by the ethics or practical issues of relying on one or more large and costly servers, there is a possible alternative that involves no servers and no (central) costs. At the very least, it could serve as an alternative for those willing to use it for times when the nLab is down, or when somebody is working offline.
I should also clarify that the idea would be that the nLab would look and function exactly as now, and one would still use a web browser. It is just that the means of communication and transmission of data underneath would be different.
Finally, a quick comment that there is not really any emergent technology here: everything would still be HTML, and everything is still bytes going over the web. It is just that each person using the nLab communicates directly with (some of) the other people using the nLab, so that there is no central server (though, as in #34, we would probably have a node on the network which is an orthodox server, so that things function for people using the web in the more traditional way).
“everything would still be HTML, and everything is still bytes going over the web” means only that not all of the technologies involved would be emergent.
It depends upon what one means by a technology, I suppose :-)! What one is doing in a peer-to-peer network is just sending bytes between sockets, which is exactly what one is doing when browsing the web in the usual sense. The only significant difference is that when one browses the web in the usual sense, one of the sockets is typically a ’server’, and one is a ’client’. In a peer-to-peer network, this distinction more or less vanishes; all sockets are equal, one might say. But this server/client distinction is not a technological one: a socket is a socket.
In other words, it’s just a question of programming. Each node would have its own partial copy of the nLab, it would be able to query other nodes if it doesn’t have a particular page, it would send out edits to other nodes, etc. One needs some thinking to make sure it all works out, but it doesn’t strike me as all that radical. As I say, though, I’m certainly not suggesting that we go all out for this option: I did introduce it as a crazy idea for 20 years time above! But it may be useful to keep it in mind as a possible option. I do not have time at the moment to explore it even if I would wish to, but I do think it’s an interesting direction.
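To illustrate the lookup-and-propagate logic just described, here is a toy in-process sketch: a `Node` class stands in for a networked peer, with real sockets, peer discovery and conflict resolution all elided. Everything here (class and method names included) is made up for illustration, not a proposal for the actual software:

```python
# Toy model of the peer-to-peer idea: each node holds only a partial
# copy of the wiki, asks its peers for pages it lacks, and pushes its
# edits out to the peers it knows about.

class Node:
    def __init__(self, name):
        self.name = name
        self.pages = {}   # partial local copy: title -> content
        self.peers = []   # other Node objects this node knows about

    def get(self, title):
        """Return a page, falling back to peers if not held locally."""
        if title in self.pages:
            return self.pages[title]
        for peer in self.peers:
            content = peer.pages.get(title)
            if content is not None:
                self.pages[title] = content  # cache locally for next time
                return content
        return None  # no node we can reach has this page

    def edit(self, title, content):
        """Apply an edit locally and push it to all known peers."""
        self.pages[title] = content
        for peer in self.peers:
            peer.pages[title] = content


# Usage: two nodes, an edit propagates, a lookup falls back to a peer.
a, b = Node("a"), Node("b")
a.peers, b.peers = [b], [a]
a.edit("category", "A category consists of objects and morphisms...")
print(b.get("category") is not None)  # the edit reached b
b.pages["topos"] = "A topos is..."
print(a.get("topos"))                 # fetched from peer b, then cached
```

The hard parts a real system would need, and which this sketch deliberately skips, are exactly the ones mentioned above: discovery of peers, behaviour when few or no nodes are online, and merging of conflicting edits.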