When doing a rollback of Initiality Project - Raw Syntax to version 19 (creating the new version 21), I received a 500 Internal Server Error, but the rollback seems to have been successful.
Intriguing. I cannot reproduce it at the moment (I tried rolling back the Sandbox, and also rolling back to version 19 of the page you mention; both worked fine), and I have not found anything significant in the logs so far, though I haven’t looked extremely closely. Let me know if you can reproduce it, or if you see the same thing again at a later point.
I do see a 500 in the logs, but it looks to me like it came from a timeout, which can happen on an edit (or a rollback, which to Instiki is basically just an edit as well), though it is harmless; the edit still goes through. Could it have been a timeout in your case? When I rolled back to the same version, it went through quickly, but it may well have been slower the first time around.
I have seen this before too, though I can’t remember what I was editing at the time.
How would I tell whether it could have been a timeout? It didn’t seem to me to take noticeably longer than edits normally do (which is kinda long these days, but they always go through eventually).
I think the best solution for now is to keep what you wrote in a text file somewhere, just in case.
I got some kind of error when I did a rollback a while back.
There was an issue a little while ago with rolling back, but that was fixed a couple of months ago. Other than that, I don’t think I’ve observed any issues myself. Unless we can reproduce the issue or I find something useful in the logs, it is difficult to debug :-).
How would I tell whether it could have been a timeout?
You may see a page from Cloudflare pop up briefly before the 500 comes. The 500 comes from the fact that Cloudflare makes an unusual HTTP request (not one of the standard ones like GET, POST, etc.).
It didn’t seem to me to take noticeably longer than edits normally do (which is kinda long these days, but they always go through eventually).
The edit should always go through, even if one gets a timeout. Cloudflare has an absolute timeout of 100 seconds, which we cannot change. However, if this timeout occurs, it does not stop the processing of the edit; it just cuts your connection to the server.
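To illustrate (a toy sketch only, nothing here is Instiki’s actual code; the 120-second sleep and the log file name are made up): whatever work the request handler does completes regardless of whether the client is still connected, so only the delivery of the response is lost.

    # Toy Rack app: even if the proxy cuts the connection at 100 seconds,
    # the worker keeps running, so the "edit" is still recorded.
    slow_edit = lambda do |env|
      sleep 120                                   # longer than Cloudflare's cap
      File.write('edit.log', "edit saved at #{Time.now}\n", mode: 'a')
      [200, { 'content-type' => 'text/plain' }, ["saved\n"]]
    end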
Regarding the length of time, this will be helped considerably once Maruku is removed. The algorithm in the new renderer is quite efficient: it passes only once through the file. Its implementation can always be optimised further, of course, but in the end we should be able to make it pretty fast. We could also, for example, cache the parsed TeX rather than re-parsing it with every new edit. But no time for any of that at the moment.
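To sketch the caching idea (a rough illustration only, not the actual renderer code; the class name, the SHA-256 keying, and the toy renderer lambda are all just assumptions for the example), one could key the parsed output on a digest of the page source, so an unchanged body is never re-parsed:

    require 'digest'

    # Illustrative cache: rendered output keyed by a digest of the source.
    # In practice the store would live on disk or in something like memcached
    # rather than in an in-memory hash.
    class RenderCache
      def initialize
        @store = {}
      end

      # Re-parse only when the source text has actually changed.
      def fetch(source)
        key = Digest::SHA256.hexdigest(source)
        @store[key] ||= yield(source)
      end
    end

    # Hypothetical usage, with a trivial stand-in for the single-pass parser:
    renderer = ->(src) { "<p>#{src}</p>" }
    cache    = RenderCache.new
    html     = cache.fetch('$x = y$') { |src| renderer.call(src) }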
I didn’t see a Cloudflare page, but I might have been on another tab at the time if it only popped up briefly.
Yes, it would be extremely brief, you would have to be looking at the page and maybe even know what to look for! Looking at the nginx logs, though, I don’t think it was a timeout. Mysterious!
If anybody encounters this again, please let me know (with as many details as possible).
Does the server log every 500 explicitly with as many details about what happened as possible? If you know the exact time and date of the event (as here, presumably the timestamp on version 21), can you find the exact 500 event in the logs? I would hope it would be possible to configure the server to write to the logs exactly what error occurred whenever there is a 500.
There are several logs. The nginx logs cover only HTTP requests; they do not know anything about the application server itself (nginx is a reverse proxy in front of it). I found the 500 in the nginx logs, with the URL, timestamp, request time, etc., but, for the reason just mentioned, without any details about the cause of the error.
I do not see the error in the Instiki logs. It should certainly be logged, yes, but it was not. It would be nice to add a logging handler which captures all exceptions; I imagine this is possible in Rails (it is certainly possible in other frameworks), and I will look into it when I get the chance.
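Something along these lines might do it, though this is only a sketch and not Instiki’s actual code (the class name, log path, and wiring are made up for the example): a Rack middleware that records any exception raised further down the stack and then re-raises it, so the usual 500 handling still happens.

    require 'logger'

    # Sketch: log every exception with its backtrace, then re-raise so the
    # normal 500 response is still produced.
    class ExceptionLogger
      def initialize(app, logger)
        @app    = app
        @logger = logger
      end

      def call(env)
        @app.call(env)
      rescue Exception => e
        @logger.error("#{e.class}: #{e.message}")
        @logger.error(e.backtrace.join("\n")) if e.backtrace
        raise
      end
    end

    # Hypothetical wiring in a Rails environment/initializer file:
    #   config.middleware.use ExceptionLogger, Logger.new('log/exceptions.log')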
For the APIs I have written myself, e.g. the new renderer, I log carefully, and there is no error in any of those logs, which means that the 500 came from within Instiki itself somewhere, i.e. not from any of those APIs.
Interesting, thanks for the info.
Got this again rolling back flat functor.
This time I was able to reproduce the error, and I believe that I have now found the bug and fixed it in this commit.
It would also have been fixed in a different way if announcements were added for rollbacks; we should do that eventually, I think.
Thanks!