Both the nLab and nForum are happy to accept some invisible Unicode characters that may cause problems. One example so far is the U+200E left-to-right mark which is mentioned here in the nForum where a Unicode LRM was probably introduced by copying and pasting.
I don’t understand enough of how the nLab or nForum work to give a good solution, but I would like something like the Unicode LRM character to be translated into the visible entity `&lrm;` in the editor. Then again, if that happened, the current software would work fine, as shown below. Maybe instead whatever URL-delimiting tests work for `&lrm;` should be extended to raw Unicode LRMs.
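One way to do the “translate into something visible” step is a simple input filter. Here is a minimal sketch (not the actual nForum/Instiki code; the character list and entity choices are my assumptions) that replaces invisible bidi/format characters with their visible HTML entities:

```python
# Sketch of an input filter: make invisible Unicode control characters
# visible by replacing them with HTML entities. The set of characters
# handled here is an assumption, not the nForum's actual list.
INVISIBLE = {
    "\u200e": "&lrm;",     # left-to-right mark
    "\u200f": "&rlm;",     # right-to-left mark
    "\u200b": "&#x200b;",  # zero-width space
}

def reveal_invisible(text: str) -> str:
    """Replace each invisible character with its visible entity."""
    for char, entity in INVISIBLE.items():
        text = text.replace(char, entity)
    return text

print(reveal_invisible("http://w.com\u200e"))  # prints http://w.com&lrm;
```

Run on pasted source before it is stored, this would make a stray LRM show up as `&lrm;` in the editor rather than lurking invisibly.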
The current operation seems to be:

| source | * | rendition | URL string |
|---|---|---|---|
| `[qux](http://w.com)` | * | qux | `http://w.com%e2%80%8e` |
| `[foo](http://x.com&lrm;)` | | foo | `http://x.com` |
| `[bar](http://y.com&#x200e;)` | | bar | `http://y.com%e2%80%8e` |

where the `*` in the first row indicates that the source URL is terminated by a raw, invisible Unicode LRM.
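The `%e2%80%8e` in the URL strings is just the percent-encoding of the LRM: U+200E encodes in UTF-8 as the three bytes `E2 80 8E`. A quick check:

```python
# U+200E (left-to-right mark) is three bytes in UTF-8, which is why a raw
# LRM swallowed into a URL surfaces as %e2%80%8e after percent-encoding.
from urllib.parse import quote

lrm = "\u200e"
print(lrm.encode("utf-8").hex())               # prints e2808e
print(quote("http://w.com" + lrm, safe=":/"))  # prints http://w.com%E2%80%8E
```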
I have no idea why `&lrm;` is not incorporated into a URL while the equivalent `&#x200e;` is, and equally no idea why `&lrm;` is dropped while the equivalent raw LRM character is kept. Weird.
The invisible-LRM problem just showed up in the nCafé as well, so this issue is common to all three sites.
An LRM showed up in this nForum thread.
Urs says:
Those LRMs come from copying the green URLs in Google’s result list!
One can fix it by deleting what looks like “pdf” at the end with the backspace key and then re-typing “pdf” (and prefixing the URL with its “http://”).
Maybe it wasn’t always this way with Google. I have been doing this fix a lot lately.
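Rather than re-typing the tail of every copied URL, one could strip the invisible characters programmatically. A sketch (my own suggestion, not an existing site feature) that drops all Unicode “format” characters, the general category `Cf` that contains U+200E and its relatives:

```python
# Clean a pasted URL by dropping Unicode "format" characters (category Cf),
# which includes U+200E LRM, without touching any visible character.
import unicodedata

def clean_url(url: str) -> str:
    return "".join(c for c in url if unicodedata.category(c) != "Cf")

print(clean_url("http://example.com/paper.pdf\u200e"))
# prints http://example.com/paper.pdf
```

Pasting a Google result URL through such a filter would avoid the backspace-and-retype dance entirely.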