I had given it an nLab page already a while back, so that I could stably link to it before it was actually there:
Now it’s even “there” in the sense of being incarnated as a pdf.
I just noticed that PlanetMath has wiki-fied the entire HoTT book, the top-level entry point is here.
That’s great that they do this. This is the grand vision: that one day all textbook-like material will be available in hypertext form. That’s what the visionaries of hypertext had thought it should be like, long before the internet actually came into being.
I have added the link to Homotopy Type Theory – Univalent Foundations of Mathematics.
I wonder how automated the conversion is. The HoTT book source is still being corrected - will they have to track corrections and put them in by hand?
Also, is that wiki dedicated to just the “true” text of the HoTT book, or can editors make corrections, expansions, worked examples, new exercises and answers, etc.? I didn’t see any mention of a guiding philosophy along these lines.
Yes, this is apparently an awesome project about which much more should be known. Over here in the comment section some people know a bit more about it.
The electronic PDF versions of the book are, of course, already hyperlinked to a certain extent. And not that more hyperlinking isn’t good, but choosing a random link in a random section of a random chapter of the planetmath version I see that it is incorrect: “quasi-inverse” is linked to http://planetmath.org/quasiinverseofafunction, which is not the HoTT meaning of the word.
One should tell the author, so that it gets fixed. But I don’t know how to interact with PlanetMath authors. The “ask a question” link at the bottom of their pages gives me an “access denied”.
Raymond Puzio seems to be doing something elaborate translating the LaTeX into MathML - even parsing it into Content MathML. For example, $A \wedge B$ is converted to
~~~
{math xref="p5.1.m5.1.cmml" display="inline" alttext="A\wedge B" class="ltx_Math" id="p5.1.m5.1"}
  {semantics xref="p5.1.m5.1.cmml" id="p5.1.m5.1a"}
    {mrow xref="p5.1.m5.1.4.cmml" id="p5.1.m5.1.4"}
      {mi xref="p5.1.m5.1.1.cmml" id="p5.1.m5.1.1"}A{/mi}
      {mo xref="p5.1.m5.1.2.cmml" id="p5.1.m5.1.2"}@and;{/mo}
      {mi xref="p5.1.m5.1.3.cmml" id="p5.1.m5.1.3"}B{/mi}
    {/mrow}
    {annotation-xml xref="p5.1.m5.1" encoding="MathML-Content" id="p5.1.m5.1.cmml"}
      {apply xref="p5.1.m5.1.4" id="p5.1.m5.1.4.cmml"}
        {and xref="p5.1.m5.1.2" id="p5.1.m5.1.2.cmml"/}
        {ci xref="p5.1.m5.1.1" id="p5.1.m5.1.1.cmml"}A{/ci}
        {ci xref="p5.1.m5.1.3" id="p5.1.m5.1.3.cmml"}B{/ci}
      {/apply}
    {/annotation-xml}
    {annotation xref="p5.1.m5.1.cmml" encoding="application/x-tex" id="p5.1.m5.1b"}A\wedge B{/annotation}
  {/semantics}
{/math}
~~~
(Above I had to substitute {, }, @ for <, >, & because the real characters appear to get interpreted and give errors when wrapped by pre. Is there some way to avoid this?)
As a side remark: it is natural to wonder, as in this case, whether the verbosity of MathML code is part of the reason why browsers don’t want to support it (and to pre-empt the comment: granted that it is only read by machines, the standard still seems overly inefficient).
Urs - in the nForum/nLab the MathML for $A \wedge B$ is just:
~~~
{math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"}
  {semantics}
    {mrow}
      {mi}A{/mi}
      {mo}@and;{/mo}
      {mi}B{/mi}
    {/mrow}
    {annotation encoding="application/x-tex"}A \wedge B{/annotation}
  {/semantics}
{/math}
~~~
Now that’s succinct!
;-)
Rod, read the Markdown specs (not that they exist …). If you use Markdown syntax, it handles escaping. If you embed XHTML, it assumes you knew what you were doing. Thus:
~~~
<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline">
  <semantics>
    <mrow>
      <mi>A</mi>
      <mo>∧</mo>
      <mi>B</mi>
    </mrow>
    <annotation encoding="application/x-tex">A \wedge B</annotation>
  </semantics>
</math>
~~~
What ought to work is directly putting the code in between fences. Interestingly, that doesn’t work. It appears to be a bug with MarkdownExtra (which introduced the fences). Before processing a Markdown document, it goes through and ensures that any genuine HTML blocks are “saved”. This happens before code blocks are processed, so code blocks containing block-level HTML tags don’t get processed correctly. However, the processing only happens to non-indented tags, so if you use the Markdown syntax for code (i.e. indentation) then it works. You can see this by trying:
~~~
<p>hello</p>
~~~
That will cause a complaint if you preview it, but the following won’t:
<p>hello</p>
So one option is to indent rather than fence code if it contains HTML block elements. Another is to fence, but simply indent the opening block tag.
<p>
hello
</p>
<math xmlns="http://www.w3.org/1998/Math/MathML" display="inline">
<semantics>
<mrow>
<mi>A</mi>
<mo>∧</mo>
<mi>B</mi>
</mrow>
<annotation encoding="application/x-tex">A \wedge B</annotation>
</semantics>
</math>
Interestingly, there’s still something a bit special about the <math> tag which I can’t fathom.
Anyway, while I’m on the subject:
> it is natural to wonder, as in this case, whether the verbosity of MathML code is part of the reason why browsers don’t want to support it
Nope. It is the complexity of the rendering that is the problem, not of the input. Mathematics is inherently more complicated than anything else that browser developers have had to render, particularly in terms of how much they would have to develop from scratch rather than importing others’ work (such as for rendering images).
You are the expert, but I am thinking: first, the MathML input isn’t complex, it’s massively redundant. Second, as we just discussed elsewhere, rendering engines for MathML already exist and have been actively pushed away by all major browsers. If they wanted to, they could simply use them; displaying maths on computers is not black magic in the 21st century. But there is some reason they don’t want to.
Urs - mathematicians appear to suffer greatly from “finger fatigue” - they will invent concise notational abbreviations and conventions (some involving fonts and symbol placement) to save time in writing formulas on a blackboard. Then again they may have to explain the conventions or just assume everybody has their level of familiarity. They sometimes confuse conciseness with simplicity and ignore the complexity and subject matter dependency of their conventions.
XML basically doesn’t need a parser with any subject-matter knowledge, and with its “named parenthesis redundancy” it can easily detect badly formed input. Making it more concise, but still trivial to parse, would be easy. For example, the MathML for the layout of $A \wedge B$ is
~~~
{mrow} {mi}A{/mi} {mo}@and;{/mo} {mi}B{/mi} {/mrow}
~~~
This could be sugared to the more concise
~~~
(r (i A)(o ∧)(i B))
~~~
but there would be no point to this. MathML is only looked at by people who try to generate or display it or detect bugs in these processes. Sure, this concise notation saves characters but such savings can also be achieved just through a compression algorithm.
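To make the compression remark concrete, here is a minimal sketch (using Python’s standard zlib module, chosen purely for illustration) comparing the verbose markup with the sugared form. A single fragment is too short for compression to pay off, so the sketch repeats it as a crude stand-in for a formula-heavy document; the repetition exaggerates the effect, but it shows how the tag redundancy compresses away:

~~~
import zlib

# Presentation MathML for "A wedge B" (as quoted above) vs. a sugared form.
mathml  = '<mrow><mi>A</mi><mo>&and;</mo><mi>B</mi></mrow>'
sugared = '(r (i A)(o &and;)(i B))'

for label, text in [('MathML', mathml), ('sugared', sugared)]:
    raw = (text * 1000).encode('utf-8')   # fake "document" of 1000 formulas
    packed = zlib.compress(raw, 9)
    print(f'{label}: {len(raw)} bytes raw, {len(packed)} bytes compressed')
~~~

After compression the two forms end up at nearly the same small size, which is the point: the verbosity is redundancy, and redundancy is exactly what compressors remove.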
(I was somewhat kidding about the “finger fatigue” crack above. One big reason to employ concise notations such as for is so that the largest number of formulas can be put on just one page and be stared at. Computers don’t have this attention problem.)
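As a footnote to the “easily detect badly formed input” claim above, a small sketch (using Python’s standard xml.etree, again just for illustration) of how the named closing tags let a completely generic parser catch mismatched nesting with no knowledge of MathML:

~~~
import xml.etree.ElementTree as ET

# Well-formed: every closing tag names the element it closes.
ET.fromstring('<mrow><mi>A</mi></mrow>')

# Malformed: the named "parentheses" don't nest, and a generic
# parser rejects it immediately.
try:
    ET.fromstring('<mrow><mi>A</mrow></mi>')
except ET.ParseError as err:
    print('rejected:', err)
~~~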
Andrew #17: could you put “fencing” on the base Markdown+Itex Help page (link found at the bottom of message composition)? I was unable to work out that that was what I was supposed to try, and with <math> making it fail I would probably have concluded that it didn’t work for anything.
test:
~~~
<xxmath xmlns="http://www.w3.org/1998/Math/MathML" display="inline">
  <semantics> ...
    <mo>∧</mo> ...
  </semantics>
</math>
~~~
Rod,
There is a contradiction between “does not need a parser” and “is not looked at by people”.
I don’t know if the immense verbosity of the MathML standard is part of the reason why the main browser developer teams do not want to support it. It just seems an obvious thought to come to mind.
There must be some reason that the main browser developer teams actively disfavor MathML. I can’t believe that the reason is that displaying maths is too hard for them, because the tools do exist, such as that by Design Science; rather, the browser developer teams decide not to incorporate them, or otherwise actively disfavor them.
If it’s not the form of the standard that they are unhappy with, then what might it be?
Not that it would help us if we knew the answer; we should probably quit this conversation. It was just the impression of the code that you had posted above which made me think aloud that something looks wrong here.
Rendering mathematics correctly is difficult. For that matter, even rendering ordinary webpages correctly is difficult. Remember the old days when Internet Explorer used to do things wrong? And it’s rarely possible to port features from one product to another when they don’t have the same codebase.
Rod,
Yes, that help link is rubbish! I ought to improve it (though it does link to the sources …). But you do know that you can click on the word “Source” at the top right of any post to see the source of it … don’t you?
Okay, I’ve improved the help a little. Please let me know of further improvements.
Zhen Lin, the “old days”? Is there a new IE that I haven’t heard of that actually renders stuff correctly?
Urs, it’s just that as Zhen Lin said, rendering maths is hard, and the browser folk don’t see it as a priority. There aren’t enough people who would benefit. Or maybe there aren’t enough people who would complain about it not being supported. If we could use fluffy kittens to render mathematics it might be a different story …
@Andrew Stacey
IE 9 and 10 pass Acid3, apparently. That surely counts as something?
MathPlayer exists, the MathML support code for Chrome and Opera exists. In both cases however the main design teams have decided not to keep it. Why do you think that is?
> Okay, I’ve improved the help a little. Please let me know of further improvements.
Thanks!
Here are some things I think should be mentioned, though they may not be part of pure Markdown+Itex:

- Use of > to quote previous discussion in the nForum.
- The [[namespace:pagename|linktext]] syntax and what namespace defaults to, e.g. twisty Urs from [[schreiber:Background+fields+in+twisted+differential+nonabelian+cohomology|twisty Urs]].
- Maybe some mention that links may need to be percent-encoded (see the sketch after this list).
- What special characters need to be escaped and how to do it.
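On the percent-encoding point, a minimal sketch (using Python’s standard urllib, purely for illustration; the page title is taken from the example above):

~~~
from urllib.parse import quote

# Spaces and other reserved characters in a page name have to be
# percent-encoded before the name can appear in a URL.
title = 'Background fields in twisted differential nonabelian cohomology'
print(quote(title))
# -> Background%20fields%20in%20twisted%20differential%20nonabelian%20cohomology
~~~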
Getting back to the HoTT Book Wiki discussion, I wonder if there is some standard TeX way of partially evaluating a project (I wasn’t able to google anything up.)
Regular LaTeX processing can be regarded as a full evaluation: every macro definition is enabled, and everything gets expanded down to the final typeset output.
Under partial evaluation, some of the definitions would be disabled, and the output would be a reduced LaTeX file that still contains the disabled macro uses; this file could then be run through a crippled LaTeX that disallows outside \defs from defining macros.
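For instance, a hypothetical sketch (the macros \defeq and \UU are invented here for illustration; they are not taken from the actual HoTT book source):

~~~
\documentclass{article}
% Two made-up macros standing in for a book's abbreviations.
\def\defeq{:\equiv}     % purely textual; safe to expand away
\def\UU{\mathcal{U}}    % imagine we want to keep this one unreduced
\begin{document}
$A \defeq B$ where $B : \UU$
\end{document}
~~~

Partially evaluating with \defeq enabled but \UU disabled would emit a reduced file whose body reads $A :\equiv B$ where $B : \UU$, leaving \UU for a later stage to interpret.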
A reduced LaTeX might be a good representation for conversion to XHTML if standardized, though there will be the problem of deciding what should remain unreduced in such a standard.
That’s a frequently asked question on TeX-SX. I’d recommend starting with this question, and in particular reading the comment on the question by Joseph Wright. There’s also no such thing as “standardised LaTeX”, any more than there is “standardised English”.
Having said that, my internet class does pretty much this to convert LaTeX maths into something that itex can read.
Andrew,
From those TeX discussions, it seems the problem is that some macros set values while others use those values as text or in tests to guide macro expansion, and this prevents a general solution.
However, for special cases, limited scope, or a less total solution, partial reduction seems possible. Let’s call such value-getting/setting steps “unsanitary”. It would seem that you shouldn’t be allowed to reduce those, while sanitary reduction would be safe. In addition, if you analyzed all the “variables” used, you should be able to classify them into “constants” (set once and never changed) and “true variables”; steps involving only constants would then be sanitary.
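A hypothetical illustration of the distinction (these macros are made up for the example; they are not from any real source):

~~~
\documentclass{article}
% "Sanitary": pure textual expansion, no state read or written,
% so it is safe to reduce away wherever it occurs.
\newcommand{\inv}[1]{#1^{-1}}
% "Unsanitary": reads and updates a counter, so its expansion depends
% on evaluation order and cannot be blindly reduced.
\newcounter{lem}
\newcommand{\nextlem}{\stepcounter{lem}Lemma~\thelem}
\begin{document}
$\inv{f}$ means the same wherever it is expanded; \nextlem{} and \nextlem{} do not.
\end{document}
~~~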
I’d bet your internet class (whatever it is) just does sanitary reduction. I haven’t hacked TeX macros much, and most of what I did was sanitary - maybe years ago I did something unsanitary. I have no idea how much unsanitary cruft would result in the output of a full sanitary reduction of something like the HoTT book.
A different alternative would be to write a new TeX-like system in a functional language that is designed to handle partial evaluation, though I suspect there would be little support for it or few people who wanted to use it. Knuth wrote TeX in Pascal well before there was widespread interest in functional languages, and he still doesn’t appear to be a big fan of them: “With 1 or even 2 hands tied behind your back it’s hard to do anything dangerous.” And I’m sure many TeX users and hackers are strongly attached to the TeX paradigm.
Of course I only partially understand TeX so all this may just be facile wishful thinking, though I think fixing TeX so it gives useful output for all sorts of XML conversions might be a better partitioning than emulating TeX within particular converters.
There is melt if you want to combine functional programming with LaTeX: https://forge.ocamlcore.org/projects/melt/
My internet class actually allows for the opposite: it allows unsanitary stuff, because anything it doesn’t understand gets passed on to TeX. This is the opposite of the various converters, where anything they don’t understand can’t be dealt with.
Replaced the pointer to “its use as foundations of mathematics” with “its use as univalent foundations for mathematics”.
removing the pointer to “PM wiki version”, since it seems to have disappeared