Todd,
you added to Yoneda lemma the sentence
In brief, the principle is that the identity morphism $1_c$ is the universal generalized element of $c$. This simple principle is surprisingly pervasive throughout category theory.
Maybe it would be good to expand on that. One might think that the universal property of a generalized element is that every other one factors through it uniquely. That this is true for the generalized element $1_c$ is a tautological statement that does not need or imply the Yoneda lemma, it seems.
Yes, I was working on that, making an entry universal element (still under construction). Of course, the Yoneda lemma already verges on tautology – it just happens to be the deepest such near-tautology in mathematics (is that an exaggeration? I believe it anyway).
The best way of understanding these things is through examples. Some will be forthcoming.
Okay, great, thanks.
Just to say why I thought of adding that cryptic sentence to the Idea of Yoneda lemma: I wanted to record in the Lab the answer I gave to Dan Lior over in this discussion. The sentence was added to create an opening for myself to create universal element, an article I’ll add more to later.
I added some hyperlinks and in particular added formal theorem/proof-environments. Have a look to see if you like it.
Looks nice, Urs – thanks.
On Yoneda lemma, it says:
The Yoneda Lemma
Let $C$ be a locally small category, $[C^{op}, Set]$ the category of presheaves on $C$; then for all $X \in C$ and $F \in [C^{op}, Set]$ there is a canonical isomorphism $[C^{op}, Set](C(-,X), F) \simeq F(X)$.
The left hand side seems a little notationally heavy for something that is actually simple. It’s just saying the set of natural transformations and the set $F(X)$ are in 1-1 correspondence. Is there a way to lighten things a little? Maybe
$Hom_{PSh(C)}(h_X, F) \simeq F(X)$?
Apparently notational heaviness is in the eye of the beholder. (-:
I would regard $Hom_{PSh(C)}(h_X, F)$ as more notationally heavy than $[C^{op}, Set](C(-,X), F)$ since (1) it uses the redundant extra letters “Hom”, and it requires one to remember (2) that $PSh(C)$ denotes the category of presheaves on $C$, which means the functor category $[C^{op}, Set]$, and (3) that $h_X$ denotes the representable functor represented by $X$. By contrast, the notation $[C^{op}, Set]$ is a special case of a general notation for functor categories, and the notation $C(-,X)$ makes it evident what this functor does to an object $Y$, namely it takes it to $C(Y,X)$.
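(Unpacked, both spellings name the same hom-set of the functor category; a rendering, with the middle term spelling out what its elements are, is
$Hom_{PSh(C)}(h_X, F) \;=\; \{\, \eta \colon C(-,X) \Rightarrow F \text{ natural} \,\} \;=\; [C^{op}, Set](C(-,X), F)$,
where the $PSh$/$h_X$ spelling is the proposal under discussion.)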
I agree with Eric.
Well, I agree more with Mike. And of course, my natural inclination would be to write $Set^{C^{op}}$, but I also like $[C^{op}, Set]$.
Maybe it is the placement of parentheses and brackets that is bothering Eric, or some other purely visual aspect? (Also: why does that page have $\simeq$ instead of $\cong$?)
why does that page have $\simeq$ instead of $\cong$?
That’s probably me. I always use \simeq, never \cong. hm. Is this a problem?
I’m a huge fan of the $h_X$ notation, which is standard in algebraic geometry.
Also, Todd, how about the shortest and most intuitive notation: $Nat(h_X, F)$? For some reason, since $Nat$ is so specific, it’s often fine to use it without specifying which functor category you’re talking about (especially since the writer probably just said something like: let $F, G$ be functors $C \to D$).
Finally, why is there so much opposition to the notation $Psh(C)$ for $[C^{op}, Set]$ or $Set^{C^{op}}$? It’s no doubt easier to type and unambiguous to anyone who clicks the link on the lab page that says presheaf.
I mean, look at how many special characters you have to type out for the other two notations:
[C^{op},Set]
vs.
Set^{C^{op}}
vs.
Psh(C)
It’s a category that category theorists use often enough to have given it a name. I mean, even the French use Psh (rather than Pref or Prefais, which one would expect). I don’t understand the opposition.
@Urs: \simeq is usually used for a weaker notion of equivalence than isomorphism, which is always written as \cong.
If you are an expert in category theory and you know the Yoneda lemma like the back of your hand, then the notation $[C^{op}, Set](C(-,X), F) \simeq F(X)$ will seem completely obvious. In fact, even now that I understand what is going on, the notation seems obvious to me. But BEFORE I understood the Yoneda lemma (to the minimal extent I currently do understand it), the notation was anything but obvious. It just looked like one big scary string of letters. This is sad, since the concept is so simple (and from what I hear, important).
Any student reading Yoneda lemma will likely have a foundation in maths OR physics (OR computer science). If their foundation is physics (or CS), then $Psh(C)$ is probably a little more clear than $[C^{op}, Set]$. You are doing something to $C$: $Psh(-)$ is the operation of taking a category $C$ and looking at the contravariant maps from $C$ to $Set$.
If you know what a category is, then you know what a hom-set is, so $D(X,Y)$ is the set of all morphisms in $D$ from $X$ to $Y$. A little checking will tell you that the morphisms in $[C^{op}, Set]$ are natural transformations, so $[C^{op}, Set](C(-,X), F)$ is a set of natural transformations.
The link to hom-functor will tell you what $C(-,X)$ is. I think physicists will like this notation because it is very similar to tensor notation. $C(-,X)$ is a contravariant map and $C(X,-)$ is a covariant map. Hence $C(-,-)$ is “like” a (1,1)-tensor, i.e. contravariant in one slot and covariant in the other, with “components” $C(X,Y)$.
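(Made precise, the analogy just says that the hom construction is a bifunctor
$C(-,-) \colon C^{op} \times C \to Set$, $(X,Y) \mapsto C(X,Y)$,
contravariant in the first slot and covariant in the second, much as a $(1,1)$-tensor $T^{\mu}{}_{\nu}$ has one index of each variance.)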
I do not disagree with Mike’s point. The notation $[C^{op}, Set](C(-,X), F) \simeq F(X)$ is clean, efficient, and to the point if you already know what it means. That is a common observation about the nLab. Most things are written for the sake of those who already know what everything means. It is not easy to learn from the nLab (yet). That is not a criticism, because I don’t think that is necessarily what the nLab is for, i.e. it is a tool for you guys to help with your work, not necessarily a teaching tool for noobs like me.
Please take my word for it. If you do not know what the Yoneda lemma means, then the notation is intimidating even though the concept is simple enough.
The cleanest and most efficient notation is not necessarily the best for learning. Sometimes the notation can help guide the learning process. For example, $[C^{op}, Set](C(-,X), F)$ tells you you need to know what $C^{op}$ means, you need to know what $Set$ means, and you need to know what $C(-,X)$ means. Hopefully, you know what $Set$ means, so move on to $C^{op}$, etc.
Can I interject in the hom-set wars :) and remind people of the notation $\widehat{C}$ (sometimes even $C^\wedge$ is used). I believe this is a Grothendieckian thing. Then the disputed glyph will be $\widehat{C}(\widehat{X}, F)$, where we know $\widehat{C}$ is the free cocompletion of $C$, and we denote an object $X$ of $C$ by $\widehat{X}$ when thinking of it as an element of $\widehat{C}$, i.e. the representable presheaf. Yoneda is then $\widehat{C}(\widehat{X}, F) \simeq F(X)$.
(on previewing: widehats seem to come out very small in Firefox. Anyone else have a problem?)
Well, first of all, David, the notation you’re alluding to is literally $\widehat{C}$, which appears that way all throughout EGA and SGA. It is his equivalent of Psh(C). However, Grothendieck uses honest Hom functors.
Here is his notation for the Yoneda lemma cf. SGA 4 expose 1:
$Hom_{\widehat{C}}(h_X, F) \cong F(X)$.
Letter-for-letter. As I said earlier, Grothendieck used the actual written out Hom functor. I think Mac Lane invented the other notation.
Anyway, we’ve got Grothendieck on our side (because life is a contest)!
David, I would be happy with that, provided a reminder is given for $\widehat{C}$ and $\widehat{X}$. It’s actually somewhat Lawverean in style, and it seems to be a good compromise: no redundant $Hom$, and it’s very uncluttered. +1
Urs, in my experience, the usual symbol for isomorphism is $\cong$, with $\simeq$ reserved for equivalence of categories and $\sim$ for bi-equivalence of bicategories (and from there on up?). It could be that people are beginning to default to $\simeq$ to take care of any level of non-evil equivalence except for 0-equivalence (equality), and I’m just old-fashioned. I don’t think it bothers me, since I only just noticed it now while we’re squabbling over notation.
@Todd: $h_X$ or $h^X$ are 100% standard notation for representable functors. Why would we change that?
Also, $C(X,Y)$ is just crappier than $Hom_C(X,Y)$ in every possible way. It is nonstandard (it is really only used by Moerdijk and Mac Lane, and Johnstone). I think that it’s horrible notation. I agree with Eric that notation should be suggestive of the meaning, and this has taken all of the meaning out. If people start using this combination on the lab, I intend to change it whenever I come across it.
I’m sorry if that is considered rude, but I find that notation that bad.
I propose the following convention: if you’re going to write it as $C(X,Y)$, could you replace $C$ with $\mathcal{C}$? This breaks it up from the normal roman letters.
Harry, having accommodated you over at operad by writing a whole bunch of $Hom$s (which I am just about to announce at Latest Changes), just to please you and against my custom, your level of aggression regarding notation here and elsewhere is seriously pissing me off.
If people start using this combination on the lab, I intend to change it whenever I come across it.
Sigh.
Listen, it wasn’t my suggestion, it was David’s. I was merely indicating willingness to go along with it, for the sake of harmony.
I really don’t like that notation, but I am not going to actually do what I said. I haven’t slept now in 46 hours. Sorry if I came off like a jerk. I’m really harmless and don’t mean half of the things I say. I’ve been told that I have no “filter”.
Yes, please get some rest now.
I propose the following convention: if you’re going to write it as $C(X,Y)$, could you replace $C$ with $\mathcal{C}$?
That’s not something I plan to do when I write pages. Sorry, but I have my reasons. If you want to do it when you write pages, that’s your prerogative.
That’s not something I plan to do when I write pages. Sorry, but I have my reasons.
Care to elaborate? (I’m done arguing. It’s just that this intrigued me.)
I’m curious, is there any mathematician out there that doesn’t like to see $C^{op}$ and prefers keeping track of covariant versus contravariant functors instead? The reason for throwing $op$s all over the place seems to be so that you only need to keep track of one kind of functor. I can imagine situations where it is cleaner to not use any $op$s and just explicitly keep track of two different classes of functors instead.
By the way, I was reading Mac Lane tonight and saw him write things like “contravariant functor $F \colon C^{op} \to Set$”, which makes me cringe a little bit (just a little) every time I see it. That is minor though. I can live with occasional cringes :)
But as far as notation for $[C^{op}, Set]$ and $Set^{C^{op}}$ is concerned, whatever choice is made, I hope it is at least sufficient to handle the related cases of $[C, Set]$ and $Set^C$ cleanly as well. For example, I like $Psh(C)$ for $[C^{op}, Set]$ and $CoPsh(C)$ for $[C, Set]$. If Grothendieck uses $\widehat{C}$ for $[C^{op}, Set]$, what would he use for $[C, Set]$?
By the way, the feeling I get when I think about $C(X,Y)$ as “components” is too strong to ignore. Is there any real sense in which this can relate to old fashioned tensor fields (and their notation) on manifolds?
I would like to also direct your attention to the nLab page dedicated to Notation.
I have to be in the mood to write math calligraphy; I don’t much like big fancy fonts. For one, they take longer to type. For another: very often I’ve registered the observation that mathematicians save their big fancy fonts for their biggest fanciest notions, and what that might signify is that they are themselves not totally comfortable with their notion. (I’m not saying this applies in the present circumstance.) I picked up this idea from Ross Street, who counteracts this tendency by writing something like ’’ for a hypercover in a topos, and I have a liking for the freedom from enthrallment to notation that this implies, even if I don’t go so far as Ross does sometimes.
Third, regarding the present circumstance, it’s the same category that’s meant to be referred to, so I see no reason to change the font – that I would find confusing if I were reading it. Those are some intellectual arguments I can give. There may be emotional components as well at this stage of the discussion.
Third, regarding the present circumstance, it’s the same category that’s meant to be referred to, so I see no reason to change the font – that I would find confusing if I were reading it. Those are some intellectual arguments I can give. There may be emotional components as well at this stage of the discussion.
Ah, well if that’s how it came out, that must’ve seemed pretty stupid. I was saying that you should just use calligraphy (or boldface) for all one-letter categories when you write your Hom-sets as C(X,Y). This makes it easier to tell what’s a category and what’s a functor or an object in long written expressions.
I didn’t know about the Notation page. That’s nice.
I’m curious, is there any mathematician out there that doesn’t like to see $C^{op}$ and prefers keeping track of covariant versus contravariant functors instead?
It’s sometimes done, and I guess used to be done more. It has a kind of old-fashioned feel to me.
I was reading Mac Lane tonight and saw him write things like “contravariant functor $F \colon C^{op} \to Set$”, which makes me cringe a little bit (just a little) every time I see it.
That’s understandable; it’s sort of like a double negative. In some languages (such as Russian, I’m told) and colloquial English, double negatives may signify no’s with greater emphasis, and I think that’s more or less the same function here. It may seem a little odd in the more formal language of mathematics, but it doesn’t bother me much if at all. I may be guilty of sometimes doing that myself.
@Harry: I still disagree, but basta.
I’m curious, is there any mathematician out there that doesn’t like to see $C^{op}$ and prefers keeping track of covariant versus contravariant functors instead?
It’s sometimes done, and I guess used to be done more. It has a kind of old-fashioned feel to me.
If you or anyone can think of an example reference, I’d be curious to have a look. I’d like to know the motivation if only out of curiosity.
I think $\cong$ is still generally used for isomorphism, but $\simeq$ is used for all sorts of equivalence weaker than this. In particular $\simeq$ is commonly used for weak equivalences in homotopy theory, which can be yet weaker than biequivalences. I’d rather not use $\sim$ for any of these; to me that symbol looks too “flimsy” to have such a meaning. (-:
It’s a category that category theorists use often enough to have given it a name.
It has a name: $[C^{op}, Set]$ or $Set^{C^{op}}$. I much prefer having general rules for notation, such as “this is the notation for a functor category,” and then whenever you see a functor category, you use that notation for it, rather than inventing new notations for particular functor categories all the time. I find that when people invent new notations for things that are special cases of existing more general notations, it makes their papers harder to read, and in particular harder to skim: if I open up the paper in the middle and see notation I don’t know, I have to search backwards until I find the definition of that notation. The notation “Psh” is common enough that I, personally, have learned what it means by now, but on general principles I still think it is better to use general notation. I feel the same way about $h_X$ and $\widehat{C}$.
That isn’t to say that I haven’t used these shorter notations myself sometimes, locally, but I generally try to remind the reader of what that notation means at the beginning of any discussion that uses it. My comment #8 was intentionally somewhat overstated to make a point; I don’t object strongly to these notations but I think it is always better to say what they mean when you use them.
BTW, Harry, I think you’re wrong about the (non)standardness of $C(X,Y)$; I’ve seen it in numerous other papers.
The choice of script letters versus roman letters is something where I think we should also respect the authors of individual pages and maintain consistency with what’s already written. I tend to use script letters much less in the nLab than I do in ordinary papers, for two main reasons: (1) on the nLab I have to write out \mathcal{C} instead of being able to define a macro \sC, and I’m lazy, and (2) script letters may not display well for people who don’t have math fonts installed.
Also I think that using different fonts is most helpful when there are a lot of different kinds of things floating around, so that fonts can help distinguish them; I often use a convention like boldface roman capitals = categories, script capitals = 2-categories, blackboard bold = double categories, roman capitals = objects of categories, roman lower case = morphisms in categories, greek lower case = 2-cells in categories. I find this very helpful myself. But few nLab pages are long enough for that to be worthwhile, I think.
Re: contravariance, I expressed my opinion here.
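(In LaTeX source one could pin such a convention down once and for all with a few macros — a sketch, with made-up macro names:

% hypothetical macros encoding the convention above
\newcommand{\cat}[1]{\mathbf{#1}}     % categories: boldface roman capitals
\newcommand{\twocat}[1]{\mathcal{#1}} % 2-categories: script capitals
\newcommand{\dblcat}[1]{\mathbb{#1}}  % double categories: blackboard bold
% objects: roman capitals, morphisms: roman lower case, 2-cells: greek lower case

so that $\cat{C}$, $\twocat{K}$ and $\dblcat{D}$ render consistently throughout a document.)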
Also I think that using different fonts is most helpful when there are a lot of different kinds of things floating around, so that fonts can help distinguish them; I often use a convention like boldface roman capitals = categories, script capitals = 2-categories, blackboard bold = double categories, roman capitals = objects of categories, roman lower case = morphisms in categories, greek lower case = 2-cells in categories. I find this very helpful myself. But few nLab pages are long enough for that to be worthwhile, I think.
Too late, I like this convention, so now I’ll be saying “let’s adopt Mike’s convention”.
=D!
BTW, Harry, I think you’re wrong about the (non)standardness of $C(X,Y)$; I’ve seen it in numerous other papers.
We should ask Urs to ask Moerdijk if $C(X,Y)$ is a bastardization of $Hom_C(X,Y)$ or if it’s the other way around.
I meant to write something like , but couldn’t figure it out at the time. I prefer it to , as that could just be something along the lines of “take categories and …”
I don’t quite have the time to follow this discussion, but would just briefly like to make one point:
the nLab is not supposed to be a gauge for canonicity, but a place where things are explained. If it happens, as it does here, that there is one notation preferred by practitioners, but another notation better suited to introduce the subject to newcomers to the field (by the own account of these newcomers), then by all means the entry should say so and explain both notations somewhere at the beginning.
Sounds good. We could even have a pseudo notation
@Eric: That is exactly what $Hom_{Psh(C)}(h_X, F)$ means. As I was saying earlier, this is the notation that gives the proper interpretation.
Also, the explanation for why Grothendieck uses $\widehat{C}$ rather than $Psh(C)$ is that $\widehat{C}$ is actually shorthand for $\widehat{C}_{\mathcal{U}}$ for some universe $\mathcal{U}$, with the universe suppressed.
That is exactly what $Hom_{Psh(C)}(h_X, F)$ means.
Er. It’s also exactly what $[C^{op}, Set](C(-,X), F)$ means! ;-)
As I was saying earlier, this is the notation that gives the proper interpretation.
Somehow this remark is missing a point…
Anyway, I now went ahead and edited the paragraph to reflect the discussion here a little. See Yoneda lemma – Statement of the lemma
s/interpretation/intuition/
Hi,
the above is a fairly ancient discussion, I forget what I was trying to make Todd do back then! :-)
Of course in the proof of the Yoneda lemma one ends up looking at all generalized elements of the representing object (which appear as the components in the domain of the natural transformation into the given presheaf) and observes that they all factor uniquely through the universal one, concluding that the natural transformation is hence fixed on all components once it is fixed on the universal one.
I’d think what the entry needs is not more heavy language, but maybe just more details on how this simple proof works. Recently I pointed a young student to the page and realized that just displaying that naturality diagram would help newcomers see, as it should be, that that which is trivial is trivially trivial ;-)
Maybe somebody here has the time and energy to draw the relevant naturality diagram and add it to the entry, for better illustration of the proof?
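(For concreteness: for a natural transformation $\eta \colon C(-,c) \Rightarrow F$ and any morphism $f \colon d \to c$, the naturality square in question could be typeset with amscd as

\begin{CD}
C(c,c) @>{\eta_c}>> F(c) \\
@V{- \circ f}VV @VV{F(f)}V \\
C(d,c) @>{\eta_d}>> F(d)
\end{CD}

and chasing the identity $1_c$ around it gives $\eta_d(f) = F(f)(\eta_c(1_c))$, so $\eta$ is determined by the single element $\eta_c(1_c) \in F(c)$.)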
It is the universal element of $C(-,c)$, in the sense of representing the functor $C(-,c)$.
I would be happy to remove the offending phrase, as I’m not really inclined to spend much time defending it. :-)
Edit: done.
By the way, Urs, I inserted the desired diagram (at Yoneda lemma). The proof now has some redundancy in it; please feel free to trim out the redundancy.
Thanks!
Someone tried to turn Yoneda lemma into a slide show (and failed!). I have rolled back one stage.
I have added to Yoneda lemma#necessity_of_naturality a couple of examples illustrating the necessity of naturality. I came up with these in reply to the question of Michael Barr on the category theory mailing list today. It took me a few goes to get the finite example right, as you’ll see if my messages pass the moderation!
Amusingly, at least three other people came up with exactly the same infinite example. Funny how the mind works (I at least certainly just came up with it from scratch)! I think the finite example I gave is minimal; I haven’t checked whether it is the same as the one Eduardo Dubuc gave (or whether his is correct).
I have edited a little to streamline the formatting. Merged the subsections “Preliminaries” and “The Yoneda lemma” into a single section “Statement and proof”, and instead gave the “preliminaries” a numbered Definition-environment. Removed two puny “Remark”-subsubsections which both made the same remark about generalized spaces, which is better had at Yoneda embedding.
I added some links and mentioned what happens to the Yoneda lemma in a semicategory.
Actually, I avoid the redirects intentionally since I don’t like the way pages are displayed when opened by a redirect. Maybe one (Richard, sorry!) could suppress the ’redirected from..’ messages at least with these plural-singular redirects and purely orthographic redirects like capitals, hyphens etc. or create a second version of the redirect command that does not display the messages !?
I also suppose that putting in ’[[A|B]]’ is a bit more efficient than letting the engine search through all redirects looking for B, is it not!?
Okay, sorry, I see. I was thinking that it makes the source easier to read.
I agree with Urs. Suppressing the redirection message for purely orthographic and plural/singular redirects is a nice idea, if we can think of a good way to do it, but until then I think it’s better to live with the redirect message. Or we could perhaps make the redirection message less obtrusive in all cases.
Thanks for adding this to the TODO list, Mike! I suggest that we not try to guess plural forms of words, and instead add something to the syntax when writing a redirect to indicate this. We could have
redirect plural
or something like that. Not sure how to do it in general. Could have
redirect orthographic
or
orthographic redirect
or something, maybe. Just let me know what you’d prefer.
That was my first thought too. However, then I realized that if we did it that way then it would not take effect on any existing nLab pages unless someone went through manually, looked at all the redirects, and changed the syntax of the appropriate ones. I doubt that anyone wants to do that.
Rather than trying to guess plural forms, what about just suppressing the redirection message whenever the page title and redirected name are “close” in some heuristic distance? We don’t have to be exact here; it’s not the end of the world if a few plural/orthographic redirects get the hatnote and a few non-orthographic ones don’t. E.g. the maximum over all “words” of the number of changed/inserted/deleted letters in that word has to be $\leq 3$. (The cutoff can’t be less than 3 if we want to catch “category” → “categories”, and I can’t think of a reason to make it any larger than 3.) We could also add in manually a few changes that are declared to be distance-1, such as “infinity” ↔ “∞”.
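(To make the heuristic concrete, here is a minimal sketch in Python; the function names are illustrative only, not the actual nLab/Instiki code:

def edit_distance(a, b):
    # classic Levenshtein distance by dynamic programming
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

SPECIAL = {frozenset(("infinity", "∞"))}  # manual distance-1 identifications

def word_distance(u, v):
    return 1 if frozenset((u, v)) in SPECIAL else edit_distance(u, v)

def suppress_hatnote(title, redirect, cutoff=3):
    # suppress the "redirected from" message when every word of the
    # redirect is within edit distance `cutoff` of the matching title word
    tw, rw = title.lower().split(), redirect.lower().split()
    return len(tw) == len(rw) and all(
        word_distance(u, v) <= cutoff for u, v in zip(tw, rw))

assert suppress_hatnote("category", "categories")        # distance 3
assert not suppress_hatnote("Yoneda lemma", "Yoneda embedding")

)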
I don’t think that the two strategies are exclusive. Having a redirect* command permits fine-tuning, is computationally cheap, and is a useful option for future edits, and the star is easily appended while editing an already existing page.
In my experience plural “s”, hyphens and initial capitals are the most frequent redirects that occur in practice, and just having rules for them is already a huge improvement. Using error correction would be cool to catch typos, and would even make the written redirects obsolete, but I see potential problems with the computational load and with short names like Cat etc., though I don’t know how serious this would be.
Minor inconsistency: the Yoneda functor starts as lower case $y$ and later morphs into upper case $Y$. I had to go back and convince myself that it’s the same functor.
Thanks for raising, Bartosz! Good idea to make it consistent. Would you like to edit the page to do so?
Thanks!
Recently I see more and more the kana “よ” (“Yo”) used for the Yoneda embedding. It looks slightly unusual, but I guess I quite like it, because unlike the common alternatives $y$, $Y$ and $h$, it doesn’t collide with those otherwise-occurring symbols. (For instance, I’d like my second object of consideration after $X$ to be $Y$.)
Is there a general opinion on this new option?
I prefer $y$. I’ve never had a conflict (typically I would apply it to an object $c$, i.e., with a name nearer the beginning of the alphabet).
On purely aesthetic grounds I might come one day to like it, but it would be annoying to have to hunt down each time how to typeset it, whether via html or copy-and-paste.
This is the first I’ve heard of this usage. Can you give some references? I admit to never having been completely satisfied with $y$ for the reasons mentioned; usually I choose $y$ or $Y$ or sometimes $h$. But I’m not sure about introducing a totally new alphabet into mathematics.
In LaTeX of course one could just define an appropriate macro \yo, and on the nLab/nForum we could add a simple command to our parser, so in these contexts we wouldn’t have to keep hunting down how to type it (though in other web contexts like mathoverflow that use vanilla mathjax the problem would remain).
Recently I see more and more the kana “よ” (“Yo”) used for the Yoneda embedding.
Most mathematicians (except mostly native readers) don’t know hiragana. As is the case for Cyrillic, Hebrew and other scripts. (but they do know the Latin and Greek scripts)
The problem with using an unknown script is you can’t recognize the characters and instantly be able to tell that two characters in that script are different without visually comparing them.
Sure, one could make a unique exception for “よ”, similar to how $\aleph$ is the only Hebrew letter used in math (the rule being: if it looks Japanese it is “yo”, while if it looks Hebrew it’s “aleph”). If you open the floodgates to more characters in these scripts then you run into the recognizability problem.
$\aleph$ isn’t the only Hebrew letter used in math. It’s probably the most commonly recognized one, but $\beth$ is also used in cardinal arithmetic: wikipedia.
I think the problem of distinguishability only arises if you use characters that look similar. I don’t know the Hebrew script but I don’t have any trouble telling the difference between $\aleph$ and $\beth$. By contrast, the latex symbols list also includes a \gimel that I would have a very hard time telling apart from $\beth$. (And a \daleth that is probably sufficiently distinct.)
The problem of distinguishability doesn’t only arise with different alphabets, but even with different fonts sometimes. I have a very hard time telling apart from .
@Mike 60: I first stumbled on this usage in:
Other references include (in no particular order):
LaTeX code for producing the symbol is:
% lifted from https://arxiv.org/abs/1506.08870
\DeclareFontFamily{U}{min}{}
\DeclareFontShape{U}{min}{m}{n}{<-> udmj30}{}
\newcommand\yon{\!\text{\usefont{U}{min}{m}{n}\symbol{'210}}\!}
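(A usage sketch, assuming the definition above is in the preamble; the surrounding statement is just the standard Yoneda isomorphism:

% in the document body:
The Yoneda embedding $\yon \colon C \to [C^{\mathrm{op}}, \mathbf{Set}]$
sends $c$ to $C(-,c)$, and the Yoneda lemma reads
$[C^{\mathrm{op}}, \mathbf{Set}](\yon c, F) \cong F(c)$.

)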
Thanks. That’s a very curious collection of conflicting attributions!
I think I could probably get used to よ. It would be nice to have a clearly distinguished symbol for such an important object.
Curious that they defined the command to be \yon rather than \yo…
By the way, for those of us without a Japanese keyboard, よ can be produced on the web (without copy-paste) by typing the HTML entity &#x3088;.
(BTW, according to Wikipedia, the katakana version of the same kana is ヨ, which would of course be a terrible choice to use for anything in mathematics!)
Theo points me to
which is an earlier reference than any of the others.
I guess the arXiv dates are not a perfect guide to online release date then!
Well, in general no, but in this case the point is to look at arXiv revisions. The use of よ doesn’t appear until v3 of the Loregian paper, posted in Jan 2017.
Added link to authors’ copy of (Reyes–Reyes–Zolfaghari 2004).
Just a note about this from #63:
% lifted from https://arxiv.org/abs/1506.08870
\DeclareFontFamily{U}{min}{}
\DeclareFontShape{U}{min}{m}{n}{<-> udmj30}{}
\newcommand\yon{\!\text{\usefont{U}{min}{m}{n}\symbol{'210}}\!}
to possibly save others some searching: in Ubuntu, the necessary font is available in the package latex-cjk-japanese.
Just thought I would add a note here that よ for the Yoneda embedding follows a bit in the footsteps of the use of Cyrillic Ш (“sha”) for the Tate-Shafarevich group. Apparently Ш is also sometimes used for the Dirac comb (although Wikipedia sticks to the ASCII-art version III) and its lowercase version was the origin of the symbol ⧢ for the shuffle product.
I wonder whether we should have considered Ш instead of ʃ for the shape modality. (Or did we? I can’t remember.)
I like that the “esh” symbol, while being different from an integral sign, is still reminiscent of one, since this matches well with the understanding of shape as geometric realization. Therefore I would still prefer it over “sha”.
Yes, and I don’t think it would be a good idea to try to change the notation now, even if we thought that Ш was better (which I don’t either). It just occurred to me by-the-way that “sha” could also stand for “shape”.
Maybe if we ever encounter another shape-like modality we should consider Ш for it.
I like that the “esh” symbol, while being different from an integral sign,
I always thought that this symbol is the integral sign. It certainly does not look different when typeset on the nLab pages.
It doesn’t look very different, but I can tell the difference when they are typeset side by side: ʃ vs ∫.
This probably depends on the font. In some fonts it seems to look essentially the same.
But don’t we choose the font in which the nlab is displayed, so that it should be the same for everyone? I remember a long discussion recently about what that font should be.
Well, what if the chosen font is not available on the user’s system? Then a different font is substituted.
The chosen font for formulas currently appears to be DejaVu Serif followed by Cambria. If these two fonts are not available, then whatever font exists on the user’s system is substituted.
This could be avoided by actually loading the font files in the CSS, but currently we do this only for STIX fonts.
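Loading a font from the stylesheet looks roughly like this; the path and file here are hypothetical, not what the nLab actually serves:

@font-face {
  /* sketch: serve DejaVu Serif ourselves instead of relying on local installs */
  font-family: "DejaVu Serif";
  src: url("/fonts/DejaVuSerif.woff2") format("woff2");
}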
Are you saying you don’t have those fonts available and that’s why they don’t look different for you?
I do have DejaVu fonts installed now, but this was a separate step that I had to do, and a lot of Linux users don’t have them installed.
The Stix fonts are embedded for exactly this reason, to serve as a backup. Is the problem that the two symbols look the same in Stix? Should we embed something else?
No, the problem was that if a user does not have DejaVu Serif or Cambria installed, then formulas will use whatever serif font is installed, which need not distinguish between esh and integral.
Concerning STIX: I suggest that we switch to the improved XITS fonts: https://en.wikipedia.org/wiki/XITS_font_project
Most mathematicians (except mostly native readers) don’t know hiragana.
Sure, they don’t. Perhaps because they are never exposed to it :-)
Mathematical notation is one of my pet-peeves (those of you who know me in person, or worse, those of you who read one of my papers, might have noticed it…), but with the passing of time this idiosyncrasy acquired a deeper meaning.
We name things so that they can be distinguished: I see no point in trying to economize in this respect, as if names were subject to exhaustion. But it is even more of a nuisance that mathematical terms are very often overloaded with meaning (“normal”, “nice”, “admissible”, “good”, …). However, that’s not my point here.
I chose to use the hiragana よ for the Yoneda embedding essentially because there was a precedent in this respect, but all the more because Yoneda was Japanese. But there’s a deeper reason behind this and similar choices.[¹]
Do you ever wonder, in doing Mathematics, to what extent the ideology that permeates it (like in every other product of the human intellect, Mathematics has an ideology) is centered in Latin/Germanic cultures? I often do, and at the same time I wonder what I can do to balance this trend. Human culture is wide and rich: we have built a word for basically any concept, provided we search far enough from our home culture.
Thus, to me, using Georgian, cuneiform, hiragana, Hebrew and Sanskrit alphabets is a precise “political” choice (that’s a heavy word, I guess, but I can’t find a better one): when I need to denote something exotic, or when I need syntax to convey a certain “flavour” together with semantics, I sometimes find restrictive to limit to Latin and Greek alphabet.
Maybe notation is a different matter, but I feel I’m not the only one having this peculiar esprit, especially among category theorists :-) when we study topos theory, we sketch an elephant and not a squid, because of an old Jain tale about five (four? six?) blind men touching an animal (= a category) that resembles a tree, a throne, a fan… according to the side you approach it from. So are toposes! You can only grasp their true nature if you approach them from every side at once. And the metaphor is borrowed from Indian culture, because Western philosophy doesn’t contain a concept similar to Jain anekanta (more or less, “manifoldness of thought”).
===
[¹] See e.g. recollements or yosegi.
Re #84: That should not be possible for displayed mathematics. The embedded Stix font should be used if the other two are not found. If you see other behaviour, please post a screenshot from something like F12 in Chromium which shows which font is used.
Here is what Firefox shows for the first displayed formula in the Yoneda lemma article as the applied CSS style:
body { font-family: "dejavu serif", cambria, serif !important; }
This comes from the beginning of the nlab.css file.
So if DejaVu Serif or Cambria are installed, they will be used; otherwise, the generic serif font will be used.
The style specified for the math element, which includes STIX fonts, simply does not apply here, because it specifies @media max-width: 960px, but my screen width is 3840 pixels.
Thanks very much! Good catch regarding max width, that seems like it would explain it. I’ll investigate that when back from holiday.
I actually do not know anything about this @media gadget! Of course I will look it up, but respecting it might be browser dependent. Which browser were you using, Matt? I think when experimenting in the past I have always got the Stix fonts as well.
Firefox. W3Schools said Firefox has supported @media since 3.5.
I’m no web dev hotshot, but I wonder if Dmitri misread nlab.css. (Or maybe it’s been changed already?) Here’s the part using @media:
@media only screen and (max-width:960px){ .navigation:not(.navfoot){top:60px} h1#pageName{margin-top:100px} }
So it doesn’t look like this should affect loading or use of STIX.
Re #91: This is correct, I misread it.
However, the rule
math {
font-family:dejavu math tex gyre,cambria math,stix two math!important;
font-size:19px
}
in nlab.css somehow does not apply anyway, and the rule that I pointed above applies instead.
This is revealed by Firefox’s inspection tools.
I am still confused, Dmitri, because it would seem that the styling for body rather than math is being used, which seems unlikely, especially since Matt does not see this, and I have not previously observed it either when testing in Firefox.
Re #94: Yes, I see that too, in the “Rules” inspector tab. I don’t know why it doesn’t show the math rules. But it does show them in the “Computed” tab. Also, when I uncheck the body font family rule (in “Rules”), body text changes, but not formulas.
Great, then I think the issue may after all be that the two symbols are too similar in Stix. I can verify that when back from holiday.
Regarding the ubuntu package (#72): for me it only worked after I also installed “latex-cjk-japanese-wadalab”. This package is ’recommended’ by the package “latex-cjk-japanese” that Mike mentioned, but depending on your configuration recommended packages might not be automatically installed.
Added a reference to