I’ve added some items to mathematicscontents.
I never did much with the contents pages, so I may not have organised this in the best way.
It looks fine to me, except I am somewhat unhappy that “evil” is in the main math contents. It is a rather specialist concept within category theory (even within one school of it), rather than a generally accepted broad subject in mathematics. But this is just a personal opinion.
I also thought that “evil” wasn’t quite a headline topic. But I don’t have strong opinions about it.
Also physicscontents currently contains too many minor topics (mostly added by me, I should say). Eventually I want to break it up.
Given that so many major topics are still missing from the physics part, it is good to have even the minor topics there for now. A more canonical revision can be made once the physics part reaches higher maturity. For now we can be happy with the contents.
I think that evil is an important concept in the foundations of mathematics, although it is one that only becomes clear with the help of category theory (actually, higher category theory, or at least higher groupoid theory). So I put it under the foundations heading.
There are many important missing concepts in the foundations of mathematics, e.g. recursion and recursive function, which are used much more widely (by both mainstream and applied mathematicians, as well as by logicians) than the higher organizational principle of (still discutably defined) maximal categorical weakness; then e.g. modal logics, ultrafilters, categoricity (in the sense of classification of structures), etc. If it were the table of contents of logic only, then it could be proportionate, but as part of the mathematics TOC it is a bit extreme to state it as one of just 6 sub-subjects (or 7, if the nonstandard analysis listed under analysis counts). Also, at some point one should have subjects like proof theory, linear logic and so on, each of which is much bigger than the notion of what you call “evil”.
So your complaint is that the table of contents is incomplete? OK, I agree with that. Although as far as balance is concerned, we do tend to look at foundations from the nPOV, which affects that somewhat.
No, this is not my complaint. My complaint is that if we go down to such small items, rather than the big division into subjects/areas, we will have to expand too much. Such small items should be in separate TOCs for the individual areas. So for a TOC on logic/foundations and set theory I agree that evil may enter, as well as many other things like ultrafilters, categoricity, recursion and forcing. The TOCs are, in my understanding, mainly for those who do not know the nLab very well and need help in finding material. Thus for me the TOCs are not that important, but when I advertise the nLab to somebody, he comes and needs to find what is there. So it is good that the TOCs be mainly conventional, with roughly the classical division of subjects and a few innovations. I have spent tremendous effort writing to colleagues about the nLab and advertising it, and get responses like “I am more practical and like examples more than abstract nonsense”. So making the front end of the nLab maximally nPOV scares people away. The heart should be nPOV and the surface should be as friendly as possible to the mainstream mathematician or theoretical physicist, if we want people to join in usage and contribution.
get responses like “I am more practical and like examples more than abstract nonsense”.
I believe that’s a very valid and valuable response. We should be putting much more effort into examples. I don’t just mean examples as laundry lists, but really thoughtfully chosen examples which help point the reader to the heart of thinking about a subject.
I guess I generally agree with Zoran on the organization of tables of contents. Regarding
There are many important missing concepts in the foundations of mathematics, e.g. recursion and recursive function, which are used much more widely (by both mainstream and applied mathematicians, as well as by logicians) than the higher organizational principle of (still discutably defined) maximal categorical weakness
I first of all admit that I had to look up “discutable”, but, second, I think I might disagree with Zoran about the relative stature of the concept of “evil”. It’s much more than a concept: it’s a fundamental guiding principle for anyone who uses category theory (and higher category theory) and wishes to formulate a new categorical concept. It’s an overarching guiding principle, in contrast to things like recursion and forcing. But being a guiding principle, it may be tricky to place it correctly within a TOC. For example, is it “foundational”? Yes, if you take Lawvere’s point of view toward foundations; no, if you accept Kreisel’s.
…the concept of “evil”. It’s much more than a concept: it’s a fundamental guiding principle
At some point we should think about phrasing it positively: It’s good to remember isomorphisms. The concept “good” is a fundamental guiding principle. :-)
I call it a principle of maximal weakness and knew it (as most category theorists did) years before I heard of the word “evil” (which I still do not accept); on the other hand, I do not believe that it is yet well understood at the rigorous level. I disagree that recursion is not a guiding principle; when creating proofs, getting finiteness of procedures and finding levels measuring the complexity is a much more widely used principle in building mathematical proofs and constructions.
How about definability? Can we also consider it a guiding principle? It is much elucidated by category theory (fibered categories). If we want to know whether some construction will be useful, computable and so on, we need to look at various aspects like the complexity, definability, and maximal categorical weakness (non-evilness) of the structures involved.
Gluing from local objects is a general architectural method in mathematics. The various notions of nerve (and maybe the relating of spectra and reconstruction in the sense of geometry) are derivatives of that approach.
The word categorification is much less well defined as a “principle”. As an English word it is often attributed to Yetter (right?), but the concept is actually due to much earlier work of Ehresmann, who systematically thought about it in his time. Homotopification looks to be even younger, and was independently introduced as a term by many people. In practice it had already been extensively used in the 1970s, around the time of the Boardman-Vogt book. Finally, to end the historical remarks, I hope everybody knows that Zorn used to emphasize that Zorn’s lemma is not due to him; it had been used many times before him, and if I understand the story right, he himself did not formulate it as such, but just used it, like many others, in some proof.
Today I looked at some Oxford companion monograph on philosophy of mathematics: there are tens of items in the index about Dedekind (let’s philosophise about numbers and names, so many chapters and schools about the two), but no mention of Grothendieck in the monograph. A new monograph.
The heart should be nPOV and the surface should be as friendly as possible to the mainstream mathematician or theoretical physicist, if we want people to join in usage and contribution.
Yes, this is a good point.
Let me explain why I edited the mathematics contents at all. (I usually ignore the contents listings.) I did not edit it to list evil (which was an afterthought) but to list constructive mathematics. And I did so because I received an email from a mathematician who wanted the nLab to be more user-friendly and suggested having constructive mathematics listed in the mathematics contents. But of course, while it was not a category theorist who wrote me, it was not exactly a mainstream mathematician either, but instead a constructivist!
So I’ll agree with your point that the mathematics contents should be friendly to mainstream mathematicians, but it should also be friendly to more outré mathematicians, especially ones that would be likely to find the POV useful.
Zoran, your remarks on “guiding principles” are interesting. Here are some reactions.
To get one thing out of the way: how long you and others have known about the principle of “maximal categorical weakness” is irrelevant to this discussion (and I hope you’re not suggesting it’s new with anyone in particular). Much more importantly, how rigorously established this principle is also appears to me to be somewhat irrelevant. For it’s a truism that generally speaking, heuristics are not rigorous; that’s almost by definition. The same applies to your remark about how well-defined categorification is (and indeed it is notoriously ill-defined).
I’m not sure I understand “recursion is a guiding principle”. You wrote
I disagree that recursion is not a guiding principle; when creating proofs, getting finiteness of procedures and finding levels measuring the complexity is a much more widely used principle in building mathematical proofs and constructions.
The principle that proofs are recursive, i.e., are constructed recursively starting from axioms and applying rules of deduction, is just the definition of (formal) proof. It doesn’t give a guide about how to go about constructing proofs for example. One can use tools of recursion theory to measure the proof-theoretic strength or complexity of formal systems, and maybe use the results of a recursive analysis as a guide in theory selection (thinking here of reverse mathematics for example); is that closer to what you meant? If so, maybe I understand a little better, but then what is the principle? Would it come down to saying something like, “in theory or concept-formation, one ought to examine issues of complexity, expressivity, definability, computability, maximal categorical weakness, etc.”? (Not much to disagree with there! But it fits my rough idea of “guiding principle” anyway.)
Further discussion might be required to distinguish “method” from “guiding principle”, but the other examples you give are interesting and there’s a decent chance one can work each of them into enunciations of guiding principles. (But I won’t try here and now.)
A small remark that categorification has historical roots dating farther back than Ehresmann. Thinking for example of Emmy Noether’s observation that numerical invariants like Betti numbers are shadows of deeper structural invariants like homology groups. (Edit: or, going much further back in parable form, there is the legendary shepherd from the Categorification paper.) I am not competent to judge how systematically Noether and others of that era considered such phenomena.
proofs are recursive, i.e., are constructed recursively starting from axioms and applying rules of deduction, is just the definition of (formal) proof. It doesn’t give a guide about how to go about constructing proofs for example. One can use tools of recursion theory to measure the proof-theoretic strength or complexity of formal systems
I did not mean only the recursiveness at the level of the formal proof, but the methods of controlling and gradually reducing some complexity parameter. For example you take the Širšov-Bergmann diamond lemma to prove that something is a normal form. This is really high level, not the low level of writing the proof in elementary terms.
Well, I’m just a low-level kind of guy. ;-)
For example you take the Širšov-Bergmann diamond lemma
Is that what Americans call the Church-Rosser confluence property?
Shame on me, but I am not competent to answer off-hand, though Church was my scientific grandfather. But after a look at Wikipedia, I would rather guess no. There is also a much simpler fact, Newman’s diamond lemma, quoted at the beginning of Bergmann’s article.
It would of course be interesting to clear up what the exact relationship is, and what the common generalization would be. You should tell me what the relation between the lambda calculus and universal algebra is, to get closer to the question. Bergmann’s phrasing is for associative algebras, while Širšov’s earlier result is more general. Bergmann quotes Širšov’s paper, but presumably did not notice that his main result was inside.
The “Church-Rosser” (or confluence) property was indeed originally applied to a standard normalization scheme in lambda calculus, but since that time it has come to mean something much more general. Here’s how I understand it (and I don’t claim everything I say is 100% standard).
Suppose one has a presentation of an essentially algebraic structure, so that the elements of the structure are equivalence classes of well-formed terms. A normalization scheme is provided by a relation $\to$ on terms such that $t \to t'$ implies $t$ is equivalent to $t'$ (we think of $t'$ as a reduction of $t$), and any two terms in an equivalence class are connected by a zig-zag of instances of reduction. The normalization is Church-Rosser (or confluent) if the reflexive transitive closure of the reduction relation is the underlying order of a filtered poset (so that any span can be completed to a diamond). We say the scheme is strongly normalizing if every sequence of reductions terminates in finitely many steps; often, this is guaranteed by defining a suitable rank on terms such that reduction strictly lowers the rank.
Under these conditions, every equivalence class of terms has a unique normal form to which every term is reducible in finitely many steps (a term is normal if it cannot be further reduced). This is a classic lemma. In the untyped lambda calculus, the standard reduction scheme is Church-Rosser but is not strongly normalizing (indeed, the classic example is the self-application term $(\lambda x.\, x x)(\lambda x.\, x x)$, which reduces only to itself, so that reduction never halts). On the other hand, many presentations of essentially algebraic structures (such as polynomial algebras, free Boolean algebras, various sequent calculi) do admit a confluent strongly normalizing reduction, and this in effect means that equivalence of terms is decidable: just check whether two terms have the same normal form.
It wouldn’t surprise me if all this were discovered independently many times, but in America the term “Church-Rosser” is well understood to refer to this diamond property of reduction.
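As a concrete illustration of the kind of scheme described above, here is a minimal Python sketch of a toy example (my own, not one discussed in the thread): the free commutative monoid on the letters a, b, c, presented by adjacent-swap reductions. The reduction is confluent and strongly normalizing (the number of out-of-order pairs is a rank that each step strictly lowers), so equivalence of words is decided by comparing normal forms. The rules and function names are made up for illustration.

    # Toy presentation: the free monoid on {a, b, c} modulo ab = ba, ac = ca, bc = cb,
    # i.e. the free commutative monoid.  Reductions sort adjacent out-of-order letters.
    RULES = {"ba": "ab", "ca": "ac", "cb": "bc"}

    def reduce_once(word):
        """Apply one reduction at the leftmost possible position, or return None."""
        for i in range(len(word) - 1):
            pair = word[i:i + 2]
            if pair in RULES:
                return word[:i] + RULES[pair] + word[i + 2:]
        return None

    def normal_form(word):
        """Reduce until no rule applies.  This terminates because each step
        strictly lowers the number of out-of-order pairs (a rank)."""
        while True:
            nxt = reduce_once(word)
            if nxt is None:
                return word
            word = nxt

    def equivalent(u, v):
        """Decide equality in the presented monoid by comparing normal forms
        (valid because the reduction is confluent and strongly normalizing)."""
        return normal_form(u) == normal_form(v)

    print(normal_form("cba"))        # abc
    print(equivalent("bca", "cab"))  # True
    print(equivalent("aab", "abb"))  # False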
Now the basic question is what exactly the term covers. Newman’s diamond lemma in combinatorics/graph theory is a simple statement using the diamond property, while Bergman’s version has some fine points because of the refined setup working with sums of monomials: the semigroup order respected by the reductions (monomial into polynomial) is on monomials and not on the sums of monomials. My impression is that Church-Rosser does not concern the subtlety of using the semigroup order on monomials in order to draw conclusions about normalizing the finite sums; the diamond property for both inclusion and overlap ambiguities is an ingredient, but not the full story.
I have transferred a part of one of the appendices in my thesis to diamond lemma (zoranskoda), concerning Bergman’s diamond lemma.
Bergman said in his paper:
Our main result, Theorem 1.2, is an analog and strengthening of the above observations (Newman’s diamond lemma) for the case of associative rings, with reduction procedures of the form sketched earlier. It is self-contained and does not follow Newman’s graph theoretic formulation. The strengthening lies in the result that the analogs of conditions (i) and (ii) need only be verified for monomials, and in fact that (ii) need only be verified for “minimal nontrivial ambiguously reducible monomials”. That is, it suffices to check, for each monomial which can be written as $A B C$ with either $A B = W_\sigma$, $B C = W_\tau$ ($A$, $C \neq 1$) or $B = W_\sigma$, $A B C = W_\tau$ ($\sigma \neq \tau$), that the two expressions to which $A B C$ reduces (in the first case, $f_\sigma C$ and $A f_\tau$; in the second $A f_\sigma C$ and $f_\tau$) can be reduced to a common value.
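As a concrete illustration of the overlap case in the quoted passage (a standard toy example, not taken from Bergman’s paper): in the free associative algebra on $x, y, z$, take the reduction system with $W$’s and $f$’s given by $y x \mapsto x y$, $z x \mapsto x z$, $z y \mapsto y z$. The only ambiguity is the overlap $A B C = z y x$, with $W_\sigma = z y = A B$ and $W_\tau = y x = B C$, and it resolves to a common value:

$$ z y x \to y z x \to y x z \to x y z, \qquad z y x \to z x y \to x z y \to x y z . $$

By the diamond lemma, this single check shows that the ordered monomials $x^a y^b z^c$ form a basis of the quotient algebra, which is just the polynomial ring $k[x,y,z]$.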
but in America the term “Church-Rosser” is well understood to refer to this diamond property of reduction
In the American ring theory community one talks about Bergman’s lemma, meaning that each overlap or inclusion ambiguity of reductions from monomials to polynomials in a free associative algebra can be resolved by a diamond formed with the help of additional reductions. In commutative algebra there are algorithms on so-called Groebner bases (those people do not know of Bergman’s lemma) which are related, but the logic and conditions are a bit different.
Yes. As I say, “Church-Rosser” is by now understood to be a very general term, at least in the North American logic community. I guess by now we’re agreeing that we’re talking about basically the same thing, even though it might go by different names in particular cases where there are extra subtleties and refinements. (I didn’t recognize the name Bergmann or Bergman back in #14, but by now I guess you mean George Bergman at Berkeley.)
To my mind, Mac Lane’s original method of proof of his coherence theorem is another case in point, except that here it’s a categorification of a very simple normalization scheme for terms in free monoids. In other words, the normalization on the binary magma terms which shift parentheses from left to right isn’t just a relation, but rather the arrows are associativity isomorphisms; the diamonds aren’t just diamonds in a poset, they have to be commutative squares in the free monoidal category. Otherwise the overall strategy is essentially the same.
One idea I had when I was first trying to define weak $n$-categories was, vaguely speaking, to try to categorify this into higher dimensions. Orient the cells of the associahedra so that you can interpret the associahedra as parity complexes, and the orientations are “arrows” which reduce a suitable rank. Coherence is proven with the help of the observation that the $k$-skeleta of all associahedra are $(k-1)$-connected.
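For the normalization of binary magma terms described just above (before the categorified version), here is a small Python sketch (my own illustration of the underlying rewriting, not Mac Lane’s actual proof): terms are nested pairs, and the single rewrite (x·y)·z → x·(y·z) has the fully right-nested terms as normal forms. The total size of left factors is a rank that strictly drops at each step, so the reduction terminates, and confluence then gives unique normal forms; in the monoidal-category version each arrow would be an associativity isomorphism and each diamond a commuting square. Function names are made up.

    # Binary magma terms: a leaf is a string, a product is a pair (left, right).
    def size(t):
        """Number of leaves."""
        return 1 if not isinstance(t, tuple) else size(t[0]) + size(t[1])

    def rank(t):
        """Sum, over all products in t, of the size of the left factor."""
        if not isinstance(t, tuple):
            return 0
        left, right = t
        return size(left) + rank(left) + rank(right)

    def step(t):
        """Apply (x*y)*z -> x*(y*z) at the root if possible, otherwise recurse
        into the right factor; return None if t is already right-nested."""
        if not isinstance(t, tuple):
            return None
        left, right = t
        if isinstance(left, tuple):
            x, y = left
            return (x, (y, right))
        s = step(right)
        return None if s is None else (left, s)

    def normalize(t):
        """Rewrite to the unique right-nested normal form; terminates since
        each step lowers rank(t) by at least 1."""
        while True:
            s = step(t)
            if s is None:
                return t
            t = s

    t = ((("a", "b"), "c"), "d")          # ((a*b)*c)*d
    print(normalize(t))                    # ('a', ('b', ('c', 'd')))
    print(rank(t), rank(normalize(t)))     # 6 3 -- the rank strictly decreases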
I am sorry for creating the confusion with Bergmann instead of Bergman. The thing is that I remember that one of my heroes has a double n (it was Bargmann, with his paper on the Fock-Bargmann space and the related coherent states, which I also used in the thesis – funnily enough, they are related to Bargman domains in complex analysis – another Bargman, with an a and a single n).
I did hint in 16 that it is all related to the combinatorics of a common simpler result, which I know as Newman’s diamond lemma, quoted at the beginning of Bergman’s Advances article, but it was and remains my impression that it is not the same statement. I know that Mac Lane’s coherence can be obtained as a result of this type of theory of rewriting rules (which I know of because of a period when I intensively studied compiler writing), and some time ago I entered the reference to an amazing recent article of Tibor Beke into the nLab, e.g. into the entry rewriting.
Still I think we should be very careful with the subtleties of this algorithmic field and distinguish versions of similar reasoning. For example, in my survey
I wrote a counterexample (in the section on practical criteria) to the proof of the Ore property by induction on the generators of the algebra and on the multiplicative generators of the supposed Ore set (by the way, the proof also uses Bergman’s diamond lemma in inferring the structure of the considered counterexample). This has been widely used without a proof, and one of the appendices in my thesis has a wrong result which I did not use in the bulk of the thesis, though I had intended to at a crucial place. I had instead a corrected result, where additional control of the size of the new elements generated from the partial Ore condition is taken into account (there is also one version there which needs a correction, thanks to a Spanish PhD student who found that one of my statements in the section is too strong).
Not to make a wrong impression: I would be happy if the history were older and if the results could be put into logical order, but I insist that we understand all the subtleties before any hasty merging.
Edit: I created person entry George Bergman.
Zoran:
Today I looked at some Oxford companion monograph on philosophy of mathematics: there are tens of items in the index about Dedekind (let’s philosophise about numbers and names, so many chapters and schools about the two), but no mention of Grothendieck in the monograph. A new monograph.
Sad, isn’t it, that philosophers looking at the mathematics of the past 50 years don’t get invited to write. Not that there are too many. Colin McLarty is best for Grothendieck: The Rising Sea: Grothendieck on Simplicity and Generality.
The link above should be The Rising Sea: Grothendieck on Simplicity and Generality (missing http://).
I mentioned the experience quoted in 25 in my (usually inactive) blog, in the June 14, 2010 entry on books.