Maybe one day it would be fun to find the time to create as nLab entries all those linked to here:
The Wiki History of the Universe,
maybe at least up to the entry “star formation”, which still fits under the “Physics” headline of the nLab (after that geology and then biology kick in, which we should probably leave to other wikis).
One interesting thought to explore might be if we can expand that link list to the left. ;-)
One interesting thought to explore…
How would that go? A demonstration that the universe ’argued itself into existence’, to use the phrase of Collingwood criticising Hegel.
How would that go?
Well, not sure. It’s delicate territory.
A demonstration that the universe ’argued itself into existence’,
No, I am thinking this: the first item in that list is “quantum fluctuation”. Can we say something about “Where does the phenomenon of quantum fluctuation originate in, conceptually? Can we reduce that to something which in some sense is more fundamental, even?”
I hear there have been huge public debates on this question among some US physicists and science journalists recently. Apparently some are fond of the meme that “everything was created from nothing in a quantum fluctuation”. But at least one discussion member points out: assuming the notion of a vacuum quantum field configuration is not quite assuming “nothing”.
These are moot discussions without any way of making them formal. But maybe we are beginning to see how to formalize some of this…
I feel a bit unsure about saying more. I have had bad experience with expressing my philosophical feelings here. But maybe I may send you an email with a pointer to some notes I made…
I have had bad experience with expressing my philosophical feelings here.
That’s worrying in an enterprise that was supposed to contribute to mathematics, physics AND philosophy.
If Newton can be taken seriously while stating that space is the “sensorium of God”, I don’t see you have too much to fear. But I guess objections have been more to what are taken to be overbearing claims about how things must be formulated with higher categories. Personally, I think much is to be gained by straining a viewpoint to its limits.
By the way, I see that David Albert, author of the review linked to above, is professor of philosophy at Columbia. Do you happen to know him, academically? I find what he writes in that review, evident as it may seem (clearly not to everybody, though), very clear-sighted.
I know of him, but haven’t read his books.
I’d like to hear more of your notes you mentioned above.
I’ll show you the notes a little later, maybe by tonight or by tomorrow night. I should still polish them a bit more. This is something I do past bedtime only…
I agree with David’s #4, and I would like Urs and anyone else to consider discussing philosophy publicly, on this site ostensibly devoted to mathematics, physics, and philosophy. As I have intimated in the past, I would welcome more philosophical contributions to the nLab.
The thing about philosophy is that there’s so much disagreement of approach. Even if a mathematician is not sympathetic to the nPOV, they will at least think that to the extent it is adopted, what is contained in the nLab is correct. There’s a large difference between the philosopher who sees themselves as something like a scientist, trying to get at the truth of some concept or other, and the philosopher who reflects on the history of ideas, perhaps commenting on what constitutes good conditions for knowledge to grow. In the field of philosophy of mathematics, this distinction could be exemplified by Boolos in ’The iterative conception of set’ and Lakatos in Proofs and Refutations. You may have noticed my sympathies with the latter.
That’s not to say I’m not interested in the former, but then I wonder if that isn’t better done by mathematicians. In any case, to the extent that I’ve had any good ideas over the years at the Cafe, I’m feeling more like a mathematician.
It seems that philosophical work can inspire important mathematics. Lawvere can read Hegel and be inspired to develop some mathematics, describing what he’s doing using Hegel’s terms of Being and Becoming. But how detachable is the philosophical source? So Urs makes sense of parts of Lawvere, extending cohesion to higher toposes. Do we need to go back to Hegel? I’m asking the same question about Martin-Löf here.
Another way of putting it is to say that there is already lots of philosophy going on at nLab. Just look here and here. In fact the whole wiki embodies a philosophy. The Lakatosian strand has a more personal flavour, and would be harder to represent encyclopaedically, so I fiddle with my own thoughts at my wiki.
I would like to put in my vote for public discussion.
That’s not to say I’m not interested in the former, but then I wonder if that isn’t better done by mathematicians.
But there is a transformation process, isn’t there, by which ideas originate in philosophy and potentially, eventually, are absorbed by mathematics. Many mathematicians don’t care about, or even can’t deal with, an idea until it has arrived in the realm of mathematics. But sometimes a philosopher, or else maybe a physicist, is needed to push an idea into that realm in the first place.
This is how modern mathematics-based physics has grown out of, quite literally, natural philosophy. Or, to some extent, how modern chemistry has grown out of alchemy, maybe.
This is where I see the crucial role of category theory: whereas the theory of, say, differential equations was enough, after Newton, to absorb plenty of “natural philosophy” into mathematics, there are many important ideas in “more fundamental natural philosophy” which cannot be captured with just differential equations, nor with differential geometry, nor with linear algebra. These are more fundamental concepts. Like structure. Clearly the idea of “structure” is ancient; you will know better than I where and why Aristotle, say, ponders it. But it has been vague all along, has been part of philosophy, not of mathematics. Nevertheless, it is useful to eventually push the idea into the formal world of mathematics. And we see: in category theory this is possible. And so yet another philosophical idea has turned into mathematics.
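To make that point concrete, here is a minimal sketch in Coq of “structure made formal”: a category packages objects, the relations between them, and the rules for composing those relations. (An ad hoc fragment with names of my own choosing, omitting the unit and associativity laws that the full definition includes.)

```coq
(* The ancient, vague idea of "structure", formalized: a category
   consists of objects, morphisms between them, identities, and
   composition of morphisms. *)
Record Category := {
  Obj  : Type;
  Hom  : Obj -> Obj -> Type;
  idn  : forall a, Hom a a;
  comp : forall a b c, Hom a b -> Hom b c -> Hom a c
  (* plus unit and associativity laws, omitted in this sketch *)
}.
```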
Indeed, once this happens the philosopher may indeed want to leave further investigation to the trained mathematician. Or he may follow the idea through the curtain and become a mathematician himself.
Certainly before mathematicization there’s philosophical work to be done, and even after one can keep a certain kind of questioning happening, e.g., about whether alternatives are thinkable, what’s dictating we work with the theories we do. That requires considerable background.
It’s saddening how few philosophers are training themselves in category-theoretic thinking. It probably would have helped had Mac Lane written lengthy accounts of the kind of phenomenology he had learned in Göttingen from Moritz Geiger.
There is then an interesting question: how far “to the left” can we push this process?
In the 17th century celestial motion was formalized, allowing the dynamics of the solar system to be traced back in time. Then “we” formalized galaxy formation, star formation, element nucleosynthesis, the first three minutes after the cosmic singularity. Finally, in recent years, WMAP confirmed that an initial quantum fluctuation indeed explains the very large scale structure of the universe accurately, statistically.
This makes plenty of ancient “natural philosophy” become formal mathematics. Where five hundred years ago philosophers would still think hard about the elements, the substance that the world is made of, the forces that underlie the dynamics of the world, all this is now mathematics-based science – which is part of the reason why these days much of philosophy has turned to the social sciences instead.
But this is not the end of the story, is it? Some “natural philosophers” (maybe not usually called this, but let me call them this for emphasis) thought about even more primordial questions. For instance, what the Zohar has about “tzimtzum” is much more ambitious and at least tries to do what David Albert these days rightly points out would have to be done if one seriously wants to consider the extension of that Wiki History of the Universe to the left.
Do you see what I mean? A natural question seems to be: is there formal mathematics also for this kind of “natural philosophy”?
Probably not. But it’s a kind of interesting question: how deep does the eventual transition from philosophy to formal mathematics actually reach?
I know of one attempt. Spencer Brown at least tried. He uses genuine mathematics, albeit of a very simple and naive sort. And he tries to match it to the philosophical idea. If you put his foreword next to the quote on the right here, it matches curiously well. I don’t know if he was influenced by that. I guess the idea or intuition keeps being developed independently.
So that’s kind of interesting as far as it goes, in that it is a genuine attempt at all. But clearly Spencer Brown does not cut it: Boolean algebra is not a formal system that supports modern physics. Something a little more clever is needed.
On the other hand, Martin-Löf’s first sketch dates from 1971, which is two years after Spencer Brown’s sketch.
So that’s in the same ballpark as Hegel (#2). Some contradiction between the ideas of Being and Nothing (pure Being being identical to Nothing, yet somehow different) necessitates Becoming. This requires ’determinate Being’, i.e., something with qualities. Or something like that.
Although, as I mentioned, Collingwood is critical of Hegel for projecting the structure of discursive dialectic into the realm of Logical ideas (the universe argues itself into existence), he himself wrote some notes on a similar process. Something like:
unity, identity and necessity are attributes of God. Every concept has an opposite. There is a logical process leading to a concept’s opposite. “Space-time is the sum of all the opposites of the determinations of pure being”, so composed of plurality, difference and contingency. “Space-time is not the opposite of pure being, for it has being. If it were the opposite of pure being it would be nothing and could not exist, for it would have no characteristics. In cosmological terms, therefore, space-time is not something wholly other than God; it is a necessary stage in the logic process which is the life of God. God is immanent in nature from the first, and the life of nature is the life of God in nature. The process of nature as a whole is therefore one part or phase of the process by which God works out new determinations of his being.” And so on.
I thought here we told the story of how out of (-2)-categories the whole categorical world emerges.
So that’s in the same ballpark as Hegel (#2).
I see, I didn’t know this about Hegel. Can you give me the precise citation?
This kind of idea has probably appeared in many places. But in each case the argument is moot, it seems to me, due to lack of a suitable formal context that would allow drawing robust conclusions. I’d think the quote by Collingwood that you cite illustrates this well. To make formal sense of expressions like “Space-time is not the opposite of pure being” one would first need a formal context that can accommodate both a sensible formalization of “space-time” and one of “pure being”. I don’t think the formal contexts traditionally considered achieve this.
So the question is: can it be achieved at all? Can we push formalization in mathematics-based natural sciences to the point that it can robustly inform us about these kinds of ideas? Similarly to, say, how differential calculus solved Zeno’s ancient paradox.
To give a glimpse of Hegel (in English, though unnecessary for you), here are three section openings from
Encyclopedia of the Philosophical Sciences in Basic Outline Part I: Science of Logic
(1830) Enzyklopädie der philosophischen Wissenschaften im Grundrisse. Herausgegeben von W. Bonsiepen und H.-C. Lucas. In: Gesammelte Werke. Herausgegeben von der Rheinisch-Westfälischen Akademie der Wissenschaften, Band xx. Hamburg: Felix Meiner, 1992 [= Bonsiepen/Lucas 1992].
Sec. 86 Pure being constitutes the beginning, because it is pure thought as well as the undetermined, simple immediate, and the first beginning cannot be anything mediated and further determined.
87 Now this pure being is a pure abstraction and thus the absolutely negative which, when likewise taken immediately, is nothing.
88 Conversely, nothing, as this immediate, self-same [category], is likewise the same as being. The truth of being as well as of nothing is therefore the unity of both; this unity is becoming.
The book spells out the details at much greater length. For yet more see the longer ’Science of Logic’.
What I was quoting from Collingwood is from his ’Principles of History’.
Might one not need to see a theory-of-everything, before embarking on a theory-of-everything-out-of-nothing? Or perhaps we’ve glimpsed it already in whatever is known of M-theory.
Might one not need to see a theory-of-everything, before embarking on a theory-of-everything-out-of-nothing?
Or the other way around, maybe? Or maybe neither without the other.
I don’t expect definite answers on either, anytime soon, certainly. But I think it is interesting to explore how far one can push things. I think we are beginning to be in a position where it seems that at least approximations and “toy examples” can be sensibly studied:
For instance, consider closed string field theory. Originally this was motivated as one answer to “What is non-perturbative string theory?”, later called M-theory. The main problem is that while bosonic closed string field theory was developed nicely in some depth, people never came to terms with the super-version, which, by all that is known, is the only one that would have a chance at all of being realistic.
But still, bosonic closed string field theory is a pretty powerful structure, containing lots of information about perturbative and non-perturbative bosonic string theory. Now a curious fact is this: bosonic closed string field theory is an ∞-Chern-Simons theory (schreiber). This is a simple corollary of some basics of ∞-Chern-Simons theory combined with Barton Zwiebach’s monumental work on closed string field theory. The nLab discusses this here.
But ∞-Chern-Simons theories are precisely the theories that exist naturally in cohesive homotopy type theory. They are just a handful of steps away from the foundational axioms: for an abelian cohesive group type, there is its moduli type of connections. The n-dimensional ∞-Chern-Simons theories are the dependent types over that type.
At the same time, type theory certainly accommodates the kind of ontological reasoning that we have mentioned above. Paraphrasing Spencer Brown’s algebraic paraphrase of Hegel, the Zohar and others: the initial void is the empty type; the tzimtzum-line drawn into the emptiness that draws a distinction is the unit type. As a label for the slice over it, this is indeed the outermost universe, in the technical sense of universe. If instead we are already in the slice over the moduli type above, then this is an ∞-Chern-Simons theory, in a precise sense.
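For what it’s worth, the first steps of this paraphrase can be written down in plain Coq (just a sketch; the primed name merely avoids clashing with the standard library’s unit):

```coq
(* The initial void: the empty type, an inductive type with no
   constructors at all. *)
Inductive empty : Type := .

(* The first distinction drawn into the emptiness: the unit type,
   with exactly one constructor. *)
Inductive unit' : Type := star.

(* Out of the void everything follows, uniquely (ex falso quodlibet):
   there is a map from the empty type into any type whatsoever. *)
Definition from_empty (A : Type) : empty -> A :=
  fun e => match e with end.
```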
You may see what I am trying to get at. I am not claiming that this is refined enough yet to draw any substantial conclusions. But I find interesting that things are beginning to look as if eventually it could be.
I thought here we told the story of how out of (-2)-categories the whole categorical world emerges.
(−2) to (−1): In the beginning there was nothing. But contemplating this, we have nothingness, which is something.
(−1) to 0: So now we have nothing and something, which are different; nothing makes them the same. Contemplating this, we have plurality; more than just something, there are things, which may have nothing or something to equate them.
0 to 1: So now we have pluralities (sets), and these too may be the same, and they may have a plurality of ways to be the same. Contemplating this, we have a groupoid.
Etc.
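This ladder is, setting the philosophy aside, the hierarchy of homotopy levels in type theory. A sketch in plain Coq using iterated identity types (conventions vary; here level 0 is contractibility, matching the (−2) stage):

```coq
(* A contractible type: one point that everything equals.  This is
   the (-2) stage: exactly one thing, no room for difference. *)
Definition is_contr (A : Type) : Type :=
  { a : A & forall b : A, a = b }.

(* A type is (n+1)-truncated when all its identity types are
   n-truncated.  Level 1 = propositions ((-1)-categories),
   level 2 = sets (pluralities), level 3 = groupoids, and so on. *)
Fixpoint is_trunc (n : nat) (A : Type) : Type :=
  match n with
  | 0 => is_contr A
  | S m => forall x y : A, is_trunc m (x = y)
  end.

Definition is_prop (A : Type) : Type := is_trunc 1 A.
Definition is_set  (A : Type) : Type := is_trunc 2 A.
```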
This is a creation story for mathematics, not for the world. There is already ‘we’ contemplating matters.
This is a creation story for mathematics, not for the world. There is already ‘we’ contemplating matters.
Hegel would need to remove the ’we’.
Logic/God is prior. Its dialectic plays itself out in, or perhaps as, the world, and then most completely in that part of nature which is mind as human history.
Perhaps Hegel would let God do the contemplating?
This story doesn’t allow for logic to be prior; logic is about truth values, that is (−1)-categories, so it is being created here. But God, that can mean whatever you want.
The term ’logic’ was used by some in a much broader sense in Hegel’s day. His version was one of the broadest. As you see there, logic can cover topics such as attraction and repulsion, and intensive and extensive quantity. From the little I’ve read, I can glimpse occasionally why Lawvere thinks highly of him. There’s a common pattern of two terms being different yet reliant on each other. This dialectical tension is resolved at the next stage by the establishment of a ’unity of opposites’. Abstract and concrete are one pair, their difference as kinds of universal resolved for Lawvere in the adjunction between theories and models.
J. S. Mill wasn’t so positive:
I found by actual experience of Hegel that conversancy with him tends to deprave one’s intellect. The attempt to unwind an apparently infinite series of self–contradictions, not disguised but openly faced & coined into science by being stamped with a set of big abstract terms, really if persisted in impairs the acquired delicacy of perception of false reasoning & false thinking which has been gained by years of careful mental discipline with terms of real meaning. For some time after I had finished the book all such words as reflexion, development, evolution, &c., gave me a sort of sickening feeling which I have not yet entirely got rid of.
logic is about truth values, that is (−1)-categories
Or so we all thought until recently. Now we know that logic is about ∞-groupoids. Unless we add an unnatural axiom that forces it otherwise.
David,
would this be a good time to start some serious nLab entries on philosophy? Now that we seem to have a common question, a common hook into a specific issue?
We could start creating entries that collect in an organized and interlinked way some of the concrete information which you provided in the above discussion. Say a page with the material in #16 on Hegel’s ideas of being and becoming. Could then interlink with Lawvere’s ideas and with proposed formalizations.
Once we have such pages we could start adding cross links to formal notions in category theory / type theory that might be related, etc.
Just an idea. I would start something myself, but I do need to force myself to work on homological algebra entries at the moment, instead.
I should never say “somebody should”. So instead I opened a stub entry Science of Logic and pasted something in there. I don’t have the leisure to do more right now.
Now we know that logic is about ∞-groupoids.
I’ve said this before, but I’m against using the word “logic” to refer to anything but (-1)-truncated types, even in a setting where not all types are truncated. In classical set theory we have 0-truncated objects in addition to (-1)-truncated ones, but we still reserve “logic” to refer to only the (-1)-truncated ones; why should things be any different once we also have n-truncated objects for all n?
Whatever English word one uses, the thing is that the types, hence ∞-groupoids, are prior, and the (-1)-truncation is an afterthought. Not the other way around.
Whatever English word one uses…
That’s a bit of a problem if ’logic’ is reserved as Mike (#25) and Toby (#20) say for (-1)-truncated objects. It would surely be good to have a word for the full thing, and yet ’type theory’ sounds narrowly technical to me. ’Logic’, ’logike’ with its root logos (‘In the beginning was the Word…’), the connection to ’ratio’, has a huge scope. Think of all the ’-ology’s, the study of the rational principles of a domain.
Re 23, I mentioned we were looking at Hegel’s shorter logic in a reading group here, so I can add some material as we go along.
I don’t expect it will ever become very comprehensive. I have serious doubts that there’s anything systematically right about Hegel; after all, he really does see the whole Logic as a necessarily arranged process from the first Notions of Being and Nothingness. It’s the kind of thing that in a certain mood you’d love to be right.
Still, opportunities to think about such things as extensive/intensive quantities may not otherwise arise.
Whatever ‘logic’ means, it’s enough for my #20 that logic is at least about (−1)-groupoids.
yet ’type theory’ sounds narrowly technical to me.
Do you mean that you’d like to use a word which includes more than ’type theory’ does, or just that a phrase like “writing dynamics into the type-theoretic foundations” wouldn’t be that meaningful to someone who doesn’t already know what type theory is? If the former, what else would you like it to include? If the latter, how about just “…mathematical foundations”?
I don’t really find an etymological argument convincing in terms of what a word means (as opposed to what it used to mean, or what it might have meant in an alternate universe). (-: Of course, as mathematicians we are free to define a word however we like, and plenty of type theorists do seem to use ’logic’ in a way which includes their entire subject. However, whenever we redefine a word, we run the risk of confusing people who are used to using it with another meaning, and I believe (based on my own experience) that to most non-type-theorist mathematicians, ’logic’ refers only to (-1)-types.
Moreover, if we redefine ’logic’ to include all of type theory, then we’re left without a word that describes the (-1)-truncated fragment of type theory, which is a very important fragment to be able to talk about. In particular, when explaining type theory to non-type-theorists (e.g. slide 22 here), I’ve found it very helpful to contrast the different roles played by logic in set-theoretic and type-theoretic foundations.
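To illustrate the contrast in code: a rough Coq sketch (my own formulation, in plain Coq rather than the HoTT library) of “logic as the (-1)-truncated fragment”, in which the logical connectives are just the ordinary type formers restricted to propositions. Disjunction and existentials would additionally need a propositional truncation, which plain Coq lacks.

```coq
(* A proposition: a type with at most one inhabitant up to equality. *)
Definition is_prop (A : Type) : Type := forall x y : A, x = y.

(* The connectives of the truncated fragment are the usual type
   formers, specialized to propositions: *)
Definition conj' (P Q : Type) : Type := P * Q.   (* and     = product  *)
Definition impl  (P Q : Type) : Type := P -> Q.  (* implies = function *)
Definition all   (A : Type) (P : A -> Type) : Type :=
  forall a : A, P a.                             (* for all = dependent product *)

(* Products of propositions are again propositions, so the fragment
   is closed under conjunction: *)
Lemma conj_prop (P Q : Type) :
  is_prop P -> is_prop Q -> is_prop (conj' P Q).
Proof.
  intros hP hQ [p q] [p' q'].
  rewrite (hP p p'), (hQ q q').
  reflexivity.
Qed.
```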
I was expressing a jumble of thoughts, so thanks for attempting to tease them apart.
With all our talk about Hegel, there was a lament that a certain kind of ambitious theorising, which he knew as Logik, rarely takes place now, and almost never in philosophy departments.
To the extent that homotopy type theory could play a part in a rejuvenation of this kind of theorising, there was the thought that it could do with a grander title. But then, as you say, why not ’mathematical foundations’? I have no suggestions to include more than ’type theory’ in all its manifestations (directed, cohesive, etc.) there.
I live in a world where set theory and logic are the default foundational system. I’m sure that when I talk to a philosophical audience about a mathematical topic, such as cohomology, a good chunk of the audience is thinking “What’s the big deal? This can be expressed in terms of set theory, even if, for the sake of argument, I admit it’s more convenient in category theoretic terms. Any philosophical question pertaining to mathematics can be factored via nothing more than set theory. And even here, logic is the fundamental system in which we write the axioms of set theory.” I’ve never understood how people can rest satisfied with a foundation so far removed conceptually from the action in mainstream mathematics.
Your slide 22 is very interesting then for its reversal of the position of logic from outer skin to inner fragment. Are the interpretations of those boxes the same, would you say? Should we say in the Set case, the inner box is an add-on, while in the Type case, the inner box is a truncation?
In that no time soon will ’logic’ be taken to have anything like the scope Hegel gave it, the sole appearance of his Science of Logic in the references at logic should raise an eyebrow. Perhaps I’ll add a note there.
Toby,
I see that I didn’t make myself clear. I was arguing that the creation story you mentioned secretly puts the cart before the horse:
if we set up foundations as in type theory, then we first have ∞-groupoids axiomatized already, even before we have axiomatized the (-1)-truncated fragment. The (-1)-truncated fragment is conceptually more complicated than the general setup. You were representing a story where from that (-1)-truncated fragment the general concept is recovered, vaguely. But I am trying to say: it was already there to begin with.
The creation story is sort of an encapsulation of a history of mathematics. (Actually, in the narrow form that I told it, discussing only ∞-groupoids, it’s a history of mathematical notions of sameness, but there are also other versions of the story that could be told.) Until about 150 years ago, mathematicians were mostly satisfied working with only particular structured sets. (Presumably there is a prehistory when people dealt with only particular elements of such, before the general concept of number, which is one level earlier yet.) Then we began to deal with sets (and other structures) generally, but still with only particular categories. Then we began to deal with those generally, and so on. Of course, once we noticed this process of evolution, we were able to jump to the top fairly quickly.
Now that we’ve reached the top (or what seems to be the top so far), we can recover all of the lower levels by truncation. But that’s not a creation story, at least not the story that I was telling (which I think is the story that David was remembering). This is only one story, of course.
Hi Toby,
thanks. I totally agree with your account of the actual history of mathematics.
What I am trying to say is that in absolute, non-historical terms, it now seems to be very different. You say:
Now that we’ve reached the top
but I am instead thinking that it was hard to “reach the top” only because everyone secretly had assumed an extra axiom in the ambient type theory, which should not have been assumed. Removing that axiom, hence doing less work, yields the intensional theory, and this is already about ∞-groupoids. You see, I am trying to say: take the HoTT lesson seriously; then “Now that we reached the top” is not a sign of success but of an unfortunate detour:
Ah yes, I see; we had this intuition of extensionality, based (I would argue) on an intuition of an absolute notion of sameness, and we have had to loosen that concept, ultimately realising its relativity, over the years (and millennia).
Yes, re #33, that was the story I was trying to remember.
Isn’t there something of an inductive/coinductive divide going on here? Either building up from some bottom level, or providing a maximizing axiomatization, cf. Mike here
I’ve come to believe, over the past couple of years, that anyone trying to study ω-categories (a.k.a. (∞,∞)-categories) without knowing about coinductive definitions is going to be struggling against nature due to not having the proper tools.
The creation story seems inductive, no? Also quite like Brouwer’s ’two-ity’. Yet his fans form a terminal coalgebra.
Ah, I see I was wondering about a tension between these back here.
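The coinductive side can at least be sketched in Coq. Here is one well-known shape (with ad hoc names of my own): globular ω-graphs defined coinductively, where each stage gives objects and, for every pair of objects, another whole graph of higher cells.

```coq
(* An omega-graph, coinductively: objects, and for each pair of
   objects an omega-graph of morphisms, all the way up.  Making this
   an omega-category would mean adding composition and its laws. *)
CoInductive OmegaGraph : Type := mkGraph {
  ob  : Type;
  hom : ob -> ob -> OmegaGraph
}.

(* The terminal such graph: exactly one cell in every dimension.
   The corecursive call is guarded by the constructor, so Coq
   accepts it. *)
CoFixpoint point : OmegaGraph :=
  mkGraph unit (fun _ _ => point).
```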
Also quite like Brouwer’s ’two-ity’.
Yes, I’ve noticed that. (Or maybe you pointed that out once, and I’ve since internalised it.)
Re #35: Yes, thanks. Exactly!
I have put a diagonal arrow “millennia of detour” into the above diagram, for fun. :-)
So, you see, this is what I was imagining here could be a good formal context for a creation story of, if not a world, then at least the blueprint rules for a world – instead of just a creation story of the foundations.
I am thinking: intensional dependent type theory is so very basic that I am not going to argue about that. This is uncreated. (Please imagine me writing these things here in a kind of leisurely mood, smiling to myself, not being dead serious about such high-powered terms.)
So I am willing to assume we have that. I have this image in my head, to go with it: a computer as in the old days, no graphical user interface, but with Coq preinstalled. You switch it on. The screen is pitch black, save for the green cursor box blinking in the top left.
This is the image of the universe before creation. That black screen with nothing on it except that intensional dependent type theory is running in the background. And no input yet: in the Coq-kernel, in the register for “context” it reads:
empty
as in
Inductive empty : Type :=
and then nothing.
We are in the empty state. I am suggesting this picture as a good formalization of the point that Hegel tries to describe in sec 86-87. Or that the Zohar assumes when it says Ayn Sof.
So: now what? Do we have, in this picture, a good formalization of the next step, the tzimtzum of the Zohar?
I don’t know. I am just thinking: a) this is getting to the verge of being formalized enough that it may be an interesting question. And b) Spencer Brown effectively just did precisely this: he suggests, in my paraphrase, that the next thing that happens is that the empty context is replaced with the unit context.
And while in itself this may seem to be silly, I do find it mildly entertaining that passing in type theory from the empty context to the unit context means to pass from nothing to the full type universe, in the technical sense.
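The two steps of this little game, spelled out in plain Coq (a sketch, using the standard library’s Empty_set and unit):

```coq
(* In the empty context there is nothing to say: over the empty type,
   every dependent type is vacuously inhabited. *)
Definition over_empty (P : Empty_set -> Type) : forall x, P x :=
  fun x => match x return P x with end.

(* In the unit context, a dependent type is the same thing as a bare
   type: the slice over unit recovers the whole type universe. *)
Definition to_slice   (A : Type) : unit -> Type := fun _ => A.
Definition from_slice (B : unit -> Type) : Type := B tt.

(* from_slice (to_slice A) computes back to A, and to_slice
   (from_slice B) agrees with B because tt is the only element. *)
```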
Okay so far? So next I am thinking: fine, that was a fun little nerdy game. I could print that on a T-shirt and it would be just a variant of that standard T-shirt joke.
What I really want is something substantial, along these lines. I am wondering: if we just add another natural axiom to the type theory, can we maybe turn this game into something more substantial?
Should we say in the Set case, the inner box is an add-on, while in the Type case, the inner box is a truncation?
Yes, I would mostly agree with that. The boxes are similar in that the outer one “comes first” and then we develop the inner one. But classically, “logic” gives us a framework to discuss arbitrary theories, and set theory is just one particular such theory that we obtain by listing some new axioms (which are not part of “logic”) — whereas in type theory we find logic as a subsystem, whose rules are mere restrictions to “truncated objects” of the rules of the whole system.
Thinking some more about this, I do agree that it would be nice to have a word for “the basic layer of mathematical foundations”, i.e. the system that defines the rules about what strings of symbols we can write down and how they can be manipulated — the “rules of the game” of mathematics, from a formalist perspective. In classical set-theoretic foundations this “basic layer” is single-sorted predicate logic — while as we just said, the set theory is then built on top of (or “inside of”) this by just giving a few axioms — the “starting position” of the game, as opposed to its rules. Whereas in type-theoretic foundations, the laws of type formation, introduction, and elimination are themselves part of this “basic layer”. (Although we often do supplement them with additional “axioms”, like univalence, which we don’t yet know how to express as part of the basic layer.)
I don’t want to call this basic layer “logic”, for the reasons I mentioned above. But neither “type theory” nor “mathematical foundations” seems appropriate for it either; not all choices of basic layer should be considered forms of type theory, while mathematical foundations generally includes the set-theoretic axioms which are not part of the basic layer. Hmm…
There is certainly a progression of layers:
1. propositional logic,
2. predicate logic,
3. predicate logic with equality,
4. the theory of a membership predicate with its core axioms (a basic material set theory),
5. a full material set theory such as ZFC.
But I don’t think that 2 or 3 is an especially basic layer. There are layers below them and layers above them. And I don’t even know for sure which one you meant when you wrote ‘single-sorted predicate logic’.
There is a sense in which (1,2,3) could be about anything, while (4,5) are intended to be about (material) sets. But this is still somewhat artificial; (1) at least was originally developed to also be about sets (before the distinctions between material sets, structural sets, and subsets of a fixed universe were understood), and (4,5) might also be used for other things. (Arguably, we already use them for rigid trees and accessible pointed graphs, but as these were developed to model material sets within structural set theory, that might not really count.)
I think of multi-sorted first-order logic with equality as pretty basic, since it codifies reasoning accepted as valid all across mathematics. (I’m guessing Mike meant 2 or 3 or multi-sorted versions thereof).
It seems to me there is a difference between the progression 1->2 and the progression 2->4, which I sought to describe by the distinction between “rules” and “initial positions”. We can’t get to predicate logic by stipulating some axioms in a propositional theory. We can get to material set theory by stipulating some axioms in a first-order theory.
Probably I would classify 2->3 as like 2->4, but I don’t think the difference matters much for the point I was trying to make.
We can get to material set theory by stipulating some axioms in a first-order theory.
Not exactly; you also have to stipulate a binary predicate. (The same for 2->3, of course.) The change 1->2 is certainly the most drastic, so maybe that singles out (2).
I don’t want to call this basic layer “logic”, for the reasons I mentioned above. But neither “type theory” nor “mathematical foundations” seems appropriate for it either: not all choices of basic layer should be considered forms of type theory, while mathematical foundations generally includes the set-theoretic axioms, which are not part of the basic layer. Hmm…
The Substrate.
The Ur.
The Ur-Theory.
The Laws of Type.
The Ur-Laws of Type
The Ur-Type Theory.
The Arche-type-Theory. (:-)
The Proto-type-Theory. (:-)
The Arche-Theory of Type.
The Proto-Theory of Type.
How about ‘logical substrate’? I’m sure that I’ve seen that before (although Google gives too many links to too-informal material to help).
By “stipulating some axioms in a first-order theory” I meant to include the first-order structure, including the predicate. I don’t think the nitpick is really relevant to the point I was making. Why do you say that 1->2 is more “drastic”? Do you not understand what I’m getting at? 1 and 2 are general frameworks; 4 is a specific theory in those frameworks.
Why do you say that 1->2 is more “drastic”? Do you not understand what I’m getting at?
I thought that I was agreeing with you when I said that. But I may not be understanding you!
By “stipulating some axioms in a first-order theory” I meant to include the first-order structure, including the predicate.
OK, so my progression in #41 has another layer:
Of course, some of these are fiddly differences; we could make 4->5 and 5->6 take several more steps if we wanted.
As you said, 1->2 is different from 2->5; I agreed, saying that the former was more substantial (better than drastic). This gives a reason to single out (2) as the ‘logical substrate’ (if we can say that), but you want (4) instead.
Or maybe the logical substrate of ZFC should be a more generalised predicate logic, with arbitrary simple types and relation symbols (maybe also function symbols), maybe also with equality and its axioms.
Although adding equality and its axioms feels more like adding material set membership and its axioms, especially its core ones, at least to me.
Ah, okay, I see what you were saying. Yes, I think we agree after all; (2) is the ’logical substrate’.
David C, your remarks about Hegel make me want to use “logik” or “logick” (cf magick) for the ’logical substrate’. Would that be inexcusable? I don’t know anything about Hegel, but wikipedia says
His stated goal with The Science of Logic was to overcome what he perceived to be a common flaw running through all other former systems of logic, namely that they all presupposed a complete separation between the content of cognition… and the form of cognition…
which sounds to me kind of like the separation between logic (2) and theory (5) in classical set-theoretic foundations, which we eliminate in type theory (doing it all at the same level). But that could be nonsense.
Are you just all agreeing to take Das Logiksubstrat to be untyped first order logic?
Somehow I thought you were proposing to do something else, which I would have liked much better: call the basic yoga of type formation, introduction, and elimination rules the substrate. (I did grow kind of fond of thinking of this as the arche-type theory.) First-order logic would then be built on that. Isn’t that much nicer?
No, I think what we agreed on is that in material set-theoretic foundations, the logical substrate is untyped first-order logic without equality. In type-theoretic foundations, by contrast, the logical substrate is the type theory itself. Whatever foundation you pick, its substrate is the relevant ’game rules’, the bottom layer of that foundation. At least, that’s how I intended to use the word.
Okay, good.
But don’t you want to isolate that bare-bones fragment of “type theory” on which they are all built? The type formation/introduction/elimination rules?
Ah, wait, that already has a name, right? What was the name for that again? It also corresponds to a special part, with a special name, of the Coq software, right?
No, I think what we agreed on is that in material set-theoretic foundations, the logical substrate is untyped first-order logic without equality.
I’m still not sure why the “without equality” here. It seems to me that the rules for handling equality (e.g., allowing you to substitute B for A if A = B, etc.) are part of logic – I’m not seeing material sets here. Put differently, equality is a predicate that comes “for free”, intuitively speaking – other predicates like “membership” are things added on in theories.
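For reference, one standard presentation of the rules in question consists of reflexivity together with substitution of equals (Leibniz’s law):

```latex
\[
  \frac{}{\; t = t \;}\;(\mathrm{refl})
  \qquad\qquad
  \frac{s = t \qquad \varphi[s/x]}{\varphi[t/x]}\;(\mathrm{subst})
\]
```

These mention no non-logical vocabulary at all, which is part of the intuition that equality belongs to the logic rather than to any particular theory.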
Re 55, that sounds like the thinking behind ’logic as invariant-theory’ reported here.
Thanks, David. In an attempt to explain further: consider hyperdoctrine theory, which is one formalization of first-order logic. Continuing the thread above, say we consider just untyped or one-typed first-order logic – although that also seems to me an unnecessary restriction. Then there is a base category which looks like $\mathbb{T}$, the free category with finite products generated by a single type $X$. In first-order logic without equality, we are allowed to quantify along projection maps such as $X^{n+1} \to X^n$, by taking a left or right adjoint to pullback along them.
But this seems to me curiously imbalanced or biased, since it privileges only the projection maps as having quantifiers. If we rectify the imbalance by assuming that all maps in $\mathbb{T}$ have quantifiers, including particularly diagonal maps $\Delta \colon X \to X \times X$, then we are dealing with first-order logic with equality, i.e., the equality predicate over $X$ comes “for free” as $\exists_{\Delta}(\top)$. It seems to me this is at the same basic conceptual “substrate level” as the (more biased) first-order logic without equality.
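In symbols, as a sketch: writing $P$ for the hyperdoctrine and $\Delta \colon X \to X \times X$ for the diagonal, quantification along a map $f$ is a left adjoint $\exists_f \dashv f^{*}$, and the equality predicate is the special case $f = \Delta$:

```latex
\[
  E_X \;:=\; \exists_{\Delta}(\top_X) \;\in\; P(X \times X),
\]
\[
  \top_X \;\leq\; \Delta^{*}(R)
  \quad\Longleftrightarrow\quad
  E_X \;\leq\; R
  \qquad \text{for all } R \in P(X \times X).
\]
```

So $E_X$ is the least predicate on $X \times X$ holding on the diagonal, which is exactly the usual introduction/elimination behaviour of equality.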
But don’t you want to isolate that bare-bones fragment of “type theory” on which they are all built? The type formation/introduction/elimination rules? Ah, wait, that already has a name, right? What was the name for that again?
Are you thinking of natural deduction?
Are you thinking of natural deduction?
Oh. Yes, thanks! Why didn’t I know this?
I wouldn’t have said that there was any more to type theory than rules for type formation/introduction/elimination/computation. Isn’t natural deduction just a particular way of presenting a derivation of a typing judgment?
I didn’t say equality was part of material set theory itself, but I also don’t see it properly as part of predicate logic, since you can perfectly well have a first-order theory without equality. I would say the presence of equality is an aspect of the first-order theory of material sets.
And I wouldn’t call hyperdoctrines a ’formalization’ of first order logic. I think first order logic is sufficiently formal already. I would call hyperdoctrines a particular type of semantics for first order logic. And if you choose a semantics in which equality is present, then of course equality seems like part of the logic.
Mike, it sounds as if you’re completely ignoring the point I was trying to make.
You can declare, by fiat, that equality is extra-logical if you like. But if we see logical quantification, conceptually, as adjoint to pulling back along a map which exists by virtue of product structure, then on those grounds we can see “quantifying” along a diagonal map as having just as much claim to being “logical” as traditional quantification along a projection map. I would attribute this insight to Lawvere. You can decide, again by fiat, to accept one as “logical” and the other not, but on conceptual grounds that seems like a somewhat arbitrary decision to me.
How “equals” (or the word “is”) is dealt with seems like such a basic aspect of human reasoning (not to mention symbolic manipulation) that it’s hard for me not to see it as fundamentally logical in nature. (I don’t know the history, but it wouldn’t surprise me if discussions of what equality means go back to ancient times, as part of the art of syllogisms or whatever.) In any case, just as you could say equality is not part of logic, it seems to me I could equally well say that what you call first-order logic without equality is but a fragment of the full predicate calculus. We could define it one way or the other, but this sort of argument gets us nowhere, and I for one have to fall back on conceptual understandings of what logic comprises, which brings me back to the paragraph before this.
And I wouldn’t call hyperdoctrines a ’formalization’ of first order logic.
A presentation, then. The point is that we know how to translate back and forth between traditional logical calculus and the algebra of hyperdoctrines. Hyperdoctrines do bring out some of the conceptual insights, which had been unnoticed before Lawvere came along.
I wouldn’t have said that there was any more to type theory than rules for type formation/introduction/elimination/computation. Isn’t natural deduction just a particular way of presenting a derivation of a typing judgment?
What if I put it this way: Natural deduction is the idea, going back to Gentzen, that we can formulate natural logical reasoning using introduction/elimination rules. The formalization of proofs using this idea then gives you type theory right away, along with the notion of reduction on proofs/computation on terms.
So I’d say it’s rather the other way around: type theory is the natural way of presenting proofs in natural deduction style.
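A toy illustration of this correspondence, sketched in Lean (any type-theoretic proof assistant would do): the natural-deduction proof is the λ-term, and removing an introduction/elimination “detour” from the proof is β-reduction of the term.

```lean
-- A natural-deduction proof of A → (A → B) → B, written as a term:
example (A B : Prop) : A → (A → B) → B :=
  fun a f => f a

-- A "detour" (→-introduction immediately followed by →-elimination)
-- and its β-reduct, which proves the same proposition:
example (A : Prop) (a : A) : A := (fun x : A => x) a
example (A : Prop) (a : A) : A := a
```

Normalization of the proof corresponds to running the program, which is the content of the Curry–Howard view Ulrik describes.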
I wouldn’t have said that there was any more to type theory than rules for type formation/introduction/elimination/computation. Isn’t natural deduction just a particular way of presenting a derivation of a typing judgment?
I was thinking: first, there are the laws of type formation/introduction/elimination/computation themselves. And then, second, there are specific such introduction/elimination rules.
One could for instance decide to drop the introduction/elimination rules for product types. The result would still be a “system of natural deduction”, but would not be type theory.
And isn’t the idea that an axiom such as univalence can eventually be recast not as an axiom, but as just as another type formation/introduction-elimination rule?
Re Mike @50, yes that’s one aspect of Hegel, the overcoming of the sharp split between content and form. You do have to remember though that with Hegel you’re getting what seems to us today a curious package. The whole dynamic (or ’dialectic’) of the unfolding of the Logik is prior to any actual thinking, realised in concrete humans. In fact, the world, and that part of it which is human thought, is the Idea (or Spirit) realising itself. I say ’curious’, but in a way I’m hearing echoes of this in things Urs is suggesting, as though the universe worked itself out according to type theory. For Hegel one isn’t to be a Kantian, where what one theorises about the universe is just how the universe is taken up by the human understanding, with the further idea that this will be limited in certain ways, e.g., no access to the thing-in-itself. For Hegel, the human mind itself is part of the universe and as such part of the unfolding of the Idea/Spirit.
That’s all I have time for at the moment. It’s tricky to convey the big picture. It’s a dizzying one which tends either to delight or revolt people. Here’s a brief account.
Hi David,
I have moved that paragraph to the construction site here, just to save that piece for possible later use.
To come back to:
Are you thinking of natural deduction?
Oh. Yes, thanks! Why didn’t I know this?
The answer is of course: because I wasn’t paying due attention. But also, what we have at type theory in the section Syntax of type theory didn’t mention it.
I have now added a pointer right there at the beginning, to a new entry natural deduction. For the moment mainly to record the Wikipedia entry on this, which, for a change, is very good indeed.
Eventually I hope somebody finds the energy to work on the entries type theory and natural deduction and turn them into actual coherent explanations. I don’t believe that type theory has to seem mysterious to the un-initiated.
I am beginning to see a vision of how a good introduction to type theory would have to read. Maybe in a few weeks, or else if I need to distract myself from more urgent duties, I’ll start working on it. Of course I’d be delighted if somebody beats me to it.
I am beginning to see a vision of how a good introduction to type theory would have to read. Maybe in a few weeks, or else if I need to distract myself from more urgent duties, I’ll start working on it. Of course I’d be delighted if somebody beats me to it.
Maybe you can share the outline of your introduction? Then I’ll be happy to fill in a few pieces.
I’m not sure how I would outline it, but the three prongs of the Idea section could each be expanded:
For mathematicians not already steeped in category theory, the route from logic and the foundations of mathematics seems best: Introduce natural deduction style proofs (1936, the idea is obvious to us now, but wasn’t then: all they had were Hilbert- and Fitch-style calculi), mention proof reductions on these, introduce proof terms and reductions, and now we have type theory via propositions-as-types.
For computer scientists: Go via λ-calculus, Church’s theory of types (1940), other typed λ-calculi and foundations of programming languages. Expand perhaps on control operators and the relation to classical logic.
For category theorists: Go via internal languages, as in the current article, but with an expanded informal motivation through an example or two first.
I don’t think I’d even mention Russell’s ramified types, which the wikipedia, I think unfortunately, starts with.
@Ulrik 64 you can choose to explain A in terms of B or B in terms of A; the point I was making is that natural deduction and type theory are the same. Although I disagree that natural deduction is just an ’idea’ - I think it is a fully precise formal system which is equivalent to other formal systems for type theory such as Gentzen sequent calculus. Type theory itself is not tied to any particular way of drawing its typing deductions.
@Urs 65 No, type theory without product types is still type theory. Particular type theories with names, like ’Martin-Lof DTT’, require the presence of particular type formers, but type theory as a whole is the system and doesn’t mandate the presence of any particular type formers.
@Mike : I agree that natural deduction and type theory are basically the same (though I still like the distinction between having proof terms or not). And I agree that natural deduction is more than the idea, it’s a proof calculus just like Hilbert-style or sequent calculus.
But I disagree that sequent calculus is type theory! Sequent calculus is characterized by left and right rules, and although they can present the same logics, the relation between them is far from obvious in regards to reduction (compare the left rule of implication to implication elimination). Gentzen introduced sequent calculus because he couldn’t prove normalization for natural deduction! The precise correspondence was only slowly teased out through work of Prawitz 1965 and Zucker 1974, with various simplifications since then.
So I don’t understand what you mean by:
Type theory itself is not tied to any particular way of drawing its typing deductions.
Once you’ve presented type theory, through various rules, you’ve described the derivations of judgments. How could there be another way to present them?
@Mike are you thinking of the difference between writing the context at each node in a derivation versus decorating certain leaves with variable names and marking discharging rules with the variables discharged? If so, I’d say these are isomorphic ways of drawing natural deduction proofs / derivations of hypothetical judgments. But none of them are sequent calculus!
Mike, you write:
No, type theory without product types is still type theory.
Also, that’s not what I mean. What I mean is that the framework where we get to choose which type constructors to admit is not yet a type theory. Well, I may be violating conventional terminology, but that’s what I mean.
So there is a formalism of 4-step rules (formation/introduction/elimination/computation). It just tells you what such a 4-step rule must be like; it doesn’t specify any particular 4-step rule.
Then in this framework we can say: I pick this, this, and that specific 4-step rule. The result we call xyz type theory.
Is that an unreasonable way of thinking about it?
Ulrik:
I am beginning to see a vision of how a good introduction to type theory would have to read. Maybe in a few weeks, or else if I need to distract myself from more urgent duties, I’ll start working on it. Of course I’d be delighted if somebody beats me to it.
Maybe you can share the outline of your introduction? Then I’ll be happy to fill in a few pieces.
Okay, good. Thanks for the offer. That would be nice.
Here are some ideas, from the experience of somebody who wasn’t brought up with anything even remotely related and had to learn it from scratch. I’ll give a list of items along which I imagine a good exposition of and introduction to type theory for complete laymen should proceed. Feel free to comment on this, I may easily be convinced of changes. But this is what occurred to me so far.
I need to post the actual list in the next comment, due to size restrictions on the comments here. So see below…
We are defining a formal expression manipulation system.
At the very very beginning, tell the reader that we are about to specify a rule set for how to bring symbols to paper and how to manipulate them.
Most importantly, we are not developing any old theory, such as the theory of groups, in a pre-existing context of logic/mathematics (even if one could think of it this way, this is not what the foundational meaning of type theory is, and this is not how it should be presented). Instead, we are really doing something much more basic. We start from a tabula rasa: assume no a priori math, no a priori logic, nothing. Now we define from scratch how symbols are to be brought to paper, and how they are to be manipulated there.
The relation to computer implementation can be a helpful guiding image here, I think: tell the reader that he may think of what follows as a specification of a computer program. That program is supposed to operate on strings of the form $t : A$, and we give rules for which operations on them are allowed.
That comparison to programming a computer will help to make clear that at the very basis we are making an operational definition, not a formal definition in some pre-existing formal system. There is no pre-existing formal system. Instead, we hereby set it up.
Terms, types and judgements
Mention the names of the symbols in one of these expressions whose manipulation we are going to define, $t : A$ (type, term, judgement). At this point, mention just a bit of the intention. Not a lengthy discussion as in Martin-Löf’s lectures (that is more useful to read after some familiarity with the basic idea), just a word on how types will play a role analogous to sets, and terms to elements of those sets, while amplifying that we are not actually using any set theory, but instead producing a new formalization of the general idea from scratch.
The 4-step rules
Now say that the rules for how to manipulate these judgement strings always come in the format formation/introduction/elimination/computation, with some words going with this, as I have them at natural deduction currently. (It is good to say “string” again here: it is easy for the novice to get confused by all the terminology suggesting deep meaning that hasn’t actually been defined yet, so reassure the reader that we are just about to define string manipulation.)
No specific types at this point yet. Just the idea that we will have rules to go from a string that is to be interpreted as saying that we have some types, to a string that asserts the existence of some more types and of some terms of them.
The basic examples
Now the reader should have an idea of what it’s all about, of where the journey is supposed to be headed. So now it’s time to show some first simple but concrete action. The best example is probably the conjunction/product type 4-step rules. Write them out and comment on how to read them as English sentences. At this point it is probably best to provide two links: “If the reader has background in logic, he can think of this as the rules for how to prove a conjunction of propositions; for more on that see … . If the reader has background in categories, he can think of this as rules for manipulating cartesian products; for more on that see …”
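For reference, a minimal way of writing out these four rules (suppressing the ambient context $\Gamma$; conventions vary):

```latex
% Formation:
\[
  \frac{A \;\mathrm{type} \qquad B \;\mathrm{type}}{A \times B \;\mathrm{type}}
\]
% Introduction:
\[
  \frac{a : A \qquad b : B}{(a, b) : A \times B}
\]
% Elimination:
\[
  \frac{p : A \times B}{\pi_1(p) : A}
  \qquad
  \frac{p : A \times B}{\pi_2(p) : B}
\]
% Computation:
\[
  \pi_1(a, b) \equiv a
  \qquad
  \pi_2(a, b) \equiv b
\]
```

Read logically, these are the rules for proving and using a conjunction; read categorically, they are the universal property of the cartesian product.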
Intermediate summary and outlook/relevance.
It may sound pathetic to an expert, but at this point it’s time to give a summary and point out that the reader has now essentially already seen what type theory is about. I am thinking of readers of Lab entries who don’t actually want to learn type theory, but just want not to feel confused when the term comes up. Kind of the state of knowledge that 19th century conversation lexica were aiming for: enough familiarity with the subject to be able to give it a place in one’s internal scheme of things. So at this point we should have some summary + outlook sentence along the lines of: “This is essentially already all there is to type theory. The study of this subject now consists of introducing various such 4-step rules and examining what their consequences are and what they are good for. While these rules may seem pathetically simple, it turns out that there is a great deal of expressive power hidden in them, which allows them to be used as foundations of mathematics and of formal proof systems, and recently even as foundations of homotopy theory.” Something like this.
The full definition
Now we assume we are only left with those readers who are determined to invest energy to dig deeper and fully understand things. So now we can start becoming less expositional and more formal. Now is the time to list all the rules, say also for dependent product etc., to give them all their formal names, maybe to make big tables with labelled rules as one can see on the Wikipedia page on natural deduction. In other words, now provide the reader with the actual definition of standard type theory. Explain the truncation to logic, how first-order logic appears in there, etc.
The categorical semantics
Explain the basics of categorical semantics of type theory.
[ These darn size restrictions. That last item was supposed to continue as follows: ]
This is easy, but when I was looking for this back then I had trouble finding a source that just says it plainly and systematically. Give a systematic dictionary here. A type goes to an object, a dependent type to an object in the slice, a term to a generalized element. This kind of stuff. Don’t overload yet with all kinds of technicalities. The basic idea here is strikingly simple, and that should come across. Even a readers who have just seen the definition of type theory above and the definition of a category can and should see the point here. One does not need any deep category theory or anything. It’s just a systematic dictionary that effectively gives just two different ways of bringing symbols to paper. This should be made clear here.
Is that an unreasonable way of thinking about it?
Well, I think it’s entirely reasonable. We might call this the type theoretical framework. This reminds me of the logical frameworks (see, e.g., this introduction), for instance Twelf. LF is itself a type theory in which you can formally define deduction systems (natural deduction, sequent calculus, etc.) using a judgments-as-types principle. (Reduction is then specified as a relation, so you don’t have computation in the logical framework, but you can reason about it.)
Urs, that sounds like a very nice plan! I’ll leave now, but I’ll try to think whether there’s anything I’d change.
On a more practical level, can we execute the plan on a sort of scratch page on the lab before moving it into the entry for type theory? Or should it be on a new entry, say introduction to type theory?
On a more practical level, can we execute the plan on a sort of scratch page on the lab before moving it into the entry for type theory? Or should it be on a new entry, say introduction to type theory?
Since one fine day in the future the entry type theory will contain plenty of discussion, it will be good to split it up into chapters living on separate nLab pages anyway. So I’d think it would be fine to work on a separate entry introduction to type theory, yes.
Put differently, equality is a predicate that comes “for free”, intuitively speaking
For Frege, class membership was also a predicate that came for free. But in that case, things turned out to be more complicated than his intuition suggested! People have argued that ZFC (maybe with some large cardinal axioms too) has true axioms for the correct intuition behind class membership, but this view is not so popular anymore, especially around here. I would treat equality the same way, even if the most intuitive rules happen to be consistent in that case.
But this seems to me curiously imbalanced or biased, since it privileges only the projection maps as having quantifiers.
Well, we should be open to having any class of display maps, not just product projections. (Or if the display structure is not well-rooted, maybe not even product projections.) One possibility, of course, is to have all maps be display maps; this corresponds to having equality. But there are other options.
People have argued that ZFC (maybe with some large cardinal axioms too) has true axioms… but this view is not so popular anymore, especially around here.
I would treat equality the same way, even if the most intuitive rules happen to be consistent in that case.
But I still don’t understand why. Why should equality and the rules for handling it be regarded any differently than any other part of predicate logic, such as the quantifiers and the rules for manipulating them, in this particular regard of what is considered as ’true’?
Actually, since I feel myself to be pretty much in agreement with the “view around here”, you need not bother answering the last question, because I don’t usually think in terms of “what is true” (Platonically) either. I do see the predicate logic as establishing “basic rules of the game” that we mathematicians play – the substrate as you all have been calling it. What I am attempting to argue, apparently without a lot of success, is that equality can be seen as on a par with the usual logical quantifiers, if we take seriously some of Lawvere’s insights.
I can see that I am confusing people with “for free”, so let me try again. Take the first-order hyperdoctrine (leaving open for the moment whether we mean with or without equality) generated by some language. Certain maps in the category of terms come “for free”, namely those which exist by virtue of the product structure. In other words, even if there were no function symbols or relation symbols in the language, we would still have product projections and diagonals and so on – the maps that belong to the free category with finite products generated by the set of sorts – there as maps in the base category of terms.
And from here on, I would argue as I did before, that diagonal maps and projection maps, the maps that are “already there” from the beginning, be accorded “equal rights” and treated uniformly (with Frobenius, Beck–Chevalley, etc.). That leads to “equality” being included under the ERA (ha ha ha), as $\exists_{\Delta}(\top)$.
Well, we should be open to having any class of display maps, not just product projections. (Or if the display structure is not well-rooted, maybe not even product projections.) One possibility, of course, is to have all maps be display maps; this corresponds to having equality. But there are other options.
Hm, you mean all maps in the base category of terms? If that’s what you meant, then I think I agree, and that’s a nice theorem: that once you include diagonal maps as display maps (meaning, I guess, those maps you can quantify along), then all the maps in the base category get included. (At least, I think I remember once convincing myself that was true; I should go through the argument again.) Of course, you only get other maps by moving past the “logical substrate”, by adjoining non-logical function symbols for example.
But yes, of course there are plenty of options. You and Mike have apparently chosen one option over the others as what gets to be counted as the logical substrate (of material set-theoretic foundations). I am not understanding why that is the canonical choice; as I’ve been arguing, I see that as biased toward projection maps, and I don’t see what arguments you guys are giving for drawing the line there. It’s a traditional choice, sure. (I think first-order logic with equality is another traditional choice.)
But you don’t have to take just my word for it. David Corfield argued here that if logic is seen in the light of invariant theory, a la Mautner (who had been reading Tarski’s “What are Logical Notions?”), then equality is a logical notion. I think I could also dig up other suitable references, if pressed.
You and Mike have apparently chosen one option over the others
Well, I want to say that there is no absolute notion of logical substrate, that none of the steps in the progression that I listed in #48 deserves to be singled out. But I concede to Mike that the break from (1) to (2) is greater than 2->3 and 3->4 (and maybe greater than 4->5 and 5->6, and in any case those can be broken down into simpler steps), and I also don’t see how to break 1->2 down into much smaller steps either. So that makes (2) special in a way. (Since (2)–(6) are first-order, while (1) is zeroth-order, (2) is special for being the most basic first-order step.)
But I really would take ‘logical substrate’ as a relative term. Mike should argue for why (2) is definitely better, if he wants. I only want to argue that it’s also a nice place.
(By the way, when I wrote #48, I thought that Mike meant in #47 to advocate (4) as the logical substrate of ZFC, but he meant (2), as his #49 makes clear.)
It’s a traditional choice, sure. (I think first-order logic with equality is another traditional choice.)
I’ve seen it with equality much more often; I’d call that (with classical logic) the traditional choice. I made sure to put (2) in my list largely just to make the point that it is also an option.
I’ve seen it with equality much more often; I’d call that (with classical logic) the traditional choice. I made sure to put (2) in my list largely just to make the point that it is also an option.
That does seem reasonable to me, Toby.
I think we can probably agree on some observed facts: that some very respectable people choose to include equality as part of the background logic, some very respectable people choose not to include equality as part of the background logic, and some people are (probably) agnostic on the matter, choosing to include it sometimes but not others, depending on what suits their purposes.
If we can at least agree on those facts, then we could also go further and accept 2 as at least belonging to the logical substrate of material set-theoretic foundations. It seems we all agree on that. I would go further and say that in view of the facts outlined in the previous paragraph, whether to include equality in the logic is widely seen to be an option. (If I may say so, an option which is different in character than say adding a membership predicate, which everyone agrees nowadays is non-logical.) In other words, I won’t insist that logic include equality, provided that others don’t insist that it doesn’t belong. If we can agree on that too, then we’re all set. (I’m saying this in view of potential nLab entries down the line, including possibly my own pending contribution, the predicate logic article sitting incomplete on my web.)
Sorry to make so much hay about this, but I was getting worried with Mike’s #53 that some sort of official conclusions were being drawn, ones I was not prepared to assent to without an argument.
Todd and Toby, I’m happy with the relativity of the logical substrate. At this point I’m really regretting posting #40; I feel like we are all looking at a big Venn diagram whose circles are drawn with extremely fuzzy boundaries, and arguing over whether some microscopic specks are “on one side or the other”. (e.g. I misunderstood #57 to be invoking a semantic notion when I wanted to talk purely about syntax, but #63 set me straight. Sorry!) Maybe the only conclusion to be drawn is that it is not at all as clear as I thought it was when I wrote #40 where one can draw the line! Or even if the line is a helpful one to draw at all. Which is, I guess, kind of what I thought originally, so that’s reassuring.
Urs and Ulrik, I must have a different understanding of “natural deduction” than you. It seems that Wikipedia possibly agrees with you, but I don’t quite understand what it means for you.
It seems to me that natural deduction and sequent calculus are just different ways of formalizing the meta-logic. Let me give an example. In type theory with products, we have that

x : A × B ⊢ ⟨π₂(x), π₁(x)⟩ : B × A

This is a meta-statement (a judgment) which we read as “in the context of a variable x of type A × B, the term ⟨π₂(x), π₁(x)⟩ has type B × A.”

To derive this informally, I might write: Since x : A × B, we have π₁(x) : A, and also π₂(x) : B. But now putting these together, we have ⟨π₂(x), π₁(x)⟩ : B × A.
To derive this using natural deduction, I would write
And to derive it using sequent calculus, I would write
In each case I would say that the judgment and derivation are the same, merely expressed formally differently. Just as the theorems of group theory and their proofs are “the same”, whether we formalize group theory in type theory or set theory or first-order predicate logic.
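For concreteness, a judgment of this kind has an obvious shadow in an ordinary programming language. Here is a Python sketch (assuming the example really is the usual “swap” term for products; the annotations play the role of the typing judgment, and `x[0]`, `x[1]` stand in for the projections π₁, π₂):

```python
from typing import TypeVar

A = TypeVar("A")
B = TypeVar("B")

def swap(x: tuple[A, B]) -> tuple[B, A]:
    """The term <pi2(x), pi1(x)>: pair the projections in the opposite order."""
    return (x[1], x[0])

print(swap((1, "two")))  # -> ('two', 1)
```

Of course the Python type checker verifies the judgment rather differently from either a natural deduction or a sequent calculus derivation, which is exactly the point: the judgment is one thing, its derivations another.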
Similarly, my understanding is that a “logical framework” is just a type theory that we use as the meta-logic for another type theory. (Although the type theories used for this purpose tend to be of a particular fairly weak sort.)
Now it sounds to me like you are using “natural deduction” to refer to “meta-logic” more generally — perhaps together with some intention to consider only FIEC-type (formation/introduction/elimination/computation) axioms/rules? Is that right? If so, is that really universal? It’s very different from the impression I’ve gotten of what “natural deduction” means.
Mike, did you read #73? There I guessed that your “sequent calculus” is really natural deduction with the context marked at each node, and it seems my guess was correct. Of course these two presentations of natural deduction derivations are isomorphic, and there’s no reason to differentiate them.
You’re right about logical frameworks à la LF (a weak type theory used to encode another logic via judgments-as-types – weak means weak enough to make HOAS work).
I’d like to say that “natural deduction” describes any logic or type theory that works via pairs of introduction/elimination rules; “Hilbert style” describes a logic where modus ponens is the only rule, while “sequent calculus” describes any logic or type theory (say the λμ-calculus) that works via pairs of left and right rules.
Then I gave the example of intuitionistic implication, for which the sequent calculus left rule,

Γ ⊢ A   Γ, B ⊢ C
─────────────────
Γ, A → B ⊢ C

is derivable in the natural deduction presentation, whereas the natural deduction elimination rule,

Γ ⊢ A → B   Γ ⊢ A
─────────────────
Γ ⊢ B

is only admissible in the sequent calculus.
I’m not sure what you mean by “If so, is that really universal?” regarding natural deduction; do you have a formal statement in mind? I’d just say it refers to anything that works via FIEC rules, but I guess there are, or at least could be, mixed systems with some rules structured via FIEC, and some rules of some other kind.
Mike,
I don’t want to make any claims about what natural deduction is. I am just looking for a word:
What is natural deduction/type theory when you remove all the type constructors?
Is there a word for what remains? Because what remains is not nothing. It is still a rule-set for what constitutes a type definition, what constitutes a deduction rule; a compiler that reads in type specification code (in the form of these 4-step rules) and spits out the corresponding type theory.
I suppose that in Coq, Agda, etc. there is – or could be – a library which contains all the type formation/introduction/elimination rules. (?) If I remove that library, what is a name for what is left? It’s no longer a proof assistant based on type theory, for it knows not of a single type nor of a single deduction rule. But it’s also not trivial: it knows what admissible type specifications and deduction rules would be, were you to give it one. What is the name of this?
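To make the question concrete, here is a toy sketch (Python; every name and notation is invented for illustration) of such a two-layer setup: an engine that knows no types and no rules of its own, but checks derivations against whatever rules it is fed. A real logical framework would handle rule schemas with metavariables; this toy version only matches closed rule instances.

```python
# Layer 1: a generic engine. It knows nothing about products, functions,
# etc. -- only how to check that a derivation applies the given rules.
# A rule maps a name to (premise judgments, conclusion judgment);
# judgments are opaque strings here.
RULES = {}

def add_rule(name, premises, conclusion):
    RULES[name] = (premises, conclusion)

def check(deriv):
    """A derivation is (rule name, list of subderivations); return the
    judgment it proves, or raise if some rule is misapplied."""
    name, subs = deriv
    premises, conclusion = RULES[name]
    proved = [check(s) for s in subs]
    if proved != premises:
        raise ValueError(f"rule {name} applied to wrong premises")
    return conclusion

# Layer 2: "type specification code" read into the engine -- here,
# closed instances of product rules, enough for one example derivation.
add_rule("ax",   [], "x:A*B |- x : A*B")
add_rule("fst",  ["x:A*B |- x : A*B"], "x:A*B |- p1(x) : A")
add_rule("snd",  ["x:A*B |- x : A*B"], "x:A*B |- p2(x) : B")
add_rule("pair", ["x:A*B |- p2(x) : B", "x:A*B |- p1(x) : A"],
                 "x:A*B |- (p2(x), p1(x)) : B*A")

deriv = ("pair", [("snd", [("ax", [])]),
                  ("fst", [("ax", [])])])
print(check(deriv))  # -> x:A*B |- (p2(x), p1(x)) : B*A
```

The point of the sketch is only the layering: `check` is the “what is left when the library is removed”, and the `add_rule` calls are the library.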
Maybe the word that I am after is calculus of constructions? Not sure yet.
Give me a hint on how to complete this sentence: “The calculus of constructions is to type theory as … is to …”. It is the basis of / it is a tool in / it is a user interface for type theory? Something like this.
I like on the Wikipedia page the line where it says:
The calculus of constructions has very few basic operators: the only logical operator for forming propositions is ∀.
That’s getting in the direction that I am aiming at: I want to know what we call type theory where we remove the operators and just remember what the rules are for introducing them.
Ulrik, if I were to “write natural deduction with the context marked at each node”, then I would have written my example as
Note the difference to the example that I called sequent calculus. Are you saying that what I called sequent calculus is not actually sequent calculus? If so, how does it differ from sequent calculus?
I’d like to say that “natural deduction” describes any logic or type theory that works via pairs of introduction/elimination rules; “Hilbert style” describes a logic where modus ponens is the only rule, while “sequent calculus” describes any logic or type theory (say the λμ-calculus) that works via pairs of left and right rules.
Sure. But those are still all just three methods of presenting the same logical theories — in our case, the meta-logic of a type theory. (And left and right rules are basically the same as elimination and introduction rules, respectively.)
for which the sequent calculus left rule … is derivable in the natural deduction presentation, whereas the natural deduction elimination rule … is only admissible in the sequent calculus.
So? Of course different presentations of the same thing will have different formal properties. What exactly is this example supposed to show?
I’m not sure what you mean by “If so, is that really universal?” regarding natural deduction
What I meant was “are you sure everyone uses the word ’natural deduction’ the way you are using it?” Now I’m thinking that actually you are using it more or less the way I wanted to use it; but I just don’t understand your statement that we can’t present type theory using sequent calculus just as well as using natural deduction.
a rule-set for what constitutes a type definition, what constitutes a deduction rule; a compiler that reads in type specification code (in the form of these 4-step rules) and spits out the corresponding type theory.
Is that ’rule-set’ something that can be made precise?
Is that ’rule-set’ something that can be made precise?
Are you implying that there is no precise notion of what a type formation rule, a term introduction rule, a term elimination rule and a corresponding computation rule are to be like?
I don’t know of one.
I see. Wasn’t aware of that.
But, please, allow me to continue asking my question:
even if there is no rule for what these rules are to be like, I imagine there is a computer program that knows how to read in a bunch of such rules and then do natural deduction on the basis of these. What would such a computer program be called?
Or put differently: I expect that the Coq programmers don’t hard-wire the rules for dependent product formation/introduction/elimination into the machine language. They will have them read in from some file. Just as the type formation rules corresponding to inductive types are dynamically generated and then “read into the system” somehow. No?
I am slightly puzzled that what feels to me like an obvious distinction between layers of the symbol manipulation takes so much effort to communicate and ask about. But as long as you have energy to try to reply, I’ll continue asking. :-)
Oh, by the way, re #85: good! I’m relieved.
I’m enjoying following the other threads in this discussion as well.
Urs, I believe you are wrong about Coq. I haven’t read its code, but as far as I know, it is not modular in that way. There are more modular proof assistants, such as Isabelle and Twelf, which are basically an “outer” proof assistant that implements some metalogic/logical-framework and in which you can describe a type theory that you want to study. But I gather that it would be very difficult for a modular system to provide the same sort of tactic language and support for serious large-scale development that Coq does.
I see. Do you happen to know what these “outer” proof assistants are called? Is there a name for the theories they are based on? As in: “A proof assistant is to type theory as an outer proof assistant is to … theory.”?
Are you saying that what I called sequent calculus is not actually sequent calculus?
No, you’re quite right, I must have glossed over your proof; in particular, the rule
is indeed a sequent calculus left rule for product.
What exactly is this example supposed to show?
First of all, my comment that implication elimination is only admissible holds only for sequent calculus without cut; with cut it is derivable.
Now, the example was supposed to show that the systems are not equivalent. Let me expand on that, keeping to the example of the implicational fragment of intuitionistic logic. Consider again the left rule, with the term assignment like this:
Fact: with this and the other obvious sequent calculus typings, there’s a different class of typeable λ-terms! For example:
is not typeable in the sequent calculus. However, we get the same terms in normal form. See Sørensen and Urzyczyn’s Lectures on the Curry-Howard Isomorphism, Chapter 7 for details.
The point is, there’s no isomorphism between the sequent calculus and natural deduction typeable terms, and things get even more subtle when it comes to reductions. For instance, this intuitionistic sequent calculus proof,
can be reduced to two different normal proofs. That doesn’t happen in natural deduction.
Now, this discussion does bring forth another good point: We talk of typing using sequent calculus, so type theory is not just term typings for natural deduction! We have natural deduction typings, and we have sequent calculus typings, and now that I think of it, we have Hilbert style typings, too, namely for combinatory algebra.
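For instance, the standard correspondence behind Hilbert-style typings: the two Hilbert axioms for implication reappear as the types of the combinators K and S, with modus ponens as function application. An untyped Python sketch, with the intended types recorded in comments:

```python
# K : A -> (B -> A)                              (Hilbert axiom 1)
k = lambda x: lambda y: x

# S : (A -> B -> C) -> ((A -> B) -> (A -> C))   (Hilbert axiom 2)
s = lambda f: lambda g: lambda x: f(x)(g(x))

# The classic Hilbert-style derivation of A -> A from the two axioms
# by modus ponens corresponds to the combinator term S K K:
i = s(k)(k)

print(i(42))  # -> 42
```

(Untyped Python of course checks none of this; the point is only that each application step in the combinator term mirrors one use of modus ponens in the Hilbert-style proof.)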
I still think natural deduction could be the term Urs was after: the framework of type theory where types are defined by FIEC rules. Did Martin-Löf use a different term? Or we could just coin one, say, the FIEC framework?
Anyway, it’s certainly not Calculus of Constructions, CoC, because that’s a particular type theory (with impredicative sorts Set and Prop, with a layer of predicative universes on top, but no inductive types). It’s not the Calculus of Inductive Constructions, CIC, either, as that’s another particular type theory (with predicative Set, and inductive types; basically the type theory underlying Coq).
I see. Do you happen to know what these “outer” proof assistants are called? Is there a name for the theories they are based on? As in: “A proof assistant is to type theory as an outer proof assistant is to … theory.”?
I think these theories are exactly logical frameworks. See this list of Specific Logical Frameworks and Implementations.
I also think, with Mike, that Coq doesn’t factor as a general framework for type theory with a module for CIC, but I could be wrong.
I think these theories are exactly logical frameworks.
Hurray! Finally the word that I have been looking for for almost 50 comments now! ;-) Thanks.
So it’s not logical substrate but logical framework? Okay.
Do you happen to have a link to a page about the idea of logical frameworks rather than that list of logical frameworks which you provided? (Don’t worry if not. I can try to Google myself…)