After a good long time, I am getting back to the exposition of predicate logic that I had promised Urs months ago. (The first version, on which I had spent a fair amount of time, got somehow deleted, and it is only recently that I mustered up another head of steam to start up again.)
For now I’m working on it in my personal lab, but perhaps I can say a little here of what my plans are. Very often, if one picks up a textbook which goes into logic, there will be a hurried recitation of the basic vocabulary of a first-order theory, involving words like “terms”, “well-formed formulas”, etc., which doesn’t give much of an overall feeling for a first-order theory as a mathematical structure. My point of view will be to resurrect the notion of (Boolean) hyperdoctrine, giving first a detailed description of the “free Boolean hyperdoctrine generated from the signature” of a theory, and then taking a quotient Boolean hyperdoctrine by dividing by a “hyperdoctrine filter generated from the axioms” of the theory. This hyperdoctrine I regard as the essence of a first-order theory as an algebraic structure. (Other algebraic set-ups are of course possible and worth considering.)
The free Boolean hyperdoctrine generated from a signature (consisting of a collection of sorts, a collection of function symbols, and a collection of relation symbols) is already interesting to contemplate. The way I see it, it will go like this. First, one is to construct “the free cartesian category, i.e., category with finite cartesian products, generated from the data of sorts and function symbols”. I think one is within one’s rights to call this the category of terms. Its objects are product types generated from the sorts, and its morphisms are terms generated from the function symbols, in the doctrine of finite product categories. Then, the free Boolean hyperdoctrine I’m after would be a contravariant functor from this category of terms to Boolean algebras,
taking each product type to the Boolean algebra of well-formed formulas of that type, generated by the set of relation symbols. The idea is that this functor encapsulates term substitutions in formulas, and for each term between types, the induced term substitution has a left adjoint (existential quantification along that term).
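Spelled out schematically, in notation of my own choosing, the functor has the usual Boolean-hyperdoctrine shape:

```latex
% T = the category of terms; P sends each product type
% to its Boolean algebra of well-formed formulas
P \colon T^{\mathrm{op}} \longrightarrow \mathbf{BoolAlg}
% for each term f : X -> Y, substitution P(f) : P(Y) -> P(X)
% has a left adjoint, existential quantification along f:
\exists_f \dashv P(f), \qquad \exists_f \colon P(X) \longrightarrow P(Y)
```

The unit and counit of this adjunction package the usual introduction and elimination rules for the existential quantifier.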
This in itself is an interesting decision, and is possibly up for debate. A different decision is to take the base category to be not the category of terms, but rather the free category with finite products generated from the set of sorts, which is a lot easier to understand and work with (it has a simple concrete description). In some sense this is the more usual decision, because the standard existential quantifiers are left adjoints to substitutions induced by mere projection maps between types. (Already existential quantification along diagonal maps probably feels a little outré to your garden-variety logician, until you tell him that this is where equality predicates really come from.) Existential quantification along general terms would no doubt feel even more exotic to that logician, until you tell him it’s really just an instance of relational composition.
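The diagonal case is worth spelling out, since it is where equality predicates come from (this is Lawvere’s observation, in notation of my choosing): equality is the existential quantification of truth along the diagonal.

```latex
% Delta : X -> X x X the diagonal term; \top = the true formula in P(X)
(x =_X y) \;:=\; \exists_{\Delta}(\top) \;\in\; P(X \times X)
% the adjunction \exists_\Delta -| \Delta^* then says:
% phi(x,y) follows from x = y  iff  phi(x,x) holds
\exists_{\Delta}(\top) \leq \varphi
\quad\Longleftrightarrow\quad
\top \leq \Delta^{*}(\varphi)
```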
Anyway, whichever base category one winds up choosing, I definitely want the Beck–Chevalley conditions to be imposed. This leads to the interesting problem of gaining a detailed understanding of what pullbacks look like in the category of terms. I expect they have a simple description in the end, but it’s not something I understand well at the moment. Pullbacks in the free category with products on the sorts are by contrast much easier to understand, and in that case the Beck–Chevalley conditions can be shown to follow just from the Frobenius law that I’ve been discussing elsewhere.
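For reference, the Beck–Chevalley condition says that for each pullback square in the base category, quantification commutes with substitution (again in notation of my choosing, writing f* for substitution along f):

```latex
% given a pullback square in the base:
%   W --q--> X
%   |        |
%   p        f
%   v        v
%   Z --g--> Y
% Beck-Chevalley demands that the two ways around agree:
g^{*} \circ \exists_{f} \;=\; \exists_{p} \circ q^{*}
\;\colon\; P(X) \longrightarrow P(Z)
```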
Does anyone know whether this development I’m pursuing has been treated in the literature somewhere?
the exposition of predicate logic that I had promised Urs months ago.
Thanks, Todd! Much appreciated.
I can’t look at it right now. But I will certainly read with interest a little later.
Seely gives quite a detailed construction of a hyperdoctrine from a first-order theory in Hyperdoctrines, natural deduction and the Beck condition, Zeitschr. für math. Logik und Grundlagen der Math. 29, 1983 (available from his web page). In particular, the base category of such a hyperdoctrine is the cartesian category of terms.
He doesn’t directly characterize pullbacks in the term category, but he does show (section 8) that a hyperdoctrine that satisfies Beck–Chevalley for the 3 basic product-absolute pullback squares satisfies B.–C. for an arbitrary pullback square if and only if the theory associated to that hyperdoctrine proves that the square is a pullback, in a sense that’s obvious when you see it.
I hope that’s helpful.
Yes, this is very helpful, Finn – thank you very much. I didn’t know about this article. It looks tailor-made for the type of thing I wanted to do.
Right now I am mainly interested in the free Boolean hyperdoctrine generated by a signature (no extra axioms). My suspicion is that all the pullback squares in the term category are essentially product-based, and also that the maps of the associated cartesian bicategory correspond exactly to morphisms in the term category. Do you happen to know the status of either of these questions (presuming they make sense to you)?
(Silly throw-away observation: every time I see Seely’s sign-off initials “rags”, I can’t help but think of the little robot dog in Woody Allen’s Sleeper. “Woof, woof. Hello, I’m Rags.”)
I’m afraid I don’t know how to characterize pullback squares of terms. I haven’t seen it discussed anywhere in greater generality than that of what you’ve written on the Lab.
The maps you mention will be the provably functional relations, which in general needn’t be given by terms, although of course all terms are provably functional. But I suppose if there are no axioms then presumably there’s no way for a relation to be functional unless it’s the ’graph’ of a term. Not that I know how to go about proving that. (Pavlović’s paper Maps II: Chasing diagrams in categorical proof theory, Logic Journal of the IGPL 4, 1996, here, is all about maps in fibrations/hyperdoctrines, though sadly he doesn’t talk about syntactic hyperdoctrines.)
Sorry I can’t be of more help. I’d definitely be interested in the answers to both of your questions, so do please keep the Forum updated.
(I’ve never seen Sleeper, so I can’t comment robot-dog-wise.)
(The first version, on which I had spent a fair amount of time, got somehow deleted, and it is only recently that I mustered up another head of steam to start up again.)
Wait, wait, how can that have happened!? Nothing ever written on the nLab should ever have been deleted except: private webs that have been shut down, and edits overwriting a previous edit with the same signature within 30 minutes with no intervening edit. Of these two ways, I believe that the former has happened just once.
Did you write this on the main nLab or on a personal web? Do you remember the page name? Might it have been combined with another page and moved to a page with ‘> history’ in its name?
It (probably) happened because I had the page being created continuously open on my laptop, didn’t save a first edit, and then something happened so that I had to shut down and then wasn’t able to retrieve my tabs. For example, it could have been a computer malfunction that I had to get fixed by my local guy.
Edit: It was highly irritating, yes, and perhaps an instance of incompetence on my part, but I should take heart from what I’ve heard said about Milnor. The way he would write a book was to write page 1, crumple it up and throw it away, write pages 1 and 2 and throw them away, write pages 1, 2, and 3, etc. Surely that was an exaggeration, but whatever the true story, it paid off: his books are just fantastic to read.
I have added a bunch of material to my burgeoning notes on predicate logic. A very brief condensed summary of the newish material is that
The category of product types generated by a set of sorts is given an explicit description;
The category of terms generated by the sorts and a set of sorted function symbols is described via (more or less) traditional term syntax, but also, somewhat more excitingly, as a composite arising from a distributive law between the free monoidal category generated by the function symbols and the free category with products generated by the sorts.
I hope to analyze/characterize pullbacks in the category of terms (cf. the commentary above) by taking advantage of this decomposition, with a hoped-for reduction to pullbacks in the free category with products.
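If the distributive-law description pans out, the reduction strategy would rest on a factorization of term-category morphisms into a structural part followed by an application of function symbols (this is the hoped-for normal form, sketched in my own notation, not something established above):

```latex
% conjectural factorization of a morphism t : X -> Y in the term category:
%   d = a map in the free category with products on the sorts
%       (duplication/deletion/permutation of variables)
%   m = a map in the free monoidal category on the function symbols
t \;=\; m \circ d
```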
All comments welcome.
@#8:
I see. The spirit of Milnor may be guiding you, but also try (Firefox and) the Firefox add-on Lazarus.
Nice article, BTW.
Hi Todd. You write
There are four layers of data and constructions we deal with, each layer depending on the previous.
Is there a category theoretic way of seeing why there should be precisely these four layers, perhaps as part of the property, structure, stuff story?
David, I’d like to think about this further (or keep this question in mind) – I’m not quite sure. My immediate reaction though is that while sorts refer to ’stuff’ and both functions and relations refer to ’structure’, there is a qualitative difference between functions and relations which causes me to order them like this. It reminds me a bit of framed bicategories, which are varieties of double categories: as Mike likes to remind us, it is useful to consider 2-categorical structures like that of categories, functors, and profunctors not by attempting to characterize functors as profunctors satisfying a property (where we run into the problem that equivalence in the bicategory of profunctors is mere Morita equivalence), but to collect functors together as a separate structure in a horizontal direction, and profunctors as a structure in a vertical direction.
There are probably a number of formalisms to bear in mind. Stuff-structure-property was originally a way to consider a certain factorization in the world of categories or of groupoids; a question is what sort of analogous factorization would be sensible for fibered categories.
Interesting. If I’m reading this page correctly, the connection to proarrow equipments looks right.
Just getting around to glancing at this; I think it is great! I am very happy to see a description of this stuff written out carefully in terms of multicategories, and I look forward to reading more of it.
I was a little confused by your “eta-expanded” notation and the reference to the variable declarations therein as a “preamble”. If I understand correctly, you’re just briefly summarizing this syntax in order to discard it in favor of the operadic approach, which is okay, but I think your summary may be confusing, at least to people who are used to thinking about this version. At first I thought that your notation was trying to say the same thing as the traditional term-in-context, in which the variable declarations are called a “context”. Then I noticed the extra symbol you had written in front of it, so maybe it’s more like you’re actually asserting traditional eta-equivalence, except that in order to state that one needs the λ term-former, whose introduction rules require the notion of term-in-context. Plus you don’t have function-types, so it’s not clear that traditional eta-equivalence is even meaningful.
I realize you’re just trying to say “let’s not do this”, but maybe it’s worth being a little more precise about what we’re not doing? (-:
Also, I agree that types are stuff, functions and relations are structure, and axioms are properties. Is there an abstract reason why there are two kinds of structure but only one kind of stuff and one kind of properties? Not sure, from this perspective.
But from a (higher-)type-theory perspective, you have n-types for all n, each of which contain terms. 0-types are types, which are stuff, and terms in 0-types are like functions, which are structure. (-1)-types are propositions/relations, which are structure, and terms in (-1)-types are axioms/proofs, which are properties. Similarly, 1-types are eka-stuff, and terms in 1-types are stuff. So the reason we have a 1-2-1 pattern is that traditional logic deals only with 0- and (-1)-types, so we have two 1-1 patterns overlapping with a shift.
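In tabular form, the overlapping 1-1 patterns look like this:

```latex
\begin{array}{c|l|l}
n & \text{$n$-types are} & \text{terms in $n$-types are} \\ \hline
1  & \text{eka-stuff} & \text{stuff}      \\
0  & \text{stuff}     & \text{structure}  \\
-1 & \text{structure} & \text{properties}
\end{array}
```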
The question of a corresponding factorization system for fibered categories is interesting! I bet there is one, at least for fibered posets.
Mike, can you remind me why types have to be 0-types again? Is it just because you’re trying to match homotopy theory, or is there more to it?
(I agree with your analysis, which makes a lot of sense.)
@15: Thanks, Mike! I’m all for being less confusing, so perhaps I’ll rewrite that section.
I can see why that could be confusing; it was an uncomfortable marriage between formal and informal. What I really meant was “for each function symbol, there is a term which we write as a λ-expression, and which we think of as standing for the function”. The equation was a sloppy way of putting it, I guess, since in the formal set-up I don’t mean to consider the function symbol itself as a term.
The λ-expressions were intended as another way of expressing terms-in-context. What I don’t want is bare notation for a term (and left at that), so that its variables are understood to be exactly the variables which appear in it. The free variables of a term may include some extra ones that do not appear inside it, and these should be declared in some way. That’s all I meant to get across.
I’m taking more time with this project than I perhaps should. The free Boolean hyperdoctrine generated by a signature, which is what I’m mainly aiming at before I get into calculus of deductions, is actually a fairly complicated beast, and I want to express it as “prettily” as I can, seeing how far one can go with a string diagram approach. I am toying with the idea of showing how Peirce essentially did it (which was worked on by me and Gerry Brady), using “lines of separation” for negation. But it’s a bit up in the air right now.
Toby: Yes, to match homotopy theory, and also higher category theory. 0-types are sets which are homotopy 0-types and also 0-groupoids.
Todd: “there is a term which we write as (λ-expression)” sounds all wrong to me. I think the term is the bare expression; the context is not part of the term syntactically, although it makes no sense to consider a term apart from its context. I don’t think it’s a good idea to denote contexts with λ, since in a more expressive type theory that includes function types, there is a difference between the term t in context x:A (which is usually written x:A ⊢ t:B) and the term λx.t in the empty context. Categorically, the first denotes a morphism from A to B, while the second denotes the corresponding global element of the exponential object B^A.
Mike: I got it. I recognize all the points and distinctions you are making.
As I said, I am all for being less confusing. Why don’t you give me an opportunity to rewrite the section, and then I’ll post a note here when I have done so, okay?
Okay, I have now rewritten the section on term syntax, with above comments in mind (link). Hopefully it’s an improvement or anyway less confusing. All comments welcome.
Sorry if I came across as too critical. As I said, I think this is a great exposition and I look forward to reading the rest of it. The new version of the section on term syntax looks perfect to me!
Thanks, Mike. Sorry for bristling. :-)
I am gearing up for the next installment; please stay tuned! Some of the topics to come seem fascinating to me.