
Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 6th 2011

    After a good long time, I am getting back to the exposition of predicate logic that I had promised Urs months ago. (The first version, on which I had spent a fair amount of time, got somehow deleted, and it is only recently that I mustered up another head of steam to start up again.)

    For now I’m working on it in my personal lab, but perhaps I can say a little here of what my plans are. Very often, if one picks up a textbook which goes into logic, there will be some hurried recitation of some of the basic vocabulary of a first-order theory, involving words like “terms”, “well-formed formulas”, etc., which doesn’t give much good overall feeling for a first-order theory as a mathematical structure. My point of view will be to resurrect the notion of (Boolean) hyperdoctrine, giving first a detailed description of the “free Boolean hyperdoctrine generated from the signature” of a theory, and then taking a quotient Boolean hyperdoctrine by dividing by a “hyperdoctrine filter generated from the axioms” of a theory. This hyperdoctrine I regard as the essence of a first-order theory as an algebraic structure. (Other algebraic set-ups are of course possible and worth considering.)

    The free Boolean hyperdoctrine generated from a signature (consisting of a collection of sorts $S$, a collection of function symbols $F$, and a collection of relation symbols $R$) is already interesting to contemplate. The way I see it, it will go like this. First, one is to construct “the free cartesian category, i.e., category with finite cartesian products, generated from the data of sorts and function symbols”. I think one is within rights to call this the category of terms, $Term(S, F)$. The objects of $Term(S, F)$ are product types generated from $S$, and the morphisms are terms generated from $F$ in the doctrine of finite product categories. Then, the free Boolean hyperdoctrine I’m after would be a functor of the form

    $Wff_R: Term(S, F)^{op} \to Bool,$

    taking each product type to the Boolean algebra of well-formed formulas of that type, generated by the set of relation symbols $R$. The idea is that this functor encapsulates term substitutions in formulas, and for each term $f: T \to T'$ between types, the term substitution $f^\ast: Wff_R(T') \to Wff_R(T)$ has a left adjoint $\exists_f: Wff_R(T) \to Wff_R(T')$.
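    For readers who want something concrete to push around, here is a minimal sketch of the raw material of $Term(S, F)$: sorts, function symbols, and well-sorted terms. This is my own illustration in Python (the names `Sig` and `sort_of` are invented here, not nLab notation), not the free-category construction itself:

```python
# Minimal sketch (illustrative, not the nLab construction): terms of a
# multi-sorted signature, i.e. candidate morphisms of the term category
# Term(S, F). The names Sig, sort_of are ad hoc choices.

class Sig:
    def __init__(self, sorts, funcs):
        self.sorts = set(sorts)   # S: the sorts
        self.funcs = dict(funcs)  # F: name -> (tuple of argument sorts, result sort)

def sort_of(term, ctx, sig):
    """Infer the sort of a term in a context ctx: variable name -> sort."""
    kind, payload = term
    if kind == 'var':
        return ctx[payload]
    if kind == 'app':
        fname, args = payload
        arg_sorts, result = sig.funcs[fname]
        actual = tuple(sort_of(a, ctx, sig) for a in args)
        assert actual == tuple(arg_sorts), f"{fname}: expected {arg_sorts}, got {actual}"
        return result
    raise ValueError(kind)

sig = Sig({'N'}, {'zero': ((), 'N'), 'succ': (('N',), 'N'), 'plus': (('N', 'N'), 'N')})
ctx = {'x': 'N', 'y': 'N'}
# The term plus(succ(x), y) in the context x: N, y: N:
t = ('app', ('plus', [('app', ('succ', [('var', 'x')])), ('var', 'y')]))
print(sort_of(t, ctx, sig))  # N
```

    A morphism of $Term(S, F)$ from a product type to a single sort is then (an equivalence class of) such a well-sorted term in a declared context; morphisms into product types are tuples of these.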

    This in itself is an interesting decision, and is possibly up for debate. A different decision is to take the base category not to be $Term(S, F)$, but rather the free category with finite products $Prod(S)$ generated from the set of sorts, which is a lot easier to understand and work with, being equivalent to $(Fin/S)^{op}$. In some sense this is the more usual decision, because the standard existential quantifiers are left adjoints to substitutions induced by mere projection maps $S \times T \to T$ between types. (Already existential quantification along diagonal maps $T \to T \times T$ probably feels a little outré to your garden-variety logician, until you tell him that this is where equality predicates really come from.) Existential quantification along general terms $f: T \to T'$ would no doubt feel even more exotic to that logician, until you tell him it’s really just an instance of relational composition.
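    The quantifier-as-adjoint idea can be checked directly in the familiar subset semantics. The following is my own illustration (the names `image`, `preimage`, `subsets` are invented here), exercising the standard model rather than the free hyperdoctrine:

```python
# Hedged illustration (not part of the thread's construction): in subset
# semantics, predicates on a finite set form the Boolean algebra of subsets,
# substitution along f is preimage, and existential quantification is image.
# The adjunction ∃_f ⊣ f^* says:  ∃_f(phi) ⊆ psi   iff   phi ⊆ f^*(psi).
from itertools import chain, combinations

def image(f, phi):           # ∃_f: direct image
    return frozenset(f(x) for x in phi)

def preimage(f, dom, psi):   # f^*: substitution
    return frozenset(x for x in dom if f(x) in psi)

def subsets(s):
    s = list(s)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

X, Y = range(6), range(3)
f = lambda x: x % 3
# Verify the adjunction for every pair of predicates phi ⊆ X, psi ⊆ Y:
ok = all((image(f, phi) <= psi) == (phi <= preimage(f, X, psi))
         for phi in subsets(X) for psi in subsets(Y))
print(ok)  # True: image is left adjoint to preimage
```

    Equality predicates fit the same pattern: the image along a diagonal $\Delta: T \to T \times T$ sends a predicate $\phi$ on $T$ to $\{(t, t) : t \in \phi\}$, and in particular sends the top element to the equality predicate on $T \times T$.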

    Anyway, whichever base category one winds up choosing, I definitely want the Beck–Chevalley conditions to be imposed. This leads to the interesting problem of gaining a detailed understanding of what pullbacks look like in $Term(S, F)$. I expect they have a simple description in the end, but it’s not something I understand well at the moment. Pullbacks in $Prod(S) = (Fin/S)^{op}$ are by contrast much easier to understand, and for this case the Beck–Chevalley conditions can be shown to follow just from the Frobenius law $\delta \delta^\dagger = (\delta^\dagger \otimes 1)(1 \otimes \delta)$ that I’ve been discussing elsewhere.
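    What the Beck–Chevalley condition asserts can likewise be checked in subset semantics. This is my own sketch (all names invented here), verifying the condition for an ordinary pullback square of finite sets, which is the situation the conditions are meant to axiomatize:

```python
# Hedged illustration (mine, not the thread's construction): Beck–Chevalley
# in subset semantics. For the pullback
#   P = {(x, z) : f(x) = g(z)}  with projections p(x, z) = x, q(x, z) = z,
# substitution along f after quantification along g equals quantification
# along p after substitution along q:  f^* ∘ ∃_g  =  ∃_p ∘ q^*.
from itertools import chain, combinations

def image(h, phi):            # ∃ along h
    return frozenset(h(a) for a in phi)

def preimage(h, dom, psi):    # substitution along h
    return frozenset(a for a in dom if h(a) in psi)

def subsets(s):
    s = list(s)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

X, Y, Z = [0, 1, 2, 3], [0, 1], [0, 1, 2]
f = lambda x: x % 2
g = lambda z: z % 2
P = [(x, z) for x in X for z in Z if f(x) == g(z)]  # the pullback of f and g

bc = all(preimage(f, X, image(g, psi)) ==
         image(lambda pz: pz[0], preimage(lambda pz: pz[1], P, psi))
         for psi in subsets(Z))
print(bc)  # True: the two composites agree on every predicate on Z
```

    The content of imposing Beck–Chevalley on a hyperdoctrine is that this interchange of quantification and substitution is forced to hold for (the chosen class of) pullback squares in the base, whence the interest in knowing what those squares actually look like in $Term(S, F)$.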

    Does anyone know whether this development I’m pursuing has been treated in the literature somewhere?

    • CommentRowNumber2.
    • CommentAuthorUrs
    • CommentTimeAug 6th 2011
    • (edited Aug 6th 2011)

    the exposition of predicate logic that I had promised Urs months ago.

    Thanks, Todd! Much appreciated.

    I can’t look at it right now. But I will certainly read with interest a little later.

    • CommentRowNumber3.
    • CommentAuthorFinnLawler
    • CommentTimeAug 6th 2011

    Seely gives quite a detailed construction of a hyperdoctrine from a first-order theory in Hyperdoctrines, natural deduction and the Beck condition, Zeitschr. für math. Logik und Grundlagen der Math. 29, 1983 (available from his web page). In particular, the base category of such a hyperdoctrine is the cartesian category of terms.

    He doesn’t directly characterize pullbacks in the term category, but he does show (section 8) that a hyperdoctrine that satisfies Beck–Chevalley for the 3 basic product-absolute pullback squares satisfies B.–C. for an arbitrary pullback square if and only if the theory associated to that hyperdoctrine proves that the square is a pullback, in a sense that’s obvious when you see it.

    I hope that’s helpful.

    • CommentRowNumber4.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 6th 2011

    Yes, this is very helpful, Finn – thank you very much. I didn’t know about this article. It looks tailor-made for the type of thing I wanted to do.

    Right now I am mainly interested in the free Boolean hyperdoctrine generated by a signature (no extra axioms). My suspicion is that all the pullback squares in the term category are essentially product-based, and also that the maps of the associated cartesian bicategory correspond exactly to morphisms in the term category. Do you happen to know the status of either of these questions (presuming they make sense to you)?

    • CommentRowNumber5.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 6th 2011

    (Silly throw-away observation: every time I see Seely’s sign-off initialization “rags”, I can’t help but think of the little robot dog in Woody Allen’s Sleeper. “Woof, woof. Hello, I’m rags.”)

    • CommentRowNumber6.
    • CommentAuthorFinnLawler
    • CommentTimeAug 6th 2011

    I’m afraid I don’t know how to characterize pullback squares of terms. I haven’t seen it discussed anywhere in greater generality than that of what you’ve written on the Lab.

    The maps you mention will be the provably functional relations, which in general needn’t be given by terms, although of course all terms are provably functional. But I suppose if there are no axioms then presumably there’s no way for a relation to be functional unless it’s the ’graph’ of a term. Not that I know how to go about proving that. (Pavlović’s paper Maps II: Chasing diagrams in categorical proof theory, Logic Journal of the IGPL 4, 1996, here, is all about maps in fibrations/hyperdoctrines, though sadly he doesn’t talk about syntactic hyperdoctrines.)

    Sorry I can’t be of more help. I’d definitely be interested in the answers to both of your questions, so do please keep the Forum updated.

    (I’ve never seen Sleeper, so I can’t comment robot-dog-wise.)

    • CommentRowNumber7.
    • CommentAuthorTobyBartels
    • CommentTimeAug 7th 2011

    (The first version, on which I had spent a fair amount of time, got somehow deleted, and it is only recently that I mustered up another head of steam to start up again.)

    Wait, wait, how can that have happened!? Nothing ever written on the nLab should ever have been deleted except: private webs that have been shut down, and edits overwriting a previous edit with the same signature within 30 minutes with no intervening edit. Of these two ways, I believe that the former has happened just once.

    Did you write this on the main nLab or on a personal web? Do you remember the page name? Might it have been combined with another page and moved to a page with ‘> history’ in its name?

    • CommentRowNumber8.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 7th 2011
    • (edited Aug 7th 2011)

    It (probably) happened because I had the page I was creating continuously open on my laptop, didn’t save a first edit, and then something happened that forced me to shut down, after which I wasn’t able to retrieve my tabs. For example, it could have been a computer malfunction that I had to get fixed by my local guy.

    Edit: It was highly irritating, yes, and perhaps an instance of incompetence on my part, but I should take heart from what I’ve heard said about Milnor. The way he would write a book was to write page 1, crumple it up and throw it away, write pages 1 and 2 and throw them away, write pages 1, 2, and 3, etc. Surely that was an exaggeration, but whatever the true story, it paid off: his books are just fantastic to read.

    • CommentRowNumber9.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 7th 2011
    • (edited Aug 7th 2011)

    I have added a bunch of material to my burgeoning notes on predicate logic. A very brief condensed summary of the newish material is that

    • The category of product types $Prod(S)$ generated by a set of sorts is described as $(Fin/S)^{op}$;

    • The category of terms $Term(S, F)$ generated by $S$ and a set of $S$-sorted function symbols is described via (more or less) traditional term syntax, but also somewhat more excitingly as $Pro(S, F) \circ Prod(S)$, where there is a distributive law between the free monoidal category $Pro(S, F)$ generated by $(S, F)$ and the free category with products $Prod(S)$ generated by $S$.

    I hope to analyze/characterize pullbacks in $Term(S, F)$ (cf. the commentary above) by taking advantage of the decomposition $Term(S, F) = Pro(S, F) \circ Prod(S)$, with a hoped-for reduction to pullbacks in $Prod(S)$.

    All comments welcome.

    • CommentRowNumber10.
    • CommentAuthorTobyBartels
    • CommentTimeAug 7th 2011

    @#8:

    I see. The spirit of Milnor may be guiding you, but also try (Firefox and) the Firefox add-on Lazarus.

    • CommentRowNumber11.
    • CommentAuthorTobyBartels
    • CommentTimeAug 7th 2011

    Nice article, BTW.

    • CommentRowNumber12.
    • CommentAuthorDavid_Corfield
    • CommentTimeAug 8th 2011

    Hi Todd. You write

    There are four layers of data and constructions we deal with, each layer depending on the previous.

    Is there a category theoretic way of seeing why there should be precisely these four layers, perhaps as part of the property, structure, stuff story?

    • CommentRowNumber13.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 8th 2011

    David, I’d like to think about this further (or keep this question in mind) – I’m not quite sure. My immediate reaction though is that while sorts refer to ’stuff’ and both functions and relations refer to ’structure’, there is a qualitative difference between functions and relations which causes me to order them like this. It reminds me a bit of framed bicategories, which are varieties of double categories: as Mike likes to remind us, it is useful to consider 2-categorical structures like $Prof$ not by attempting to characterize functors as profunctors satisfying a property (where we run into the problem that equivalence in $Prof$ is mere Morita equivalence), but to collect functors together as a separate structure in a horizontal direction, and profunctors as a structure in a vertical direction.

    There are probably a number of formalisms to bear in mind. Stuff-structure-property was originally a way to consider a certain factorization in $Cat$ or in $Gpd$ (groupoids); a question is what sort of analogous factorization would be sensible for fibered categories.

    • CommentRowNumber14.
    • CommentAuthorDavid_Corfield
    • CommentTimeAug 8th 2011

    Interesting. If I’m reading this page correctly, the connection to proarrow equipments looks right.

    • CommentRowNumber15.
    • CommentAuthorMike Shulman
    • CommentTimeAug 12th 2011

    Just getting around to glancing at this; I think it is great! I am very happy to see a description of this stuff written out carefully in terms of multicategories, and I look forward to reading more of it.

    I was a little confused by your “eta-expanded” notation

    $\lambda x_1\colon s_{i_1}, \dots, x_n\colon s_{i_n}.\; f(x_1,\dots,x_n)\colon s$

    and reference to the variable declarations therein as a “preamble”. If I understand correctly, you’re just briefly summarizing this syntax in order to discard it in favor of the operadic approach, which is okay, but I think your summary may be confusing, at least to people who are used to thinking about this version. At first I thought that your notation above was trying to say the same thing as the traditional term-in-context

    $x_1\colon s_{i_1}, \dots, x_n\colon s_{i_n} \;\vdash\; f(x_1,\dots,x_n)\colon s$

    in which the variable declarations are called a “context”. Then I noticed that you had written “$f =$” in front of it, so maybe it’s more like you’re actually asserting traditional eta-equivalence, except that in order to state that one needs the $\lambda$ term-former whose introduction rules require the notion of term-in-context. Plus you don’t have function-types, so it’s not clear that traditional eta-equivalence is even meaningful.

    I realize you’re just trying to say “let’s not do this”, but maybe it’s worth being a little more precise about what we’re not doing? (-:

    • CommentRowNumber16.
    • CommentAuthorMike Shulman
    • CommentTimeAug 12th 2011

    Also, I agree that types are stuff, functions and relations are structure, and axioms are properties. Is there an abstract reason why there are two kinds of structure but only one kind of stuff and one kind of properties? Not sure, from this perspective.

    But from a (higher-)type-theory perspective, you have n-types for all n, each of which contains terms. 0-types are types, which are stuff, and terms in 0-types are like functions, which are structure. (-1)-types are propositions/relations, which are structure, and terms in (-1)-types are axioms/proofs, which are properties. Similarly, 1-types are eka-stuff, and terms in 1-types are stuff. So the reason we have a 1-2-1 pattern is that traditional logic deals only with 0- and (-1)-types, so we have two 1-1 patterns overlapping with a shift.

    The question of a corresponding factorization system for fibered categories is interesting! I bet there is one, at least for fibered posets.

    • CommentRowNumber17.
    • CommentAuthorTobyBartels
    • CommentTimeAug 12th 2011

    Mike, can you remind me why types have to be 0-types again? Is it just because you’re trying to match homotopy theory, or is there more to it?

    (I agree with your analysis, which makes a lot of sense.)

    • CommentRowNumber18.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 12th 2011

    @15: Thanks, Mike! I’m all for being less confusing, so perhaps I’ll rewrite that section.

    I can see why $f = \lambda x_1, \ldots, x_n.\, f(x_1, \ldots, x_n)$ could be confusing; it was an uncomfortable marriage between formal and informal. What I really meant was “for each function symbol $f$, there is a term which we write as ($\lambda$-expression), and which we think of as standing for the function $f$”. The equation was a sloppy way of putting it, I guess, since in the formal set-up I don’t mean to consider the function symbol as a term.

    The $\lambda$-expressions were intended as another way of expressing terms-in-context. What I don’t want is notation like $t(x_1, \ldots, x_n)$ (and left at that), so that the variables of $t$ are understood to be exactly $x_1, \ldots, x_n$, the variables which appear in $t$. The free variables of $t$ may include some extra that do not appear inside $t$, and these should be declared in some way. That’s all I meant to get across.
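    The point about declaring variables that need not occur in the term can be made concrete. The following is my own small Python sketch (names invented here), showing that it is the declared context, not the set of occurring variables, that determines which morphism is being denoted:

```python
# Illustrative sketch: a "term-in-context" pairs a declared variable context
# with a term body, and the declaration may strictly include variables that
# never occur in the body. The same body in two different contexts stands for
# two different morphisms (the identity on A vs. a projection A×B -> A),
# which is why the declaration has to be made explicit.

def occurring_vars(term):
    """The variables that actually appear in the term body."""
    kind, payload = term
    if kind == 'var':
        return {payload}
    _fname, args = payload
    return set().union(*(occurring_vars(a) for a in args)) if args else set()

def well_formed(ctx, term):
    """A term-in-context is well formed when every occurring variable is declared."""
    return occurring_vars(term) <= set(ctx)

body = ('var', 'x')
print(well_formed(['x'], body))       # True:  x : A ⊢ x : A          (an identity)
print(well_formed(['x', 'y'], body))  # True:  x : A, y : B ⊢ x : A   (a projection)
print(well_formed(['y'], body))       # False: x is not declared
```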

    I’m taking more time with this project than I perhaps should. The free Boolean hyperdoctrine generated by a signature, which is what I’m mainly aiming at before I get into calculus of deductions, is actually a fairly complicated beast, and I want to express it as “prettily” as I can, seeing how far one can go with a string diagram approach. I am toying with the idea of showing how Peirce essentially did it (which was worked on by me and Gerry Brady), using “lines of separation” for negation. But it’s a bit up in the air right now.

    • CommentRowNumber19.
    • CommentAuthorMike Shulman
    • CommentTimeAug 13th 2011
    • (edited Aug 13th 2011)

    Toby: Yes, to match homotopy theory, and also higher category theory. 0-types are sets which are homotopy 0-types and also 0-groupoids.

    Todd: “there is a term which we write as ($\lambda$-expression)” sounds all wrong to me. I think the term is $f(x_1,\dots,x_n)$; the context is not part of the term syntactically, although it makes no sense to consider a term apart from its context. I don’t think it’s a good idea to denote contexts with $\lambda$, since in a more expressive type theory that includes function types, there is a difference between the term $f(x)\colon B$ in context $x\colon A$ (which is usually written as $x\colon A \;\vdash\; f(x)\colon B$) and the term $\lambda x. f(x)\colon A\to B$ in the empty context. Categorically, the first denotes a morphism from $A$ to $B$, while the second denotes the corresponding global element of the exponential object $B^A$.

    • CommentRowNumber20.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 13th 2011

    Mike: I got it. I recognize all the points and distinctions you are making.

    As I said, I am all for being less confusing. Why don’t you give me an opportunity to rewrite the section, and then I’ll post a note here when I have done so, okay?

    • CommentRowNumber21.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 13th 2011

    Okay, I have now rewritten the section on term syntax, with above comments in mind (link). Hopefully it’s an improvement or anyway less confusing. All comments welcome.

    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeAug 14th 2011

    Sorry if I came across as too critical. As I said, I think this is a great exposition and I look forward to reading the rest of it. The new version of the section on term syntax looks perfect to me!

    • CommentRowNumber23.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 14th 2011

    Thanks, Mike. Sorry for bristling. :-)

    I am gearing up for the next installment; please stay tuned! Some of the topics to come seem fascinating to me.