    • CommentRowNumber1.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 16th 2011
    • (edited Aug 16th 2011)

    I have added a bit more material to notes on predicate logic. The section I am currently working on (Pullbacks in Term(S, F)) is a technical section, clearing some underbrush so that one has a clear view of the “right” Beck-Chevalley conditions on a free hyperdoctrine.

    It’s also a somewhat speculative section, and if anyone has some reactions I’d be glad to hear them. Having defined the free cartesian category Term(S, F) generated by a functional signature, this will be the base category of the hyperdoctrine. The notes aim to describe the free Boolean hyperdoctrine

    Term(S, F)^{op} \to Bool

    and there is the question of which Beck-Chevalley conditions should be assumed. (The usual semantics will not be sound if we assume the Beck condition for all pullbacks – I hint at this in one of the examples.) My conjecture (in which I have reasonable but not yet unshakeable belief) is that the right class of pullbacks consists of those I am calling “productive” – “based on products”.

    I am giving a semantic definition of productive and a syntactic one. The semantic definition is that a pullback is productive if it is preserved by any finite-product preserving functor. (Or, if you like, absolute pullbacks relative to the doctrine of finite product categories.) The syntactic definition is that there are some obvious pullbacks that you can write down based only on product data, and the hope (based on results of Seely’s thesis) is that I have a complete list. In other words, there is a hoped-for completeness theorem that one can prove – the corresponding soundness theorem is completely obvious.
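
    To illustrate the flavor of “based on products” (this is just an illustrative instance, not necessarily one of the numbered squares listed in the notes): for any objects X and Y, the square

    \array{ X \times Y & \overset{\pi_2}{\to} & Y \\ {}^{\pi_1}\downarrow & & \downarrow \\ X & \to & 1 }

    is a pullback built from product data alone, and any finite-product-preserving functor F carries it to the analogous square for F(X) and F(Y), since F(X \times Y) \cong F(X) \times F(Y) and F(1) is terminal; that square is again a pullback, so this one is semantically productive.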

    Studying limits that are absolute with respect to some doctrine seems like a natural thing to do, but I don’t believe I’ve ever seen it. Can anyone tell me where this type of thing has been studied?

    • CommentRowNumber2.
    • CommentAuthorMike Shulman
    • CommentTimeAug 17th 2011
    • (edited Aug 17th 2011)

    This is very neat! Kate and I have been thinking a lot about lists of squares like your (1)–(4), for a slightly different reason: the “homotopical” examples of monoidal fibrations we care about only satisfy the Beck-Chevalley condition for homotopy pullback squares, so we need to be careful about which pullback squares are also homotopy pullbacks and only use B-C for those. Your (1), (2), and (4) are homotopy pullbacks (plus B-C is automatic for (4) since σ is an isomorphism), but (3) is not — and thereby much interest arises, cf. free loop space object. I suppose one could talk about squares being “homotopically productive” if their pullbackness is preserved by every product-preserving functor into an (\infty,1)-category.

    We also need B-C for the pullback square

    \array{ A & \overset{\delta}{\to} & A\times A \\ {}^{\delta}\downarrow & & \downarrow^{id\times\delta} \\ A\times A & \overset{\delta\times id}{\to} & A\times A\times A }

    (being the “Frobenius axiom”), which is also (unless I’m very confused) semantically productive, but which I don’t see immediately how to derive from your current definition of syntactic productivity. Was this just an oversight, or does this one follow from the others? I know you just showed the Frobenius law follows from Frobenius reciprocity in a cartesian bicategory, but this is not quite the same, right?
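
    (To fix conventions on the square above, here is the element-wise check in Set that it really is a pullback:

    (id\times\delta)\,\delta(a) = (a,a,a) = (\delta\times id)\,\delta(a), \qquad (id\times\delta)(a,b) = (a,b,b), \qquad (\delta\times id)(c,d) = (c,c,d),

    so these last two agree only when a = b = c = d, and the pullback of id\times\delta along \delta\times id is the set of such quadruples, which is A itself via the two copies of \delta.)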

    Oh, and the general studying of limits absolute with respect to some doctrine? I agree that it seems very natural, but I also don’t believe I’ve ever seen it.

    • CommentRowNumber3.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 17th 2011
    • (edited Aug 17th 2011)

    Mike, your comments are perceptive as always. Square (3) is definitely the “odd man out” (Seely seems to recognize this too); the way I think of it is that BC for this square has a lot to do with equality being extensional in the traditional sense, which is why it seems to fit with the declared aims of the article in dealing with classical first-order logic with equality. When you interpret it in a cartesian bicategory B, you find that it forces Map(B) to be locally posetal (edit: locally preordered). So it is satisfied in, say, the bicategory of bimodules between posets, but is not satisfied in the bicategory of bimodules between groupoids.

    The Frobenius condition on the other hand forces Map(B) to be locally groupoidal. So if you stick BC for (3) together with the Frobenius condition, you get local essential discreteness in Map(B), which is (the way I see it) the statement that equality is extensional. This is satisfied in, e.g., Span. (And yes, Frobenius is certainly semantically productive.)

    There’s another big qualitative difference between BC for (3) and Frobenius: when you interpret it surface-diagrammatically, or even in terms of thick 2d tangles, Frobenius is an equivalence of cobordisms. The Beck condition for (3) on the other hand is more a “cobordism between cobordisms” – sort of like boring your finger through a strip of clay in the 2d tangle case in one direction, and mending the hole going the other direction.

    All this being said, one can derive the syntactic productivity of Frobenius from the conditions listed. It took me a little time to see this too. I’ll latex up the relevant diagrams a little later.

    • CommentRowNumber4.
    • CommentAuthorMike Shulman
    • CommentTimeAug 19th 2011

    I’ve wondered about the fact that the Frobenius BC condition is an equivalence of cobordisms, but some of the other productive BC squares are not. Your (2) and (4) are also “purely topological”, whereas (1) and (3) are not. Do you think there is anything deep in that?

    I see you’ve put up the proof that Frobenius is syntactically productive. That’s very cute! I had never seen that before. Is that argument original to you?

    • CommentRowNumber5.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 20th 2011
    • (edited Aug 20th 2011)

    It’s been a while since I’ve thought hard about the topological meaning of these things, so I’m not sure I can say anything deep about the equivalence-of-cobordism phenomenon. One thing I vaguely remember, and vaguely have a feeling for, is the type of “cobordisms between cobordisms” that do come up. But it’s something that I’ve not written down with absolute precision – be patient as I wave my hands, and use metaphorical, pre-mathematicized language.

    On the algebraic side, I was at one point interested in describing the “theory of equality”, or in other words a free cartesian bicategory with Frobenius generated from a set of objects. (Not locally posetal cartesian bicategories, just cartesian bicategories.)

    On the geometric side for this, the models I’ll use for objects are 2d disks, and the models for morphisms (1-cells) are cobordisms from one finite set of 2d disks to another. I’ll call them thickened tangles; they are certain 3d manifolds with corners. It’s fun to imagine them as built with modeling clay. The “cobordisms between cobordisms”, or cobordisms between thickened tangles representing 2-cells, should include equivalences or boundary-fixing diffeomorphisms isotopic to the identity (as in the case of Frobenius), construed as cobordisms in a standard way. But they also include all the ways you can make holes or cuts, by pinching down a piece of the clay to a point and then either severing, or poking a hole through. It is vitally important here that the cobordisms are directed: you can make a hole or cut (call either an ’excision’), but you can’t run the movie backwards to mend a hole or cut, so to speak.

    So, you can take a tube of clay, pinch down the middle to a point, and sever to get a solid cup followed by a solid cap. This represents the unit of the adjunction \varepsilon \dashv \varepsilon^\ast, where \varepsilon: x \to 1 represents the unique map for a generator x. Or, you can take a tube of clay and use your thumb and index finger and pinch down a diameter somewhere to a point, and bore through to make a hole. This represents the unit of the adjunction \delta \dashv \delta^\ast. Or, you can take a sculpture whose boundary consists of two pairs-of-pants sewn at their waists, and pinch the crotch regions to a point and sever to make two disconnected vertical tubes of clay; this represents the counit of \delta \dashv \delta^\ast. Or finally, you can take a solid cap glued to a solid cup, which is just a ball of clay floating mid-air, and ’excise’ it by throwing it away; this represents the counit of \varepsilon \dashv \varepsilon^\ast.
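
    (For comparison with the algebra, in the locally posetal toy case of Rel — where of course all these 2-cells collapse to mere inequalities, unlike in the free cartesian bicategory I’m after — the four moves just described correspond, in the same order, to the inclusions

    1_x \le \varepsilon^\ast \varepsilon, \qquad 1_x \le \delta^\ast \delta, \qquad \delta \delta^\ast \le 1_{x \times x}, \qquad \varepsilon \varepsilon^\ast \le 1_1,

    where \varepsilon^\ast \varepsilon is the total relation on x, \delta^\ast \delta is the identity relation, and \delta \delta^\ast relates (a, b) to (c, d) exactly when a = b = c = d.)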

    So, all these directed cobordisms are local excision moves of one kind or another. They generate the 2-cells we want.

    Hopefully I’ve made the visuals clear. They are based on a kind of semi-imaginative extrapolation from writings of C.S. Peirce (which I don’t particularly recommend except to fanatics).

    As for the proof that Frobenius is syntactically productive – no, I’ve not seen it anywhere else, but I imagine people like Seely might have written it down on an envelope at some point. I think by the way that the syntactic rules are incomplete: we can add to (1)-(4) pullback squares of the following forms

    • Any pullback of a constant 1 \to X with itself (where the projection maps of the pullback are identities);

    • Any pullback of a pair \langle 1_X, e \rangle: X \to X \times X with the diagonal \delta: X \to X \times X, if e is idempotent.

    (Only the first of these really matters for Term(S, F).) I’m still not sure I’ve got a complete list. For the time being I’m just settling on an analysis of pullbacks in Term(S, F), and it’s somewhat slow-going, although I hope to have something up soon.
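
    (As a sanity check, the first of these is at least clearly semantically productive: in that square both projections are identities on 1, and any commuting square of that shape with a terminal corner is automatically a pullback, since a cone over it consists of two maps into 1 which are forced to be equal; a finite-product-preserving functor F sends 1 to a terminal object and identities to identities, so the image square is a pullback for the same reason.)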

    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeAug 23rd 2011

    That’s neat, thanks. Can you say anything about what prompted you to guess that the syntactic productivity of Frobenius might be provable from the others?

    As for other productive pullback squares, what about just any pullback of an isomorphism? (Which includes your (4).)
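
    (Presumably those are productive for a trivial reason: in a pullback square of an isomorphism f along a map g, the side q opposite f is also an isomorphism, and any commuting square in which two opposite sides are invertible is automatically a pullback — the unique fill-in of a cone is q^{-1} applied to the cone’s leg over the domain of g. Since every functor preserves isomorphisms and commuting squares, such squares are absolute, not merely productive.)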

    • CommentRowNumber7.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 23rd 2011

    Perhaps the first thing that gave me some confidence that Frobenius is syntactically productive is that the rules I had written down for syntactic productivity were copied from Seely’s paper. Everyone working in the area, and that certainly includes Seely, knows about the Frobenius laws, and with Seely being a careful worker and all, it would have surprised me if the relevant pullback for Frobenius weren’t caught up in his net. So I had a little faith, you see. :-)

    It’s a little hard to introspect and give a precise description of my thought processes from there. I had to go back and stare at the form of his square (1) and just imagine what would work, and “which string to cut”. But you remind me that someday I hope to share my little bags of tricks for proving little results like this, coherence results, the recent “Frobenius reciprocity implies the Frobenius law” thing, etc. Some of the tricks can be to a large degree mechanized, and many have a topological flavor that makes them just plain enjoyable to fiddle around with until I get the result I’m after. (Buzzwords include “walls of codimension 1”, “blowing up”, and “genus”.)

    Thanks for the observation on pulling back isos. Yes, that one’s sort of obvious, isn’t it? :-)

    • CommentRowNumber8.
    • CommentAuthorMike Shulman
    • CommentTimeAug 24th 2011

    That makes sense. But are you saying that the other squares you mentioned in comment 5 are ones that Seely did miss? (I haven’t looked at Seely’s paper yet.)

    • CommentRowNumber9.
    • CommentAuthorTodd_Trimble
    • CommentTimeAug 24th 2011
    • (edited Aug 24th 2011)

    No, I’m not saying that Seely missed them – the ones he gave were sufficient for the theorems he wanted to prove. (And he does refer to the Frobenius conditions inter alia – except that he calls them the Lawvere theorems or something. So I guessed he must have checked those BC conditions in particular, although I don’t think he gave my diagrammatic proof. I suppose it could be in his thesis, on which his paper is based.)

    As far as I know, the notions of semantic and syntactic productivity are due to me (in the article this thread is about), not Seely – and it’s not clear that the notion of syntactic productivity, one which would allow a general completeness theorem, has even been nailed down yet. My current hope is that together with the squares which have been added in by comments #5 and #6, we now have a complete list for a completeness theorem. I should try to get back to this soon.

    What Seely does is characterize those pullbacks (in the base category of the hyperdoctrine which classifies a finitary first-order theory T) for which a Beck-Chevalley condition holds. His notion of hyperdoctrine assumes BC for the squares generated by (1)-(4) and closed under products and composition, and then he shows that Beck-Chevalley will hold for some other pullback in the base category iff a proof can be given internally in T that that square is a pullback.

    • CommentRowNumber10.
    • CommentAuthorMike Shulman
    • CommentTimeAug 25th 2011

    I see; very nice. Good luck with the completeness theorem!