
    • CommentRowNumber1.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 1st 2016

    Does anyone have access to

    Aarne Ranta, Constructing possible worlds, Theoria, Volume 57, Issue 1-2, pages 77–99, April 1991?

    DOI: 10.1111/j.1755-2567.1991.tb00541.x

    I think he treats possible worlds there in terms of contexts.

    • CommentRowNumber2.
    • CommentAuthorUrs
    • CommentTimeMar 1st 2016

    I am not at the institute at the moment, but the article is freely visible here http://onlinelibrary.wiley.com/doi/10.1111/j.1755-2567.1991.tb00541.x/pdf

    • CommentRowNumber3.
    • CommentAuthorDmitri Pavlov
    • CommentTimeMar 1st 2016

    An electronic version of the article is available here: http://libgen.io/scimag/get.php?doi=10.1111/j.1755-2567.1991.tb00541.x

    • CommentRowNumber4.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 1st 2016

    Thanks both, but I’m being asked to pay at #2, and getting a time-out at #3.

    • CommentRowNumber5.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 1st 2016

    I have it now.

    • CommentRowNumber6.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 2nd 2016

    So there’s the idea in Ranta of worlds as specified by contexts. Sometimes we are to think of a (finite) context

    $$\Gamma \;=\; x_0:A_0,\; x_1:A_1(x_0),\; x_2:A_2(x_0,x_1),\; \dots,\; x_n:A_n(x_0,\dots,x_{n-1})$$

    as specifying a collection of worlds, namely all possible ’complete’ extensions of $\Gamma$. I guess that links in with contexts as open sets, worlds as points, choice sequences, the fan theorem, etc., and sheaf models for modal logic.

    Philosophers have long pondered the nature of counterfactuals such as

    • Had I taken an aspirin this morning, I wouldn’t have a headache now.
    • Had Caesar been in command during the Korean War, then he would have used the atomic bomb.

    They sometimes invoke a sense of ’closest’ possible world in which the antecedent holds. If these finite contexts could be seen as forming a tree, where an extension shifts along a branch, then one could imagine some kind of minimal stripping back of context to leave out that part which conflicts with the antecedent of the counterfactual, before building up to a context where it holds – a minimal open set, in one of whose worlds I took the aspirin.

    So, to avoid changing the actual world beyond what’s necessary in the case of my headache, I shouldn’t strip back to this morning, assert that I take an aspirin, but also add that a tree branch hit me on the head, so that I do in that case still have a headache.
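
    Here is a toy sketch of that stripping back, at a purely propositional level and in Haskell (the facts and the conflict test are made up, and this is nothing like Ranta’s actual machinery): treat a context as a finite path of assertions, keep the initial segment up to the first assertion that conflicts with the antecedent, then extend with the antecedent.

        -- A toy model of "minimal stripping back": a context is a finite path of
        -- assertions (earliest first); revising by a counterfactual antecedent keeps
        -- the initial segment up to the first assertion that conflicts with it and
        -- then extends with the antecedent.  Purely illustrative.
        module StripBack where

        type Fact    = String
        type Context = [Fact]

        -- an externally supplied compatibility test (made up for the aspirin example)
        compatible :: Fact -> Fact -> Bool
        compatible "I took an aspirin this morning" "I have a headache now" = False
        compatible _ _ = True

        stripBack :: Fact -> Context -> Context
        stripBack antecedent ctx =
          takeWhile (compatible antecedent) ctx ++ [antecedent]

        -- ghci> stripBack "I took an aspirin this morning"
        --         ["I woke up at 7", "I skipped breakfast", "I have a headache now"]
        -- ["I woke up at 7","I skipped breakfast","I took an aspirin this morning"]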

    Is it only in the meta-language that I can talk about altering contexts?

    • CommentRowNumber7.
    • CommentAuthorMike Shulman
    • CommentTimeMar 2nd 2016

    It may be only in the meta-language that we can talk about altering contexts, but we do it all the time in the object language. Indeed, one of the roles of type formers is to alter contexts. For instance, the product $\prod_{x:A} B(x)$ alters the context inside its scope (the type $B(x)$) by adding a new variable $x$ of type $A$. The “usual” type formers only alter contexts by adding things to them (also known as “variable binding”), but in modal type theory we also allow type formers that delete things from the context. The Pfenning-Davies judgmental $\Box$, for instance, deletes all assumptions from the context that are merely true (rather than valid, i.e. necessarily true), and similarly the spatial type theory $\flat$ deletes all assumptions that are not crisp (i.e. discrete). So I think it makes a lot of sense to regard a counterfactual conditional such as “Had I taken an aspirin this morning” as a sort of subjunctive-modal $\prod$ that, in addition to adding a new assumption to the context (“I took an aspirin this morning”), also deletes from the context all facts that contradict this new assumption.
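
    To make the shape of that concrete, here is a caricature in Haskell at the level of tagged hypothesis lists (a toy sketch only, not the actual Pfenning-Davies or spatial calculi): hypotheses carry a strength tag, a $\Box$-style former keeps only the valid ones, an ordinary $\prod$-style former just appends a fresh hypothesis, and the subjunctive former deletes whatever is flagged as contradicting the new assumption before appending it.

        -- A caricature of context-modifying type formers on tagged hypothesis lists
        -- (illustrative only; not the actual calculi mentioned above).
        module ContextFormers where

        data Strength = Valid | MerelyTrue
          deriving (Eq, Show)

        type Hyp     = (String, Strength)
        type Context = [Hyp]

        -- entering the scope of a Box-style former: drop the merely-true hypotheses
        enterBox :: Context -> Context
        enterBox = filter ((== Valid) . snd)

        -- an ordinary Pi-style former: extend the context with a fresh hypothesis
        enterPi :: Hyp -> Context -> Context
        enterPi h ctx = ctx ++ [h]

        -- the subjunctive former: delete hypotheses flagged as contradicting the
        -- new assumption, then add it
        enterSubjunctive :: (Hyp -> Bool) -> Hyp -> Context -> Context
        enterSubjunctive contradicts h ctx = filter (not . contradicts) ctx ++ [h]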

    • CommentRowNumber8.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 3rd 2016

    Thanks, Mike. I wonder how much of the following is right:

    Where we have gone down the route of positing a type of worlds, $W$, at that section of the nLab page, we might speak instead in terms of contexts. So in what we call the ’global’ case, instead of the map $W \to \ast$, we’re looking at the map from a context $\Gamma$ to the empty context. More generally, when there is a context extension $f: \Delta \to \Gamma$, we have base change of types from the context $\Gamma$ to the context $\Delta$. Left and right adjoints give modal operators.
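
    Spelling out that last sentence (following the pattern at the necessity page, just with a general context morphism $f$ in place of $W \to \ast$): base change along $f$ sits in an adjoint triple

    $$\Sigma_f \;\dashv\; f^{\ast} \;\dashv\; \Pi_f, \qquad \Sigma_f, \Pi_f \colon \mathbf{H}_{/\Delta} \longrightarrow \mathbf{H}_{/\Gamma}, \qquad f^{\ast} \colon \mathbf{H}_{/\Gamma} \longrightarrow \mathbf{H}_{/\Delta},$$

    and the induced (co)monads on $\mathbf{H}_{/\Delta}$, namely $\lozenge_f \coloneqq f^{\ast}\,\Sigma_f$ and $\Box_f \coloneqq f^{\ast}\,\Pi_f$, would be the modal operators.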

    Now do we need to restrict to

    a morphism $\Delta \to \Gamma$ in the category of contexts (an interpretation of $\Gamma$ in $\Delta$) is a display morphism iff there is an isomorphism $\Delta \leftrightarrow \Theta$ where $\Theta$ is an extension of $\Gamma$?

    What would happen for general morphisms $\Delta \to \Gamma$?

    • CommentRowNumber9.
    • CommentAuthorMike Shulman
    • CommentTimeMar 3rd 2016

    I guess that would basically mean that instead of referring to worlds implicitly by modal operators we would be talking about them explicitly. That is, instead of saying “it’s necessarily true that $1+1=2$” and using a semantics that interprets that statement to mean that $1+1=2$ in all worlds, we would be saying explicitly in the object language that “$1+1=2$ in all worlds”.

    • CommentRowNumber10.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 29th 2016
    • (edited Mar 29th 2016)

    I like the idea of Ranta’s that we think of a narrative as the progressive construction of a context. Think of a work of fiction introducing the reader to things, people and places, describing their features, etc. Then possible worlds relative to what has been stated so far are ways of continuing: new kinds of thing may be introduced, or new examples of existing kinds, and identities may be formed, such as when we find out that Oliver Twist’s kindly gentleman is in fact his grandfather. And so on. Necessity describes what must happen, and possibility what may happen.

    That suggests that, rather than a straight $W$ for the type of worlds, as here, we work relative to contexts

    $$\Gamma \;=\; x_0:A_0,\; x_1:A_1(x_0),\; x_2:A_2(x_0,x_1),\; \dots,\; x_n:A_n(x_0,\dots,x_{n-1}).$$

    Presumably contexts can be wrapped up as a large dependent sum. Is there an idea then that forming a context equivalent to a type is like forming a tower of homotopy fibers of a space?

    • CommentRowNumber11.
    • CommentAuthorUrs
    • CommentTimeMar 29th 2016
    • (edited Mar 29th 2016)

    That suggests that, rather than a straight $W$ for the type of worlds, as here, we work relative to contexts

    $$\Gamma \;=\; x_0:A_0,\; x_1:A_1(x_0),\; x_2:A_2(x_0,x_1),\; \dots,\; x_n:A_n(x_0,\dots,x_{n-1}).$$

    Just to note that this is a distinction only in notation. A context as you display is equivalently one single type/object formed as an iterated fiber product.
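
    For concreteness, a minimal sketch of that point in Lean notation (a length-three telescope with illustrative names): the context and its iterated dependent sum package the same data, and an instance of the context is exactly a term of the wrapped type.

        -- the context  x₀ : A₀, x₁ : A₁ x₀, x₂ : A₂ x₀ x₁  wrapped up as one type
        variable (A₀ : Type) (A₁ : A₀ → Type) (A₂ : (x₀ : A₀) → A₁ x₀ → Type)

        abbrev Wrapped : Type :=
          (x₀ : A₀) × (x₁ : A₁ x₀) × A₂ x₀ x₁

        -- an instance of the context gives a term of the wrapped type
        example (x₀ : A₀) (x₁ : A₁ x₀) (x₂ : A₂ x₀ x₁) : Wrapped A₀ A₁ A₂ :=
          ⟨x₀, x₁, x₂⟩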

    • CommentRowNumber12.
    • CommentAuthorMike Shulman
    • CommentTimeMar 29th 2016

    In categorical semantics a context is interpreted by the same object as its dependent sum, but in the type theoretic syntax the distinction matters a lot. This is hard to understand from a categorical perspective, but I’m gradually putting together a way to see it from that point of view too. The short answer is that it has to do with the difference between monoidal categories and multicategories.

    I do think of the narrative of a proof as the successive construction of a context, more or less. Most proof strategies, when thought of as “tactics”, add something to the context (and perhaps change the goal as well), either by destructing or specializing something that was already there (an elimination rule) or by introducing a hypothesis in order to prove a universal or conditional statement (an introduction rule). (The introduction rules for positive types don’t change the context at all, acting only on the goal.) None of the basic proof strategies removes things from the context (well, the eliminators for positive types are generally implemented to remove the thing destructed, but that isn’t strictly necessary), but there are others that do — particularly, cut!
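
    A tiny Lean illustration of that, with made-up propositions: the intro step is an introduction rule extending the context, and cases is an elimination rule destructing a hypothesis that was already there.

        -- `intro` extends the context with new hypotheses (introduction rule)
        example (P Q : Prop) : P → (P → Q) → Q := by
          intro hP hPQ      -- the context now also contains  hP : P  and  hPQ : P → Q
          exact hPQ hP

        -- `cases` destructs a hypothesis already in the context (elimination rule)
        example (P Q : Prop) (h : P ∧ Q) : P := by
          cases h with
          | intro hP hQ => exact hP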

    • CommentRowNumber13.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 29th 2016

    I’m gradually putting together a way to see it from that point of view too.

    Do let us know when you’re done.

    The short answer is that it has to do with the difference between monoidal categories and multicategories.

    Are you pointing to something like the way of seeing product types as positive using their “from the left” universal property in a representable cartesian multicategory? Is there something comparable for the dependent sum type, which can also be seen as both negative and positive?

    • CommentRowNumber14.
    • CommentAuthorUrs
    • CommentTimeMar 29th 2016

    it has to do with the difference between monoidal categories and multicategories

    Probably you mean that it has to do with the difference between unbiased and biased definitions of monoidal categories?

    Do you think it “matters a lot” for what David is after? I’d be surprised.

    • CommentRowNumber15.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 29th 2016

    Maybe what’s happening at slide 55 here?

    • CommentRowNumber16.
    • CommentAuthorMike Shulman
    • CommentTimeMar 29th 2016

    @Urs: no, I don’t mean that, I really do mean monoidal versus multicategory; David has the right idea (product types as positive). The “judgmental structure” of a type theory should be thought of as a kind of multicategory in which type formers are all characterized by universal properties, and if you want to talk about a type former that doesn’t have a universal property then you need to incorporate something in the judgments to give it one, like passing from monoidal categories to representable multicategories. I don’t think anyone has written down the relevant kind of generalized multicategory for dependent type theory yet, but it ought to exist.

    • CommentRowNumber17.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 30th 2016

    Ranta, while discussing “Text as a progressive conjunction” in sec 6.1, gives two ways one might think to take a series of sentences, including anaphorical reference to earlier sentences, as a large progressive conjunction. Both are found wanting since they take us away from the fact that each sentence was made as an assertion:

    A text is a sequence of assertions, but a conjunction is not.

    There’s then a discussion of the equivalence of a context given in the sequential way and as a progressive conjunction.

    So Ranta is alive to this monoidal vs multicategory issue.

    • CommentRowNumber18.
    • CommentAuthorUrs
    • CommentTimeMar 31st 2016

    Mike, thanks. I see what you are saying about monoidal and multicategories. But I still have little idea of what you were alluding to in #12.

    So in #10 it seemed to me that David came away with the impression that discussions such as those at necessity, which speak about one context denoted $W$, could be generalized by admitting contexts that are a sequence of consecutively dependent types. In #11 I said that the latter is just the notation that remembers how $W$ may have been constructed consecutively. In #12 you started out saying that while this is true in the categorical semantics, there is nevertheless a difference that matters, and I take it you mean: that matters in a non-trivial way.

    This I am not seeing at the moment. Could you maybe just elaborate a little on how it matters?

    • CommentRowNumber19.
    • CommentAuthorDavid_Corfield
    • CommentTimeMar 31st 2016
    • (edited Mar 31st 2016)

    I see your point from #11, but I have a couple of vague lingering issues:

    1) Might a $W$ be given and there be no canonical way to decompose it as an iterated fiber product? Then specifying such a product is providing more information. If I point to a space, then giving a tower of fibrations for it is equipping it with structure, no? I guess we can look for a specific tower, like the Postnikov one. Or would I say that any type is equivalent to any of its representations as iterated fiber products, so it’s a distinction that doesn’t matter?

    2) Ranta’s thought (perhaps linked to Mike’s thought). Given a context

    $$\Gamma \;=\; x_1:A_1,\; x_2:A_2(x_1),\; x_3:A_3(x_1,x_2),\; \dots,\; x_n:A_n(x_1,\dots,x_{n-1}),$$

    I could form

    $$(\sum x_1:A_1) \ldots (\sum x_{n-1}: A_{n-1}(x_1,\ldots, x_{n-2}))\, A_n(x_1, \ldots, x_{n-1}),$$

    or to give a sequential reading

    $$(\sum x_{n-1}: \ldots (\sum x_1: A_1)\, A_2(x_1) \ldots)\, A_n(p^{n-2}(x_{n-1}),\, q(p^{n-3}(x_{n-1})),\, \ldots,\, q(x_{n-1}))$$

    and any ’instance’ of $\Gamma$, where the variables are assigned, can be converted to an instance of either of these. But there’s still something in the syntactical difference.
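
    To see the two packagings side by side, here is a Lean sketch for a length-three context (my own names; the projections play the role of the $p$s and $q$s above):

        variable (A₁ : Type) (A₂ : A₁ → Type) (A₃ : (x₁ : A₁) → A₂ x₁ → Type)

        -- first reading: sum over the variables in their original order
        abbrev Flat : Type := (x₁ : A₁) × (x₂ : A₂ x₁) × A₃ x₁ x₂

        -- sequential reading: sum over the context built so far, recovering the
        -- earlier variables through projections
        abbrev Seq : Type := (w : (x₁ : A₁) × A₂ x₁) × A₃ w.1 w.2

        -- any instance of the context converts to an instance of either packaging
        example (x₁ : A₁) (x₂ : A₂ x₁) (x₃ : A₃ x₁ x₂) : Flat A₁ A₂ A₃ := ⟨x₁, x₂, x₃⟩
        example (x₁ : A₁) (x₂ : A₂ x₁) (x₃ : A₃ x₁ x₂) : Seq A₁ A₂ A₃ := ⟨⟨x₁, x₂⟩, x₃⟩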

    • CommentRowNumber20.
    • CommentAuthorMike Shulman
    • CommentTimeMar 31st 2016

    Breaking $W$ up into a tower of fibrations allows us to do a lot more things with it: e.g. we can $\Sigma$ or $\Pi$ over just the last type in the context instead of the entire context at once. Maybe you prefer to talk about a tower of fibrations rather than a sequence of variables, and for this purpose that’s indeed just a difference of notation (in the multicategory/cut-elimination world there is still a difference, but I think that’s probably irrelevant for this discussion) – but having such a decomposition is significant.

    • CommentRowNumber21.
    • CommentAuthorUrs
    • CommentTimeMar 31st 2016
    • (edited Mar 31st 2016)

    Okay, if that’s what you mean. Of course it matters to observe when a given base change

    $$\mathbf{H}_{/W} \stackrel{\longrightarrow}{\stackrel{\longleftarrow}{\longrightarrow}} \mathbf{H}$$

    factors

    $$\mathbf{H}_{/W} \stackrel{\longrightarrow}{\stackrel{\longleftarrow}{\longrightarrow}} \cdots \stackrel{\longrightarrow}{\stackrel{\longleftarrow}{\longrightarrow}} \mathbf{H}_{/W_2} \stackrel{\longrightarrow}{\stackrel{\longleftarrow}{\longrightarrow}} \mathbf{H}_{/W_1} \stackrel{\longrightarrow}{\stackrel{\longleftarrow}{\longrightarrow}} \mathbf{H}$$

    but it may also be useful to factor it differently; I don’t want $W$ to dictate how I choose that factorization.

    But anyway, I see what you mean now. (Hm, or maybe I don’t, because where did the multicategories enter? But maybe I’ll have to bow out, as I should concentrate on something else right now.)

    • CommentRowNumber22.
    • CommentAuthorDavid_Corfield
    • CommentTimeApr 1st 2016

    I would like to hear more at some point about the multicategory/polarity issue, which I first met in Bob Harper’s post and Mike’s comment there.

    The computational trinitarianism philosophy is expressed there:

    More broadly, it is useful to classify types into two polarities, the positive and the negative, corresponding to the verificationist and pragmatist perspectives. Positive types are inductively defined by their introduction forms; they correspond to colimits, or direct limits, in category theory. Negative types are coinductively defined by their elimination forms; they correspond to limits, or inverse limits, in category theory. The concept of polarity is intimately related to the concept of focusing, which in logic sharpens the concept of a cut-free proof and elucidates the distinction between synchronous and asynchronous connectives, and which in programming languages provides an elegant account of pattern matching, continuations, and effects.

    As ever, enduring principles emerge from the interplay between proof theory, category theory, and type theory. Such concepts are found in nature, and do not depend on cults of personality or the fads of the computer industry for their existence or importance.

    What is this ’focusing’? In case I have a moment, it looks like Focusing on Binding and Computation might be useful.

    • CommentRowNumber23.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 8th 2016

    Morning thoughts: what is the dependency structure of a context? Although a context is given in general as

    $$\Gamma \;=\; x_0:A_0,\; x_1:A_1(x_0),\; x_2:A_2(x_0,x_1),\; \dots,\; x_n:A_n(x_0,\dots,x_{n-1}),$$

    $A_2$, say, might only depend on one of its predecessors. It looks to me like we have a directed acyclic graph.

    Directed acyclic graphs, or DAGs, are famous for expressing the dependency structure of Bayesian networks, a way of representing probability distributions based on causal dependencies. I used to talk about them. They were developed greatly by Judea Pearl in Causality, where one thing he uses them for is counterfactual reasoning. One minimally modifies the network compatibly with the new information.
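
    Here is a small illustrative encoding of that dependency structure in Haskell (my own toy representation, nothing standard): record, for each variable, only the earlier variables its type directly mentions, and recover indirect dependence by taking ancestors in the resulting DAG.

        -- the dependency structure of a context as a DAG: each variable lists the
        -- earlier variables its type directly mentions; `ancestors` computes the
        -- direct and indirect dependencies
        module ContextDAG where

        import qualified Data.Map as M
        import qualified Data.Set as S

        type Var = String

        -- e.g. the type of x2 mentions only x0, the type of x4 mentions only x2
        directDeps :: M.Map Var [Var]
        directDeps = M.fromList
          [ ("x0", [])
          , ("x1", ["x0"])
          , ("x2", ["x0"])
          , ("x3", ["x1"])
          , ("x4", ["x2"])
          ]

        ancestors :: Var -> S.Set Var
        ancestors v =
          let ds = M.findWithDefault [] v directDeps
          in  S.unions (S.fromList ds : map ancestors ds)

        -- ghci> ancestors "x4"
        -- fromList ["x0","x2"]   -- x4 depends on x0 only indirectly, through x2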

    Might this relate to the context modification I’m after in #6?

    • CommentRowNumber24.
    • CommentAuthorMike Shulman
    • CommentTimeMay 8th 2016

    Yes, I agree. Although if $A_4$ depends on $A_2$ and $A_2$ depends on $A_0$, then $A_4$ must also depend on $A_0$, so it makes more sense to me to regard the dependency as an inverse category (or, if you like, a direct category). This sort of extra-structured context also seems to come up in modal type theory; Dan and I still haven’t really nailed down what “adjoint logic” looks like when there is type dependency, but it seems to me that something like this will have to be taken into account when describing how the context gets modified by modal operators. So insofar as your #6 is related to modal type theory, yes, it probably relates to this as well.

    • CommentRowNumber25.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 8th 2016

    Although if $A_4$ depends on $A_2$ and $A_2$ depends on $A_0$, then $A_4$ must also depend on $A_0$

    In Bayesian networks, they like to distinguish between direct and indirect causes, so taking the same structure as your case:

    $A_0$ is a cause of $A_4$, or $A_4$ is dependent on $A_0$, but it may be indirect, screened off by $A_2$. So they’d mark a difference between $A_0 \to A_2 \to A_4$ and a similar one with an extra arrow $A_0 \to A_4$. In the former case, if I fix the value of $A_2$, then $A_0$ becomes irrelevant for $A_4$.

    But perhaps the analogy doesn’t go that far.

    • CommentRowNumber26.
    • CommentAuthorMike Shulman
    • CommentTimeMay 8th 2016

    I don’t think there’s much sense to such a distinction in type theory. When $A_2$ depends on $A_0$, fixing a value of $A_2$ requires also fixing a value of $A_0$.

    • CommentRowNumber27.
    • CommentAuthorDavid_Corfield
    • CommentTimeApr 20th 2018
    • (edited Feb 17th 2023)

    Re #24, so now we have a name, ’context shape’, for the dependency structure (slides 36-44 of Mike’s HoTTEST talk).