Am in Oxford at the moment, last night at dinner it turns out my table neighbour is a local philosopher using modal logic to reason about metaphysics.
How he defines his modal operators, I ask, whether he takes them to be anything that satisfies S4. Ah no, he says, that’s something only pure logicians would do who lost contact with the meaning of their symbols, he would always consider necessity being formalized by universal quantification over possible worlds.
(Otherwise, I’ll stay out of this discussion now, it seems a humongous social effort for such a tiny trivial point, I am exhausted and frustrated about it. Looking through the latest I am relieved to see that Mike does actually agree, but remain puzzled, as on previous similar occasions, what it is about my wording that would be so very different from other wordings of what seems to be the same point to me. In any case, what counts is that the $n$Lab entry is okay now. And thanks for somebody finally catching the only actual technical point. In fact Noam Zeilberger on g+ was first, and I was going to fix it in the entry, but then I had other things to do. )
Urs, Mike, I’ve changed the entry as SridharRamesh is right in #74, we don’t have the distribution of possibility over implication. We know that $\lozenge$ is a functor, but this says $p\to q \to (\lozenge p \to \lozenge q)$, not what was written.
The traditional name for the rule that SridharRamesh is calling “generalization” is “Necessitation”, or Nec to its friends.
$\lozenge$ preserves finite coproducts and $\Box$ finite products. Yes! Because $\lozenge$ is a kind of existential and $\Box$ a kind of universal quantifier. You don’t need to like possible worlds for any of this…
Mike asks
why would everything that’s true be necessarily true? Why is Nec a good rule? Valeria’s answer: because the rule doesn’t say that anything that is true is necessarily true. The rule says that anything that is true in the *empty context*, i.e. from NO hypotheses (any assumption you have ever made has been discharged; it’s a closed term in the type theory), is necessarily true. Axioms are bad ways of thinking about proofs; Gentzen rules and Natural Deduction are much better, and using these you can see that the rule “$\Box\Gamma\vdash A$ then $\Box \Gamma \vdash \Box A$” is a very bad rule in ND, as it breaks the substitutivity of terms. You do need to use the more complicated rules that we discuss in our paper.
SridharRamesh, Mike The operator $\Box$ is defined by $(\Box A)(w) = \forall v\, (R(w, v) \to A(v))$ [that is, $\Box A$ holds at a world just in case $A$ holds at every world accessible from there]. The operator $\lozenge$ is defined by $(\lozenge A)(w) = \exists v\, (R(w, v) \wedge A(v))$ [that is, $\lozenge A$ holds at a world just in case $A$ holds at some world accessible from there].
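For concreteness, here is that classical possible-world semantics run directly, with a proposition represented as a set of worlds (the three-world frame below is my own toy example, not from the thread):

```python
# Toy frame: worlds and an accessibility relation R
W = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 1)}

def box(A):
    # (Box A)(w) = forall v. R(w, v) -> v in A
    return {w for w in W if all(v in A for v in W if (w, v) in R)}

def dia(A):
    # (Diamond A)(w) = exists v. R(w, v) and v in A
    return {w for w in W if any(v in A and (w, v) in R for v in W)}

A = {1}
print(box(A))  # world 2 has no successors, so Box A holds there vacuously
print(dia(A))
```

Note that world 2, having no accessible worlds, satisfies $\Box A$ vacuously and $\lozenge A$ not at all, exactly as the $\forall$/$\exists$ reading predicts.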
NO! This is the case for classical modal logic. If you want to do categorical proof theory, i.e. if you want to have functors over a CCC (not over a poset), you need to do intuitionistic modal logic, and the conditions on the accessibility relation are more complicated: they are of the form $\forall\forall$ and $\forall\exists$.
and yes, S4 (intuitionist or not) DOES satisfy the deduction theorem that “If $\Gamma, A\vdash B$ then $\Gamma\vdash A\to B$.”
Sridhar said:
Yeah; as noted before, the words “necessary” and “possible” can be used in many different senses, and I think many of the ordinary language ways in which they are used do not actually give any significant sense to iterated modalities like “possibly possible”. And I agree heartily, hear, hear.
Re: 102: It is presumably a typo, but just to clarify, we do not have $(p \to q) \to (\lozenge p \to \lozenge q)$. What functoriality of $\lozenge$ amounts to is the rule that $\lozenge A \vdash \lozenge B$ follows from $A \vdash B$. We also have the internal theorem $\Box(p \to q) \to (\lozenge p \to \lozenge q)$.
[As for the other possible parenthesized parsing $p \to (q \to (\lozenge p \to \lozenge q))$, we do have this once we accept $\mathrm{T}$, but just as a trivial consequence of $q \to \lozenge q$]
Re: 103: I don’t see how any of what I said is problematic as a description of possible world semantics of modal logic within an intuitionistic background. Note that I am not simultaneously using the possible worlds to give Kripke semantics for the intuitionistic background logic (i.e., I am not making any use of the translation of intuitionistic logic into classical S4). [More explicitly, letting $F_w(p)$ denote the truth value of the claim that $p$ holds at world $w$, I am taking $F_w$ to distribute over all non-modal logical operations in the ordinary way; e.g., to ascertain whether a world satisfies $p \to q$, one only needs to know information about that world, not about any other world, as $F_w(p \to q) = F_w(p) \to F_w(q)$. I am only using the relations between possible worlds to interpret the modal operators $\Box$ and $\lozenge$.]
As for “the deduction theorem”, as I said, I was referring to concluding $A \vdash B$ from the ability to derive $\vdash B$ from $\vdash A$. This fails for us, since we can derive $\vdash \Box A$ from $\vdash A$, but do not have $A \vdash \Box A$. We do still have the universal property of implication, that $\Gamma, A \vdash B$ and $\Gamma \vdash A \to B$ are equivalent, but this was not what I was calling “the deduction theorem”.
Re: 102 indeed you’re right, what we have is functoriality of possibility. We have the internal theorem $\Box(p \to q) \to (\lozenge p \to \lozenge q)$ and many other variants of that, such as $\Box (A\to \lozenge B) \to (\lozenge A\to \lozenge B)$, etc.
Re #103 yes I know you’re not
I am not simultaneously using the possible worlds to give Kripke semantics for the intuitionistic background logic

but I think you need to. Possible worlds are not my area of expertise; I am quoting others who seem to me to say that this simple semantics is not available for intuitionistic modal logic. The basic reference is Simpson’s PhD thesis.
I don’t see anything problematic about the possible world semantics I described for intuitionistic modal logic (when interpreted within an intuitionistic background), but I’ll take a look at Simpson’s thesis.
(For what it’s worth, the markup system here requires the explicit inclusion of a blank line before the next paragraph in order to end a quotation)
Having taken a look, it seems to me the account of possible world semantics I gave for intuitionistic modal logic within an intuitionistic background logic is precisely the same as the one Simpson is primarily concerned with in his thesis (introduced in section 3.4, under the title “Our approach to intuitionistic modal logic”).
I’m trying to fix the mathematical content of the page based on this discussion. I had a look at the cited paper of Bierman & de Paiva, and now I’m more confused than before.
Question for Sridhar and Valeria: you’ve been saying that $\Box$ preserves products and $\lozenge$ preserves coproducts. But I don’t see that in the paper; the semantics described seem to say that $\Box$ is a lax monoidal comonad and $\lozenge$ is a $\Box$-strong, or strong, monad.
Question for Valeria, I guess: Can the intuitionistic axioms/rules for $\lozenge$ be stated without reference to $\Box$?
Question for everyone: Our page currently says “the S4 axioms just say (Bierman & de Paiva 92) that $\Box$ is an (idempotent) comonad and $\lozenge$ is an (idempotent) monad”. But that’s not true, is it? They say more than that (the previous question is about how much more).
Question for Urs: I want to add some discussion of the fact that $\Box$ and $\lozenge$ are not “internal” modalities like $\sharp$ and $ʃ$, but are both (even the monadic $\lozenge$) “external” like $\flat$. Syntactically this means that, as we’ve mentioned above, their rules don’t apply equally in all contexts. Do we have a discussion yet anywhere on the lab of this distinction? It would go naturally at modal type theory I guess, with maybe a mention at modal logic.
Re: Question 1: From a quick glance* at the Bierman & de Paiva paper, $\Box$’s product-preservation (within the lattice of propositions; unless otherwise indicated, I am always speaking about the traditional non-categorified context) can be deduced from what are on page 4 called “Axiom 10” and “Rule Box”, as in post 81 of this thread. Put another way, a lax monoidal functor is in fact product-preserving upon passage to preorder reflection.
$\lozenge$’s coproduct-preservation appears not to be part of the system considered in that paper, considering the remark on page 26. I do not know why, but perhaps this is by analogy to classical $\mathrm{S}4$, which needn’t explicitly axiomatize $\lozenge$ as coproduct-preserving (for, classically, it results automatically by the identification of $\lozenge$ with $\neg \Box \neg$). I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context, as does Simpson in his aforementioned thesis (see “Figure 3-6: Axiomatization of IK” on page 52 of said thesis); note that this is demanded by the possible world semantics I gave before.
(*: My preemptive apologies if I’ve misinterpreted any aspect of the Bierman & de Paiva paper, which I have not read in detail yet. I trust Valeria will correct any such misunderstandings.)
Re: Question 3: The answer is “You are correct; the $\mathrm{S}4$ axioms say more than that”. The $\mathrm{K}$ axioms (when taken to match the most straightforward possible world semantics) say that $\Box$ is a product-preserving functor, $\lozenge$ is a coproduct-preserving functor, and the two are related via $(\lozenge A \to \Box B) \vdash \Box(A \to B) \vdash (\lozenge A \to \lozenge B)$. The $\mathrm{S}4$ axioms add on top of that that $\Box$ is a comonad and $\lozenge$ is a monad (automatically idempotent in the non-categorified context).
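All of these $\mathrm{K}$ laws can be sanity-checked by brute force on one small classical frame; the check below (the frame and all names are my own, purely illustrative) verifies product/coproduct preservation and the entailment chain over every pair of propositions:

```python
from itertools import chain, combinations

W = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 1), (2, 0)}

def box(A):
    return {w for w in W if all(v in A for v in W if (w, v) in R)}

def dia(A):
    return {w for w in W if any(v in A and (w, v) in R for v in W)}

def impl(A, B):
    # classical implication, computed world by world
    return {w for w in W if w not in A or w in B}

def subsets(S):
    S = list(S)
    return [set(c) for c in
            chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))]

for A in subsets(W):
    for B in subsets(W):
        # Box preserves finite products (meets), Diamond finite coproducts (joins)
        assert box(A & B) == box(A) & box(B)
        assert dia(A | B) == dia(A) | dia(B)
        # the K chain: (dia A -> box B) <= box(A -> B) <= (dia A -> dia B)
        assert impl(dia(A), box(B)) <= box(impl(A, B)) <= impl(dia(A), dia(B))

assert box(W) == W and dia(set()) == set()
print("all K laws hold on this frame")
```

A check on one frame proves nothing, of course; it just makes the preservation claims and the chain concrete.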
Put another way, a lax monoidal functor is in fact product-preserving upon passage to preorder reflection.
Ah, right, thanks. And it took me until just now to realize that $\Box(A\to B) \vdash (\Box A \to \Box B)$ is just the axiom for a lax closed functor, so of course it’s the same as being lax monoidal.
What does $(\lozenge A \to \Box B) \vdash \Box(A \to B) \vdash (\lozenge A \to \lozenge B)$ mean categorically? The second entailment looks like it has to do with the Bierman–de Paiva notion of “$\Box$-strong monad”. In fact, just as a strong endofunctor of a closed monoidal category $V$ is the same as a $V$-enriched functor $V\to V$, I guess a $\Box$-strong endofunctor is probably the same as a $V$-enriched functor $\Box_\bullet V\to V$, where $\Box_\bullet$ is the endomorphism of $V$-enriched categories induced by the monoidal functor $\Box:V\to V$.
But what does $(\lozenge A \to \Box B) \vdash \Box(A \to B)$ mean?
I did a bit of editing at necessity and possibility, but I’m still a little confused about the role of the “necessitation”, and also that last axiom mentioned above.
ref #107 yes, your account is exactly Simpson’s, and Simpson has NO categorical model for his framework, as yet. More than about time to put this right, I say. The notion of deductive system that he uses has “worlds” as part of the syntax.
$\Box$ preserves products and $\lozenge$ preserves coproducts.
yes, this has always been folklore both in traditional modal logic (where it has been stated zillions of times) and in categorical logic. I’m sure it is in one of the other papers, like Reyes’, since you say it’s not in ours. But it ought to be there too.
Can the intuitionistic axioms/rules for $\lozenge$ be stated without reference to $\Box$?
No, this was the point I was making somewhere above. Box is more fundamental than Diamond. The same way that conjunction is more fundamental than disjunction in traditional categorical formulations of intuitionistic logic.
the S4 axioms just say (Bierman & de Paiva 92) that $\Box$ is an (idempotent) comonad and $\lozenge$ is an (idempotent) monad
this is true in first approximation, but yes, you’re right there is more meaning, when you dig deeper.
I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context, as does Simpson
I would prefer not to do it. This is one of the choices that constructivization forces on you. Thinking about the poset case is very useful, but it only gets you so far. Please have a look at the last section of the Bierman-de Paiva paper.
Preservation of the coproduct by diamonds is not part of the natural deduction presentation of the system; it has to be imposed “ad hoc” and by force, to make the algebraic semantics pretty.
also $\Box$ is an (idempotent) comonad and $\lozenge$ is an (idempotent) monad
the idempotency issue is not a triviality. Idempotency is not forced on you by the syntax; it actually is a modelling choice, and a bad one from the proof-theoretical point of view. I get my intuitions mostly from what the syntax forces you to do, as described in excruciating detail in the technical report “Term Assignment for Intuitionistic Linear Logic” http://logic-forall.blogspot.com/2014/07/bacat-yes.html.
Valeria, as long as you format your comments in Markdown+Itex, you get to use LaTeX in the usual way, and if you want to quote some text above, you can type e.g.
> I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context
and this will produce highlighted text:
I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context
Note that you can click on Source to copy the text of someone else.
I’m lost as to why we’re turning this page into a general treatment of modal logic. Maybe it should have a special name, but the line Urs was adopting is in the nPOV spirit of the wiki, and it is going to be lost under a wealth of detail. If I might add, the idea of HoTT that propositions are only some types seems to me just the kind of thing worth pursuing, and I was pointing to some things philosophers have treated, e.g., rigid designators as terms in necessary types.
Developing this line will get lost under what’s there now about propositional S4. Why not have that on another page? We have modal logic, which needs working over, and there are several related pages there, and the logic S4(m), and so on.
@David, good point. I just didn’t want the page to be making incorrect claims about S4.
Valeria, as long as you format your comments in Markdown+Itex, you get to use LaTeX in the usual way
Thanks for the tip, Todd_Trimble (long time, no see). I’m only learning the Markdown+Itex that is necessary for the job at hand… (also thanks to Sridhar for a similar tip before)
And yes, agreeing with Mike_Shulman: I did not want to see the page containing incorrect stuff, but I do want to see it discussing the basics of modalities, as I believe they’re important.
Didn’t know about the S4 page, but also noticed things in the modal T page that are not correct… oh dear.
Notice, Mike_Shulman, that the page is not correct yet.
The nullary version $\Box(\top)\leftrightarrow\top$ is not assumed by many modal logicians. The other nullary version $\lozenge(\bot)\leftrightarrow\bot$ is mostly assumed.
But neither makes much sense from the point of view of raw intuitions: why would necessary truth be the same as truth? Why would possibly false be the same as false? We want to create modalities exactly so that we can tell the difference between the necessary truths and the contingent ones, between whatever is possibly false and the actually false.
Valeria, do feel free to correct things that you find that are wrong! The great thing about a wiki is that anyone can do that. Just please announce your changes here on the forum, so other people can keep track of what’s going on and put their input in if they like.
I don’t understand your point about $\Box\top \leftrightarrow \top$. Certainly $\Box \top\to\top$ since everything implies $\top$, while I thought the “necessitation rule” said that if a proposition $p$ is provable in the empty context (which $\top$ certainly is), then so is $\Box p$. That makes perfect sense to me intuitively; while a particular truth may not be necessary, $\top$ is true by definition, so its truth is certainly necessary.
The dual version $\lozenge\bot \leftrightarrow \bot$ seems like it should follow by duality in classical modal logic, and it makes sense to me intuitionistically for the dual reason: $\bot$ is by definition false, so it shouldn’t even be possible for it to be true.
Huh? To say $\Box(\top)\leftrightarrow\top$ is not to say that necessary truth is the same as truth; it doesn’t follow that $\Box(A)\leftrightarrow A$ more generally. It’s just to note that the particular proposition $\top$ which is definitionally true is in fact necessarily true.
Note that $\Box(\top)\leftrightarrow\top$ is readily proven in the systems we’ve been discussing; left to right trivially by definition of $\top$, and right to left as the most basic example of the rule that has been called “Necessitation”.
On edit: I wrote this post before seeing Mike’s last post, which makes the same points.
From Mike in 110:
But what does $(\lozenge A \to \Box B) \vdash \Box(A \to B)$ mean [categorically]?
I don’t know exactly what I would want to say about it, but I’ll start by noting that in the possible world semantics, this arises from the fact that, given an adjoint triple $L \dashv M \dashv R$, we have a canonical natural transformation from $(L A \to R B)$ to $R(A \to B)$. [Indeed, the adjunctions give us this for any closed structure $\to$ so long as $M$ is correspondingly lax monoidal; of course, when the monoidal structure is categorical product, $M$ being a right adjoint makes it monoidal for free]
I gave in post 81 a categorical account of possible world semantics in terms of slices of LCCCs, but I realize now I might as well abstract it further. I think I might want to say that, for any ordinary (non-modal) propositional logic, the morally corresponding notion of $\mathrm{K}$ should include precisely the laws validated by models of the following form: $M$ and $N$ are parallel functors preserving all such ordinary (non-modal) propositional structure, $\Box$ is taken to be $N$ followed by a right adjoint to $M$, and $\lozenge$ is taken to be $N$ followed by a left adjoint to $M$. The difficulty, of course, is in characterizing this just in terms of $\Box$ and $\lozenge$, without explicitly considering $M$ and $N$ individually or needing to introduce their codomain.
From Mike in 111:
I’m still a little confused about the role of the “necessitation”
Does it help to think of “necessitation” as just part of lax monoidality? (If $\Box$ is a lax monoidal functor, and we have a morphism $m : 1 \vdash A$, then we obtain by functoriality a morphism $\Box(m) : \Box(1) \vdash \Box(A)$, to which we can precompose the lax monoidal unit law $1 \vdash \Box(1)$ to obtain $1 \vdash \Box(A)$. Thus we derive the “necessitation” rule. More generally, we have that $A_1, ..., A_n \vdash B$ yields $\Box A_1, ..., \Box A_n \vdash \Box B$; another way of thinking about this would be that $\Box$ acts as an internal filter on the lattice of propositions ordered by generic (i.e., non-contingent) entailment.)
Just please announce your changes here on the forum, so other people can keep track of what’s going on and put their input in if they like.
Mike, Thanks, I was about to ask about the social procedure for wiki changes.
I just edited modal logic adding “implication” (which cannot be defined as $(\neg A) \vee B$ in intuitionistic logic) to the operators of propositional logic.
There is a big issue with the page http://ncatlab.org/nlab/show/modal+logic as it assumes throughout “classical” propositional logic as the basis. So Definition 1 needs to be amended, etc.
about $\Box\top \leftrightarrow \top$ while the left to right direction is indeed the definition of true, the right to left depends, as you both noted, on whether you believe that Necessitation is the right way of formalizing the most basic notion of necessity. Note that if you use a Gentzen modelling of system K (which has been called a Scott version) what you have is that $\Gamma\vdash A$ implies $\Box \Gamma \vdash \Box A$ so it’s just a convention to say $\Box \emptyset \leftrightarrow \top$, but a good convention, it seems to me. I accept it, but definitely not prepared to accept that $\lozenge(\bot) \leftrightarrow \bot$. Again one half is the definition of $\bot$, but the other?
Sridhar
If $\Box$ is a lax monoidal functor, and we have a morphism $m : 1 \vdash A$, then we obtain by functoriality a morphism $\Box(m) : \Box(1) \vdash \Box(A)$, to which we can precompose the lax monoidal unit law $1 \vdash \Box(1)$ to obtain $1 \vdash \Box(A)$
indeed, but this is because you’re assuming (like I am, we like category theory) that $\Box$ needs to be a functor. Because we both agree that functors are the right way of doing proof-theory and category theory. If you go the way syntactic proof-theorists think about it, functoriality or Scott’s rule is harder to justify than Necessitation. Or so I am told.
ref #121 I would like to say that I don’t understand the following at all….
for any ordinary (non-modal) propositional logic, the morally corresponding notion of $\mathrm{K}$ should include precisely the laws validated by models of the following form: $M$ and $N$ are parallel functors preserving all such ordinary (non-modal) propositional structure, $\Box$ is taken to be $N$ followed by a right adjoint to $M$, and $\lozenge$ is taken to be $N$ followed by a left adjoint to $M$. The difficulty, of course, is in characterizing this just in terms of $\Box$ and $\lozenge$, without explicitly considering $M$ and $N$ individually or needing to introduce their codomain.
ref #119
it makes sense to me intuitionistically for the dual reason: $\bot$ is by definition false, so it shouldn’t even be possible for it to be true.
You’re thinking classically. We know there is a map from $\bot$ to $\lozenge(\bot)$ because there is such a map/proof for any object whatsoever; that is what being false means. But why is there a map going from $\lozenge(\bot)$ to $\bot$? Where’s my proof of that? It’s just convention.
ref #118 I have to correct myself again: both the equivalences are assumed by most modal logicians, but the systems I work with only assume $\Box(\top)\leftrightarrow \top$
Re: 125: It is an abstraction of possible world semantics: Imagine the domain of $M$ and $N$ as propositions indexed by possible worlds, with the codomain being propositions indexed by possible “moves” between worlds. $M$ and $N$ are the substitution functors arising from sending such a “move” to its source and destination, respectively. So $\Box p$ is taken to mean “Every move from the current world leads to a world satisfying $p$” while $\lozenge p$ is taken to mean “Some move from the current world leads to a world satisfying $p$”.
ref #128 Sridhar, in my book possible world semantics is just another semantics, on the same footing as algebraic semantics and/or categorical semantics. It has no special, explanatory role.
I like to think that by now I can tell when I’m thinking classically. There’s nothing non-constructive about saying that $\bot$ is not true, so I don’t see anything any less constructive about saying that $\bot$ is not even possibly true. I’m not saying you can prove it in any particular system; I’m just saying it makes sense to me intuitively. Certainly it makes no less sense to me than “necessitation”. More generally, just as $(\neg \exists) \leftrightarrow (\forall\neg)$ even constructively, it seems reasonable to me to expect that $(\neg\lozenge)\leftrightarrow (\Box\neg)$.
A note about constructivity: plenty of pages on the nLab are only written from a classical point of view, and we don’t regard them as being “wrong” in any way. If anyone feels motivated to add a discussion of the constructive case, that’s a great enhancement; but we do also try not to scare off classically-minded mathematicians, and we often start with a discussion of the classical case even on pages that include the constructive version as well.
I’ll start by noting that in the possible world semantics, this arises from the fact that, given an adjoint triple $L \dashv M \dashv R$, we have a canonical natural transformation from $(L A \to R B)$ to $R(A \to B)$.
Hmm, okay. So also in cohesion, we have $(ʃ A \to \flat B) \to \flat (A \to B)$?
Re: 131: I know little about cohesion, but the adjoint triple $ʃ \dashv \flat \dashv \sharp$ would induce $(ʃ A \to \sharp B) \to \sharp(A \to B)$.
Re: 130: I of course concur with Mike in finding the distribution of $\lozenge$ over coproducts (including the nullary coproduct $\perp$) and the identity $(\neg \lozenge) \leftrightarrow (\Box \neg)$ perfectly constructive, by analogy with the corresponding tradition for constructive $\exists$ and $\forall$. [We can also understand the property $\lozenge A \to \Box B \vdash \Box(A \to B)$ from this perspective, as $\exists x A(x) \to \forall x B(x) \vdash \forall x (A(x) \to B(x))$. This presentation has the benefit of also being understandable to non-category theorists. :)]
Hmm, I was thinking of applying $L\dashv M\dashv R$ to the adjoint triple of functors $\Pi_\infty \dashv \Delta \dashv \Gamma$ relating the cohesive topos to the base topos. That gives $(\Pi_\infty A \to \Gamma B) \to \Gamma(A\to B)$, and then upon applying $\Delta$ to get back into the cohesive topos we get $(ʃ A \to \flat B) \to \flat (A \to B)$. I thought that’s also what you were doing in the possible-worlds case, using the adjunction $\Sigma_W \dashv W^* \dashv \Pi_W$ relating the slice over $W$ to the slice over $1$, then applying $W^*$ to get back into the slice over $W$. But I guess you could also apply the same argument to the adjoint triple of modalities themselves, as you say.
Ah. As I said, I know little about cohesion; does $\Delta$ necessarily preserve exponentials? If it does, then I suppose that works.
However, the final step of postcomposing $\Delta$ isn’t analogous to what I was doing. Rather, starting from $L \dashv M \dashv R$, I obtain $(L A \to R B) \to R(A \to B)$, and then, given some $N$ parallel to $M$, I precompose $N$, obtaining $(L N A \to R N B) \to R(N A \to N B)$, which is as good as $(L N A \to R N B) \to R N(A \to B)$ when $N$ preserves exponentials [in the situation of interest, both $M$ and $N$ preserve all non-modal propositional structure].
In terms of possible worlds, I am not generally using the approach in your penultimate sentence (though it is what Urs wrote into the original article), which only works for $\mathrm{S}5$. Rather, as in post 81, I am supposing we have some parallel $M, N : E \to W$ (thought of as sending directed edges between worlds to their source and destination), giving us $\Sigma_M \dashv M^{*} \dashv \Pi_M$ relating the slice over $E$ to the slice over $W$. And to start from the slice over $W$, I preapply $N^{*}$; that is, $\Box = \Pi_M N^{*}$ and $\lozenge = \exists_M N^{*}$. (Urs’ approach is equivalent to the special case where every world has a unique edge to every world)
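Concretely, the recipe $\Box = \Pi_M N^{*}$ and $\lozenge = \Sigma_M N^{*}$ can be run on subsets over a small graph; in this sketch (the frame, edge names, and function names are my own toy data, not from the thread) the adjoints to pullback are just universal and existential images:

```python
# Worlds, and named edges ("moves") between them
W = {'u', 'v', 'w'}
E = {'e1': ('u', 'v'), 'e2': ('u', 'w'), 'e3': ('v', 'v')}
M = {e: s for e, (s, d) in E.items()}   # M: edge -> source world
N = {e: d for e, (s, d) in E.items()}   # N: edge -> destination world

def pull(f, S):
    # f*: pull a subset of worlds back to a subset of edges
    return {e for e in f if f[e] in S}

def Pi(f, T):
    # right adjoint to f*: w is in Pi_f(T) iff every edge over w lies in T
    return {w for w in W if all(e in T for e in f if f[e] == w)}

def Sigma(f, T):
    # left adjoint to f*: existential image
    return {f[e] for e in T}

def box(A):
    return Pi(M, pull(N, A))      # every move out of w lands in A

def dia(A):
    return Sigma(M, pull(N, A))   # some move out of w lands in A

print(box({'v'}))  # note 'w' has no outgoing edges, so Box holds vacuously
print(dia({'v'}))
```

On a graph with at most one edge between any two worlds this agrees with the usual relational semantics; the edge-based formulation is what generalizes to multigraphs and to the categorified setting.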
Yes, $\Delta$ preserves exponentials because a cohesive topos is locally connected. I see now what you meant, thanks.
There’s nothing non-constructive about saying that $\bot$ is not true, so I don’t see anything any less constructive about saying that $\bot$ is not even possibly true.
There is nothing non-constructive about saying that $\bot$ is not true. But saying that $\lozenge (\bot) \leftrightarrow \bot$ implies that $\lozenge(\bot) \to \bot$, which to be true constructively needs a map going from $\lozenge(\bot)$ to $\bot$, and I don’t know a necessary map there.
What I know is that rejecting $\lozenge(\bot) \to \bot$ as an axiom in possible worlds semantics is equivalent to assuming “impossible worlds”: this I learned from Michael Mendler when he showed me how to make a Kripke semantics for a system that does not satisfy the distribution of $\lozenge$ over disjunction, both binary and nullary, http://www.cs.bham.ac.uk/~vdp/publications/ck-paper2.pdf. Now, I did not know until today that Kripke invented “impossible worlds” to prove completeness of modal systems that don’t have the rule “Necessitation”; this I just read in the Wikipedia entry on “impossible worlds”. So your sentence
Certainly it makes no less sense to me than “necessitation”
seems quite appropriate.
And I do like my classical mathematics as much as the next person; I just prefer to know where I need it to be classical. So the entry on modal logic should have all the connectives and collapse them when dealing with classical logic, instead of not even being able to talk about the constructive case, I think.
@Mike Shulman:
I don’t really know much about modal logic, but I read SridharRamesh’s chess example yesterday and played around with it a bit, and came up with a proof of a simple statement. Thought I’d share: http://i.imgur.com/ZUHnPDE.png
Re: 136: It seems to me the system $\mathrm{CK}$ described in the linked paper could be understood as corresponding to possible world semantics where, instead of each world providing a standard model of ordinary propositional logic (what would classically be an ultrafilter on the Lindenbaum algebra of propositions), rather, each world only need provide some propositional theory (a filter on the Lindenbaum algebra). This in turn could be modeled by each world containing a set of countries and each country providing an ordinary propositional model in the standard way (so there are no inconsistent countries), while the accessibility relation is defined at the level of worlds and a world is considered to validate a proposition just in case all its countries do (so a world may validate $\perp$ if it has no countries; similarly we find that a world may validate $A \vee B$ without individually validating $A$ or validating $B$, and thus worlds accessing it may validate $\lozenge (A \vee B)$ without validating $(\lozenge A) \vee (\lozenge B)$).
[Neither of these are the semantics studied in the paper, which imposes intuitionistic logic by fiat rather than absorbing it from the background logic in which the semantics are interpreted (again using a birelational model, taking advantage of the interpretation of intuitionistic logic into classical $\mathrm{S}4$). I think it’s an interesting system, but not what I would (subjectively) consider the default intuitionistic correspondent of traditional $\mathrm{K}$, just as I would take the default intuitionistic correspondent of traditional predicate logic to have $\exists$ distribute over disjunction in the ordinary way even though one could analogously formalize a weaker logic and a broader notion of model obviating that property as well]
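For what it’s worth, the “countries” reading above can be prototyped in a few lines. In the sketch below (the model, the world names, and the tuple encoding of formulas are all my own invention; only an outermost $\lozenge$ is handled), world w1 has no countries and so validates $\perp$, and w3’s two countries disagree on which disjunct of $A \vee B$ holds:

```python
# Each world carries a list of "countries"; each country is a set of atoms.
countries = {'w0': [{'p'}], 'w1': [], 'w2': [{'p'}], 'w3': [{'A'}, {'B'}]}
access = {'w0': ['w1'], 'w1': [], 'w2': ['w3'], 'w3': []}

def holds(world, f):
    """f: ('atom', name) | ('bot',) | ('or', f, g) | ('dia', f)."""
    if f[0] == 'dia':
        # Diamond f: some accessible world validates f
        return any(holds(v, f[1]) for v in access[world])
    # a world validates a non-modal formula iff ALL its countries do
    return all(country_holds(c, f) for c in countries[world])

def country_holds(c, f):
    if f[0] == 'atom':
        return f[1] in c
    if f[0] == 'bot':
        return False
    if f[0] == 'or':
        return country_holds(c, f[1]) or country_holds(c, f[2])

# w0 validates Diamond(bot) because the country-less w1 validates bot...
print(holds('w0', ('dia', ('bot',))))
# ...and w2 validates Diamond(A or B) without validating Diamond A or Diamond B
print(holds('w2', ('dia', ('or', ('atom', 'A'), ('atom', 'B')))))
print(holds('w2', ('dia', ('atom', 'A'))))
```

So this toy model exhibits exactly the two failures of $\lozenge$-distribution (nullary and binary) that the $\mathrm{CK}$ discussion turns on.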
I’ll also note that the system $\mathrm{CK}$ (defined on page 4 of the linked paper as propositional logic + the necessitation rule + the axioms $\Box(A \to B) \to (\Box A \to \Box B)$ and $\Box(A \to B) \to (\lozenge A \to \lozenge B)$) does not have the property that, when its underlying propositional logic is made classical, it yields classical $\mathrm{K}$. This is clear by noting that, no matter what underlying logic it is taken upon, any theorem of $\mathrm{CK}$ remains a theorem when $\lozenge$ is replaced by $\Box$ throughout, but this property does not hold for classical $\mathrm{K}$.
Re: 138 As I said, when you ‘constructivize’ a notion you sometimes have many options. There are several non-equivalent constructive set theories, for example.
When it comes to constructive modal logic K, I take my cues from the Curry-Howard correspondence and the categorical semantics. They do not support the distribution of $\lozenge$ over disjunction: when you look at an ND-tree that ends in $\lozenge(A)\vee \lozenge(B)$, there is no obvious way to see a map getting there from $\lozenge(A\vee B)$. At least none that I can see.
The $\lozenge(A\vee B) \to \lozenge(A)\vee \lozenge(B)$ distribution can, nonetheless, be taken as a basic law (by fiat), and Simpson has a very pretty system that does so. But there is no categorical model of it, as far as I know. I will be delighted if you can provide one. Meanwhile the system without the distribution, which has been considered several times over in applications in CS – e.g. Nerode’s work – does have categorical models.
Re: 139 It is indeed true and known that the system CK does not become K when you make the basis classical. This corresponds to the idea that modalities also have their own classical (or not) character. When you want to make a system consisting of a propositional basis plus modalities classical, you have to make both components classical: the basis becomes classical propositional logic, and the constructive modalities have to become classical modalities.
Re: 141: What do you mean by making constructive modalities into classical modalities? (I’ll note that the $\Box$ fragment of $\mathrm{CK}$ does become the $\Box$ fragment of classical $\mathrm{K}$ when using a classical propositional basis)
Re: 140: I agree that you have many options when constructivizing a notion. So my argument is about a subjective sense of what seems to me the most natural intuitionistic correspondent of $\mathrm{K}$, rather than about any objective claim.
What do you consider to be a categorical model? As I’ve said, the notion of intuitionistic $\mathrm{K}$ I have in mind could be modelled by considering two models of intuitionistic logic (i.e., cartesian closed categories with coproducts) $C$ and $D$, two homomorphisms (i.e., cartesian closed functors preserving coproducts) $M$ and $N$ from $C$ to $D$, and right and left adjoints to $M$, taking $\Box$ and $\lozenge$ to be $N$ followed by the right and left adjoints to $M$, respectively. A particular case of this arises when considering a directed multigraph, with each proposition assigned some set at each node of the multigraph; the set assigned to $\Box A$ at a node is the product, over all edges out of that node, of the sets assigned to $A$ at the destination of those edges, and similarly for $\lozenge A$ replacing “product” with “sum”. (As these sets need not be subterminal, this is indeed a categorification.)
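The directed-multigraph model just described can be sketched concretely. Below is a minimal Haskell illustration (the particular graph, worlds, and valuation are made up for the example): each world is assigned a finite set (a list), $\Box A$ at a world is the product over outgoing edges of the sets at their targets, and $\lozenge A$ is the corresponding sum.

```haskell
-- A sketch of the multigraph semantics described above: sets (here, lists)
-- assigned to worlds, with Box as product and Diamond as sum over edges.
-- The graph and the sample valuation are invented for illustration.
type World = Int

-- edges out of each world; a world may have several edges to one target
edges :: World -> [World]
edges 0 = [1, 2]
edges _ = []

-- a "dependent type": a set assigned to each world
type Dep a = World -> [a]

-- Box A at w: the product over outgoing edges, i.e. tuples choosing an
-- element of A at each successor ('sequence' computes the cartesian product)
box :: Dep a -> World -> [[a]]
box a w = sequence [ a v | v <- edges w ]

-- Diamond A at w: the sum over outgoing edges, i.e. tagged elements
diamond :: Dep a -> World -> [(World, a)]
diamond a w = [ (v, x) | v <- edges w, x <- a v ]

-- a sample valuation: two elements at world 1, one at world 2
sample :: Dep Char
sample 1 = "xy"
sample 2 = "z"
sample _ = ""
```

As these fibers need not be subterminal, this is indeed a (small, discrete) instance of the categorification: `box sample 0` has two elements, one for each choice function over the successors of world 0.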
Re 142: For me, a categorical model is one as discussed in the Studia Logica paper: “The fundamental idea of a categorical treatment of proof theory is that propositions should be interpreted as the objects of a category and proofs (lambda-terms) should be interpreted as morphisms. The proof rules correspond to natural transformations between appropriate hom-functors. The proof theory gives a number of reduction rules, which can be viewed as equalities between proofs. In particular these equalities should hold in the categorical model.” See page 19 and following of On an Intuitionistic Modal Logic (with Gavin Bierman), Studia Logica 65:383–416, 2000.
Re 141: Indeed, the $\Box$ fragments do agree. But to make the modalities “classical” we need to make sure that they are inter-definable, i.e. that $\lozenge(A) = \neg (\Box (\neg A))$.
Your idea of a model is not proved to satisfy the equations forced on us by the proof theory; at least it is not obvious to me that it does. The problematic cases are again the distributions that we’ve been discussing. This is what I mean by saying that categorical models, for me, come from the Curry-Howard correspondence. (And, by the way, thanks for reading the paper on CK.)
On another linguistic note:
So we have, recall, that $W^\ast \prod_W$ is to be pronounced necessity and that $W^\ast \sum_W$ is to be pronounced possibility. Then: is there a good candidate for a natural language incantation for $\prod_W W^\ast$ ?
I have come to think that there is: this is randomness or contingency.
(This works a bit better in German; Notwendigkeit, Möglichkeit, Zufälligkeit.)
For instance, fixing the argument to be the prop universe $\Omega$ and varying in the space $W$ of “possible worlds”, then $\prod_{(-)} (-)^\ast \Omega$ is the power set operation. Changing $\Omega$ to something more linear gives more linear (probability-)distributions.
And not the least, on linear dependent types, it is $\prod_W W^\ast$, not $W^\ast \prod_W$, which governs path integral quantization via secondary integral transforms.
So in the ordinary nonlinear case, $\prod_W W^\ast (-)$ is just the function type $[W, -]$?
Yes!
Is there likely to be a good name for it then?
Yes, randomness.
You should best think of applying it to pointed types, such as $\Omega$ or such as stable types. Then $\prod_W W^\ast$ sends this to the collection of all distributions of possible worlds where in each world either “it holds” or it doesn’t. A term of $\prod_W W^\ast \Omega$ is a random subset of all possible worlds.
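The claim that in the ordinary nonlinear case $\prod_W W^\ast V$ is just the function type $[W, V]$ has an immediate Haskell rendering; the type names below are mine, chosen for the discussion, not established terminology:

```haskell
-- A sketch of the claim above: in the nonlinear case, applying
-- Prod_W W^* to a type v just gives the function type w -> v.
-- "Randomly" is a made-up name matching the suggested pronunciation.
type Randomly w v = w -> v

-- With v = Bool playing the role of the prop universe Omega, a term of
-- Randomly w Bool is exactly a "random subset" of the worlds: for each
-- world it says whether "it holds" there.
type RandomSubset w = Randomly w Bool

-- example: the subset of even-numbered worlds
evenWorlds :: RandomSubset Int
evenWorlds = even

-- changing Bool to something "more linear", e.g. [0,1]-valued weights,
-- gives distribution-like terms instead of mere subsets
weight :: Randomly Int Double
weight w = if even w then 0.75 else 0.25
```

Replacing `Bool` by `Double` here is the toy version of the suggestion to move from $\Omega$ to something more linear such as $[0,1]$.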
The understanding of a proposition as a set of possible worlds is commonplace. E.g., here
The possible-worlds analysis of propositions identifies a proposition with the set of possible worlds where it is true.
That’s what here we usually call “propositions as types”. But I suppose we are talking past each other now, for I am not sure what #149 is reacting to.
Try something more linear than $\Omega$ to get a better impression of randomness, such as $[0,1]$. Of course it works fully well only for stable types.
But if my little linguistic intuition here does not resonate with you, it’s not too important. The nForum might be the wrong place to try to sort it out further.
Maps to $[0, 1]$ appear in the adjunction on p. 2 of Duality for Convexity.
David, thanks for getting back to me.
That article considers maps to $[0,1]$ as homomorphisms of effect algebras, in the hope of capturing aspects of quantum physics. Despite exposure to many seminar hours of effort in Nijmegen, I remain to be convinced of the effect of effect algebras. If we are asking about the effect of categorical logic in general – and of the (co-)monads $\sum_W W^\ast$ and $\prod_W W^\ast$ in particular – on quantum physics, then only modesty keeps me from claiming that the only genuinely viable approach to that which I am aware of is the one via secondary integral transforms in dependent linear homotopy type theory. It is due to this that I know for a fact that the dependent linear semantics of $\sum_W W^\ast$ and $\prod_W W^\ast$ captures the phenomenon of quantum randomness observed in the real world.
But what I was after here now is more elementary, more Platonistic, maybe just linguistic. I am thinking that, just from an elementary observation of the basic nature of $\prod_W W^\ast$, and in view of the pronunciation of $W^\ast \prod_W$ as necessity, it should seem sensible to pronounce it as randomness.
It’s supposed to be utterly simple: you can try to feed $\prod_W W^\ast$ a proposition. But since this monad now acts on the empty context instead of on the context of possible worlds, as its dual siblings do, this comes out vacuous: either we feed it the true proposition, and it returns true, or we feed it the false proposition, and it returns false. Hence we conclude that the meaning of $\prod_W W^\ast$ is beyond first-order logic, beyond the Prop-truncation, and is genuinely type theoretic.
So we feed it any type $V$ that is not $(-1)$-truncated instead. What does it return? The type of $V$-valued functions on the space of possible worlds.
At this simplistic level, there is no normalization on these functions that would constrain them from $V$-valued “distributions” to probability distributions. But then, comparing to the case where $V = KU$, where I fully understand how this relates to genuine real quantum physics, I see that the idea to normalize these functions right away is too naive anyway, that it’s rather probability amplitudes that matter, that in addition there is some holography in the game, etc., and that we had better let the tao of mathematics lead the way than try to force it to submit to our prejudices.
So, disregarding such technicalities as normalization, I wonder: if you walk over to your colleague’s office and say “I have here a space of possible configurations and the space of functions over it, what might that be?”, wouldn’t most people think of the space of functions on a space of configurations as a space of random variables? What else?
Maybe I’ve developed a certain nervousness around the term ’random’ from reading Edwin Jaynes. He generally writes it in scare quotes, and attributes much said about randomness to the so-called ’mind projection fallacy’, projecting our ignorance into ideas of chance in the world.
Those factors which vary in an uncontrolled way are often called random, a term which we shall usually avoid, because in current English usage it carries some very wrong connotations.
To many, the term ‘random’ signifies on the one hand lack of physical determination of the individual results, but, at the same time, operation of a physically real ‘propensity’ rigidly fixing long-run frequencies. Naturally, such a self-contradictory view of things gives rise to endless conceptual difficulties and confusion throughout the literature of every field that uses probability theory. We note some typical examples in Chapter 10, where we confront this idea of ‘randomness’ with the laws of physics. (Probability Theory as Logic, 271)
I find this equating probability with plausibility in the face of incomplete knowledge quite plausible in our everyday dealings with the world. As for quantum physics, I don’t know. Jaynes seems to want to push it all the way:
incomplete knowledge is the only working material a scientist has! (61)
By the way, I came across another category-theoretic take on probability theory
This takes up Tom Leinster’s and Tom Avery’s account of the Giry monad as codensity monad.
That’s part of my motivation for finding good formalization of natural language: once we have it, then quandaries as you quote will find resolution.
We have seen this in the above discussion for the example of necessity and possibility: once we fix that these are formalized by $W^\ast \prod_W$ and by $W^\ast \sum_W$, respectively, then answers to all the traditional questions may systematically be worked out.
Regarding randomness: the mathematical concept of random variables is perfectly well established. The space of random variables is a suitable space of functions on the given space of possible worlds. Luckily, this concept is sensible and useful independently of any questions of physics that Jaynes raises; it’s a concept of pure thought, a topic of the doctrine of essence, not of nature. (Zufälligkeit translates both as “randomness” and as “contingency”; fewer people would come up with the second translation, though, I’d think.) Once we agree on a good modal formalization of this, then it is time to see how this will work itself out in nature, by following the formalism.
For what it’s worth, I have found two references that express the suggestion that I am making:
Notice that the monad in question, the one I am suggesting should be called Zufälligkeit, $\prod_W W^\ast$, is what computer scientists call the reader monad.
Verdier 14 is so kind to agree with what I wrote above:
The intuition behind the Reader monad, for a mathematician, is perhaps stochastic variables. A stochastic variable is a function from a probability space to some other space. So we see a stochastic variable as a monadic value.
The same point, not in a prose sentence but by way of an item list, is made in Toronto-McCarthy 10, slide 24.
Notice that on the slide before, slide 23, these authors are even so kind as to agree that a probability space deserves to be addressed as a space of “possible worlds”.
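The Verdier/Toronto-McCarthy reading of stochastic variables as reader-monad computations is easy to exhibit in Haskell, where the reader monad on a space `w` is just `((->) w)`, which already has a `Monad` instance in base. The sample space and the variables below are made up for illustration:

```haskell
-- A sketch of "random variables as reader monad computations":
-- a stochastic variable is a function out of the sample space Omega,
-- and new variables are built from old ones with monadic bind.
type Omega = Int               -- a made-up sample space: die faces 1..6

face :: Omega -> Int           -- a stochastic variable: the face shown
face = id

isEven :: Omega -> Bool        -- another variable, defined monadically
isEven = do
  r <- face                    -- bind in ((->) Omega): read off the world
  return (even r)
```

Evaluating such a computation at a particular world (`isEven 4`, say) is exactly evaluating the random variable at a sample point.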
once we fix that these are formalized by $W^\ast \sum_W$ and by $W^\ast \prod_W$, then answers to all the traditional questions may systematically be worked out
…but not everyone may agree that the answers thereby obtained are the right ones. (-:
The whole point that I am making is that after what used to be just opinions has been formalized, then there is no more room for disagreement than there is generally in mathematics.
That’s why analytic philosophers are eager to use formal modal logic for their arguments and dismissive of those colleagues who don’t reason formally. While that is the right idea, unfortunately most analytic philosophers forget to specify their boxes and circles and might confuse them with a flat or a sharp. If we add in that missing specification, by recognizing $\Box_W = W^\ast \prod_W$ etc., then their program of turning philosophy into mathematics actually works.
What I’m saying is that by choosing a particular formalization of an informal notion, you of course determine the answers to lots of questions; but someone else might prefer a different choice of formalization of the same informal notion, leading to different answers to the same questions. So formalization is an excellent tool for clarifying the exact points of disagreement, but it can’t always be relied on to eliminate the disagreement even if everyone involved is completely rational.
addendum to #155:
I now see that Toronto-McCarthy 10 also say it in explicit prose, in the tiny side note text on slide 35:
you could interpret this by regarding random variables as reader monad computations.
There must be a published article that goes with these talk slides, but I am not sure yet if I have found it.
[edit: ah, of course it’s here: (pdf). In 2.2 they call $\prod_W W^\ast$ the “random variable idiom” ]
If $W^\ast \sum_W$ and $W^\ast \prod_W$ are both important enough to have names, and so is $\prod_W W^\ast$, shouldn’t we give a name to $\sum_W W^\ast$? Admittedly it doesn’t seem too interesting
$\sum_W W^\ast (-) \equiv W \times (-) \,.$

One piece of magic is that where randomness $\prod_W W^\ast$ is used in formalization of quantum physics, it is forced to act essentially equivalently to $\sum_W W^\ast$ – see at dependent linear type theory the section on fundamental classes. That’s the “ambidexterity” of Hopkins-Lurie.
Hence we find that for purposes of modelling reality in the the objective logic, the base change adjoint triple to a space of possible worlds/possible field configurations induces three modalities
$\underset{Wirklichkeit}{\underbrace{Moeglichkeit\;\;Notwendigkeit\;\; Zufaelligkeit}}$

Of course that was noticed before.
In addition one may observe that the composite
$State = \prod_W W^\ast \sum_W W^\ast$

is the state monad, for $W$ the type of possible internal states of the computer. The standard way in Haskell to implement random numbers is to take $W = StdGen$, the type of random-number-generator states, and then work with $State\; StdGen\; Int$. This allows one to read random numbers (which are determined by the “current state”, the “currently realized world”) and then update the state to the next one.
If we don’t care about “updating to the next possible world” but are happy with the one we have, then we may reduce this to $Reader = \prod_W W^\ast$.
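The state-monad idiom just described can be made self-contained. The sketch below hand-rolls a minimal `State` monad and uses a toy linear-congruential generator in place of the random package’s `StdGen`, so it needs nothing beyond base; the constants and names are my own:

```haskell
-- A self-contained sketch of State Seed for random numbers: reading the
-- "currently realized world" and updating it to the next possible world.
type Seed = Int

-- step to the "next possible world": a toy LCG standing in for StdGen
next :: Seed -> Seed
next s = (1103515245 * s + 12345) `mod` 2147483648

-- a minimal State monad, so the sketch needs no packages beyond base
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))
instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State g =
    State (\s -> let (h, s') = f s; (a, s'') = g s' in (h a, s''))
instance Monad (State s) where
  State g >>= k = State (\s -> let (a, s') = g s in runState (k a) s')

-- read the current random number and update to the next world
randomInt :: State Seed Int
randomInt = State (\s -> (s, next s))

-- two successive draws: the second sees the updated world
twoDraws :: State Seed (Int, Int)
twoDraws = do a <- randomInt; b <- randomInt; return (a, b)
```

Dropping the update, i.e. keeping only the read, collapses this to the reader monad $\prod_W W^\ast$, as the comment above says.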
It occurs to me that there is the following interesting application of the dependent-type-theoretic formulation of the possibility/necessity modalities that, years ago, we wrote down in this entry (here).
Namely, to point out that when applied to linear dependent types, then the counit of the necessity modality as defined here embodies what in quantum information theory is a “measurement gate”.
Like so:
Let $\mathcal{H}$ be a linear type in the empty context, which is to be regarded as the Hilbert space of states that some quantum protocol is operating on. Think of the quantum teleportation-protocol as a good illustration in the following.
Now consider that in the process of the protocol, a measurement(-gate) is to be performed. Unwinding what the traditional song-and-dance boils down to, this means that:
a set (a non-linear type) $R$ of measurement results is chosen;
a direct sum-decomposition
$\mathcal{H} \simeq \underset{r : R}{\oplus} \mathcal{H}_r$

for an indexed set of direct summands $\mathcal{H}_\bullet = (r \mapsto \mathcal{H}_r)$
is chosen
(with $\mathcal{H}_r$ consisting of those quantum states that one may find the system to have “collapsed” to after the result $r$ has been measured);
a quantum circuit is drawn whose gates may depend on $r : R$ (in the quantum teleportation protocol as shown here, this dependency is through the variable called $\sigma$ there).
As a result, quantum circuits that include measurement gates are actually fiberwise morphisms in a category of bundles of Hilbert spaces over finite sets of (potential) classical outcomes.
Traditional literature never puts it this way (some texts on Quipper get close, but I haven’t seen it stated real clearly there, either), but it is clear from inspection that this is what happens.
Now one should ask: In this dependent-linear-typed picture of quantum information protocols: What is a measurement gate itself?
To see this, observe that for a finite index set of results $R$, the direct sum of $R$-indexed fin-dim vector spaces is equivalently their categorical product, hence their limit, hence their right base change along $R \to \ast$:
$\mathcal{H} \;\simeq\; \underset{r : R}{\prod} \mathcal{H}_r \,.$

To put this into the context of possible measurement results, context-extend it to $R$ (which operation I’ll denote by $R^\ast$, if you allow) to get
$R^\ast \mathcal{H} \;\simeq\; R^\ast \underset{r : R}{\prod} \mathcal{H}_r \,.$

Now the collapse postulate of quantum physics says that if we are in context $r$ then a measurement operates on quantum states as the linear projection $p_{r} \colon \mathcal{H} \xrightarrow{\;} \mathcal{H}_r$ onto its $r$th direct summand.
But, as a linear map dependent on $r$, this is just the $(R^\ast \dashv \underset{r : R}{\prod})$-counit
$R^\ast \mathcal{H} \;\simeq\; R^\ast \underset{r : R}{\prod} \mathcal{H}_r \xrightarrow{\phantom{--} \epsilon \phantom{--}} \mathcal{H}_\bullet$(!)
So if $\Box \,\coloneqq\, R^\ast \circ \underset{r : R}{\prod}$ denotes the corresponding comonadic modality (“necessarily so in possible worlds $R$”), then we discover that a quantum measurement gate with possible outcomes $\mathcal{H}_\bullet$ indexed by $R$, understood as an $R$-dependent linear map, is the necessity counit
$meas \;\colon\; \Box \mathcal{H}_\bullet \xrightarrow{\phantom{--} \epsilon \phantom{--}} \mathcal{H}_\bullet \,.$

That seems kind of cute.
For example, in the quantum teleportation protocol, this would be Alice’s measurement, and then Bob would go and do $r : R \;\vdash\; \sigma_r : \mathcal{H}_r \xrightarrow{\;\;} \mathcal{H}_r$ to get
$\Box \mathcal{H}_\bullet \xrightarrow{\phantom{-} meas \phantom{-}} \mathcal{H}_\bullet \xrightarrow{\phantom{-} \sigma_\bullet \phantom{-}} \mathcal{H}_\bullet$

etc.
As it goes with these Yoneda-like insights, it’s all locally trivial, but this looks like the right picture to make explicit:
Q: What is a quantum measurement gate seen in dependent linearly typed quantum programming languages?
A: The necessity counit on the dependent linear type of possible outcomes.
I was thinking that the finite-dimensional aspect of #163 had something to do with what we had discussed years ago (just above in this thread in #159-#162) about the reader and writer (co)monad coinciding. Now I see from quantum circuits via dependent linear types how this plays out with possibility and necessity coinciding.
Presumably there’ll be plenty from ’Quantization via Linear Homotopy Types’ to develop here.
One entertaining consequence of reading quantum measurement as the counit of an ambidextrous necessity-modality in a linear modal logic is that it formalizes a sensible version of the “many worlds interpretation” and of the Gell-Mann principle that “In quantum physics, everything that is possible actually happens (with some probability)” (cf. eg. arXiv:1907.04623):
$\array{ \llap{\text{The}}\;\text{possible} & \text{is} & \text{necessary} & \text{and hence} & \text{actualized} \\ \bigcirc \mathcal{H}_\bullet & \simeq & \Box \mathcal{H}_\bullet & \xrightarrow{\phantom{--} \epsilon \phantom{--}} & \mathcal{H}_{ \bullet \; \mathrlap{\text{with some probability}} } }$

I have moved to a new sub-section (“Relation to contingency”, here) an old paragraph which alluded to the two (co)monads on the other side of the base change.
Then I have added a diagram which shows at a glance all four (co)monads induced by a base change adjoint triple.
The notational conventions used in the new diagram currently don’t match the rest of the text. I’ll harmonize this later, but first I’d like to fine-tune the choices.
For instance I was wondering: What’s a good name for the other two monads as modalities? As (co)monads they are known as “reader monad” and “writer comonad”, but that does not harmonize with speaking of “qualities of dependent truth” such as “necessity” and “possibility”.
I have come to think now that their names as modalities could be “definitely” and “contingently” (which is what the graphics currently shows), but I am open for suggestions.
By the way, there are some natural ways of passing between the (co)Kleisli categories of the (co)monads on the left of the base change to those on the right. Have these transformations been discussed anywhere?
Re #166, in #144 you’re speaking of randomness or contingency for the reader monad. Could ’randomly’ work?
Then in #160, I ask whether the writer comonad should have a name, and you then point to its equivalence to randomness, at least in quantum situations, hence only three modalities there.
Are we to consider here the general nonlinear setting?
Yes, I felt like it would help to first better sort out the situation of the four modalities in the non-linear situation in general, and only then to look at the degenerate case of quantum logic where they become pairwise equivalent.
Here I was thinking of the alternatives “probabilistically”, “randomly”, and “indeterministically” for “contingently”.
But then I felt that not only “probabilistically” but also “randomly” carries a connotation of there being a probability distribution, which is not the case for the plain base change considered here.
(I have figured out how to bring in the actual density matrices and hence the probability distribution, and it still involves these modalities, but now applied to tensor squares of $(\mathbb{Z}/2 \curvearrowright \mathbb{C})$-modules, which I’ll type up elsewhere).
And “indeterministically” seemed too long a word. :-)
But mainly, I wanted a good antonym to the adjoint modality, and I found that “definitely” is a really good word for that adjoint: Because in order to inhabit $\coprod_{b : B} P_b$ we need to make explicit a definite world $b_0$ in which we inhabit $P_{b_0}$. It is in this way that inhabiting $\coprod_b P_b$ is different from saying “possibly” (“I know there is some world $b$ in which we can inhabit $P_b$”) and from “necessarily” (“here is an inhabitant of $P_b$ for every world $b$”).
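The contrast between “definitely”, “possibly”, and “necessarily” drawn here can be sketched in Haskell (with a made-up two-world base type and witness family): inhabiting the dependent sum means exhibiting a definite world together with a witness in its fiber, while “necessarily” would demand a witness for every world, which fails here because one fiber is empty.

```haskell
-- A sketch of "definitely" as a dependent pair over a base of worlds.
-- B and the fibers are invented for illustration.
data B = Monday | Tuesday deriving (Eq, Show)

fiber :: B -> [String]               -- the witnesses P_b at each world
fiber Monday  = ["rain"]
fiber Tuesday = []

-- "definitely": a dependent pair, naming a definite world b0 explicitly
-- together with a witness inhabiting P_{b0}
definitely :: (B, String)
definitely = (Monday, "rain")

-- check that a purported pair's witness really lives in its fiber
valid :: (B, String) -> Bool
valid (b, w) = w `elem` fiber b

-- "necessarily" would instead need a total section
-- necessarily :: B -> String   -- uninhabitable: fiber Tuesday is empty
```

Note that `valid (Tuesday, "rain")` fails: a witness must live in the fiber of the definite world it names.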
This way I arrived at looking for something akin to “randomly” but more antonymic to “definitely”, and so arrived at “contingently”.
When I ask Google for synonyms to “contingent” it says “chance, accidental, unforeseen, unforeseeable, unexpected, unpredicted, unpredictable, unanticipated, unlooked-for, random”, which seems quite appropriate.
Some linguists used the reader monad to represent terms where there is interpretative ambiguity. So we might have a type $Person$ and then a type $Person^I$ for some set of interpreters, $I$, where now, say, $best\;friend: Person^I$ is such that $best \;friend(i): Person$ is $i$’s best friend. Or we might have $here: Place^I$.
Philosophers of language might speak of an indexical, ’here’ is an indexical for place. $here: IndexicalPlace$? Or a contextual place.
I guess it depends on the role of the type of variation.
Yes, the reader monad expresses ambiguity resolved only by the specification of the extra datum of a term in the base type. Which is why “contingent” seemed good to me, if we want to rhyme on the general philosophical tune we are playing here.
But now I am tending to think that “contingent” is instead the right term for the types appearing under none of the modalities.
Namely, as for the adjoint pair upstairs, we need three adjectives to capture the modal situation: Upstairs we have:
$\array{ \mathllap{\text{if it is}\;} \text{necessary} &\text{then it is}& \text{actual} &\text{so it is}& \text{possible.} \\ \Box \mathscr{H}_\bullet &\xrightarrow{\phantom{-}\epsilon^{\Box}\phantom{-}}& \mathscr{H}_\bullet &\xrightarrow{\phantom{--} \eta^{\lozenge} \phantom{--}}& \lozenge \mathscr{H}_\bullet }$

Now, one needs to carefully think about the analogous interpretation downstairs. From the quantum interpretation we know that the plain types downstairs are the Hilbert spaces of states $\mathscr{H}$, so these are “contingent”, in that for a term in these we don’t know in which (classical) world it was realized. This is in contrast to the terms of $\star \mathcal{H}$, whose inhabitation comes with a certificate of a definite classical world. Also, the counit $\star \mathcal{H} \xrightarrow{ \epsilon } \mathcal{H}$ turns out to be “superposition” (it sends direct sums to actual sums), which is exactly what makes definite measurement basis states become contingent superpositions.
So downstairs it’s really something like this:
$\array{ \mathllap{\text{if it holds}\;} \text{definitely} &\text{then it holds}& \text{contingently} &\text{so it holds}& \text{??ly} \\ \star \mathscr{H}_\bullet &\xrightarrow{\phantom{-}\epsilon^{\star}\phantom{-}}& \mathscr{H}_\bullet &\xrightarrow{\phantom{--} \eta^{\bigcirc} \phantom{--}}& \bigcirc \mathscr{H}_\bullet }$

Or maybe like this, after all:
$\array{ \mathllap{\text{if it holds}\;} \text{deterministically} &\text{then it holds}& \text{contingently} &\text{so it holds}& \text{indeterministically} \\ \star \mathscr{H}_\bullet &\xrightarrow{\phantom{-}\epsilon^{\star}\phantom{-}}& \mathscr{H}_\bullet &\xrightarrow{\phantom{--} \eta^{\bigcirc} \phantom{--}}& \bigcirc \mathscr{H}_\bullet \,. }$

I have switched back to “randomness” after all. You are right, this works well. With this choice, we have these six (names of) modalities associated with a base change:
$\array{ necessity & actuality & possibility \\ definiteness & contingency & randomness }$

This works out quite well if one looks at the system of (co)units (here) understood as implications.
Here one has to read the “contingency” of the types downstairs in the sense that their independence of the context $B$ is due to ignorance of $B$: they try to be propositions dependent on $B$, but fail to resolve $B$. As in the joke
Q: Do you want beer or wine? $\;$ A: Yes.
where $Yes$ is correct, contingent on what is actually being served. Something like this.
What I had really wanted to do today is write out how measurement seen “downstairs” is the $\lozenge$-coaction on the cofree $\lozenge$-coalgebra which is $\mathscr{H} = (p_B)_! \mathscr{H}_\bullet$, but now it’s getting too late. Maybe tomorrow.
The outer modalities seem to be captured by determined/undetermined or definite/indefinite.
The middle one has the flavour of indiscriminate, or like the current expression of indifference, “whatever”.
How about indifferent?
The answers to your joke question in #172 could be phrased: beer (choosing one decisively); whatever; it depends.
The outer modalities seem to be captured by determined/undetermined or definite/indefinite.
Yes, I agree, all of this would work. I had settled now for “definite”/”random” but I am open to adjusting this as need be (except that it is tedious to relabel all my graphics)
The middle one has the flavour of indiscriminate, or like the current expression of indifference, “whatever”.
I would like the modal transformation “$\star \to id \to \bigcirc$” to have a nice pronunciation. Currently I read this as:
$\array{ \text{The definite} &\text{becomes}& \text{ contingent } &\text{which is}& \text{random} \\ \star &\longrightarrow& id &\longrightarrow& \bigcirc }$

This seems to work rather well when matched against the operational meaning of the actual (co)units (as shown here), but I am open to being convinced of better options.
Another desideratum is that the middle item here should (anti-)align well with its upstairs correspondent “actual”. I find that the pair “actual/contingent” flows well, while “actual/indiscriminate” might not be quite as compelling.
I have made explicit (here) the observation that the monadicity theorem applies to those adjoint triples which yield necessity/possibility-modalities — and that this helps us formalize the interpretation of “contingent types”:
Namely, the monadicity theorem says that the $P \in Type$ are equivalently those actual types $P_\bullet \in Type_B$ which have a possibility action, hence an implication
$possibly \; P_\bullet \longrightarrow actually \; P_\bullet \,.$

But a kind of proposition which actually holds as soon as it possibly holds, that’s clearly a contingent proposition.
Typo in the diagram after
More in detail, since $(p_B)^{\ast}$ is a conservative functor, the monadicity theorem (see this example) says
where there’s $\lozenge_J$ instead of $\lozenge_B$.
But a kind of proposition which actually holds as soon as it possibly holds, that’s clearly a contingent proposition.
That’s going to sound odd to analytic philosophers. When they’re taught propositional logic and how to evaluate the value of a compound proposition from the values of its atoms in a truth table, then contingent propositions are those which aren’t tautologies or contradictions, i.e., the proposition evaluates as both true and false depending on the line of the truth table. Then if they consider the lines of the truth table to be like worlds, then knowing that one line yields True (so possibly holds) would only have a bearing on another line (e.g., actual conditions) to the extent that it’s not contingent.
I go on about this in my book. Types with possibility actions are those where you know how to transport across the fibres. I give you an element in some fibre and you know the corresponding element in some other designated fibre. They come from base change, as though one carries across a standard (gauge) to compare fibres.
In the relative case, this transport only happens within an equivalence class, as with the jet comonad transporting in an infinitesimal neighbourhood.
Typo in the diagram
Thanks! Fixed now (here).
That’s going to sound odd to analytic philosophers.
Do you have a reference for where people use/define “contingency” in modal logic?
I go on about this in my book.
I had another look at section 4.1. But maybe I need a page number to see what you are referring to here.
Types with possibility actions are those where you know how to transport across the fibres.
And much more: They are actually constant across fibers. No?
(Above I argued this abstractly through appeal to the monadicity theorem. But it’s manifest for the free possibility algebras and those exhaust all possibility algebras since $p_!$ is evidently eso, in our case. )
The apparatus of possible worlds allows us to introduce a set of modal notions: a proposition is necessary just in case it is true in all possible worlds, a proposition is possible just in case it is true in some possible worlds, and it is contingent just in case it is true in some but not all possible worlds.
And much more: They are actually constant across fibers. No?
Yes, sure. One might think that the adjoint triple concerns the relationship between variability and constancy. The types on the right are constant, unvarying, indifferent (to B).
Thanks for the pointer to SEP!
I see, I wasn’t aware that “contingency” was already taken as a technical term, to mean $contingently P \;\coloneqq\; possibly P \;and\; possibly \;not\; P$. I may need to look for another term then, not to collide with established conventions. But to be frank, I wonder whether this convention really matches the everyday meaning of “contingent”. (See below.)
In any case, it would be great to have a citable source for this convention. I didn’t see “contingency” really defined in any one of the textbooks listed in our entry here, though now I just found one and added it to the list (Bradley & Swartz 1979, where it appears in chapter 6).
The types on the right are constant, unvarying, indifferent (to B).
But maybe not quite as tautological is the statement we were talking about: that these independent types are, in addition, equivalently those dependent types which carry the structure of possibility-algebras and/or the structure of necessity algebras.
I feel this captures the everyday meaning of “contingency” much better. For instance, it reflects the important real-world fact that if my airline guarantees me transport only contingent on whether they are overbooked, then I possibly lose but they necessarily win (“we are happy to serve you – or not”), highlighting that any contingent statement is necessarily true, in contrast to what modal logicians seem to believe. ;-)
I’m struggling to understand why you take that situation of the last paragraph to be one of contingency.
A warning at a holiday camp might say:
Outdoor activities are contingent on the weather.
We mean that something, say lawn tennis, may be called off if the rain’s too heavy; if it’s sunny, it goes ahead.
There’s some event that depends on some factor, and it genuinely depends, so that it will or will not happen depending on the value of some other variable.
The contract is contingent on approval by the Board of Trustees.
Collins dictionary entry
ADJECTIVE
1. Obsolete: touching; tangential
2. that may or may not happen; possible
3. happening by chance; accidental; fortuitous
4. unpredictable because dependent on chance
5. dependent (on or upon something uncertain); conditional
6. Logic: true only under certain conditions or in certain contexts; not always or necessarily true
7. Philosophy: not subject to determinism; free
Yes, take your examples:
Outdoor activities are contingent on the weather.
Now find yourself in Abu Dhabi in autumn. The weather is paradisiacal, day in, day out. It would be absurd to claim that therefore the above sentence becomes wrong.
The contract is contingent on approval by the Board of Trustees.
Hearing this, nobody will later complain if, after the contract is signed, one learns that the board had already approved it ahead of time, so that there was in fact no chance of it not being approved. Nobody will shout: “You said it was contingent on approval, we will sue you for approving of it in any case!”
I find these examples show that explicitly excluding the case where the condition on a contingent statement is secretly already met is something a real-world speaker would never think of.
I’ve had covid for about a fortnight now and haven’t fully lost the brain fog yet, so maybe it’s me, but I’m still struggling with this.
From the position of analytic philosophy, a contingent proposition is a proper, non-empty set of worlds, in other words, a nonconstant map, $p: W \to 2$. There is genuine dependence.
Your examples seem to me to say: if $p$ is only evaluated at a subset of $W$ for which $p(w)$ is always true, or at a particular $w$ for which $p(w)$ is true, wouldn’t we still call $p$ contingent?
Well, yes, $p$ is still contingent. But what of it? $p$ belongs to the $W$-dependent types. Its values belong to non-dependent types.
From the position of analytic philosophy, a contingent proposition is (…) a nonconstant map
Yes, and I have argued that the demand of non-constancy does not reflect the everyday use of “contingent”. Then I have provided some non-trivial argument that the everyday word usage of “contingent” is better formalized by possibility algebras.
You may agree with this or not, and I understand that I am being contrarian to decades-old logic folklore that seems never to have been questioned or scrutinized before.
I may adjust the wording in the article later, when I have the energy. Meanwhile, let’s either discuss the actual arguments I provided or else leave it at that, for otherwise we seem to be going in circles.
In the section on quantum modal logic, I have added the remark (here) that with the evident notational abbreviations ($\mathcal{H} \coloneqq p_! \mathscr{H}_\bullet \,\simeq\, p_\ast \mathscr{H}_\bullet$ and leaving an outermost $p^\ast$ notationally implicit) the quantum writer/reader monads appear as:
$\begin{array}{ccc} \star \mathcal{H} &\simeq& \Box \lozenge \mathscr{H}_\bullet \\ \bigcirc \mathcal{H} &\simeq& \lozenge \Box \mathscr{H}_\bullet \end{array}$

These identifications are borne out vividly in the analysis of quantum measurement/state preparation here, where they show that the quantum writer comonad records quantum measurements and the quantum reader monad reads out which quantum states to prepare.
I’m not appealing to the authority of philosophy. It’s about the meaning of the English word ’contingent’. We use it almost synonymously with ’dependent’, either in terms of some known factor (where generally we don’t yet know the value of that factor: “I’m planning for all contingencies”) or in terms of some unknown factor (close to ’random’).
If contingency is anywhere it’s on the side of the slice, $\mathbf{H}/W$ (when the source of dependency is known). Of course, there’s the trivial dependency brought about by context extension, $W^{\ast}$. These constant dependent types, $W^{\ast} X$, are the possibility algebras. So the non-dependent types can give rise to a kind of degenerate dependency, but the paradigmatically contingent concerns genuine dependence.
If this is just a matter of English language use, then sample some other people.
It occurs to me that this linguistic issue has a classical resolution: The antonym to “actuality” which we need is “potentiality” as in Aristotle, and Heisenberg had already proposed this as the modality of the quantum state, just what we need in labelling the diagram for Quantum Modal Logic.
I have made corresponding edits now: here.
With that linguistic stumbling block out of the way (?! :-), back to the math:
I realized that not only are the potential types the possibility-modules in actual types (which holds also classically)
but for linear/quantum types we have the converse: Actual types are the randomness/reader modules in potential types!
As a stand-alone statement this is now recorded at Reader monad – Examples – Quantum reader monad. Will now draw the respective diagram here.
I claim that the “dynamic lifting” of the Quipper community is taking the preimage of the necessity counit under the functor which regards potential types as the free randomness-modules.
Will be typing this out now…
Excellent! And what was I thinking? I’m supposed to be the philosopher. I have a colleague who endlessly goes on about Aristotle’s actuality and potentiality. I’ll add the SEP entry on this now.
The diagram (5) needs changing too, so that it’s “the definite becomes potential” and “the potential is random”.
Thanks. I’ll be adjusting the labels of diagram (5) now.
I keep feeling undecided whether “$definite \Rightarrow potential \Rightarrow random$” is linguistically as satisfactory as “$necessary \Rightarrow actual \Rightarrow possible$”.
But I grew fond of “random” for the reader modality, when I realized that the quantum reader, coinciding with the quantum coreader, is really close to what is called QRAM.
Yes, I was wondering about that. Maybe ’indefinite’ is better. “The potential is indefinite”.
So for Aristotle, a wooden bowl can be seen as a piece of wood one of whose potentialities (being a bowl) has been realised. As a piece of unformed wood, it has an array of potentialities (to be a bowl, to be a table, etc.). It is as yet indefinite as to form.
Or should we be returning to Hegel, Transition to Actuality? Very little about potentiality there.
Too much for my poor brain at this moment.
I find “$potential \Rightarrow indefinite$” works well in itself, but it clashes badly with “$definite \Rightarrow potential$”.
Maybe we need to abandon “definite” as the linguistic value of the $\star$-comonad. The problem is that “definite” equals “in-in-definite”, but we need a word for “co-in-definite”, instead. (Like the necessary is the co-im-possible :-)
On the other hand, given that people got used to the word “coinvariant”, maybe “coindefiniteness” is not such a stretch, in particular since both are aspects of left base change.
In any case, I have now typeset the full Eilenberg-Moore-factorization in the dependent linear case: here
added missing cross-link with epistemic modal logic