Am in Oxford at the moment, last night at dinner it turns out my table neighbour is a local philosopher using modal logic to reason about metaphysics.
How he defines his modal operators, I ask, whether he takes them to be anything that satisfies S4. Ah no, he says, that’s something only pure logicians would do who lost contact with the meaning of their symbols, he would always consider necessity being formalized by universal quantification over possible worlds.
(Otherwise, I’ll stay out of this discussion now, it seems a humongous social effort for such a tiny trivial point, I am exhausted and frustrated about it. Looking through the latest I am relieved to see that Mike does actually agree, but remain puzzled, as on previous similar occasions, what it is about my wording that would be so very different from other wordings of what seems to be the same point to me. In any case, what counts is that the $n$Lab entry is okay now. And thanks for somebody finally catching the only actual technical point. In fact Noam Zeilberger on g+ was first, and I was going to fix it in the entry, but then I had other things to do. )
Urs, Mike, I’ve changed the entry as SridharRamesh is right in #74, we don’t have the distribution of possibility over implication. We know that $\lozenge$ is a functor, but this says $p\to q \to (\lozenge p \to \lozenge q)$, not what was written.
The traditional name for the rule that SridharRamesh is calling “generalization” is “Necessitation”, Nec to its friends.
$\lozenge$ preserves finite coproducts and $\Box$ finite products. Yes! Because $\lozenge$ is a kind of existential and $\Box$ is a kind of universal quantifier. You don’t need to like possible worlds for any of this…
Mike asks
why would everything that’s true be necessarily true? why is Nec a good rule? Valeria’s answer: Because the rule doesn’t say that anything that is true is necessarily true. The rule says that anything that is true in the *empty context*, i.e. from NO hypotheses, i.e. every assumption you have ever made has been discharged and it’s a closed term in the type theory, is necessarily true. Axioms are bad ways of thinking about proofs; Gentzen rules and Natural Deduction are much better, and using these you can see that the rule “if $\Box\Gamma\vdash A$ then $\Box \Gamma \vdash \Box A$” is a very bad rule in ND, as it breaks the substitutivity of terms; you do need to use the more complicated rules that we discuss in our paper.
SridharRamesh, Mike: The operator $\Box$ is defined by $(\Box A)(w) = \forall v\, (R(w, v) \to A(v))$ [that is, $\Box A$ holds at a world just in case $A$ holds at every world accessible from there]. The operator $\lozenge$ is defined by $(\lozenge A)(w) = \exists v\, (R(w, v) \wedge A(v))$ [that is, $\lozenge A$ holds at a world just in case $A$ holds at some world accessible from there].
NO! This is the case for classical modal logic. If you want to do categorical proof theory, i.e. if you want to have functors over a CCC (not over a poset), you need to do intuitionistic modal logic, and the conditions on the accessibility relation are more complicated: they are of the form $\forall\forall$ and $\forall\exists$.
and yes, S4 (intuitionist or not) DOES satisfy the deduction theorem that “If $\Gamma, A\vdash B$ then $\Gamma\vdash A\to B$.”
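For the classical reading of the quoted clauses, at least, a toy check is easy to run. This is my own illustrative sketch (the frame and valuation are made up, not taken from any of the papers under discussion), representing a proposition as the set of worlds where it holds:

```python
# Toy check of the quoted Kripke clauses in their classical reading.
# Frame and valuation are made up for illustration.

def box(worlds, R, A):
    """(Box A)(w): A holds at every world accessible from w."""
    return {w for w in worlds if all(v in A for v in worlds if (w, v) in R)}

def diamond(worlds, R, A):
    """(Diamond A)(w): A holds at some world accessible from w."""
    return {w for w in worlds if any(v in A for v in worlds if (w, v) in R)}

worlds = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 1)}   # accessibility: 0 sees 1 and 2; 1 sees itself
A = {1}                        # A holds at world 1 only

print(box(worlds, R, A))       # {1, 2}: world 2 has no successors, so vacuously
print(diamond(worlds, R, A))   # {0, 1}
```

Note the vacuous case: a world with no successors satisfies $\Box A$ for every $A$ and $\lozenge A$ for none.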
Sridhar said:
Yeah; as noted before, the words “necessary” and “possible” can be used in many different senses, and I think many of the ordinary language ways in which they are used do not actually give any significant sense to iterated modalities like “possibly possible”.

and I agree heartily; hear, hear.
Re: 102: It is presumably a typo, but just to clarify, we do not have $(p \to q) \to (\lozenge p \to \lozenge q)$. What functoriality of $\lozenge$ amounts to is the rule that $\lozenge A \vdash \lozenge B$ follows from $A \vdash B$. We also have the internal theorem $\Box(p \to q) \to (\lozenge p \to \lozenge q)$.
[As for the other possible parenthesized parsing $p \to (q \to (\lozenge p \to \lozenge q))$, we do have this once we accept $\mathrm{T}$, but just as a trivial consequence of $q \to \lozenge q$]
Re: 103: I don’t see how any of what I said is problematic as a description of possible world semantics of modal logic within an intuitionistic background. Note that I am not simultaneously using the possible worlds to give Kripke semantics for the intuitionistic background logic (i.e., I am not making any use of the translation of intuitionistic logic into classical S4). [More explicitly, letting $F_w(p)$ denote the truth value of the claim that $p$ holds at world $w$, I am taking $F_w$ to distribute over all non-modal logical operations in the ordinary way; e.g., to ascertain whether a world satisfies $p \to q$, one only needs to know information about that world, not about any other world, as $F_w(p \to q) = F_w(p) \to F_w(q)$. I am only using the relations between possible worlds to interpret the modal operators $\Box$ and $\lozenge$.]
As for “the deduction theorem”, as I said, I was referring to concluding $A \vdash B$ from the ability to derive $\vdash B$ from $\vdash A$. This fails for us, since we can derive $\vdash \Box A$ from $\vdash A$, but do not have $A \vdash \Box A$. We do still have the universal property of implication, that $\Gamma, A \vdash B$ and $\Gamma \vdash A \to B$ are equivalent, but this was not what I was calling “the deduction theorem”.
Re: 102 indeed you’re right, what we have is functoriality of Possibility. we have the internal theorem $\Box(p \to q) \to (\lozenge p \to \lozenge q)$ and many other variants of that, such as $\Box (A\to \lozenge B) \to (\lozenge A\to \lozenge B)$, etc…
Re #103 yes I know you’re not
I am not simultaneously using the possible worlds to give Kripke semantics for the intuitionistic background logic

but I think you need to. Possible worlds are not my area of expertise; I am quoting others who seem to me to say that this simple semantics is not available for intuitionistic modal logic. The basic reference is Simpson’s PhD thesis.
I don’t see anything problematic about the possible world semantics I described for intuitionistic modal logic (when interpreted within an intuitionistic background), but I’ll take a look at Simpson’s thesis.
(For what it’s worth, the markup system here requires the explicit inclusion of a blank line before the next paragraph in order to end a quotation)
Having taken a look, it seems to me the account of possible world semantics I gave for intuitionistic modal logic within an intuitionistic background logic is precisely the same as the one Simpson is primarily concerned with in his thesis (introduced in section 3.4, under the title “Our approach to intuitionistic modal logic”).
I’m trying to fix the mathematical content of the page based on this discussion. I had a look at the cited paper of Bierman & de Paiva, and now I’m more confused than before.
Question for Sridhar and Valeria: you’ve been saying that $\Box$ preserves products and $\lozenge$ preserves coproducts. But I don’t see that in the paper; the semantics described seem to say that $\Box$ is a lax monoidal comonad and $\lozenge$ is a $\Box$-strong, or strong, monad.
Question for Valeria, I guess: Can the intuitionistic axioms/rules for $\lozenge$ be stated without reference to $\Box$?
Question for everyone: Our page currently says “the S4 axioms just say (Bierman & de Paiva 92) that $\Box$ is an (idempotent) comonad and $\lozenge$ is an (idempotent) monad”. But that’s not true, is it? They say more than that (the previous question is about how much more).
Question for Urs: I want to add some discussion of the fact that $\Box$ and $\lozenge$ are not “internal” modalities like $\sharp$ and $ʃ$, but are both (even the monadic $\lozenge$) “external” like $\flat$. Syntactically this means that, as we’ve mentioned above, their rules don’t apply equally in all contexts. Do we have a discussion of this distinction anywhere on the nLab yet? It would go naturally at modal type theory I guess, with maybe a mention at modal logic.
Re: Question 1: From a quick glance* at the Bierman & de Paiva paper, $\Box$’s product-preservation (within the lattice of propositions; unless otherwise indicated, I am always speaking about the traditional non-categorified context) can be deduced from what are on page 4 called “Axiom 10” and “Rule Box”, as in post 81 of this thread. Put another way, a lax monoidal functor is in fact product-preserving upon passage to preorder reflection.
$\lozenge$’s coproduct-preservation appears not to be part of the system considered in that paper, considering the remark on page 26. I do not know why, but perhaps this is by analogy to classical $\mathrm{S}4$, which needn’t explicitly axiomatize $\lozenge$ as coproduct-preserving (for, classically, it results automatically by the identification of $\lozenge$ with $\neg \Box \neg$). I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context, as does Simpson in his aforementioned thesis (see “Figure 3-6: Axiomatization of IK” on page 52 of said thesis); note that this is demanded by the possible world semantics I gave before.
(*: My preemptive apologies if I’ve misinterpreted any aspect of the Bierman & de Paiva paper, which I have not read in detail yet. I trust Valeria will correct any such misunderstandings.)
Re: Question 3: The answer is “You are correct; the $\mathrm{S}4$ axioms say more than that”. The $\mathrm{K}$ axioms (when taken to match the most straightforward possible world semantics) say that $\Box$ is a product-preserving functor, $\lozenge$ is a coproduct-preserving functor, and the two are related via $(\lozenge A \to \Box B) \vdash \Box(A \to B) \vdash (\lozenge A \to \lozenge B)$. The $\mathrm{S}4$ axioms add on top of that that $\Box$ is a comonad and $\lozenge$ is a monad (automatically idempotent in the non-categorified context).
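In the classical possible world semantics (a proposition as a set of worlds), these $\mathrm{K}$ laws can be verified exhaustively on a small frame. A toy sketch of my own, with a made-up frame, checking meet/join preservation and the entailment chain for all pairs of propositions:

```python
from itertools import chain, combinations, product

def box(worlds, R, A):
    return {w for w in worlds if all(v in A for v in worlds if (w, v) in R)}

def diamond(worlds, R, A):
    return {w for w in worlds if any(v in A for v in worlds if (w, v) in R)}

def imp(worlds, X, Y):
    """Classical pointwise implication: (X -> Y)(w) = w not in X, or w in Y."""
    return (worlds - X) | Y

worlds = frozenset({0, 1, 2})
R = {(0, 1), (0, 2), (1, 1), (2, 0)}          # an arbitrary made-up frame
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(worlds, k) for k in range(len(worlds) + 1))]

for A, B in product(subsets, repeat=2):
    # Box preserves finite meets, Diamond preserves finite joins
    assert box(worlds, R, A & B) == box(worlds, R, A) & box(worlds, R, B)
    assert diamond(worlds, R, A | B) == diamond(worlds, R, A) | diamond(worlds, R, B)
    # (Diamond A -> Box B)  |-  Box(A -> B)  |-  (Diamond A -> Diamond B)
    assert imp(worlds, diamond(worlds, R, A), box(worlds, R, B)) \
        <= box(worlds, R, imp(worlds, A, B))
    assert box(worlds, R, imp(worlds, A, B)) \
        <= imp(worlds, diamond(worlds, R, A), diamond(worlds, R, B))

# Nullary cases: Box preserves the empty meet, Diamond the empty join
assert box(worlds, R, worlds) == worlds
assert diamond(worlds, R, frozenset()) == frozenset()
print("all K laws hold on this frame")
```

Of course a finite check on one frame proves nothing in general; it is just a concrete illustration of which laws the semantics validates.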
Put another way, a lax monoidal functor is in fact product-preserving upon passage to preorder reflection.
Ah, right, thanks. And it took me until just now to realize that $\Box(A\to B) \vdash (\Box A \to \Box B)$ is just the axiom for a lax closed functor, so of course it’s the same as being lax monoidal.
What does $(\lozenge A \to \Box B) \vdash \Box(A \to B) \vdash (\lozenge A \to \lozenge B)$ mean categorically? The second entailment looks like it has to do with the Bierman–de Paiva notion of “$\Box$-strong monad”. In fact, just as a strong endofunctor of a closed monoidal category $V$ is the same as a $V$-enriched functor $V\to V$, I guess a $\Box$-strong endofunctor is probably the same as a $V$-enriched functor $\Box_\bullet V\to V$, where $\Box_\bullet$ is the induced endomorphism of $V$-enriched categories induced by the monoidal functor $\Box:V\to V$.
But what does $(\lozenge A \to \Box B) \vdash \Box(A \to B)$ mean?
I did a bit of editing at necessity and possibility, but I’m still a little confused about the role of the “necessitation”, and also that last axiom mentioned above.
ref #107 yes, your account is exactly Simpson’s, and Simpson has NO categorical model for his framework, as yet. More than about time to put this right, I say. The notion of deductive system that he uses has “worlds” as part of the syntax.
$\Box$ preserves products and $\lozenge$ preserves coproducts.
yes, this has always been folklore both in traditional modal logic (where it has been stated zillions of times) and in categorical logic. I’m sure it is in one of the other papers like Reyes, since you say it’s not in ours. but it ought to be there too.
Can the intuitionistic axioms/rules for $\lozenge$ be stated without reference to $\Box$?
No, this was the point I was making somewhere above. Box is more fundamental than Diamond. The same way that conjunction is more fundamental than disjunction in traditional categorical formulations of intuitionistic logic.
the S4 axioms just say (Bierman & de Paiva 92) that $\Box$ is an (idempotent) comonad and $\lozenge$ is an (idempotent) monad
this is true in first approximation, but yes, you’re right there is more meaning, when you dig deeper.
I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context, as does Simpson
I would prefer not to do it. This is one of the choices that constructivization forces on you. Thinking about the poset case is very useful, but it only gets you so far. Please have a look at the last section of the Bierman–de Paiva paper.
Preservation of the coproduct by diamonds is not part of the Natural Deduction of the system; it has to be imposed “ad hoc” and by force, to make the algebraic semantics pretty.
also $\Box$ is an (idempotent) comonad and $\lozenge$ is an (idempotent) monad
the idempotency issue is not a triviality. Idempotency is not forced on you by the syntax; it actually is a modelling choice, and a bad one from the proof-theoretical point of view. I get my intuitions mostly from what the syntax forces you to do, as described in excruciating detail in the technical report “Term Assignment for Intuitionistic Linear Logic” http://logic-forall.blogspot.com/2014/07/bacat-yes.html.
Valeria, as long as you format your comments in Markdown+Itex, you get to use LaTeX in the usual way, and if you want to quote some text above, you can type e.g.
> I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context
and this will produce highlighted text:
I would prefer to axiomatize $\lozenge$ as coproduct-preserving even in the intuitionistic context
Note that you can click on Source to copy the text of someone else.
I’m lost as to why we’re turning this page into a general treatment of modal logic. Maybe it should have a special name, but the line Urs was adopting is in the nPOV spirit of the wiki, and is going to be lost under a wealth of detail. If I might add, the idea of HoTT that propositions are only some types seems to me just the kind of thing worth pursuing, and I was pointing to some things philosophers had treated, e.g., rigid designators as terms in necessary types.
Developing this line will get lost under what’s there now about propositional S4. Why not have that on another page? We have modal logic which needs a work over, and there are several related pages there, and the logic S4(m), and so on.
@David, good point. I just didn’t want the page to be making incorrect claims about S4.
Valeria, as long as you format your comments in Markdown+Itex, you get to use LaTeX in the usual way
Thanks for the tip, Todd_Trimble (long time, no see). I’m only learning the Markdown+Itex that is necessary for the job at hand… (also thanks Sridhar for a similar tip before)
and yes, I agree with Mike_Shulman that I did not want to see the page containing incorrect stuff, but I do want to see it discussing the basics of modalities, as I believe they’re important.
Didn’t know about the S4 page, but also noticed things in the modal T page that are not correct… oh dear.
Notice Mike_Shulman that the page is not correct, yet.
The nullary version $\Box(\top) \leftrightarrow \top$ is not assumed by many modal logicians. The other nullary version, $\lozenge(\bot) \leftrightarrow \bot$, is mostly assumed.
But neither makes much sense from the point of view of raw intuitions: why would necessary truth be the same as truth? Why would the possibly false be the same as the false? We want to create modalities exactly so that we can tell the difference between the necessary truths and the contingent ones, between whatever is possibly false and the actually false.
Valeria, do feel free to correct things that you find that are wrong! The great thing about a wiki is that anyone can do that. Just please announce your changes here on the forum, so other people can keep track of what’s going on and put their input in if they like.
I don’t understand your point about $\Box\top \leftrightarrow \top$. Certainly $\Box \top\to\top$ since everything implies $\top$, while I thought the “necessitation rule” said that if a proposition $p$ is provable in the empty context (which $\top$ certainly is), then so is $\Box p$. That makes perfect sense to me intuitively; while a particular truth may not be necessary, $\top$ is true by definition, so its truth is certainly necessary.
The dual version $\lozenge\bot \leftrightarrow \bot$ seems like it should follow by duality in classical modal logic, and it makes sense to me intuitionistically for the dual reason: $\bot$ is by definition false, so it shouldn’t even be possible for it to be true.
Huh? To say $\Box(\top) \leftrightarrow \top$ is not to say that necessary truth is the same as truth; it doesn’t follow that $\Box(A) \leftrightarrow A$ more generally. It’s just to note that the particular proposition $\top$ which is definitionally true is in fact necessarily true.
Note that $\Box(\top) \leftrightarrow \top$ is readily proven in the systems we’ve been discussing: left to right trivially by definition of $\top$, and right to left as the most basic example of the rule that has been called “Necessitation”.
On edit: I wrote this post before seeing Mike’s last post, which makes the same points.
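In the classical possible world semantics, at least, both nullary laws come out true at every world regardless of the accessibility relation, essentially because an empty universal quantification is true and an empty existential one is false. A toy exhaustive check of my own over all relations on a two-world frame:

```python
from itertools import combinations

def box(worlds, R, A):
    return {w for w in worlds if all(v in A for v in worlds if (w, v) in R)}

def diamond(worlds, R, A):
    return {w for w in worlds if any(v in A for v in worlds if (w, v) in R)}

worlds = {0, 1}
pairs = [(w, v) for w in worlds for v in worlds]

# Try every possible accessibility relation on a two-world frame
for k in range(len(pairs) + 1):
    for R in combinations(pairs, k):
        assert box(worlds, set(R), worlds) == worlds    # Box(T) <-> T at every world
        assert diamond(worlds, set(R), set()) == set()  # Diamond(F) <-> F at every world
print("Box(T) = T and Diamond(F) = F in all 16 frames")
```

This says nothing, of course, about whether a constructive proof system ought to adopt these laws; it only shows they hold in this particular (classical) semantics.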
From Mike in 110:
But what does $(\lozenge A \to \Box B) \vdash \Box(A \to B)$ mean [categorically]?
I don’t know exactly what I would want to say about it, but I’ll start by noting that in the possible world semantics, this arises from the fact that, given an adjoint triple $L \dashv M \dashv R$, we have a canonical natural transformation from $(L A \to R B)$ to $R(A \to B)$. [Indeed, the adjunctions give us this for any closed structure $\to$ so long as $M$ is correspondingly lax monoidal; of course, when the monoidal structure is categorical product, $M$ being a right adjoint makes it monoidal for free]
I gave in post 81 a categorical account of possible world semantics in terms of slices of LCCCs, but I realize now I might as well abstract it further. I think I might want to say that, for any ordinary (non-modal) propositional logic, the morally corresponding notion of $\mathrm{K}$ should include precisely the laws validated by models of the following form: $M$ and $N$ are parallel functors preserving all such ordinary (non-modal) propositional structure, $\Box$ is taken to be $N$ followed by a right adjoint to $M$, and $\lozenge$ is taken to be $N$ followed by a left adjoint to $M$. The difficulty, of course, is in characterizing this just in terms of $\Box$ and $\lozenge$, without explicitly considering $M$ and $N$ individually or needing to introduce their codomain.
From Mike in 111:
I’m still a little confused about the role of the “necessitation”
Does it help to think of “necessitation” as just part of lax monoidality? (If $\Box$ is a lax monoidal functor, and we have a morphism $m : 1 \vdash A$, then we obtain by functoriality a morphism $\Box(m) : \Box(1) \vdash \Box(A)$, to which we can precompose the lax monoidal unit law $1 \vdash \Box(1)$ to obtain $1 \vdash \Box(A)$. Thus we derive the “necessitation” rule. More generally, we have that $A_1, ..., A_n \vdash B$ yields $\Box A_1, ..., \Box A_n \vdash \Box B$; another way of thinking about this would be that $\Box$ acts as an internal filter on the lattice of propositions ordered by generic (i.e., non-contingent) entailment.)
Just please announce your changes here on the forum, so other people can keep track of what’s going on and put their input in if they like.
Mike, Thanks, I was about to ask about the social procedure for wiki changes.
I just edited modal logic adding “implication” (which cannot be defined as $(\neg A) \vee B$ in intuitionistic logic) to the operators of propositional logic.
There is a big issue with the page http://ncatlab.org/nlab/show/modal+logic as it assumes throughout a “classical” propositional logic as the basis. So definition 1 needs to be amended, etc.
about $\Box\top \leftrightarrow \top$ while the left to right direction is indeed the definition of true, the right to left depends, as you both noted, on whether you believe that Necessitation is the right way of formalizing the most basic notion of necessity. Note that if you use a Gentzen modelling of system K (which has been called a Scott version) what you have is that $\Gamma\vdash A$ implies $\Box \Gamma \vdash \Box A$ so it’s just a convention to say $\Box \emptyset \leftrightarrow \top$, but a good convention, it seems to me. I accept it, but definitely not prepared to accept that $\lozenge(\bot) \leftrightarrow \bot$. Again one half is the definition of $\bot$, but the other?
Sridhar
If $\Box$ is a lax monoidal functor, and we have a morphism $m : 1 \vdash A$, then we obtain by functoriality a morphism $\Box(m) : \Box(1) \vdash \Box(A)$, to which we can precompose the lax monoidal unit law $1 \vdash \Box(1)$ to obtain $1 \vdash \Box(A)$
indeed, but this is because you’re assuming (like I am, we like category theory) that $\Box$ needs to be a functor. Because we both agree that functors are the right way of doing proof-theory and category theory. If you go the way syntactic proof-theorists think about it, functoriality or Scott’s rule is harder to justify than Necessitation. Or so I am told.
ref #121 I would like to say that I don’t understand the following at all….
for any ordinary (non-modal) propositional logic, the morally corresponding notion of $\mathrm{K}$ should include precisely the laws validated by models of the following form: $M$ and $N$ are parallel functors preserving all such ordinary (non-modal) propositional structure, $\Box$ is taken to be $N$ followed by a right adjoint to $M$, and $\lozenge$ is taken to be $N$ followed by a left adjoint to $M$. The difficulty, of course, is in characterizing this just in terms of $\Box$ and $\lozenge$, without explicitly considering $M$ and $N$ individually or needing to introduce their codomain.
ref #119
it makes sense to me intuitionistically for the dual reason: $\bot$ is by definition false, so it shouldn’t even be possible for it to be true.
you’re thinking classically. We know there is a map from $\bot$ to $\lozenge(\bot)$ because there is one such map/proof for any object whatsoever; that is what being false means. But why should there be a map going from $\lozenge(\bot)$ to $\bot$? Where’s my proof of that? It’s just convention.
ref #118 I have to correct myself again: both the equivalences are assumed by most modal logicians, but the systems I work with only assume $\Box(\top)\leftrightarrow \top$
Re: 125: It is an abstraction of possible world semantics: Imagine the domain of $M$ and $N$ as propositions indexed by possible worlds, with the codomain being propositions indexed by possible “moves” between worlds. $M$ and $N$ are the substitution functors arising from sending such a “move” to its source and destination, respectively. So $\Box p$ is taken to mean “Every move from the current world leads to a world satisfying $p$” while $\lozenge p$ is taken to mean “Some move from the current world leads to a world satisfying $p$”.
ref #128 Sridhar, in my books possible world semantics is just another semantics, on the same footing as algebraic semantics and/or categorical semantics. It has no special, explanatory role.
I like to think that by now I can tell when I’m thinking classically. There’s nothing non-constructive about saying that $\bot$ is not true, so I don’t see anything any less constructive about saying that $\bot$ is not even possibly true. I’m not saying you can prove it in any particular system; I’m just saying it makes sense to me intuitively. Certainly it makes no less sense to me than “necessitation”. More generally, just as $(\neg \exists) \leftrightarrow (\forall\neg)$ even constructively, it seems reasonable to me to expect that $(\neg\lozenge)\leftrightarrow (\Box\neg)$.
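Classically, at any rate, the expected duality $(\neg\lozenge) \leftrightarrow (\Box\neg)$ does hold world by world in the Kripke semantics, by the usual $\neg\exists \leftrightarrow \forall\neg$ argument. A small illustrative check (my own made-up frame, exhaustive over all propositions on it):

```python
from itertools import chain, combinations

def box(worlds, R, A):
    return {w for w in worlds if all(v in A for v in worlds if (w, v) in R)}

def diamond(worlds, R, A):
    return {w for w in worlds if any(v in A for v in worlds if (w, v) in R)}

worlds = frozenset({0, 1, 2})
R = {(0, 1), (1, 2), (2, 2)}   # a made-up frame

for A in chain.from_iterable(combinations(worlds, k) for k in range(4)):
    A = frozenset(A)
    # not-Diamond A  ==  Box not-A, world by world (negation = complement)
    assert worlds - diamond(worlds, R, A) == box(worlds, R, worlds - A)
print("(not Diamond) = (Box not) holds on this frame")
```

Constructively the two directions of the equivalence have very different status, which is what the disagreement in this thread is about; the check above uses classical (set-complement) negation throughout.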
A note about constructivity: plenty of pages on the nLab are only written from a classical point of view, and we don’t regard them as being “wrong” in any way. If anyone feels motivated to add a discussion of the constructive case, that’s a great enhancement; but we do also try not to scare off classically-minded mathematicians, and we often start with a discussion of the classical case even on pages that include the constructive version as well.
I’ll start by noting that in the possible world semantics, this arises from the fact that, given an adjoint triple $L \dashv M \dashv R$, we have a canonical natural transformation from $(L A \to R B)$ to $R(A \to B)$.
Hmm, okay. So also in cohesion, we have $(ʃ A \to \flat B) \to \flat (A \to B)$?
Re: 131: I know little about cohesion, but the adjoint triple $ʃ \dashv \flat \dashv \sharp$ would induce $(ʃ A \to \sharp B) \to \sharp(A \to B)$.
Re: 130: I of course concur with Mike in finding the distribution of $\lozenge$ over coproducts (including the nullary coproduct $\perp$) and the identity $(\neg \lozenge) \leftrightarrow (\Box \neg)$ perfectly constructive, by analogy with the corresponding tradition for constructive $\exists$ and $\forall$. [We can also understand the property $\lozenge A \to \Box B \vdash \Box(A \to B)$ from this perspective, as $\exists x A(x) \to \forall x B(x) \vdash \forall x (A(x) \to B(x))$. This presentation has the benefit of also being understandable to non-category theorists. :)]
Hmm, I was thinking of applying $L\dashv M\dashv R$ to the adjoint triple of functors $\Pi_\infty \dashv \Delta \dashv \Gamma$ relating the cohesive topos to the base topos. That gives $(\Pi_\infty A \to \Gamma B) \to \Gamma(A\to B)$, and then upon applying $\Delta$ to get back into the cohesive topos we get $(ʃ A \to \flat B) \to \flat (A \to B)$. I thought that’s also what you were doing in the possible-worlds case, using the adjunction $\Sigma_W \dashv W^* \dashv \Pi_W$ relating the slice over $W$ to the slice over $1$, then applying $W^*$ to get back into the slice over $W$. But I guess you could also apply the same argument to the adjoint triple of modalities themselves, as you say.
Ah. As I said, I know little about cohesion; does $\Delta$ necessarily preserve exponentials? If it does, then I suppose that works.
However, the final step of postcomposing $\Delta$ isn’t analogous to what I was doing. Rather, starting from $L \dashv M \dashv R$, I obtain $(L A \to R B) \to R(A \to B)$, and then, given some $N$ parallel to $M$, I precompose $N$, obtaining $(L N A \to R N B) \to R(N A \to N B)$, which is as good as $(L N A \to R N B) \to R N(A \to B)$ when $N$ preserves exponentials [in the situation of interest, both $M$ and $N$ preserve all non-modal propositional structure].
In terms of possible worlds, I am not generally using the approach in your penultimate sentence (though it is what Urs wrote into the original article), which only works for $\mathrm{S}5$. Rather, as in post 81, I am supposing we have some parallel $M, N : E \to W$ (thought of as sending directed edges between worlds to their source and destination), giving us $\Sigma_M \dashv M^{*} \dashv \Pi_M$ relating the slice over $E$ to the slice over $W$. And to start from the slice over $W$, I preapply $N^{*}$; that is, $\Box = \Pi_M N^{*}$ and $\lozenge = \exists_M N^{*}$. (Urs’ approach is equivalent to the special case where every world has a unique edge to every world)
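At the level of truth values, the edge-based reading ($\Box = \Pi_M N^{*}$, $\lozenge = \Sigma_M N^{*}$) can be sketched directly, with edges given as source/destination pairs. This toy code is my own; note that propositionally, parallel edges change nothing, and they only start to matter after categorification:

```python
# Edges are (source, destination) pairs; a list allows parallel edges.
# Propositionally (truth values), a parallel pair is indistinguishable
# from a single edge; it matters only once propositions become sets.

def box(worlds, edges, A):
    """Box = Pi_M N*: A holds at the destination of every edge out of w."""
    return {w for w in worlds if all(d in A for (s, d) in edges if s == w)}

def diamond(worlds, edges, A):
    """Diamond = Sigma_M N*: A holds at the destination of some edge out of w."""
    return {w for w in worlds if any(d in A for (s, d) in edges if s == w)}

worlds = {0, 1, 2}
edges = [(0, 1), (0, 1), (0, 2), (1, 1)]   # a parallel pair of edges 0 -> 1
A = {1}

print(box(worlds, edges, A))      # {1, 2}
print(diamond(worlds, edges, A))  # {0, 1}
```

Urs’ special case (every world has a unique edge to every world) corresponds to taking `edges` to be the full set of pairs, which makes $\Box$ and $\lozenge$ the global $\forall$ and $\exists$.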
Yes, $\Delta$ preserves exponentials because a cohesive topos is locally connected. I see now what you meant, thanks.
There’s nothing non-constructive about saying that $\bot$ is not true, so I don’t see anything any less constructive about saying that $\bot$ is not even possibly true.
There is nothing non-constructive about saying that $\bot$ is not true. But saying that $\lozenge (\bot) \leftrightarrow \bot$ implies that $\lozenge(\bot) \to \bot$, which to be true constructively needs a map going from $\lozenge(\bot)$ to $\bot$, and I don’t know a necessary map there.
What I know is that rejecting $\lozenge(\bot) \to \bot$ as an axiom in possible worlds semantics is equivalent to assuming “impossible worlds”: this I learned from Michael Mendler when he showed me how to make a Kripke semantics for a system that does not satisfy the distribution of $\lozenge$ over disjunction, both binary and nullary: http://www.cs.bham.ac.uk/~vdp/publications/ck-paper2.pdf. Now I did not know until today that Kripke invented “impossible worlds” to prove completeness of modal systems that don’t have the rule “Necessitation”; this I just read in the Wikipedia entry on “impossible worlds”. So your sentence
Certainly it makes no less sense to me than “necessitation”
seems quite appropriate.
and I do like my classical mathematics as much as the next person; I just prefer to know where I need it to be classical. So the entry on modal logic should have all the connectives, and collapse them when dealing with classical logic, instead of not even being able to talk about the constructive case, I think.
@Mike Shulman:
I don’t really know much about modal logic, but I read SridharRamesh’s chess example yesterday and played around with it a bit, and came up with a proof of a simple statement; thought I’d share: http://i.imgur.com/ZUHnPDE.png
Re: 136: It seems to me the system $\mathrm{CK}$ described in the linked paper could be understood as corresponding to possible world semantics where, instead of each world providing a standard model of ordinary propositional logic (what would classically be an ultrafilter on the Lindenbaum algebra of propositions), rather, each world only need provide some propositional theory (a filter on the Lindenbaum algebra). This in turn could be modeled by each world containing a set of countries and each country providing an ordinary propositional model in the standard way (so there are no inconsistent countries), while the accessibility relation is defined at the level of worlds and a world is considered to validate a proposition just in case all its countries do (so a world may validate $\perp$ if it has no countries; similarly we find that a world may validate $A \vee B$ without individually validating $A$ or validating $B$, and thus worlds accessing it may validate $\lozenge (A \vee B)$ without validating $(\lozenge A) \vee (\lozenge B)$).
[Neither of these are the semantics studied in the paper, which imposes intuitionistic logic by fiat rather than absorbing it from the background logic in which the semantics are interpreted (again using a birelational model, taking advantage of the interpretation of intuitionistic logic into classical $\mathrm{S}4$). I think it’s an interesting system, but not what I would (subjectively) consider the default intuitionistic correspondent of traditional $\mathrm{K}$, just as I would take the default intuitionistic correspondent of traditional predicate logic to have $\exists$ distribute over disjunction in the ordinary way even though one could analogously formalize a weaker logic and a broader notion of model obviating that property as well]
I’ll also note that the system $\mathrm{CK}$ (defined on page 4 of the linked paper as propositional logic + the necessitation rule + the axioms $\Box(A \to B) \to (\Box A \to \Box B)$ and $\Box(A \to B) \to (\lozenge A \to \lozenge B)$) does not have the property that, when its underlying propositional logic is made classical, it yields classical $\mathrm{K}$. This is clear by noting that, no matter what underlying logic it is taken upon, any theorem of $\mathrm{CK}$ remains a theorem when $\lozenge$ is replaced by $\Box$ throughout, but this property does not hold for classical $\mathrm{K}$.
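The “countries” reading sketched in the first paragraph can be made concrete with a toy model (entirely my own made-up example, not from the paper): a proposition is a set of (world, country) pairs, and a world validates it just in case all its countries do. A world with no countries then vacuously validates $\bot$, so any world accessing it validates $\lozenge\bot$; and a world whose countries split a disjunction validates $A \vee B$ without validating either disjunct, defeating $\lozenge(A \vee B) \to \lozenge A \vee \lozenge B$:

```python
# Made-up toy model of the "countries" reading of CK: a proposition is a
# set of (world, country) pairs; a world validates it iff all its countries do.

countries = {0: {'a'}, 1: {'b', 'c'}, 2: set(), 3: {'d'}}  # world 2: no countries
access = {(0, 1), (3, 2)}                                  # world-level accessibility

def validates(w, holds_at):
    return all((w, c) in holds_at for c in countries[w])

def diamond_validates(w, holds_at):
    return any(validates(v, holds_at) for (u, v) in access if u == w)

bottom = set()          # "false" holds in no country at all
A = {(1, 'b')}          # A holds only in country b of world 1
B = {(1, 'c')}          # B holds only in country c of world 1

print(validates(2, bottom))          # True: vacuously, the empty world validates false
print(diamond_validates(3, bottom))  # True: world 3 accesses it, so Diamond(false) there
print(diamond_validates(0, A | B))   # True: world 1 validates A-or-B as a whole...
print(diamond_validates(0, A) or diamond_validates(0, B))  # False: ...but neither disjunct
```

Again, this is only my informal rendering of the idea in the first paragraph above, not the birelational semantics actually studied in the linked paper.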
Re: 138 As I said, when you ‘constructivize’ a notion you sometimes have many options. There are several non-equivalent Constructive Set Theories, for example.
When it comes to constructive modal logic K, I take my cues from the Curry–Howard correspondence and the categorical semantics. They do not support the distribution of $\lozenge$ over disjunction: when you look at an ND-tree that ends up in $\lozenge(A)\vee \lozenge(B)$ there is no obvious way to see a map getting there from $\lozenge(A\vee B)$. At least none that I can see.
The $\lozenge(A\vee B) \to \lozenge(A)\vee \lozenge(B)$ distribution can, nonetheless, be considered a basic law (by fiat), and Simpson has a very pretty system that does this. But there is no categorical model of it, as far as I know. I will be delighted if you can provide one. Meanwhile the system without the distribution, which has been considered several times in CS applications – e.g. Nerode’s work – does have categorical models.
Re: 139: It is indeed true and known that the system CK does not become K when you make the basis classical. This corresponds to the idea that modalities also have their own classical or non-classical character. When you want to make a system consisting of a propositional basis + modalities classical, you have to make both components classical: the basis becomes classical propositional logic and the constructive modalities have to become classical modalities.
Re: 141: What do you mean by making constructive modalities into classical modalities? (I’ll note that the $\Box$ fragment of $\mathrm{CK}$ does become the $\Box$ fragment of classical $\mathrm{K}$ when using a classical propositional basis)
Re: 140: I agree that you have many options when constructivizing a notion. So my argument is about a subjective sense of what seems to me the most natural intuitionistic correspondent of $\mathrm{K}$, rather than about any objective claim.
What do you consider to be a categorical model? As I’ve said, the notion of intuitionistic $\mathrm{K}$ I have in mind could be modelled by considering two models of intuitionistic logic (i.e., cartesian closed categories with coproducts) $C$ and $D$, two homomorphisms (i.e., cartesian closed functors preserving coproducts) $M$ and $N$ from $C$ to $D$, and right and left adjoints to $M$, taking $\Box$ and $\lozenge$ to be $N$ followed by the right and left adjoints to $M$, respectively. A particular case of this arises when considering a directed multigraph, with each proposition assigned some set at each node of the multigraph; the set assigned to $\Box A$ at a node is the product, over all edges out of that node, of the sets assigned to $A$ at the destination of those edges, and similarly for $\lozenge A$ replacing “product” with “sum”. (As these sets need not be subterminal, this is indeed a categorification.)
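The multigraph special case just described can be made concrete in a small Haskell sketch (our own illustration; the example graph, the proposition, and all names are made up). A "proposition" assigns to each world a list standing in for the set of its proof-witnesses; $\Box$ takes the product over outgoing edges and $\lozenge$ the sum:

```haskell
-- Worlds are Ints; `edges w` lists the targets of the edges out of w.
type World = Int
type Prop a = World -> [a]

edges :: World -> [World]
edges 0 = [1, 2]
edges _ = []

-- Box A at w: a tuple of witnesses of A, one per outgoing edge (a product).
-- At a dead-end world this holds vacuously (one empty tuple).
box :: Prop a -> Prop [a]
box p w = sequence [ p v | v <- edges w ]

-- Diamond A at w: a witness of A at some target of some edge (a sum).
-- At a dead-end world this is empty.
dia :: Prop a -> Prop (World, a)
dia p w = [ (v, x) | v <- edges w, x <- p v ]

-- An example proposition: two witnesses at world 1, one at world 2.
a :: Prop Char
a 1 = "xy"
a 2 = "z"
a _ = []
```

Since these sets need not be subterminal, e.g. `box a 0` returns the two tuples `["xz","yz"]` rather than a mere truth value, this is indeed a categorification in the sense described above.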
Re: 142: For me, a categorical model is one as discussed in the Studia Logica paper: “The fundamental idea of a categorical treatment of proof theory is that propositions should be interpreted as the objects of a category and proofs (lambda-terms) should be interpreted as morphisms. The proof rules correspond to natural transformations between appropriate hom-functors. The proof theory gives a number of reduction rules, which can be viewed as equalities between proofs. In particular these equalities should hold in the categorical model.” See page 19 onwards of On an Intuitionistic Modal Logic (with Gavin Bierman), Studia Logica 65:383–416, 2000.
Re: 141: Indeed, the $\Box$ fragments do agree. But to make the modalities “classical” we need to make sure that they are inter-definable, i.e. that $\lozenge(A) = \neg (\Box (\neg A))$.
Your idea of a model is not proved to satisfy the equations forced on us by the proof theory; at least it is not obvious to me that it does. The problematic cases are again the distributions that we’ve been discussing. This is what I mean by saying that categorical models, for me, come from the Curry-Howard correspondence. (And, by the way, thanks for reading the paper on CK.)
On another linguistic note:
So we have, recall, that $W^\ast \prod_W$ is to be pronounced necessity and that $W^\ast \sum_W$ is to be pronounced possibility. Then: is there a good candidate for a natural language incantation for $\prod_W W^\ast$ ?
I have come to think that there is: this is randomness or contingency.
(This works a bit better in German; Notwendigkeit, Möglichkeit, Zufälligkeit.)
For instance, fixing the argument to be the prop universe $\Omega$ and varying in the space $W$ of “possible worlds”, then $\prod_{(-)} (-)^\ast \Omega$ is the power set operation. Changing $\Omega$ to something more linear gives more linear (probability-)distributions.
And not the least, on linear dependent types, it is $\prod_W W^\ast$, not $W^\ast \prod_W$, which governs path integral quantization via secondary integral transforms.
So in the ordinary nonlinear case, $\prod_W W^\ast (-)$ is just the function type $[W, -]$?
Yes!
Is there likely to be a good name for it then?
Yes, randomness.
You should best think of applying it to pointed types, such as $\Omega$ or such as stable types. Then $\prod_W W^\ast$ sends this to the collection of all distributions of possible worlds where in each world either “it holds” or it doesn’t. A term of $\prod_W W^\ast \Omega$ is a random subset of all possible worlds.
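The case of the prop universe can be spelled out in a few lines of Haskell (a sketch under our own naming, with `Int` standing in for the space $W$ of possible worlds and `Bool` standing in for $\Omega$): a term of $\prod_W W^\ast \Omega$ is literally a function `W -> Bool`, i.e. the characteristic function of a subset of the possible worlds, and logic on such terms works pointwise, world by world.

```haskell
type W = Int                   -- stand-in for the space of possible worlds

type RandomProp = W -> Bool    -- ∏_W W* Ω, with Ω taken as Bool

-- An example random subset of worlds: "the world index is even".
evenWorlds :: RandomProp
evenWorlds = even

-- Conjunction of random propositions is computed world by world.
andR :: RandomProp -> RandomProp -> RandomProp
andR p q w = p w && q w
```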
The understanding of a proposition as a set of possible worlds is commonplace. E.g., here
The possible-worlds analysis of propositions identifies a proposition with the set of possible worlds where it is true.
That’s what here we usually call “propositions as types”. But I suppose we are talking past each other now, for I am not sure what #149 is reacting to.
Try something more linear than $\Omega$ to get a better impression of randomness, such as $[0,1]$. Of course it works fully well only for stable types.
But if my little linguistic intuition here does not resonate with you, it’s not too important. The nForum might be the wrong place to try to sort it out further.
Maps to $[0, 1]$ appear in the adjunction on p. 2 of Duality for Convexity.
David, thanks for getting back to me.
That article considers maps to $[0,1]$ as homomorphisms of effect algebras in the hope of capturing aspects of quantum physics. Despite exposure to many seminar hours of effort in Nijmegen, I remain unconvinced of the effect of effect algebras. If we are asking for the effect of categorical logic in general – and of the (co-)monads $\sum_W W^\ast$ and $\prod_W W^\ast$ in particular – on quantum physics, then only modesty keeps me from claiming that the only genuinely viable approach to that which I am aware of is that via secondary integral transforms in dependent linear homotopy type theory. It is due to this that I know for a fact that the dependent linear semantics of $\sum_W W^\ast$ and $\prod_W W^\ast$ captures the phenomenon of quantum randomness observed in the real world.
But what I was after here now is more elementary, more Platonistic, maybe just linguistic. I am thinking that just from an elementary observation of the basic nature of $\prod_W W^\ast$, and in view of the pronunciation of $W^\ast \prod_W$ as necessity, it should seem sensible to pronounce $\prod_W W^\ast$ as randomness.
It’s supposed to be utterly simple: you can try to feed $\prod_W W^\ast$ a proposition. But since this monad now acts on the empty context instead of on the context of possible worlds as its dual siblings do, this comes out vacuous. Either we feed it the true proposition, then it returns true, or we feed it the false proposition, then it returns false. Hence we conclude that the meaning of $\prod_W W^\ast$ is beyond first-order logic, beyond the Prop-truncation, and is genuinely type theoretic.
So we feed it any type $V$ that is not $(-1)$-truncated instead. What does it return? The type of $V$-valued functions on the space of possible worlds.
At this simplistic level, there is no normalization on these functions that would constrain them from $V$-valued “distributions” to probability distributions. But then, comparing to the case where $V = KU$, where I fully understand how this relates to genuine real quantum physics, I see that the idea to normalize these functions right away is too naive anyway, that it’s rather probability amplitudes that matter, that in addition there is some holography in the game, etc., and that we had better let the tao of mathematics lead the way than try to force it to submit to our prejudices.
So, disregarding such technicalities as normalization, I wonder: if you walk over to your colleague’s office and say “I have here a space of possible configurations and the space of functions over it, what might that be?”, wouldn’t most people think of the space of functions on a space of configurations as a space of random variables? What else?
Maybe I’ve developed a certain nervousness around the term ’random’ from reading Edwin Jaynes. He generally writes it in scare quotes, and attributes much said about randomness to the so-called ’mind projection fallacy’, projecting our ignorance into ideas of chance in the world.
Those factors which vary in an uncontrolled way are often called random, a term which we shall usually avoid, because in current English usage it carries some very wrong connotations.
To many, the term ‘random’ signifies on the one hand lack of physical determination of the individual results, but, at the same time, operation of a physically real ‘propensity’ rigidly fixing long-run frequencies. Naturally, such a self-contradictory view of things gives rise to endless conceptual difficulties and confusion throughout the literature of every field that uses probability theory. We note some typical examples in Chapter 10, where we confront this idea of ‘randomness’ with the laws of physics. (Probability Theory as Logic, 271)
I find this equating probability with plausibility in the face of incomplete knowledge quite plausible in our everyday dealings with the world. As for quantum physics, I don’t know. Jaynes seems to want to push it all the way:
incomplete knowledge is the only working material a scientist has! (61)
By the way, I came across another category-theoretic take on probability theory
This takes up Tom Leinster’s and Tom Avery’s account of the Giry monad as codensity monad.
That’s part of my motivation for finding a good formalization of natural language: once we have it, then quandaries such as the one you quote will find resolution.
We have seen this in the above discussion for the example of possibility and necessity: once we fix that these are formalized by $W^\ast \sum_W$ and by $W^\ast \prod_W$, then answers to all the traditional questions may systematically be worked out.
Regarding randomness: the mathematical concept of random variables is perfectly well established. The space of random variables is a suitable space of functions on the given space of possible worlds. Luckily, this concept is sensible and useful independently of any questions of physics that Jaynes raises; it’s a concept of pure thought, a topic of the doctrine of essence, not of nature. (Zufälligkeit translates both to randomness as well as to contingency. Fewer people would come up with the second translation, though, I’d think.) Once we agree on a good modal formalization of this, then it is time to see how this will work itself out in nature, by following the formalism.
For what it’s worth, I have found two references that express the suggestion that I am making:
Notice that the monad in question, the one I am suggesting should be called Zufälligkeit, $\prod_W W^\ast$, is what computer scientists call the reader monad.
Verdier 14 is so kind as to agree with what I wrote above:
The intuition behind the Reader monad, for a mathematician, is perhaps stochastic variables. A stochastic variable is a function from a probability space to some other space. So we see a stochastic variable as a monadic value.
The same point, not in a prose sentence but by way of an item list, is made in Toronto-McCarthy 10, slide 24.
Notice that on the slide before, slide 23, these authors are even so kind as to agree that a probability space deserves to be addressed as a space of “possible worlds”.
once we fix that these are formalized by $W^\ast \sum_W$ and by $W^\ast \prod_W$, then answers to all the traditional questions may systematically be worked out
…but not everyone may agree that the answers thereby obtained are the right ones. (-:
The whole point that I am making is that after what used to be mere opinions has been formalized, there is no more room for disagreement than there is in mathematics generally.
That’s why analytic philosophers are eager to use formal modal logic for their arguments and dismissive of those colleagues who don’t reason formally. While that is the right idea, unfortunately most analytic philosophers forget to specify their boxes and circles and might confuse them with a flat or a sharp. If we add in that missing specification, by recognizing $\Box_W = W^\ast \prod_W$ etc., then their program of turning philosophy into mathematics actually works.
What I’m saying is that by choosing a particular formalization of an informal notion, you of course determine the answers to lots of questions; but someone else might prefer a different choice of formalization of the same informal notion, leading to different answers to the same questions. So formalization is an excellent tool for clarifying the exact points of disagreement, but it can’t always be relied on to eliminate the disagreement even if everyone involved is completely rational.
addendum to #155:
I now see that Toronto-McCarthy 10 also say it in explicit prose, in the tiny side note text on slide 35:
you could interpret this by regarding random variables as reader monad computations.
There must be a published article that goes with these talk slides, but I am not sure yet if I have found it.
[edit: ah, of course it’s here: (pdf). In 2.2 they call $\prod_W W^\ast$ the “random variable idiom” ]
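The reader-monad reading of random variables can be made concrete with a minimal hand-rolled reader monad (our own sketch, avoiding any library dependency; the sample space and all names are illustrative). Here a "possible world" is the outcome of two die rolls, each random variable reads off some feature of the world, and monadic combination is just pointwise combination:

```haskell
newtype Reader w a = Reader { runReader :: w -> a }

instance Functor (Reader w) where
  fmap f (Reader g) = Reader (f . g)

instance Applicative (Reader w) where
  pure a = Reader (const a)
  Reader f <*> Reader g = Reader (\w -> f w (g w))

instance Monad (Reader w) where
  Reader g >>= k = Reader (\w -> runReader (k (g w)) w)

type World = (Int, Int)          -- one possible world: the two rolls

-- Random variables: functions out of the space of possible worlds.
firstRoll, secondRoll :: Reader World Int
firstRoll  = Reader fst
secondRoll = Reader snd

-- Combining random variables monadically evaluates them in the same world.
total :: Reader World Int
total = do
  x <- firstRoll
  y <- secondRoll
  pure (x + y)
```

For instance `runReader total (3, 4)` evaluates the combined random variable in the world where the rolls came up 3 and 4, returning 7.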
If $W^\ast \sum_W$ and $W^\ast \prod_W$ are both important enough to have names, and so is $\prod_W W^\ast$, shouldn’t we give a name to $\sum_W W^\ast$? Admittedly it doesn’t seem too interesting
$\sum_W W^\ast (-) \equiv W \times (-) \,.$
One piece of magic is that where randomness $\prod_W W^\ast$ is used in formalization of quantum physics, it is to be forced to act essentially equivalently to $\sum_W W^\ast$ – see at dependent linear type theory the section on fundamental classes. That’s the “ambidexterity” of Hopkins-Lurie.
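For what it’s worth, the identification $\sum_W W^\ast (-) \equiv W \times (-)$ also has a standard Haskell reading (a sketch with our own names): pairing with $W$ carries the structure of a comonad, the environment or “coreader” comonad, dual to the reader monad:

```haskell
type W = Int                       -- stand-in for the space of possible worlds

-- Counit: discard the world and keep the value.
extract :: (W, a) -> a
extract = snd

-- Comultiplication: duplicate access to the world.
duplicate :: (W, a) -> (W, (W, a))
duplicate (w, a) = (w, (w, a))

-- Extend a world-dependent computation over the pair.
extend :: ((W, a) -> b) -> (W, a) -> (W, b)
extend f (w, a) = (w, f (w, a))
```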
Hence we find that for purposes of modelling reality in the objective logic, the base change adjoint triple to a space of possible worlds/possible field configurations induces three modalities
$\underset{Wirklichkeit}{\underbrace{Moeglichkeit\;\;Notwendigkeit\;\;Zufaelligkeit}}$
Of course that was noticed before.
In addition one may observe that the composite
$State = \prod_W W^\ast \sum_W W^\ast$
is the state monad, for $W$ the type of possible internal states of the computer. The standard way in Haskell to implement random numbers is to take $W = StdGen$, the type of random-number generator states (say), and then work with $State\; StdGen\; Int$. This allows one to read random numbers (given by the “current state”, the “currently realized world”) and then update them to the next one.
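Unwinding $\prod_W W^\ast \sum_W W^\ast$ gives $W \to (a \times W)$, which is exactly the Haskell state monad. Here is a minimal hand-rolled version, with a toy linear-congruential generator standing in for `StdGen` (the real `System.Random` API differs; this is only a sketch under our own names), illustrating the read-then-update pattern just described:

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State g = State (\s -> let (h, s') = f s; (a, s'') = g s' in (h a, s''))

instance Monad (State s) where
  State g >>= k = State (\s -> let (a, s') = g s in runState (k a) s')

type Gen = Int                   -- the "currently realized world"

-- Read the current random number, then update to the next possible world
-- (a toy linear-congruential step, not the real StdGen).
nextRandom :: State Gen Int
nextRandom = State (\g -> (g, (g * 1103515245 + 12345) `mod` 2147483648))

twoRandoms :: State Gen (Int, Int)
twoRandoms = do
  x <- nextRandom
  y <- nextRandom
  pure (x, y)
```

Running `twoRandoms` from an initial generator threads the world through: each draw reads the current state and hands the updated state to the next draw.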
If we don’t care about “updating to the next possible world” but are happy with the one we have, then we may reduce this to $Reader = \prod_W W^\ast$.