Even as recently as this year, work has been published attempting to deal with the logic of imperatives. I have not found a treatment of the logic of imperatives in HoTT, and I'm curious whether such a treatment exists or is possible.
Working through some examples, it's not clear to me whether the available tools in HoTT can capture the logic of imperatives without some modification. Although it would be possible to convert imperative statements to declarative ones, Ross's paradox (see part 6 here) seems to remain unsolved. However, I have an (unsubstantiated) hunch that HoTT may be closer to a solution, due to the existence of typed function calls over which inference can be made. Consider a variation on the (unfortunately rather punitive) donkey sentence:
“If you are a farmer and own a donkey, beat it”
Although HoTT provides an elegant way to construct the dependent product that formalizes the original declarative sentence, the imperative form seems to imply, rather, that the invocation of the dependent product is required. If we take $f:\equiv\Pi_{z:{\Sigma_{x:Donkey}(Owns(John, x))}} Beats(John, p1(z))$, then it seems that what is desired is the invocation $f(z)$ for all $z$ within the context.
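For concreteness, here is how that declarative reading might be sketched in Lean (all the atomic vocabulary is assumed as axioms, purely for illustration):

```lean
-- Hypothetical vocabulary, assumed as axioms purely for illustration
axiom Donkey : Type
axiom Person : Type
axiom John : Person
axiom Owns : Person → Donkey → Prop
axiom Beats : Person → Donkey → Prop

-- The declarative donkey sentence as a dependent product over a Σ-type:
-- `{ x : Donkey // Owns John x }` plays the role of Σ_{x:Donkey} Owns(John, x),
-- and `z.val` is the first projection p1(z).
def donkeyDeclarative : Prop :=
  ∀ z : { x : Donkey // Owns John x }, Beats John z.val
```

A term of `donkeyDeclarative` is exactly the $f$ above; the imperative reading asks for something more, namely that $f$ actually be applied.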
Let's consider a follow-on imperative:
“If you beat your donkey, console it.”
There is an obvious inference here, but I'm struggling to see how it can be made from the respective declarative HoTT types. It seems as though what is missing is a type representing the act of executing an action on some elements (for example, $f(z)$). An element of this type would be a warrant that a specific action was executed. The imperative sentence is then satisfied when such a warrant exists.
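A sketch of what such a warrant type might look like in Lean, with `Executed` entirely hypothetical:

```lean
axiom Donkey : Type
axiom Person : Type
axiom John : Person
axiom Owns : Person → Donkey → Prop

-- Hypothetical warrant type: a term of `Executed z` would record that
-- the beating action was actually carried out on the owned donkey z.
axiom Executed : { x : Donkey // Owns John x } → Prop

-- The imperative is then satisfied exactly when a warrant exists for
-- every z in the context:
def imperativeSatisfied : Prop :=
  ∀ z : { x : Donkey // Owns John x }, Executed z
```

The follow-on imperative could then take warrants of `Executed z` as its domain, which is where the inference from "beat" to "console" would live.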
I’m fairly new to HoTT so I don’t know if that makes sense at all, but I’d appreciate a nudge or hint in the right direction.
Thanks for the questions. To ask about imperatives is the natural next step after the treatment of questions, which I discussed on the blog.
It’s worth pointing out that dependent type theory wasn’t designed for natural language:
In Stockholm, when I first discussed the project with Per Martin-Löf, he said that he had designed type theory for mathematics, and that natural language is something else. (Aarne Ranta, from a longer quotation here)
So it’s not as though all has been settled about how judgements in the world should be construed.
In fact Ranta has something less than a page on imperatives, a good part of which is a recipe for mussels, in his Type-theoretic Grammar (OUP, 1994, pp. 142-3). His two quasi-formal rules are
(!) React to the instruction $!A$ by making $A$ true, that is, by producing an object of type $A$.
(!’) You may give the instruction $!A$ if the reader is able to produce an object of type $A$.
He mentions that these instructions may use anaphoric reference to the result of previous instructions.
Lots to think about here. I’m reminded of Wittgenstein’s building site language game “Brick!”, “Slab!”. But then these could be seen as short for “Make it the case that I have a brick in my hand!”, etc.
A couple of things to consider: Imperative computer languages, and instructions in Euclid’s geometry “Draw a line through A perpendicular to BC.”
Regarding your imperative dependent on an action, from $x:A \vdash B(x):Type$ (or maybe just $Prop$), we might have $x:A \vdash !B(x)$, but do we form $\prod_{x:A} !B(x)$?
Anyway, I don’t see that $A$ being some event type is a worry.
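If it helps, here is a toy Lean rendering of that question, with `Bang` as a placeholder reading of $!$ (under propositions-as-types, discharging $!A$ is just inhabiting $A$, so here `Bang` is hypothetically the identity):

```lean
-- Placeholder reading of "!": whether it should really be the
-- identity operator is exactly what's at issue.
def Bang (A : Prop) : Prop := A

axiom A : Type
axiom B : A → Prop

-- The dependent form "for each x : A, the instruction !B(x)"
-- type-checks directly:
def imperativeFamily : Prop := ∀ x : A, Bang (B x)
```

Whatever $!$ turns out to be, as long as it is a type operator the dependent product $\prod_{x:A} !B(x)$ can be formed.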
Thank you for the insights David. Focusing on imperative programming languages might be a good place to start. Consider the perennial imperative programming language statement:
print "hello world"
In Ranta's treatment, which appears to be a dependent type theory version of Dubislav's trick, $A$ would be the proposition $printed('hello\ world')$, which would be true once the action took place. I'm interested then in how we might represent the function call print in dependent type theory. This function returns no value, which already makes representation somewhat difficult. One could use the trick outlined here to concatenate the effects of the function to its output through a product. However, the effects of this function do not form the predicate $A$. Rather, they may be predicates like $displayedonscreen('hello\ world')$ or $printedtocommandline('hello\ world')$, whereas the imperative seems to be a more specific command: execute a particular function. There is some ambiguity here, as the word print also describes one of the effects of the function. We can shelve that temporarily by renaming our print function to 'foo'.
It seems then that to form $A$ we would need the function to return a warrant to the proposition that the function was called. That could either be an event nucleus as per your book, or a bespoke proposition that is appended (via product) to the function's (more accurately, the dependent product's) output type. Abusing notation, we would have: $foo : \Pi_{x:String}\, printed(x)$.
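A minimal Lean sketch of that warrant-returning $foo$ (the names `Printed` and `foo` are assumptions here, not an existing API):

```lean
-- Hypothetical warrant-producing print: instead of returning
-- nothing, foo returns a proof that its argument was printed.
axiom Printed : String → Prop
axiom foo : (x : String) → Printed x

-- Invoking it yields a warrant, which is what satisfies the
-- imperative:
example : Printed "hello world" := foo "hello world"
```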
Some inference seems possible in this scheme. For example,
before printing X, convert it to a string.
print(2)
Here we must infer that $printed(2)$ is not obtainable by invoking $foo(2)$, as $foo$ does not accept an integer. 2 first needs to be converted to a string, perhaps via $int2str : Int \to String$. We can then construct the proof of $printed(int2str(2))$ via composition: $(foo \circ int2str)(2)$. This does seem, on the surface, to solve Ross's paradox due to the explicit requirement to obtain a warrant for $A$. A warrant for $A \vee B$ no longer satisfies the imperative, while $A \wedge B$ operates as expected.
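In Lean the type mismatch and its repair look like this (again with hypothetical `Printed`, `foo`, and `int2str`):

```lean
axiom Printed : String → Prop
axiom foo : (x : String) → Printed x
axiom int2str : Int → String

-- `foo 2` is ill-typed, so the warrant must go through the
-- conversion; this term plays the role of (foo ∘ int2str)(2).
-- Strictly, the proposition obtained is Printed (int2str 2),
-- not a literal printed(2).
example : Printed (int2str 2) := foo (int2str 2)
```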
I do wonder about the difference between satisfying an imperative and constructing a proof for a declaration. There appear to be (Section 5) different rules for satisfaction and truth, especially when imperatives and declaratives are mixed together. Dependent type theory may do away with these, or the rules could be encoded simply as types rather than axioms.
It seems that you are asking for models of what are called "side effects" – such as input/output – in functional languages (such as type theory, dependent or not). This is an interesting but well-understood question: these effects are modeled by monads (in the sense of CS), see there at For imperative programs
I wonder what there is concerning monads and dependent type theory for imperative programming. I see there’s an early paper
The use of monads appears to be helpful in formalizing imperative programs (notably their effects) in functional formalisms. However I don’t see how they address the underlying problem of imperative inference and its various paradoxes. I think the difference is as follows:
The complexity of formalizing imperative programming is in dealing with the effects of functions that extend beyond their immediate outputs. This necessitates handling the state of the context of execution. Monads offer a solution here.
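For example, in Lean (whose `IO` monad plays this role) the effect is reflected in the type, and actions are composed as values before anything is executed:

```lean
-- greet has type IO Unit: a description of an effectful action,
-- not its execution.
def greet : IO Unit :=
  IO.println "hello world"

-- Actions compose as first-class values:
def greetTwice : IO Unit := do
  greet
  greet
```

This handles the bookkeeping of effects and execution state, but, as the distinction below suggests, it says nothing by itself about which imperative inferences are sound.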
The complexity of formalizing imperative logic, on the other hand, is that imperative inference seems to have different rules from declarative inference. For example, taking Dubislav's convention of $!A$ as denoting an imperative that is satisfied if $A$ is true, from $!A$ we could infer $!(A \vee B)$. Imagining $A$ to be "The letter is sent" and $B$ to be "The letter is burnt", it's evident that this inference is not sound, regardless of the context.
I believe the second problem no longer exists with Ranta’s type theoretic approach, as an element of type $A+B$ cannot produce a warrant for $A$ (The letter is sent) if it was constructed via right injection of a warrant of $B$ (The letter is burnt). The paradoxes and complications around imperative inference run deeper than this though. Looking around, I couldn’t find a treatment for the problem using type theory. I wonder if an attempt has been made.
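The point about injections can be checked directly in Lean:

```lean
axiom Sent : Prop    -- "the letter is sent"
axiom Burnt : Prop   -- "the letter is burnt"

-- A warrant for Sent does yield a warrant for Sent ∨ Burnt…
example (w : Sent) : Sent ∨ Burnt := Or.inl w

-- …but a warrant for Sent ∨ Burnt built by right injection from
-- Burnt contains no warrant for Sent, so the imperative "send the
-- letter" is not satisfied by burning it.
example (w : Burnt) : Sent ∨ Burnt := Or.inr w
```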
It wouldn’t surprise me if Ranta’s few lines were all there is. It’s really a strange situation. There’s vast amounts of work in philosophy of language and metaphysics on topics that would benefit from dependent type theory. Researchers in linguistics are a little more likely to use it, and computer scientists yet more. But there’s a great opportunity here to weave backwards and forwards through these disciplines.
Look at the summary of philosophical work on questions – SEP: Questions – not a whiff of type theory.
I see there are
Elgesem, D. (1997). The modal logic of agency. Nordic Journal of Philosophical Logic, 2(2), 1–46.
Governatori, G., & Rotolo, A. (2005). On the axiomatisation of Elgesem's logic of agency and ability. Journal of Philosophical Logic, 34(4), 403–431.
The first introduces an operator $\mathbf{Does}\, p$ that means 'the agent brings it about that $p$'.