A reading group I belong to is reading Robert Brandom’s Articulating Reasons. The fourth chapter is entitled ‘What Are Singular Terms, and Why Are There Any?’. It argues that in any language sufficient for the giving of reasons for claims there will be two kinds of relationship between sub-sentential components.
First, there will be what he calls ‘singular terms’, which permit symmetric inferential relations between assertions in which they occur. So there will be ‘frames’ into which these terms can be placed, and for some pairs of terms each will give rise to equivalent frame variants. E.g., any predicate applied to Benjamin Franklin equally applies to ‘the first postmaster general of the US’. (Modalities may not allow this, as with ‘I believe x to have been a postmaster general’.)
This seems to me very like our terms, with substitutions made possible by warrants in identity types for two terms.
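This symmetric substitution licensed by an identity warrant is essentially transport (Leibniz’s law). A minimal Lean sketch, with illustrative names:

```lean
-- Substitution licensed by an identity warrant: given e : a = b,
-- any frame P transfers from the a-variant to the b-variant.
example {α : Type} {P : α → Prop} {a b : α} (e : a = b) (h : P a) : P b :=
  e ▸ h

-- The symmetry: the same warrant, reversed, carries us back.
example {α : Type} {P : α → Prop} {a b : α} (e : a = b) (h : P b) : P a :=
  e.symm ▸ h
```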
Second, there will be replacements of frames, for instance replacing one predicate applied to a singular term by another suitable predicate. Some of these replacements will generate pairs of propositions for which there’s an asymmetric inference relation, e.g., x being a dog to x being a mammal.
It seems to me these are our types, allowing non-invertible functions between them.
No appeal is made by Brandom to a formal language, so it’s interesting to see if there are grounds here for why we ought to expect there to be two different kinds of sub-sentential role. One of the main arguments through the chapter is that we need both kinds of relationship. We couldn’t have both be symmetric, as no substantive inference could then take place.
On the other hand, pairs of ‘singular terms’ couldn’t generate asymmetric inference relations. This seems to amount to an argument that if, for two syntactically substitutable terms, we had $a \lt b$ such that $P(a) \vdash P(b)$ for all frames/types $P$, then there would be a vast array of frames/types allowing inference from the $b$ variant to the $a$ variant, e.g., we’d have $P(b) \to R \vdash P(a) \to R$. It would require very restrictive frame formation were we to require universal asymmetry of the inference relation between $a$ and $b$ variants.
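The contravariance step here can be checked directly; a sketch in Lean, with $R$ an arbitrary proposition:

```lean
-- If every frame P carries the a-variant to the b-variant (h),
-- then frames in negative position run the other way:
-- from P b → R we may infer P a → R.
example {α : Type} {P : α → Prop} {R : Prop} {a b : α}
    (h : P a → P b) : (P b → R) → (P a → R) :=
  fun f pa => f (h pa)
```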
Naturally, there’s more to the argument than that. As I mentioned, Brandom doesn’t appeal to any formal system here (but I need to check out the much longer ‘Making It Explicit’), but it’s interesting to see what may be the beginnings of an explanation for why we’ve ended up with a type theory with two kinds of syntax – terms and types. Has anyone seen any attempt to explain why these two seem so natural?
If he’s onto something here, that would be a plus point for type theory over, say, predicate logic, for which it is not immediately clear how to divide the different components – variables, predicates, relations, functions, constants – between the two roles. Of course, we also have types themselves being (or having names which are) terms of $Type$, but in this role again it’s all about symmetric substitutions.
Brandom doesn’t appeal to a formal system for his argument in Making it Explicit (MIE), but he does discuss a semi-formal one in Appendix I to Chapter 6, which is the analogue to Chapter 4 in Articulating Reasons. There he shows how to recover Lewis’s functional-categorical setup from “General Semantics”. The idea is that one starts with some basic categories like singular terms (T) and sentences (S), and then predicates in the derived category (T -> S) are defined syntactically as functions taking terms as arguments and yielding sentences as values (the categorical language is Lewis’s, and isn’t meant to invoke other meanings of these terms). Semantically, they’re interpreted by functions from semantic interpretants of terms to semantic interpretants of sentences. This is all from Lewis, and then Brandom argues that Lewis’s functional-categorical setup can be obtained from Brandom’s substitution-structural approach. He gives two ways of doing this, depending on whether one starts with the subsentential expressions as given, or whether one only has the substitution transformations. I take it that Brandom doesn’t want to restrict himself to a particular formal system, given the aims of his wider project, but that Lewis’s General Semantics is supposed to be broad enough that if Brandom can show how to couch his substitutional-structural view in that, then the reader can try to cash it out in his or her favorite, more specific, formal system.
Either way, what he ends up with is a formal system that has a distinction between terms and sentences, and then predicates are defined in terms of substitution. So unlike Lewis’s General Semantics, which permits any number of basic categories, it looks like Brandom wants specifically two: terms and sentences. He then also wants a hierarchy of derived categories, with predicates at the bottom. Each level of the hierarchy needs a “replaceability equivalence structure” (p. 408), which is some kind of groupoid structure on the predicates. At bottom, though, the distinction between terms and predicates is that the significance of a term is determined by the content associated with a symmetric simple material substitution-inferential commitment (SMSIC). This symmetry is necessary and sufficient to mark something as a term, so this is the reason that we get two kinds: either the SMSIC is symmetric or it isn’t.
As for type theory, it’s not clear to me that Brandom’s distinction between terms and frames maps very cleanly onto the term/type distinction. Terms seem to be roughly the same thing in each case, but I’m having a hard time hooking up Brandom’s frames with types. The basic semantic unit for Brandom is the sentence, so I’m trying to identify what that would be in type theory. One option is that a sentence is a judgement in a context, so for some judgement x : X we can note that it contains one subsentential term, x, meaning that ‘: X’ is the frame. On this picture we’d have to allow terms to have lots of different types, since we’d have sentences like ‘x is a dog’ and ‘x is a mammal’, corresponding to judgements ‘x : Dog’ and ‘x : Mammal’. We’d also have to cash out n-ary predicates in terms of judgements like ‘(x, y, z) : P’. I don’t immediately see a problem with this, but it doesn’t look much like the work on type-theoretical grammar that I’ve seen. The more natural thought to me is that one ought to think of predicates as terms of type T -> Type, meaning that a sentence would be the type P(x).
But on this picture frames are terms, not types. In fact, I think that this is a broader tension in reading Brandom’s term/frame distinction as the term/type distinction. As he discusses in Appendix I of Chapter 6, there are two orthogonal divisions of roles along substitution-structural and functional-categorical lines. The term/frame distinction tracks the functional-categorical roles of belonging to a basic v. a derived category. But items of derived categories must be able to play the role of functions, arguments, or values. So a frame has to be a term of a function type, meaning that it can’t itself be a type. A family of types, on the other hand, belongs to a derived category, is of a function type, and can be the argument or value of other functions. So families of types seem to fit better, but now Brandom’s distinction doesn’t seem to map on to as central a distinction in type theory.
It’s great that you’ve thought about these things. I guess, as you say, he’s thinking about frames as mapping terms to sentences. That would seem to fit with a view of them as dependent types, in this case propositions (employing the ‘propositions as some types’ approach). To take a simple case:
$x: Object \vdash Red(x): Type.$
Many-placed predicates are fine too, e.g.,
$x: Person, y: City \vdash LivesIn(x, y): Type.$
Where you have
corresponding to judgements ’x : Dog’ and ’x : Mammal’.
normally, ‘x’s are used as variables, in which case what you have above aren’t judgements.
we’d have to allow terms to have lots of different types
Officially, in type theory we can’t have this. I mean to probe our type theorist friends about ‘coercions’. Without these, if we have related types such as Dog and Mammal, the relation will be captured by something like defining Dog as ‘Mammal which is canine’, in which case a term of type Dog would be a pair of a term of type Mammal and a warrant to vouch for its canineness.
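A minimal Lean sketch of that proposal (all names illustrative): Dog as the subtype of Mammal carrying a canineness warrant, with the asymmetric inference given by projection.

```lean
-- Illustrative: Dog defined as 'Mammal which is canine'.
axiom Mammal : Type
axiom IsCanine : Mammal → Prop

-- A term of type Dog is a pair: a mammal plus a warrant for its canineness.
def Dog : Type := { m : Mammal // IsCanine m }

-- The non-invertible inference from dog to mammal is the first projection.
def underlyingMammal (d : Dog) : Mammal := d.val
```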
Perhaps this is a way to dissolve philosophers’ mystification that a clay statue when rolled back into a lump of clay destroys the former but not the clay itself. A term of type $ClayStatue$ is a pair $\langle l, s \rangle$, where $l: ClayLump$ and $s: Statue(l)$, having defined the dependent type
$x: ClayLump \vdash Statue(x): Prop.$
So it is not well-formed to ask of a term of type $ClayLump$ and one of type $ClayStatue$ whether they are identical. On the other hand, there is a projection from the latter to the underlying lump.
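In Lean this might be sketched as follows (names illustrative); the projection at the end is the only licensed passage from a statue to a lump:

```lean
axiom ClayLump : Type
axiom Statue : ClayLump → Prop   -- the dependent type of statue-warrants

-- A clay statue is a pair ⟨l, s⟩ of a lump and a warrant that it is a statue.
def ClayStatue : Type := { l : ClayLump // Statue l }

-- No identity type relates ClayLump and ClayStatue directly,
-- but each statue projects to its underlying lump.
def toLump (c : ClayStatue) : ClayLump := c.val
```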
Perhaps this is a way to dissolve philosophers’ mystification that a clay statue when rolled back into a lump of clay destroys the former but not the clay itself.
Something along these lines has occurred to me, but I’m not quite sure how it would go, in part because I’m not totally clear on the rules of the ontological game, here. I recall discussing Chisholm’s rejection of this solution in a seminar once and being dissatisfied with it. In Chapter III of his Person and Object he gives a couple possible interpretations of Butler’s claim that there’s only a “loose and popular sense” in which ships like Theseus’s persist, and in the “strict and philosophical sense” they don’t persist. One of the interpretations he gives is essentially that identity is typed:
we should distinguish the two locutions ‘A is identical with B and A is a so-and-so’ and ‘A is the same so-and-so as B’. It has even been suggested that, for purposes of philosophy, the first of these two locutions should be abandoned in favour of the second (p. 95).
In the endnotes he refers to Geach as a proponent of this view. Chisholm’s main objection to typed identity is that there’s only a reason for it if we can find instances of the schema
A is the same so-and-so as B, and A is a such-and-such but is not the same such-and-such as B.
But, he says, we can’t find any real ones. In MLTT this schema isn’t strictly possible, since A can’t be both a so-and-so and a such-and-such.[1] So Chisholm’s criterion doesn’t seem right. Maybe one ought to allow implicit coercions in it. If so, then if we use some lump of clay $l : ClayLump$ to make two statues $s, s' : Statue(l)$ we end up with a case where $\langle l, s\rangle$ is the same lump of clay as $\langle l, s'\rangle$, but $\langle l, s\rangle$ and $\langle l, s'\rangle$ are different statues. This kind of implicit coercion feels like the right semantics for this sentence anyway. But I don’t have enough of a sense of the debate to know the kinds of objections this would face.
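The two-statues case can be put in Lean if we make Statue type-valued rather than Prop-valued, so that distinct statue-terms on a single lump can coexist; all names are illustrative:

```lean
axiom ClayLump : Type
axiom Statue : ClayLump → Type   -- type-valued, so one lump may carry two statues
axiom l : ClayLump
axiom s : Statue l
axiom s' : Statue l

def ClayStatue : Type := Σ x : ClayLump, Statue x

-- ⟨l, s⟩ and ⟨l, s'⟩ are the same lump of clay ...
example : (⟨l, s⟩ : ClayStatue).1 = (⟨l, s'⟩ : ClayStatue).1 := rfl
-- ... but nothing forces them to be the same statue: that would
-- require a proof that s = s', which we have not assumed.
```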
[1] I don’t know if this is true of other type theories or not. I don’t see why you couldn’t have a type theory where terms can have lots of types. Even MLTT has multiply-typed terms, since if $A : Type_{i}$ then $A : Type_{i + 1}$. But I don’t know what the issues are here.
When you write “this kind of implicit coercion”, are you alluding to the technical sense of ‘coercion’ as discussed here and in its new entry, coercion?
I was thinking we’d need something like coercion to deal with entities classified in some hierarchical scheme, such as
Kingdom: Animalia; Phylum: Chordata; Class: Mammalia; Order: Carnivora; Family: Felidae; Genus: Felis; Species: Felis catus.
It would seem odd to have terms of these types as pairs of a term of type $Animalia$ and a warrant for its being a mammal, a carnivore, etc., or worse, a tuple of a term in Animalia followed by a list of warrants for each of the succeeding stages.
As we learn to speak, some of the classes come early – animal and cat, some much later (if at all) – chordata, Felis. I wonder if the child’s mind works with the early ones along the lines of coercions, such as $Cat \lt_{c} Animal$ for $c$ the inclusion.
On the other hand, a class like chordata probably is learned as ’animal with a notochord’ – first something is recognised as animal, then checked to see if it has a notochord.
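A hedged Lean sketch of the contrast (all names illustrative): the early-learned $Cat \lt_{c} Animal$ as a silent coercion, versus the late-learned chordate as an explicitly checked warrant.

```lean
axiom Animal : Type
axiom IsCat : Animal → Prop
axiom HasNotochord : Animal → Prop

-- Early classes: Cat coerces silently into Animal via the inclusion.
def Cat : Type := { a : Animal // IsCat a }
instance : Coe Cat Animal := ⟨Subtype.val⟩

-- Late classes: 'animal with a notochord' — first recognised as an animal,
-- then a warrant for the notochord is checked explicitly.
def Chordate : Type := { a : Animal // HasNotochord a }

-- Any predicate on animals applies to a cat through the coercion.
axiom sleeps : Animal → Prop
example (c : Cat) : Prop := sleeps c
```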
Have to dash now, but to think about, when do we have
$\frac {\vdash a, b: A \;\;\; p:Id_A(a,b) \;\;\; A \lt_{c} B : Type}{\vdash p: Id_B (a, b)}?$
Is there coercion of identity proofs?
I’ll pose that last part to the coercion experts over here.
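One guess at how it might go: if the coercion is given by a function $c : A \to B$, an identity proof in $A$ can at least be carried to one in $B$ by congruence (ap), though whether the proof $p$ itself may be re-used, as in the displayed rule, is exactly the coercion question. A Lean sketch:

```lean
-- If the coercion c is a function A → B, an identity proof in A
-- is mapped to one in B by congruence (congrArg, i.e. ap).
axiom A : Type
axiom B : Type
axiom c : A → B

example (a b : A) (p : a = b) : c a = c b := congrArg c p
```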
for two syntactically substitutable terms, we had $a \lt b$ such that $P(a) \vdash P(b)$ for all frames/types $P$, then there would be a vast array of frames/types allowing inference from the $b$ variant to the $a$ variant, e.g., we’d have $P(b) \to R \vdash P(a) \to R$. It would require very restrictive frame formation were we to require universal asymmetry of the inference relation between $a$ and $b$ variants.
Very roughly, I would say that that’s why directed type theory is difficult. (-:
I don’t see why you couldn’t have a type theory where terms can have lots of types.
One thing to consider is that there are different kinds of type theories; two principal ones sometimes go by the name of “typing à la Curry” and “typing à la Church”. Around here we generally talk about typing à la Church, in which the type of a term is an intrinsic aspect of it, but in typing à la Curry there is a pre-existing collection of untyped objects to which one then assigns types. In Church typing, for one term to have multiple types you generally talk about coercions, whereas with Curry typing there is no problem at all with assigning one term multiple types.