We don’t have anything on this, I think, but there is mention of “intrinsic” and “à la Church” at coercion. From Type refinement and monoidal closed bifibrations:
One of the difficulties in giving a clear mathematical definition of the “topic” of type theory is that the word “type” is actually used with two very different intuitive meanings and technical purposes in mind:
- Like the syntactician’s parts of speech, as a way of defining the grammar of well-formed expressions.
- Like the semanticist’s predicates, as a way of identifying subsets of expressions with certain desirable properties.
These two different views of types are often associated respectively with Alonzo Church and Haskell Curry (hence “types à la Church” and “types à la Curry”), while the late John Reynolds referred to these as the intrinsic and the extrinsic interpretations of types [11]. In the intrinsic view, all expressions carry a type, and there is no need (or even sense) to consider the meaning of “untyped” expressions; while in the extrinsic view, every expression carries an independent meaning, and typing judgments serve to assert some property of that meaning.
[11] is John C. Reynolds. The Meaning of Types: from Intrinsic to Extrinsic Semantics. BRICS Report RS-00-32, Aarhus University, December 2000. pdf
There are two very different ways of giving denotational semantics to a programming language (or other formal language) with a nontrivial type system. In an intrinsic semantics, only phrases that satisfy typing judgements have meanings. Indeed, meanings are assigned to the typing judgements, rather than to the phrases themselves, so that a phrase that satisfies several judgements will have several meanings.
In contrast, in an extrinsic semantics, the meaning of each phrase is the same as it would be in an untyped language, regardless of its typing properties. In this view, a typing judgement is an assertion that the meaning of a phrase possesses some property.
The terms “intrinsic” and “extrinsic” are recent coinages by the author [1, Chapter 15], but the concepts are much older. The intrinsic view is associated with Alonzo Church, and has been called “ontological” by Leivant [2]. The extrinsic view is associated with Haskell Curry, and has been called “semantical” by Leivant.
[1] John C. Reynolds. Theories of Programming Languages. Cambridge University Press, Cambridge, England, 1998. [2] Daniel Leivant. Typing and computational properties of lambda expressions. Theoretical Computer Science, 44(1):51–68, 1986.
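A rough illustration of the extrinsic view (my own sketch, not from Reynolds): every untyped term already has a meaning, and a typing judgement merely asserts a property of that meaning. Here untyped lambda terms are tuples, and a "judgement" is a predicate tested on the term's denotation.

```python
# Sketch (names and encoding are mine): untyped lambda terms as tuples,
# with meanings assigned regardless of any typing.

def meaning(term, env):
    """Evaluate an untyped lambda term: variables, lambdas, applications."""
    kind = term[0]
    if kind == "var":
        return env[term[1]]
    if kind == "lam":
        _, name, body = term
        return lambda v: meaning(body, {**env, name: v})
    if kind == "app":
        _, f, a = term
        return meaning(f, env)(meaning(a, env))
    raise ValueError(kind)

def has_type_int_to_int(term):
    """An extrinsic 'typing judgement': assert (a finite fragment of) the
    property that the term denotes a function sending ints to ints.
    Testing finitely many inputs only approximates the real property."""
    f = meaning(term, {})
    return all(isinstance(f(n), int) for n in range(-3, 4))

identity = ("lam", "x", ("var", "x"))
```

The point of the sketch: `meaning` never consults a type, while `has_type_int_to_int` is a property of the meaning, exactly as in Reynolds's description of the extrinsic semantics.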
Anyone have a preferred name for this distinction?
Intrinsic/extrinsic isn’t bad.
Ok, so I’ve started something at intrinsic and extrinsic views of typing.
And I started type refinement.
In the (nonextended version of) A Fibrational Framework for Substructural and Modal Logics, there’s mention of Melliès and Zeilberger’s work:
Bifibrations have also been used recently to model refinement types and logical relations…Superficially, this seems to be a different use of the same semantic structure, because the base and domain categories of the bifibration play a different role…It would be interesting to investigate deeper connections and combine the two notions.
Noam was speaking on this work at Kent yesterday, and it was very good to meet him. It does surely seem that the common use of bifibrations is salient.
Yes, these two projects are much more closely related than I realized at first. When we last discussed their “Bifibrational reconstruction” paper, I was still thinking only of fibrations in an “indexing” sense, with cartesian lifts being “fundamental” and opcartesian lifts being “indexed coproducts”. But in LSR we’re using bifibrations in a very similar way as they are, with cartesian and opcartesian lifts being treated very dually, corresponding roughly to negative and positive type formers (“U-types” and “F-types” in adjoint-logic parlance). I think that their monoidal bifibrations are very close to the semantics of the still-nascent LSR 3-theory of “classical simple type theory” – like the extant LSR paper for “intuitionistic simple type theory” but allowing multiple consequents as well. (To be precise, right now I think that the most general thing we want is a monoidal bifibration over a compact closed cartesian bicategory.)
I don’t really know exactly what a “refinement type” is, but intuitively it seems to me that LSR can be thought of as a sort of “refinement typing”, and I think Dan has even used that word in referring to it. That is, the type theory “upstairs” is really just ordinary “cartesian” type theory, but “refined” by associating modal information to everything that controls what sort of derivations are permissible.
Intriguing. It would be good to resume the discussion with Noam.
I don’t really know exactly what a “refinement type” is…
There’s certainly more to say at type refinement. Should we extract ideas from this cs.stackexchange answer?
One motivation for the extrinsic point of view seems to be that it makes better sense of subtypes. So if I say things about a red apple, I may mean that they apply to the apple itself. In ’This red apple is tasty’, the ’tasty’ is a predicate applied to apples or foodstuffs, rather than to red apples.
So what’s the difference between a subtype, a dependent sum of a dependent type that is necessarily a proposition for each element of the base type, and a dependent sum of a dependent type that happens to be a subsingleton for each element of the base?
Perhaps as an example of the three: Married man; Man and warrant of his marriage; and, Man and his spouse. (Assuming we’re in a monogamous society.)
Presumably the extrinsic type theorist is prepared to distinguish the first from the second, since in ’The married man is tall’ the height only applies to the man, not his marital status. As for the third, we’d certainly have to signal the projection to speak just of the man.
The man and his spouse are tall.
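The three packagings can be written out in type-theoretic notation; here is a hypothetical sketch in Lean syntax (the names `Man`, `married`, `Spouse` are mine, assumed as axioms):

```lean
-- Sketch of the three packagings, assuming a base type and two families
axiom Man : Type
axiom married : Man → Prop
axiom Spouse : Man → Type   -- assume each fiber is a subsingleton

-- 1. Subtype: the married men
def MarriedMan : Type := { m : Man // married m }

-- 2. Dependent sum of a proposition-valued family:
--    a man together with a warrant of his marriage
def ManWithWarrant : Type := (m : Man) × PLift (married m)

-- 3. Dependent sum of a type-valued family that happens to be a
--    subsingleton: a man together with his spouse
def ManWithSpouse : Type := (m : Man) × Spouse m
```

In the first case the witness is irrelevant by fiat; in the second it is a proposition lifted to a type; in the third it is honest data that merely happens to be unique.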
I guess if it’s just a question of a bifibration p : E → B, you can have the extremes (say the identity E → E, or the unique functor E → 1), and then you’re not going to speak of refinement. If B just has the modes of truth as objects, as in Pfenning–Davies, the ’refining’ seems mostly about adding to the fibre, whereas if p concerns all sets and predicates on those sets, it’s easier to see the base as being refined.
Hi David and Mike,
Maybe I can say a couple of things.
I agree that there is a close connection between the two projects (our work on type refinement systems and the LSR approach). It might help to know that the idea of building a fine-grained, “fibrational proof theory” (where the base category encodes the allowable operations on the context) was actually one of our original motivations when we started out, and the connection to refinement typing only came later. We talked a little bit about these proof-theoretical motivations in the “Isbell duality” paper, in particular the idea of interpreting logical formulas as fibered over context types, and inference rules as fibered over context operations. (For example, the ⊗-rule of linear logic is fibered over the operation of context concatenation.) So what we call “context types” (which are “refined” by formulas in substructural logics) are very analogous to what LSR call “modes”.
A reason the analogy to refinement typing was originally very useful to me was that it made clear the importance of “working in the total category” over “working in the fibers”. Under the interpretation of a functor as a type refinement system, vertical morphisms correspond to subtyping derivations, which are just the special case of typing the identity. In some situations (namely for a fibration/opfibration), it is possible to reduce typing to subtyping, but somehow this is less natural from a type-theoretic perspective. Another useful example from computer science is Hoare logic, which can be modelled as a type refinement system over a one-object category of “commands”. In some cases it is possible to reduce reasoning in Hoare logic to reasoning in ordinary predicate logic by computing weakest preconditions (= pullbacks/right-cartesian liftings) or strongest postconditions (= pushforwards/left-cartesian liftings), but these may not always exist depending on the language of predicates and commands.
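The Hoare-logic example can be made concrete in a few lines. The following is a minimal sketch (my own toy encoding, not from the paper): predicates are Python functions on states, commands are deterministic state transformers, and for deterministic commands the weakest precondition of a postcondition Q under a command c is just Q composed with c.

```python
# Toy Hoare logic: states are dicts, predicates are state -> bool,
# commands are state -> state. For deterministic commands the weakest
# precondition (the right-cartesian lifting) is precomposition.

def assign(var, expr):
    """Command setting `var` to expr(state), leaving the rest alone."""
    def run(state):
        new = dict(state)
        new[var] = expr(state)
        return new
    return run

def seq(c1, c2):
    """Sequential composition of two commands."""
    return lambda state: c2(c1(state))

def wp(command, post):
    """Weakest precondition of `post` under `command`: the pullback of
    the predicate along the command."""
    return lambda state: post(command(state))

# Example: wp(x := x + 1, x > 0) holds exactly when x + 1 > 0.
inc_x = assign("x", lambda s: s["x"] + 1)
pre = wp(inc_x, lambda s: s["x"] > 0)
```

For nondeterministic commands, or a restricted language of predicates, such a `wp` need not be expressible, which is the situation Noam alludes to where the cartesian liftings fail to exist.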
And from a formal perspective, alluding to what Mike said, one of the things that working with a type refinement system as a functor rather than as a fibration lets you do is treat left-cartesian and right-cartesian morphisms on an equal footing. An example of this is the very useful fact that in an adjunction of refinement systems, the left adjoint preserves left-cartesian morphisms and the right adjoint preserves right-cartesian morphisms.
David wrote:
I guess if it’s just a question of a bifibration p : E → B, you can have the extremes (say the identity E → E, or the unique functor E → 1), and then you’re not going to speak of refinement.
But you can! These are actually examples that we use in the “Isbell duality” paper. The idea is that a category can be seen as a degenerate kind of “refinement system” in two different canonical ways, and this is actually useful sometimes for thinking of refinement systems as generalizations of categories (in the same sense as displayed categories).
I understand that the terminology of type refinement may not be helpful for everyone. I like the slide from Paul-André’s recent talk at Bonn where he said that a functor p : E → B can be seen in two complementary ways: either as the category E with “infrastructure” B, or as the category B with “superstructure” E. In either case, we refer to the objects of E as “refinements” of the objects of B, although in the first case it may be more natural to think of the objects of B as “shadows” of the objects of E.
re David #8:
One motivation for the extrinsic point of view seems to be that it makes better sense of subtypes. So if I say things about a red apple, I may mean that they apply to the apple itself. In ’This red apple is tasty’, the ’tasty’ is a predicate applied to apples or foodstuffs, rather than to red apples.
Yes, to me that sounds very analogous to the motivation for the original (non-categorical) work on type refinement, which was to have types capturing more precise properties of programs but without changing the semantics of those programs. In pure dependent type theory the possibility of taking such an interpretation is not always clear, because as we modify the type of a program expression to make it more precise, we also have to modify the code of that expression. (On the other hand, dependent type theory is expressive enough that working within a fragment, you can try to maintain the invariant that programs with more precise types are erasable to programs with simpler types: for example, that’s the motivation for Conor McBride’s work on “ornaments”.)
I understand that the terminology of type refinement may not be helpful for everyone.
It’s probably the usual sort of situation: saying that A is like B can only help people understand B if they already understand A. For me, the analogy is probably more likely to be helpful in understanding what “type refinement” means than the other way around.
Noam, re #10, do you know whether the slides of Paul-André’s talk at Bonn will be made available?
Do any interesting lifting problems arise against such refinement bifibrations p : E → B?
[EDIT: Bonn talk slides are here.]
Noam, re #10, do you know whether the slides of Paul-André’s talk at Bonn will be made available?
I don’t know, but will try asking…
Do any interesting lifting problems arise against such refinement bifibrations p : E → B?
Indeed. You can ask about the lifting of a monad or endofunctor on B to a monad or endofunctor on E, and there are some interesting general constructions; see for example this paper by Shin-ya Katsumata, and this paper by Atkey, Johann, and Ghani.
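A very small instance of such a lifting (a sketch of mine, loosely in the spirit of the constructions Noam cites, not taken from either paper): the List endofunctor on sets lifts to an endofunctor on predicates, sending a predicate P on A to the predicate "every element satisfies P" on lists over A, sitting above List in the subobject bifibration.

```python
# Sketch: lifting the List endofunctor from sets to predicates.
# A predicate on A is modelled as a function A -> bool; its lift is the
# predicate on lists of A holding when every element satisfies it.

def lift_list(pred):
    """Lift a predicate on A to a predicate on lists over A."""
    return lambda xs: all(pred(x) for x in xs)

def list_map(f):
    """The action of the List functor on a function (downstairs)."""
    return lambda xs: [f(x) for x in xs]

is_even = lambda n: n % 2 == 0
all_even = lift_list(is_even)
```

The compatibility to check is that the lifted predicate tracks the functor downstairs: mapping a function f over a list and then testing the lift of Q agrees with testing the lift of Q ∘ f first.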
Returning to this conversation, the video of Paul-André’s talk is available, but unfortunately in a way that makes it difficult to see what he’s writing on the blackboard.
I was thinking, John Baez used to make a great deal of the analogy between hom sets and inner products, e.g., in Higher-Dimensional Algebra II: 2-Hilbert Spaces, explaining the similarity between adjoint operators and adjoint functors. With this bifibration set-up above of a total 2-category over a 2-category of modes, it seems like we’re seeing something similar to parallel transport of vectors. So a path (or arrow) f in the 2-category of modes allows transport of elements in the fibres in both directions, f_! sending the fibre over its source to the fibre over its target, and f^* doing the opposite. Then the adjunction hom(f_! x, y) ≅ hom(x, f^* y) indicates the preservation of the ’inner product’.
By the way, Mike, can we expect any time soon the next instalment of the LSR project? Is that to be on dependent type (2-)theories, or perhaps the ’classical simple type theory’ in #6?
I think the next installment will be on dependent type theories. We’re working on it, but the details are fiddly, and none of us have a lot of time right now, so I don’t have an ETA yet.
Having written above
… we’re seeing something similar to parallel transport of vectors,
I see now Jason Reed writes in A Judgmental Deconstruction of Modal Logic:
It becomes evident in fact that the most common and familiar modal operations are precisely the ‘failure of holonomy’ of certain paths among modes of truth.
On p. 2, what does he have against inferring, from a proposition in one mode and a mode relation, that something is a proposition in a different mode (“We are careful not to indulge in the Martin-Löfian absurdity”)? Presumably we might have universes for each mode, and judge something to be a type in a mode.
I guess since Reed is dealing with a preorder of modes, he means to say that compositions of adjoints back to the same category need not be the identity. The LSR 2-category of modes approach, however, allows genuine circuits of left adjoints.
I wrote a long reply to this and then lost it by mistakenly hitting reload. d-: The short version is that I think Reed’s point is that a judgment is a relation on pre-existing entities, so if the judgment is written as “t : A”, then you need to have already defined a set of “untyped terms” from which the syntax t is drawn, as he did in the previous paragraph. This is a valid point but I don’t think it’s necessary to be so dogmatic about it; one can informally consider the set of terms to be implicitly inductively defined along with their typing judgments.
Shame about the loss. I used to use a recovery app back in the day when posting to the n-Café was so precarious.
I’m not sure I quite get the point. Does his objection cover any judgment of the form “t : A”, as this is a relation on supposedly pre-existing entities?
I can’t see why the specific case of rules concerning propositions in one mode giving rise to propositions in another mode is any more problematic.
If I understand correctly, then yes, he would say that before giving rules for a judgment t : A, one ought first to define the untyped syntax of terms from which t and A are drawn.
Unfortunately Lazarus doesn’t work with current firefox any more.
Yeah, I really miss Lazarus.
Unfortunately Lazarus doesn’t work with current firefox any more.
There is Form History Control though it is much more complicated (has abilities way beyond Lazarus that those looking for a replacement don’t need.)
If you do start using it you should set your retention period to just a few days. I had it getting clogged up with the default setting.
Thanks for the suggestion. What exactly does “clogged up” mean?
by “clogged up” I meant that Form History Control can save a lot of data which can affect its performance.
I installed FHC as a Lazarus replacement, then didn’t need to actually use it for many months. When I did, I recall getting messages about there being too much history to display, urging me to clean it up, and some actions never completing. I think I was able to recover my typing, maybe after a FF restart.
Then again, I’m running Firefox Nightly (which can be flaky itself), and FHC has been updated since I needed it, so it may now work fine with the default settings.
Still, if you are just using it to recover stuff you recently typed in, a 10-day window might be wise, so you don’t have to deal with too much data in a tool you hardly ever use and have little familiarity with.
Personally, I’ve gotten into the habit of doing a select-all then copy before I submit any text form. This is usually enough to recover if the remote process loses the text. I see FHC as a second line of defense.
Of course, a judgment like “t : A” is also often a relation between pre-existing untyped syntax. For instance, this is pretty much the only way to make formal sense of dependent type theory with the conversion rule

Γ ⊢ a : A    A ≡ B
——————————————————
Γ ⊢ a : B
(Ok, well, “pretty much the only” is probably too strong, but it’s certainly the most obvious and straightforward way.)
What does well-typed-up-to-simple-types mean?
That you’ve defined the basic syntax of your language not to include all untyped lambda terms, but only those that are typeable in the simply-typed lambda calculus, and then define dependent types as a refinement of that type discipline. https://www.cs.princeton.edu/~dpw/papers/CMU-CS-02-101.pdf is an example of being able to compute a term’s beta-normal form by relying on an arbitrary term’s simple type as an induction measure, instead of having to appeal to arbitrary beta-zigzags in the untyped syntax.
Is that also what people mean when they talk about type “erasure”?
Also, am I right that such an approach only works for “simple enough” dependent type theories, in particular those not combining positive types with a universe? For if you have bool and U, you can define a type P : bool -> U by cases, and then a dependently typed function f : Pi(x:bool) P(x) which probably isn’t simply typable?
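The bool-and-universe example can be written out; here is a quick sketch in Lean syntax (my own rendering of the example, not from the thread) showing a family defined by cases and a dependent function into it whose two branches have different simple types:

```lean
-- A type family defined by cases on Bool, via a universe
def P : Bool → Type
  | true  => Nat
  | false => Bool

-- A dependent function into P. One branch returns a Nat, the other a
-- Bool, so no single simple type can be assigned to the body of f.
def f : (b : Bool) → P b
  | true  => (0 : Nat)
  | false => true
```

This suggests the simply-typed-skeleton approach indeed needs the theory to be "simple enough" in the sense Mike describes.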
Are we returning to the relation in the quotation in #5?
Bifibrations have also been used recently to model refinement types and logical relations… Superficially, this seems to be a different use of the same semantic structure, because the base and domain categories of the bifibration play a different role… It would be interesting to investigate deeper connections and combine the two notions.
I see in Functors are Type Refinement Systems, they appeal to the microcosm principle:
These kinds of definitions—where in order to define some logical structure of interest we rely on a similar structure in the logical framework—are a recurring pattern, and an example of the “microcosm principle” in the sense of Baez and Dolan. In Section 6.4 we will see this pattern again in the definition of the logical relation.
as Mike has done for his LSR approach.
Does this put limits on refinement?