    • Comment 1 by David_Corfield, Apr 9th 2016

    We don’t have anything on this, I think, but there is mention of “intrinsic” and “à la Church” at coercion. From Type refinement and monoidal closed bifibrations:

    One of the difficulties in giving a clear mathematical definition of the “topic” of type theory is that the word “type” is actually used with two very different intuitive meanings and technical purposes in mind:

    1. Like the syntactician’s parts of speech, as a way of defining the grammar of well-formed expressions.
    2. Like the semanticist’s predicates, as a way of identifying subsets of expressions with certain desirable properties.

    These two different views of types are often associated respectively with Alonzo Church and Haskell Curry (hence “types à la Church” and “types à la Curry”), while the late John Reynolds referred to these as the intrinsic and the extrinsic interpretations of types [11]. In the intrinsic view, all expressions carry a type, and there is no need (or even sense) to consider the meaning of “untyped” expressions; while in the extrinsic view, every expression carries an independent meaning, and typing judgments serve to assert some property of that meaning.

    [11] is John C. Reynolds. The Meaning of Types: from Intrinsic to Extrinsic Semantics. BRICS Report RS-00-32, Aarhus University, December 2000 (pdf).

    There are two very different ways of giving denotational semantics to a programming language (or other formal language) with a nontrivial type system. In an intrinsic semantics, only phrases that satisfy typing judgements have meanings. Indeed, meanings are assigned to the typing judgements, rather than to the phrases themselves, so that a phrase that satisfies several judgements will have several meanings.

    In contrast, in an extrinsic semantics, the meaning of each phrase is the same as it would be in a untyped language, regardless of its typing properties. In this view, a typing judgement is an assertion that the meaning of a phrase possesses some property.

    The terms “intrinsic” and “extrinsic” are recent coinages by the author [1, Chapter 15], but the concepts are much older. The intrinsic view is associated with Alonzo Church, and has been called “ontological” by Leivant [2]. The extrinsic view is associated with Haskell Curry, and has been called “semantical” by Leivant.

    [1] John C. Reynolds. Theories of Programming Languages. Cambridge University Press, Cambridge, England, 1998.
    [2] Daniel Leivant. Typing and computational properties of lambda expressions. Theoretical Computer Science, 44(1):51–68, 1986.
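
    To make the contrast concrete, here is a minimal sketch in Haskell (my own illustration, not from any of the sources above; the datatypes are hypothetical):

        {-# LANGUAGE GADTs #-}

        -- Intrinsic ("à la Church"): terms exist only at a type; an
        -- ill-typed term is not even grammatical, and meaning is
        -- assigned to typed terms.
        data ITerm t where
          ILit  :: Int  -> ITerm Int
          IBool :: Bool -> ITerm Bool
          IAdd  :: ITerm Int -> ITerm Int -> ITerm Int

        -- Extrinsic ("à la Curry"): every term stands on its own, and
        -- a typing judgement asserts a property of it.
        data ETerm = ELit Int | EBool Bool | EAdd ETerm ETerm

        data Ty = TInt | TBool deriving Eq

        hasType :: ETerm -> Ty -> Bool
        hasType (ELit _)   TInt  = True
        hasType (EBool _)  TBool = True
        hasType (EAdd a b) TInt  = hasType a TInt && hasType b TInt
        hasType _          _     = False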

    Anyone have a preferred name for this distinction?

    • Comment 2 by Mike Shulman, Apr 11th 2016

    Intrinsic/extrinsic isn’t bad.

    • Comment 3 by David_Corfield, Apr 11th 2016

    Ok, so I’ve started something at intrinsic and extrinsic views of typing.

    • Comment 4 by David_Corfield, Apr 11th 2016

    And I started type refinement.

    • Comment 5 by David_Corfield, Jun 20th 2018 (edited Aug 8th 2018)

    In the (nonextended version of) A Fibrational Framework for Substructural and Modal Logics, there’s mention of Melliès and Zeilberger’s work:

    Bifibrations have also been used recently to model refinement types and logical relations…Superficially, this seems to be a different use of the same semantic structure, because the base and domain categories of the bifibration play a different role…It would be interesting to investigate deeper connections and combine the two notions.

    Noam was speaking on this work at Kent yesterday, and it was very good to meet him. It does surely seem that the common use of bifibrations is salient.

    • Comment 6 by Mike Shulman, Jun 20th 2018

    Yes, these two projects are much more closely related than I realized at first. When we last discussed their “Bifibrational reconstruction” paper, I was still thinking only of fibrations in an “indexing” sense, with cartesian lifts being “fundamental” and opcartesian lifts being “indexed coproducts”. But in LSR we’re using bifibrations in a very similar way as they are, with cartesian and opcartesian lifts being treated very dually, corresponding roughly to negative and positive type formers (“U-types” and “F-types” in adjoint-logic parlance). I think that their monoidal bifibrations are very close to the semantics of the still-nascent LSR 3-theory of “classical simple type theory” – like the extant LSR paper for “intuitionistic simple type theory” but allowing multiple consequents as well. (To be precise, right now I think that the most general thing we want is a monoidal bifibration over a compact closed cartesian bicategory.)

    I don’t really know exactly what a “refinement type” is, but intuitively it seems to me that LSR can be thought of as a sort of “refinement typing”, and I think Dan has even used that word in referring to it. That is, the type theory “upstairs” is really just ordinary “cartesian” type theory, but “refined” by associating modal information to everything that controls what sort of derivations are permissible.

    • Comment 7 by David_Corfield, Jun 21st 2018 (edited Jun 21st 2018)

    Intriguing. It would be good to resume the discussion with Noam.

    I don’t really know exactly what a “refinement type” is…

    There’s certainly more to say at type refinement. Should we extract ideas from this cs.stackexchange answer?

    • Comment 8 by David_Corfield, Jun 22nd 2018

    One motivation for the extrinsic point of view seems to be that it makes better sense of subtypes. So if I say things about a red apple, I may mean that they apply to the apple itself. In ’This red apple is tasty’, the ’tasty’ is a predicate applied to apples or foodstuffs, rather than to red apples.

    So what’s the difference between a subtype, a dependent sum of a dependent type that is necessarily a proposition for each element of the base type, and a dependent sum of a dependent type that happens to be a subsingleton for each element of the base?

    Perhaps as an example of the three: Married man; Man and warrant of his marriage; and, Man and his spouse. (Assuming we’re in a monogamous society.)

    Presumably the extrinsic type theorist is prepared to distinguish the first from the second, since in ’The married man is tall’ the height only applies to the man, not his marital status. As for the third, we’d certainly have to signal the projection to speak just of the man.

    The man and his spouse are tall.
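
    In symbols, the three readings might be rendered (using a hypothetical predicate $\mathrm{married}$ and type family $\mathrm{Spouse}$): the subtype $\{ x : \mathrm{Man} \mid \mathrm{married}(x) \}$; the dependent sum $\sum_{x : \mathrm{Man}} \mathrm{married}(x)$, whose second component is a warrant; and the dependent sum $\sum_{x : \mathrm{Man}} \mathrm{Spouse}(x)$, whose second component is a spouse and whose fibres merely happen to be subsingletons.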

    • Comment 9 by David_Corfield, Jun 23rd 2018

    I guess if it’s just a question of a bifibration $\mathcal{D} \to \mathcal{M}$, you can have the extremes, $\mathcal{M} = \mathbf{1}$ and $\mathcal{D} = \mathcal{M}$, and then you’re not going to speak of refinement. If $\mathcal{M}$ just has as objects $t$ and $v$ as in Pfenning-Davies, the ’refining’ seems mostly about adding to the $t$ fibre, whereas if $\mathcal{M}$ concerns all sets and $\mathcal{D}$ predicates on those sets, it’s easier to see $\mathcal{M}$ as being refined.

    • Comment 10 by Noam_Zeilberger, Jun 23rd 2018 (edited Jun 23rd 2018)

    Hi David and Mike,

    Maybe I can say a couple of things.

    I agree that there is a close connection between the two projects (our work on type refinement systems and the LSR approach). It might help to know that the idea of building a fine-grained, “fibrational proof theory” (where the base category encodes the allowable operations on the context) was actually one of our original motivations when we started out, and the connection to refinement typing only came later. We talked a little bit about these proof-theoretical motivations in the “Isbell duality” paper, in particular the idea of interpreting logical formulas as fibered over context types, and inference rules as fibered over context operations. (For example, the ${\otimes}R$ rule of linear logic is fibered over the operation of context concatenation.) So what we call “context types” (which are “refined” by formulas in substructural logics) are very analogous to what LSR call “modes”.
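
    In standard sequent-calculus notation, the rule in question is

    $$\frac{\Gamma \vdash A \qquad \Delta \vdash B}{\Gamma, \Delta \vdash A \otimes B}\;{\otimes}R$$

    and the point is that its conclusion lies over the concatenation $\Gamma, \Delta$ of the context types that the two premises lie over.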

    A reason the analogy to refinement typing was originally very useful to me was that it made clear the importance of “working in the total category” over “working in the fibers”. Under the interpretation of a functor as a type refinement system, vertical morphisms correspond to subtyping derivations, which are just the special case of typing the identity. In some situations (namely for a fibration/opfibration), it is possible to reduce typing to subtyping, but somehow this is less natural from a type-theoretic perspective. Another useful example from computer science is Hoare logic, which can be modelled as a type refinement system over a one-object category of “commands”. In some cases it is possible to reduce reasoning in Hoare logic to reasoning in ordinary predicate logic by computing weakest preconditions (= pullbacks/right-cartesian liftings) or strongest postconditions (= pushforwards/left-cartesian liftings), but these may not always exist depending on the language of predicates and commands.
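
    To give the flavour of the weakest-precondition case, here is a toy sketch in Haskell (my own illustration, with a made-up command and predicate language, not the formulation in any paper):

        -- Commands form a one-object category, with Seq as composition.
        data Expr = Var String | Lit Int | Add Expr Expr
        data Pred = Leq Expr Expr | And Pred Pred   -- predicates refine the one object
        data Cmd  = Skip | Assign String Expr | Seq Cmd Cmd

        -- Substitute expression e for variable x.
        substE :: String -> Expr -> Expr -> Expr
        substE x e (Var y) | x == y    = e
                           | otherwise = Var y
        substE _ _ (Lit n)   = Lit n
        substE x e (Add a b) = Add (substE x e a) (substE x e b)

        substP :: String -> Expr -> Pred -> Pred
        substP x e (Leq a b) = Leq (substE x e a) (substE x e b)
        substP x e (And p q) = And (substP x e p) (substP x e q)

        -- Weakest precondition: wp c q is the largest p with {p} c {q},
        -- i.e. the right-cartesian lifting of c against q.
        wp :: Cmd -> Pred -> Pred
        wp Skip         q = q
        wp (Assign x e) q = substP x e q
        wp (Seq c1 c2)  q = wp c1 (wp c2 q)

    For this little language wp always exists; as said above, with richer commands or a poorer predicate language it need not.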

    And from a formal perspective, alluding to what Mike said, one of the things that working with a type refinement system as a functor rather than as a fibration lets you do is treat left-cartesian and right-cartesian morphisms on an equal footing. An example of this is the very useful fact that in an adjunction of refinement systems, the left adjoint preserves left-cartesian morphisms and the right adjoint preserves right-cartesian morphisms.

    David wrote:

    I guess if it’s just a question of a bifibration $\mathcal{D} \to \mathcal{M}$, you can have the extremes, $\mathcal{M} = 1$ and $\mathcal{D} = \mathcal{M}$, and then you’re not going to speak of refinement.

    But you can! These are actually examples that we use in the “Isbell duality” paper. The idea is that a category can be seen as a degenerate kind of “refinement system” in two different canonical ways, and this is actually useful sometimes for thinking of refinement systems as generalizations of categories (in the same sense as displayed categories).

    I understand that the terminology of type refinement may not be helpful for everyone. I like the slide from Paul-André’s recent talk at Bonn where he said that a functor $p : \mathcal{E} \to \mathcal{B}$ can be seen in two complementary ways: either as a category $\mathcal{E}$ with “infrastructure” $\mathcal{B}$, or as a category $\mathcal{B}$ with “superstructure” $\mathcal{E}$. In either case, we refer to the objects in $\mathcal{E}$ as “refinements” of the objects in $\mathcal{B}$, although in the first case it may be more natural to think of the objects in $\mathcal{B}$ as “shadows” of the objects in $\mathcal{E}$.

    • Comment 11 by Noam_Zeilberger, re David #8:

    One motivation for the extrinsic point of view seems to be that it makes better sense of subtypes. So if I say things about a red apple, I may mean that they apply to the apple itself. In ’This red apple is tasty’, the ’tasty’ is a predicate applied to apples or foodstuffs, rather than to red apples.

    Yes, to me that sounds very analogous to the motivation for the original (non-categorical) work on type refinement, which was to have types capturing more precise properties of programs but without changing the semantics of those programs. In pure dependent type theory the possibility of taking such an interpretation is not always clear, because as we modify the type of a program expression to make it more precise, we also have to modify the code of that expression. (On the other hand, dependent type theory is expressive enough that working within a fragment, you can try to maintain the invariant that programs with more precise types are erasable to programs with simpler types: for example, that’s the motivation for Conor McBride’s work on “ornaments”.)
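
    A standard instance of that erasability invariant, sketched in Haskell rather than a dependently typed language (the names here are mine):

        {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

        data Nat = Z | S Nat

        -- Length-indexed vectors: a refinement of lists by length.
        data Vec (n :: Nat) a where
          VNil  :: Vec 'Z a
          VCons :: a -> Vec n a -> Vec ('S n) a

        -- Erasing the index recovers the underlying list; the code,
        -- and hence the meaning, of the program is unchanged.
        erase :: Vec n a -> [a]
        erase VNil         = []
        erase (VCons x xs) = x : erase xs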

    • Comment 12 by Mike Shulman, Jun 23rd 2018

    I understand that the terminology of type refinement may not be helpful for everyone.

    It’s probably the usual sort of situation: saying that A is like B can only help people understand B if they already understand A. For me, the analogy is probably more likely to be helpful in understanding what “type refinement” means than the other way around.

    • Comment 13 by David_Corfield, Jun 25th 2018 (edited Sep 15th 2020)

    Noam, re #10, do you know whether the slides of Paul-André’s talk at Bonn will be made available?

    Do any interesting lifting problems arise against such refinement bifibrations, $p : \mathcal{E} \to \mathcal{B}$?

    [EDIT: Bonn talk slides are here.]

    • Comment 14 by Noam_Zeilberger:

    Noam, re #10, do you know whether the slides of Paul-André’s talk at Bonn will be made available?

    I don’t know, but will try asking…

    Do any interesting lifting problems arise against such refinement bifibrations, $p : \mathcal{E} \to \mathcal{B}$?

    Indeed. You can ask about the lifting of a monad or endofunctor on $\mathcal{B}$ to a monad or endofunctor on $\mathcal{E}$, and there are some interesting general constructions; see for example this paper by Shin-ya Katsumata, and this paper by Atkey, Johann, and Ghani.
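
    In the simplest possible case, the flavour is this (a toy example of my own, far short of what those papers do):

        -- A refinement of a type a: a predicate carving out some of its values.
        type Pred a = a -> Bool

        -- A lifting of the Maybe endofunctor from types to predicates:
        -- from a refinement of a, a refinement of Maybe a.
        liftMaybe :: Pred a -> Pred (Maybe a)
        liftMaybe _ Nothing  = True
        liftMaybe p (Just x) = p x

    The substance of those papers is in when such liftings can be chosen compatibly with a monad’s unit and multiplication, and how to construct them systematically.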

    • Comment 15 by David_Corfield, Aug 8th 2018 (edited Sep 2nd 2018)

    Returning to this conversation, the video of Paul-André’s talk is available, but unfortunately it is difficult to see what he’s writing on the blackboard.

    I was thinking, John Baez used to make a great deal of the analogy between hom-sets and inner products, e.g., in Higher-Dimensional Algebra II: 2-Hilbert Spaces, explaining the similarity between adjoint operators and adjoint functors. With this bifibration set-up above of a total 2-category over a 2-category of modes, it seems like we’re seeing something similar to parallel transport of vectors. So a path (or arrow) in the 2-category of modes $f : A \to B$ allows transport of elements in the fibres in both directions, $F_f$ sending the fibre over $A$ to that over $B$, and $U_f$ doing the opposite. Then $(F_f X, Y)_B \cong (X, U_f Y)_A$ indicates the preservation of the ’inner product’.
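
    Compare the defining property of the adjoint of a linear operator $F : H \to H'$, namely $\langle F x, y \rangle_{H'} = \langle x, F^{\ast} y \rangle_{H}$, with the transported elements playing the role of the vectors.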

    By the way, Mike, can we expect any time soon the next instalment of the LSR project? Is that to be on dependent type (2-)theories, or perhaps the ’classical simple type theory’ in #6?

    • Comment 16 by Mike Shulman, Aug 8th 2018

    I think the next installment will be on dependent type theories. We’re working on it, but the details are fiddly, and none of us have a lot of time right now, so I don’t have an ETA yet.

    • Comment 17 by David_Corfield, Sep 2nd 2018

    Having written above

    … we’re seeing something similar to parallel transport of vectors,

    I see now Jason Reed writes in A Judgmental Deconstruction of Modal Logic:

    It becomes evident in fact that the most common and familiar modal operations are precisely the ‘failure of holonomy’ of certain paths among modes of truth.

    On p. 2, what does he have against (“We are careful not to indulge in the Martin-Löfian absurdity”) inferring from a proposition in one mode and a mode relation that something is a proposition in a different mode? Presumably we might have universes for each mode, and judge something to be a type in a mode.

    • Comment 18 by David_Corfield, Sep 2nd 2018

    I guess since Reed is dealing with a preorder of modes, he means to say that compositions of adjoints back to the same category need not be the identity. The LSR 2-category of modes approach, however, allows genuine circuits of left adjoints.

    • Comment 19 by Mike Shulman, Sep 2nd 2018

    I wrote a long reply to this and then lost it by mistakenly hitting reload. d-: The short version is that I think Reed’s point is that a judgment is a relation on pre-existing entities, so if the judgment is written as $\vdash A_p \; prop_p$, then you need to have already defined a set of “untyped terms” from which the syntax $A_p$ is drawn, as he did in the previous paragraph. This is a valid point but I don’t think it’s necessary to be so dogmatic about it; one can informally consider the set of terms to be implicitly inductively defined along with their typing judgments.

    • Comment 20 by David_Corfield, Sep 2nd 2018

    Shame about the loss. I used to use a recovery app back in the day when posting to the n-Café was so precarious.

    I’m not sure I quite get the point. Does his objection cover any judgment of the form $\vdash a : A$, as this is a relation on supposed pre-existing entities?

    I can’t see why the specific case of rules concerning $p$-propositions giving rise to $q$-propositions is any more problematic.

    • Comment 21 by Mike Shulman, Sep 2nd 2018

    If I understand correctly, then yes, he would say that before giving rules for a judgment $a : A$, one ought to first define the untyped syntax of terms from which $a$ and $A$ are drawn.

    • Comment 22 by Mike Shulman, Sep 2nd 2018

    Unfortunately Lazarus doesn’t work with current Firefox any more.

    • Comment 23 by Todd_Trimble, Sep 2nd 2018

    Yeah, I really miss Lazarus.

    • Comment 24 by RodMcGuire, Sep 2nd 2018

    Unfortunately Lazarus doesn’t work with current Firefox any more.

    There is Form History Control, though it is much more complicated (it has abilities way beyond Lazarus that those looking for a replacement don’t need).

    If you do start using it, you should set your retention period to just a few days. I had it getting clogged up with the default setting.

    • Comment 25 by Mike Shulman, Sep 3rd 2018

    Thanks for the suggestion. What exactly does “clogged up” mean?

    • Comment 26 by RodMcGuire, Sep 3rd 2018

    By “clogged up” I meant that Form History Control can save a lot of data, which can affect its performance.

    I installed FHC as a Lazarus replacement, then didn’t actually need to use it for many months. When I did, I recall getting messages about there being too much history to display, urging me to clean it up, and some actions never completing. I think I was able to recover my typing, maybe after a FF restart.

    Then again, I’m running Firefox Nightly (which can be flaky itself), and FHC has been updated since I needed it, so it may now work fine with the default settings.

    Still, if you are just using it to recover stuff you recently typed in, a 10-day window might be wise, so you don’t have to deal with too much data in a tool you hardly ever use and have little familiarity with.

    Personally, I’ve gotten into the habit of doing a select-all then copy before I submit any text form. This is usually enough to recover if the remote process loses the text. I see FHC as a second line of defense.

    • Comment 27 by jcreed, Sep 3rd 2018 (edited Sep 3rd 2018)

    Hi all - I'm the author of that "Judgmental Deconstruction" note cited above.

    The "Martin-Lofian absurdity" line is something that I half-regret
    writing. It concerns an actual mathematical/aesthetic opinion I have,
    but it was needless of me to be so polemical about it.

    The opinion is pretty narrow and not a hill I'd die on, really: stated
    with the benefit of some rumination in the years since then, I can
    say that if you're going to set up a type of "pre-propositions" and
    carve out the "real propositions" as a subset of them, and then carve
    out the "true propositions" as a subset of *them* in turn, then I feel
    you ought to have a good reason for doing so, and articulate it. And I
    think (it's been a while since I've read the original Siena lectures,
    so I hope I'm not mischaracterizing them) the particular notation he
    used to write down the rules

    |- A prop |- B prop
    -------------------------- ^formation
    |- A ^ B prop

    and

    |- A true |- B true
    -------------------------- ^introduction
    |- A ^ B true

    is confusing levels in a certain way.

    For when I write down an LF (a la Harper, Honsell, Plotkin)
    signature to represent these things, I'd declare constants and
    constant type families

    prop : type.
    ^ : prop -> prop -> prop.
    true : prop -> type.
    ^introduction : Pi A : prop. Pi B : prop. true A -> true B -> true (A ^ B)

    and so the most "direct" translation, in my mind, of the ^-formation part of this
    signature back into informal inference-rule notation, is actually

    |- prop |- prop
    ------------------ ^
    |- prop

    That is, the syntactic operation of conjunction is telling us that if
    prop is inhabited, and prop is inhabited, then prop is inhabited ---
    and the name of this method of getting from two props to one is called
    "^".

    Now, you might ask: *can* we arrange the logical-framework encoding so
    that M-L's formation rule is what you get out? Sure! One way we could do it is

    preprop : type.
    prop : preprop -> type.
    pre^ : preprop -> preprop -> preprop.
    true : preprop -> type.
    ^formation : Pi A : preprop. Pi B : preprop. prop A -> prop B -> prop (A pre^ B)
    ^introduction : Pi A : preprop. Pi B : preprop. true A -> true B -> true (A pre^ B)

    or alternatively we could bake in the requirement that the preprops
    we're dealing with are props everywhere. But this has all the
    advantages of trying to write a compiler in lisp instead of
    haskell/ocaml/sml/etc. For the particular case of propositional logic,
    the first encoding, without this unnecessary abstraction level, to me
    seems like a better way of setting things up.

    I'm happy to admit that other, more complex, situations may merit
    careful staging of some pre-syntax, and filtering out a better behaved
    subset of it. But philosophically it seems to me that if you write
    down

    |- A prop |- B prop
    -------------------------- ^formation
    |- A ^ B prop

    you're *already assuming that I know what you mean by A ^ B as a syntactic object*,
    so it seems just as well to say "what the syntax of propositions is, is

    Prop A ::= A ^ B | A v B | ... etc. "

    instead of saying (either tacitly, as I think M-L does, or explicitly)
    "what the syntax of pre-propositions is, is

    Pre-Prop A ::= A ^ B | A v B | ... etc. and the propositions are characterized by

    |- A prop |- B prop
    -------------------------- ^formation
    |- A ^ B prop
    "

    I hope some of that makes sense --- I find it hard to think/talk about these things clearly
    without reference to some hard-edged formal system as actually implemented in a computer,
    which is why I attempted to explain in terms of pseudo-Twelf notation above...
    • Comment 28 by jcreed, Sep 3rd 2018

    Which is all to say that Mike's understanding above of what I was getting at is fairly correct. It is indeed not essential to be dogmatic about it, and I repudiate past-me for thinking so.
    • Comment 29 by David_Corfield, Sep 4th 2018 (edited Sep 4th 2018)

    Thanks for dropping by, Jason, and clarifying your position. I wonder if the shift to 'a: A' style notation makes a difference here, or would you reply as Mike did in #21 above?

    I mean the shift from

    |- A prop |- B prop
    -------------------------- ^formation
    |- A ^ B prop

    to

    |- A: Prop |- B: Prop
    -------------------------- ^formation
    |- A ^ B: Prop

    and from

    |- A true |- B true
    -------------------------- ^introduction
    |- A ^ B true

    to

    |- a: A |- b: B
    -------------------------- ^introduction
    |- (a, b): A ^ B


    The first switch seems in line with what you write

    "the syntactic operation of conjunction is telling us that if prop is inhabited, and prop is inhabited, then prop is inhabited --- and the name of this method of getting from two props to one is called "^" "

    but uses mention of terms to indicate inhabitedness.
    • Comment 30 by jcreed, Sep 4th 2018 (edited Sep 4th 2018)

    Ah, I think I see.

    Yes, even though I feel like I'm splitting hairs, there's a sense in which the

    |- (A ^ B) : Prop

    from the second ^formation rule that you give, when explained as:
    the expression A ^ B is a piece of evidence, and the thing it's evidence for is the inhabitation of Prop
    feels very comfortable and reasonable and nice to me, and less so

    |- (A ^ B) prop

    from the first, when explained as
    the expression A ^ B is the subject of a judgment, and judged to be a prop, and the evidence is the derivation.
    The only fault I'm claiming of this is merely some kind of redundancy or failure of parsimony or something ---
    because it remains to explain what the syntax of propositions is, if you're going to go judging things about them.
    Whereas |- A ^ B : Prop seems like an actual foundational act of defining syntax --- at least, together with the notion that the
    *judgment* of that sequent is "prop is inhabited".

    anyway, thanks for helping me probe these ideas!
    • Comment 31 by Mike Shulman, Sep 4th 2018

    Of course, a judgment like $a : A$ is also often a relation between pre-existing untyped syntax. For instance, this is pretty much the only way to make formal sense of dependent type theory with the conversion rule

    $$\frac{\vdash a : A \quad \vdash A \equiv B}{\vdash a : B}.$$
    • Comment 32 by Mike Shulman, Sep 4th 2018

    (Ok, well, “pretty much the only” is probably too strong, but it’s certainly the most obvious and straightforward way.)

    • Comment 33 by jcreed, Sep 5th 2018

    I concur that for talking about dependent types, having some kind of phase distinction between untyped (or merely well-typed-up-to-simple-types) terms and fully-dependently-typed terms is entirely reasonable.
    • Comment 34 by jcreed, Sep 5th 2018

    (for me and the kinds of things I worked on in grad school, it was more often convenient to make the 'first phase' already well-typed-up-to-simple-types, rather than arbitrary lambda terms, but I think the more important thing is to draw attention to the fact that we agree that these kinds of phase distinctions can be useful. It's easy to imagine that it sounds like I was glibly denying that in 'judgmental deconstruction', and I'm happy to take the blame for writing that line badly)
    • Comment 35 by Mike Shulman, Sep 5th 2018

    What does well-typed-up-to-simple-types mean?

    • Comment 36 by jcreed, Sep 5th 2018

    That you’ve defined the basic syntax of your language to not include all untyped lambda terms, but only those that are typeable in the simply-typed lambda calculus, and then define dependent types as a refinement of that type discipline. https://www.cs.princeton.edu/~dpw/papers/CMU-CS-02-101.pdf is an example of being able to compute a term’s beta-normal form by relying on an arbitrary term’s simple type as an induction measure, instead of having to appeal to arbitrary beta-zigzags in the $A \equiv B$ judgment.

    • Comment 37 by Mike Shulman, Sep 6th 2018

    Is that also what people mean when they talk about type “erasure”?

    • Comment 38 by Mike Shulman, Sep 6th 2018

    Also, am I right that such an approach only works for “simple enough” dependent type theories, in particular those not combining positive types with a universe? For if you have bool and U, you can define a type P : bool -> U by cases, and then a dependently typed function f : Pi(x:bool) P(x) which probably isn’t simply typable?
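
    Spelling that out (in my own notation): take $P(\mathsf{true}) :\equiv \mathbb{N}$ and $P(\mathsf{false}) :\equiv \mathsf{bool}$, with $f :\equiv \lambda x.\, \mathsf{if}\; x\; \mathsf{then}\; 0\; \mathsf{else}\; \mathsf{true}$. The underlying untyped term of $f$ has no simple type, since its two branches would need to share one.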

    • Comment 39 by David_Corfield, Sep 6th 2018

    Are we returning to the relation in the quotation in #5?

    Bifibrations have also been used recently to model refinement types and logical relations…Superficially, this seems to be a different use of the same semantic structure, because the base and domain categories of the bifibration play a different role…It would be interesting to investigate deeper connections and combine the two notions.

    I see that in Functors are Type Refinement Systems they appeal to the microcosm principle:

    These kinds of definitions—where in order to define some logical structure of interest we rely on a similar structure in the logical framework—are a recurring pattern, and an example of the “microcosm principle” in the sense of Baez and Dolan. In Section 6.4 we will see this pattern again in the definition of the logical relation.

    as Mike has done for his LSR approach.

    Does this put limits on refinement?