In the context of CwFs, how are universes modelled?
A universe can be defined to just be a pair $(U, \mathrm{El})$, where $U \in \mathrm{Ty}(1)$ is a type over the terminal object, and $\mathrm{El} \in \mathrm{Ty}(1.U)$ a type over $U$. Of course, such a universe is not very useful, as it is not assumed to be closed under type formers.
However, one can prove that a universe determines a new CwF structure on the same category. Define $\mathrm{Ty}_U(\Gamma)$ to be the set of morphisms $\Gamma \to 1.U$, and $\mathrm{Tm}_U(\Gamma, A)$ the set of morphisms $\Gamma \to 1.U.\mathrm{El}$ whose composite with the projection $1.U.\mathrm{El} \to 1.U$ is $A$. Furthermore, the identity functor can be extended to a CwF morphism from this new structure to the original one.
Now, if the original CwF has a $\Pi$-type structure, you can ask that the CwF structure induced by $(U, \mathrm{El})$ also has a $\Pi$-type structure, and that the corresponding morphism preserves it (i.e. it is a morphism of CwFs-with-$\Pi$-types). This gives a definition of a “universe closed under $\Pi$-types”.
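For concreteness, such closure structure is often presented as an operation on codes together with an identification of decodings; the notation below is one common choice I am supplying here, not fixed by this thread:

```latex
% A universe (U, El) closed under Pi-types (notation assumed):
% given a code a and a code b over its decoding, a new code pi(a, b),
% whose decoding agrees with the actual Pi-type, stably under substitution.
\[
  \frac{\Gamma \vdash a : U \qquad \Gamma.\mathrm{El}[a] \vdash b : U}
       {\Gamma \vdash \mathrm{pi}(a, b) : U}
  \qquad
  \mathrm{El}[\mathrm{pi}(a, b)] \cong \Pi(\mathrm{El}[a], \mathrm{El}[b])
\]
```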
Is it possible to model a cumulative hierarchy of universes? What about things like type families?
There’s also Coquand’s definition of CwU, which we also used in our work on modal type theory. Coquand’s notion is stricter than the usual notion and can be interpreted in sheaf categories.
And some more references at universe.
In a CwF, every “term” (element of the presheaf $\mathrm{Tm}$) has a unique “type” (element of the presheaf $\mathrm{Ty}$), so I think it doesn’t really make sense for it to directly model cumulative universes, where the same term belongs to both $U_i$ and $U_{i+1}$. (Similarly, since no element of $\mathrm{Tm}$ is also an element of $\mathrm{Ty}$, it doesn’t really make sense for it to model Russell universes, only Tarski ones.) My current opinion (which could change again) is that cumulative universes and Russell universes should be regarded as sugary syntax, like un-annotated lambdas, which get automatically fully annotated by a typechecking algorithm before being passed to an initiality theorem.
I would expect that cumulativity concerns types $A \in \mathrm{Ty}(\Gamma)$ (and more generally, arrows obtained by pulling back along a representable) rather than “terms” $a \in \mathrm{Tm}(\Gamma, A)$.
In that context, “$A$ is at level $i$” would mean that the morphism classifying $A$ factors through $U_i$ (in the CwF), or alternatively that $\Gamma.A \to \Gamma$ is a pullback of $1.U_i.\mathrm{El}_i \to 1.U_i$.
Cumulativity could be stated as $1.U_i.\mathrm{El}_i \to 1.U_i$ and $1.U_i \to 1$ both being pullbacks of $1.U_{i+1}.\mathrm{El}_{i+1} \to 1.U_{i+1}$, no?
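Spelled out as a diagram (with names such as $\mathrm{lift}_i$ being my own, not from the thread), the first of these conditions asks for a chosen pullback square relating consecutive universes:

```latex
% Cumulativity via a chosen pullback square (notation assumed):
% the generic family at level i is pulled back from the one at level i+1
% along a map lift_i of base objects.
\[
\begin{array}{ccc}
  1.U_i.\mathrm{El}_i & \longrightarrow & 1.U_{i+1}.\mathrm{El}_{i+1} \\
  \big\downarrow & & \big\downarrow \\
  1.U_i & \xrightarrow{\;\mathrm{lift}_i\;} & 1.U_{i+1}
\end{array}
\]
```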
I really do think cumulativity is a statement about terms belonging to a universe. There is no syntactic way to say “$A$ is at level $i$” other than “$A$ is a term in the $i$-th universe”.
Cumulativity could be stated as $1.U_i.\mathrm{El}_i \to 1.U_i$ and $1.U_i \to 1$ both being pullbacks of $1.U_{i+1}.\mathrm{El}_{i+1} \to 1.U_{i+1}$, no?
But “being a pullback of” is not a mere property; you have to specify a pullback square. That structure then corresponds to an explicit “lifting” operator in the syntax, which is not what people usually mean by “cumulative”.
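In syntax, a chosen pullback square of this kind corresponds to a rule along the following lines (rule name and notation are my own, not fixed by the thread):

```latex
% An explicit lifting operator corresponding to a chosen pullback square
% (names assumed): a coercion on codes whose decoding is unchanged.
\[
  \frac{\Gamma \vdash a : U_i}{\Gamma \vdash \mathrm{Lift}_i\, a : U_{i+1}}
  \qquad
  \mathrm{El}_{i+1}[\mathrm{Lift}_i\, a] = \mathrm{El}_i[a]
\]
```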
Fair enough. I am happy with an explicit “lifting” operator in the AST of the compiler, and letting the compiler (or the human reader) elaborate these for me (rendering them as syntactic sugar, as you pointed out).
Which does raise the question of how that elaboration should happen. Does anyone know whether there is an algorithm in the literature for compiling Russell universes into Tarski ones, and cumulative universes into non-cumulative ones, analogous to the bidirectional typechecking algorithms that can automatically annotate abstractions?
@Mike supposedly this says that Russell universes can be modelled with Tarski ones. This is done by taking lifts as coercions. I don’t know so much about modelling coercions, however.
Thanks; but there are no details there, only a few lines sketching the translation and a “claim” that it works. I’d like to see a full proof/implementation translating a Russell/cumulative theory into a Tarski/lifting theory.
@Mike I don’t know any actual algorithm that I could name, but you can probably adapt the bidirectional algorithm. In checking mode, you have a judgement that takes as input the context $\Gamma$, the term $t$, the type $A$, and a lower bound $i$ on the universe level, and outputs the elaboration $t'$ of the term (with explicit coercions) and the actual universe level $j \ge i$. In synthesizing mode you have a judgement where $\Gamma$ and $t$ are inputs, and $A$, $t'$ and $j$ are outputs. You introduce coercions at the phase-change step.
The conversion judgement should ensure that the resulting level $k$ is an upper bound of $i$ and $j$.
I have written $\lceil A \rceil_i$ to mean the type $A$ seen as an element of $U_i$. Is that the kind of algorithm you are looking for?
Yes, that’s the sort of thing I’d like to see written out in detail.
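As a toy illustration of the shape of such an elaborator, here is a minimal Python sketch of a synth/check pair that threads universe levels and inserts explicit lifting coercions at the mode switch. It is entirely my own construction under strong simplifying assumptions (a fragment with only variables and universe codes; all names are mine), not the algorithm from this thread:

```python
# Toy sketch (my own construction, not from the thread): bidirectional
# elaboration that inserts explicit lifting coercions between levels.
# The fragment has only variables (recorded in the context as inhabiting
# some U_j) and universe codes U_i; Lift is the explicit coercion
# U_i -> U_{i+1} discussed above.
from dataclasses import dataclass


@dataclass(frozen=True)
class Var:
    name: str


@dataclass(frozen=True)
class Univ:
    level: int


@dataclass(frozen=True)
class Lift:
    body: object  # the term being lifted one level up


def synth(ctx, t):
    """Synthesizing mode: from ctx and t, return (t', j) with t' : U_j."""
    if isinstance(t, Var):
        return t, ctx[t.name]
    if isinstance(t, Univ):
        return t, t.level + 1          # U_i : U_{i+1}
    if isinstance(t, Lift):
        body, j = synth(ctx, t.body)
        return Lift(body), j + 1       # lifting raises the level by one
    raise TypeError(f"unknown term {t!r}")


def check(ctx, t, i):
    """Checking mode: elaborate t to live in U_i (a lower bound on its
    level), inserting Lifts at the phase-change step; there is no
    downward coercion, so a level above the bound is an error."""
    t2, j = synth(ctx, t)
    if j > i:
        raise TypeError(f"level {j} exceeds bound {i}")
    for _ in range(i - j):             # coerce level j up to i
        t2 = Lift(t2)
    return t2


# Example: a variable A : U_0 checked against level 2 gets two lifts.
print(check({"A": 0}, Var("A"), 2))
# -> Lift(body=Lift(body=Var(name='A')))
```

A full elaborator would additionally handle binders and a conversion judgement that joins the levels of the two sides, as sketched in the comment above.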