I am currently experimenting with a new way to code comodalities such as ♭ in Coq. The problem, as we know, is that semantically the coreflector ♭ is not an indexed functor, and hence cannot be represented internally as an operation, since such an operation would be applicable in any context. Previously our only method was to represent ♭ as an endomorphism of ♯Type, thinking of the latter as the “external” universe of “internal” types. But recently I have realized that Coq actually already has a way to talk about things in the empty context, namely “modules”. This gives another way to axiomatize ♭: rather than having it act on “external” types in ♯Type, we restrict it to act only on types in the empty context.
Some code is here (link probably not long-term stable). So far I’ve managed to prove that A is comodal as soon as the counit ♭A → A admits a section, and to conclude that ♭A is comodal. This gives at least some evidence that this approach may actually be feasible. The syntax for modules is rather verbose, requiring a fair amount of boilerplate in order to do anything with them: the first proof is 3–4 times as long as the analogous proof for modalities (A is modal as soon as the unit A → ♯A admits a retraction). I don’t know how this will compare to the ♯Type method, which will also be more verbose than a direct internal axiomatization; I haven’t yet tried to redevelop that theory in the new Coq library (I just finished the proof that the universe of modal types is modal when the modality is lex).
However, using modules has one potentially significant advantage over ♯Type, namely, the types we work with are real types, and so all the machinery of the rest of the library can be applied to them directly. For instance, if we have some pointed connected type A, then we need to jump through some module hoops in order to construct ♭A, but once we have it, we really have ♭A, and so we can (say) define ♭_dR A directly as the fiber of the counit map ♭A → A, where “fiber” has the same meaning as it does anywhere else in the library. Thus we can go on to apply all the theorems about fibrations, fibers, fiber sequences, and so on. With ♯Type we would first have to manually “externalize” all of those theorems, because ♭A would not be a type but only an element of ♯Type. So in the long run, this may be a big win.
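For the record, the definition alluded to here (in the notation of the cohesive HoTT literature; a gloss, not a quote from the library) takes the fiber of the counit over the basepoint:

```latex
% de Rham coreflection of a pointed type A, as the fiber of the counit
\flat_{\mathrm{dR}} A \;:=\; \mathrm{fib}\bigl(\flat A \longrightarrow A\bigr)
```

The point being made is that once ♭A is an honest type, `fib` here is the library’s ordinary fiber construction, with no externalization step.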
Interesting, thanks for the message.
What is the “theoretical” status of “modules” in Coq? From what you say it sounds like some feature that Coq happens to have, which is however outside of the “specification” (or whatever it is called) of HoTT.
Or maybe asked differently: what is the categorical semantics of the type theory expressed by Coq+modules? Do you think this is clear enough that, when you now prove anything via modules, one still knows that the result translates to a proof in some ∞-topos? Or, since that seems to be what you are thinking, could you maybe expand a bit on the type-theoretical status of this “modules” feature?
That was my first question too. (-: I haven’t heard anything from Bob Harper yet (I plan to ask him about it in person the next time I see him), but Jason says
It is commonly believed that turning a parameterized module definition into a lambda abstraction is impossible (though I know of no attempt at a formal proof), and my understanding of the universe algorithm is that such a transformation would be unsound, given how universes are implemented. (Of course, there could be a bug somewhere, given how complicated the implementation of modules and functors is.)
Obviously the categorical semantics of modules ought to be studied, if it hasn’t been! However, even if it hasn’t, I don’t think I would feel worse about using them for this than I do about, say, using the private-types hack for HITs. In both cases we don’t know for absolute sure that the hack is sound, and in fact in the latter case people have occasionally uncovered bugs or subtleties forcing us to modify how we do the hack or in some cases fix the underlying proof assistant. However, if we keep in mind the categorical situation we’re intending to model, I think there’s not much danger of “accidentally” using such a bug and proving something invalid.
Thanks. Well, it certainly sounds exciting. It sounds like it might be yet another instance of computer scientists introducing a structure for purely computer-scientific reasons which then turns out to mean something profound when thought of in terms of categorical semantics.
What are these “modules” intended for, or designed for, from the point of view of the Coq-programmer? Which problem do they solve for them?
I don’t really know. (-:
One thing they do is namespace control. You put a bunch of definitions in a module, and then somewhere else you can either import that module to get its definitions in your namespace, or you can refer to them qualified as Module.name
and avoid polluting your own namespace. In fact, every file in Coq is automatically a module, and when you load other files you’re technically importing their modules. However, namespace control requires only a tiny fraction of what modules can do.
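As a minimal Coq sketch of the namespacing use (illustrative names only, not from any particular library):

```coq
(* A module bundling some definitions under one name. *)
Module NatFacts.
  Definition double (n : nat) : nat := n + n.
  Lemma double_zero : double 0 = 0.
  Proof. reflexivity. Qed.
End NatFacts.

(* Refer to the contents qualified, without importing: *)
Check NatFacts.double.

(* Or import the module to bring its names into scope: *)
Import NatFacts.
Check double_zero.
```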
In ML, which Coq’s modules are based on, I think modules are used to solve problems that arise in most real-world programming languages coming from the absence of dependent types, analogously to how typeclasses are used in Haskell. Coq also has typeclasses based on Haskell’s, but they are not as fundamental an innovation, because in a dependently typed language a typeclass can just be defined as a dependent record. Coq’s typeclasses add an automatic inference algorithm, which makes them easier to use, but they don’t add any fundamental expressivity to a language that already has dependent types (in fact, I believe they’re implemented entirely outside the kernel). ML uses modules in some of the same ways that Haskell uses typeclasses (e.g. Bob Harper wrote a blog post pointing out that monads, which Haskell implements with typeclasses and some extra syntactic sugar, can equally well be implemented with ML’s modules). But I don’t know what a traditional Coq programmer would use modules for that they couldn’t do just as well with dependent records.
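The point about dependent records can be made concrete. A “class” of monoids, say, is definable as an ordinary record, and an “instance” is just an element of that record type (a hypothetical minimal version; Coq’s `Class` mechanism essentially adds inference on top of this):

```coq
(* A "typeclass" as a plain dependent record:
   later fields may depend on earlier ones. *)
Record Monoid := {
  carrier : Type;
  e : carrier;
  op : carrier -> carrier -> carrier;
  e_left : forall x, op e x = x
}.

(* An instance is just an element of the record type.
   The unit law holds definitionally for 0 + x, so eq_refl suffices. *)
Definition nat_add_monoid : Monoid :=
  {| carrier := nat;
     e := 0;
     op := Nat.add;
     e_left := fun x => eq_refl |}.
```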
BTW, we are already using modules in two other places in the HoTT library. Their namespace control feature, combined with private types, is how we implement HITs. And they allow us to hypothesize a universe-polymorphic object, which turns out to be useful for modalities, as we want “a modality” to act on all universes at once “in the same way”.
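Schematically (with hypothetical names, not the HoTT library’s actual interface), hypothesizing such a universe-polymorphic object looks like declaring a module type and developing its consequences in a module functor, so that a single hypothesized operation can be used at every universe “in the same way”:

```coq
(* A module type hypothesizing an operation on types with a counit. *)
Module Type Comodality.
  Parameter F : Type -> Type.
  Parameter counit : forall A : Type, F A -> A.
End Comodality.

(* A functor: everything proved inside holds for any instantiation,
   and M.F may be applied at many universes, unlike an ordinary
   hypothesis of type (Type -> Type), which lives at one universe. *)
Module ComodalityTheory (M : Comodality).
  Definition counit_comp (A B : Type) (f : A -> B) (x : M.F A) : B :=
    f (M.counit A x).
End ComodalityTheory.
```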
Hi Mike, just out of interest: any news regarding your experiment here?
No, I’ve been busy with other things and haven’t had the chance to push it any further.
You know, what would help, when I get back to it, would be to have a concrete goal to formalize. Can you suggest a theorem that’s interesting and not too difficult, but uses ♭ and its universal property in a nontrivial way?
One simple statement that is nevertheless of actual interest is the one we recently discussed, back when I had managed to turn the search for a cohesive axiomatization of the Penrose–Ward transform into a wild goose-chase:
Theorem: Given a cohesive ∞-group G, the Maurer–Cartan form θ : G → ♭_dR BG carries an essentially unique G-equivariant structure, intertwining the right action of G on itself with the gauge action of G on its flat differential forms ♭_dR BG.
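For reference, in the usual cohesive-HoTT notation (my gloss of the standard definitions, with BG the delooping of G), the map in question arises from the fiber sequence defining the flat de Rham coefficients:

```latex
\flat_{\mathrm{dR}} \mathbf{B}G \;:=\; \mathrm{fib}\bigl(\flat \mathbf{B}G \to \mathbf{B}G\bigr),
\qquad
\theta \;:\; G \simeq \Omega \mathbf{B}G \longrightarrow \flat_{\mathrm{dR}} \mathbf{B}G .
```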
If this is too simple, let me know and I’ll think of something else.
That’s a good start, thanks.
In thinking about this some more I have realized that the two ways to axiomatize ♭ are closely related.
The version with modules works even in the absence of ♯, of course. But as long as the discrete types are closed under finite limits (such as if ♭ is lex), it follows that there is a way to internalize ♭ as a map ♭Type → ♭Type. Namely, the counit ♭Type → Type induces a fibration over ♭Type, and if we apply ♭ to its domain we get another fibration that is classified by another map ♭Type → Type; now factor this through the counit to get a map ♭Type → ♭Type. Then if we do have ♯, this is equivalent to a map ♯Type → ♯Type, and I expect that the rest of the original axiomatization follows. The only snag I expect is that without a resizing axiom, the codomain of this map will be a larger universe than its domain.
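Schematically, the steps might be summarized as follows (my gloss: ε the counit of ♭, and the universal fibration over Type written as a pullback):

```latex
% pull back the universal fibration along the counit, apply \flat to its
% total space, and factor the resulting classifying map through the counit:
E := \varepsilon^{*}\,\widetilde{\mathrm{Type}} \to \flat\mathrm{Type},
\qquad
\flat E \to \flat\mathrm{Type}
\text{ classified by } \flat\mathrm{Type} \to \mathrm{Type},
\qquad
\text{factoring through } \varepsilon:\; \flat\mathrm{Type} \to \flat\mathrm{Type}.
```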
In the opposite direction, I think if we start from the original axiomatization in terms of ♯ and add (using modules) the single rule that any element of ♯A in the empty context is of the form x^♯ for some x : A, then we can recover all the benefits of the full axiomatization with modules. For instance, a type A in the empty context yields an element of ♯Type in the empty context, to which we can apply ♭, and then (by our rule) deduce an honest type ♭A. And so on. Semantically, I think what’s happening is that ♯ induces an indexed category over the topos of codiscrete types, and the axiomatization of ♭ asserts that the self-indexing of the codiscrete types is a coreflective sub-indexed-category of this; but without a rule like the one above, we can’t identify this indexed category with the one induced as usual by a geometric morphism (i.e. whose fiber over X is the slice over X).
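A sketch of how such a rule could be stated in Coq (entirely hypothetical names; `sharp` and `sharp_unit` stand in for however ♯ and its unit are axiomatized):

```coq
(* Hypothetical: "sharp" axiomatized internally, with the extra rule
   packaged as a module functor so it only applies to closed terms. *)
Axiom sharp : Type -> Type.
Axiom sharp_unit : forall A : Type, A -> sharp A.

Module Type ClosedSharpElement.
  Parameter A : Type.
  Parameter x : sharp A.
End ClosedSharpElement.

(* The rule: any element of (sharp A) in the empty context comes from
   an element of A.  Since a module functor cannot be lambda-abstracted
   into an internal function, this does not collapse sharp in arbitrary
   contexts. *)
Module UnSharp (E : ClosedSharpElement).
  Axiom preimage : E.A.
  Axiom preimage_eq : sharp_unit E.A preimage = E.x.
End UnSharp.
```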
So now I am starting to think that maybe the best thing to do is add this single rule to the ♯ version, since the two are basically equivalent but this one doesn’t have a problem of universes. (This would also remedy a defect of the pure-modules axiomatization, namely that we can’t actually assert the full universal property of ♭, since it would require infinitely many axioms, and there is no way to “λ-abstract” a module function to get them all from a finite number the way we can internally.) But I’ll need to experiment with it to see how well it works… and I don’t know when I’ll have time for that…
Just to check if I am following: you are saying at the end that you now tend not to use modules after all, but rather to add that extra rule to sharp?
While I have no sense of what goes into the details of coding with those modules, I have some gut feelings (which may or may not be of interest, but which I’ll communicate anyway):
– Using modules seems to require expanding the underlying meta-theory more than I would have hoped necessary. I would have hoped that it had been settled what HoTT is, and that from that safe foundation we would now proceed to interesting applications. Changing the foundations as the applications change may at times be necessary, but here it seems to defeat the purpose a bit.
– One aspect of the axiom system of cohesion that makes it attractive is that it is so simple and elegant and yet so expressive (in the homotopy theoretic context at least). Speaking externally it is super-simple. Speaking internally we may expect some extra care to be necessary, as indeed is the case, but it would seem disappointing if the internal formalization ends up looking contrived or inelegant. Optimally, the internal formulation would superficially look a little more subtle than the external formulation, but would on reflection reveal some further intrinsic elegance.
– In this vein, I had very much enjoyed the fact that your original HoTT-formalization of cohesion gave ♯ the special role of laying the ground on which the others stand. This feels right in a philosophical way. (If we further assume that ♯ “resolves the absolute contradiction”, then other people had suggested before that this is the ground of everything else…)
Hi Mike,
please ignore the whimsical bits in #13, if they are off-putting. I’d just enjoy it if you could get back to me on the question:
Just to check if I am following: you are saying at the end that you now tend not to use modules after all, but rather to add that extra rule to sharp?
Sorry, somehow I missed seeing #13 entirely. Thanks for the bump.
The extra rule for ♯ would itself have to be stated using modules, since it’s only valid in the empty context and modules are the only way we have to say that. But that would be the only use of a module. I haven’t actually tried this way of formalizing anything yet, though, so I can’t say for sure how well it will work. In particular, I don’t know how often one would want to actually use that module. There might be different choices to be made about how often to use it, and I don’t know yet which would be best.
Regarding the rest, I share most of your feelings, but right now I’m not sure which approach I would consider more elegant. Also I do think it is a bit early to want the foundations of HoTT to be completely settled; the whole field is only a few years old.
I see, thanks. Hm, interesting.
Do you have a (higher-)topos-theoretic picture of this situation?
I mean something like: suppose you didn’t know type theory, would there be anything in topos theory that would have the same kind of feeling to it as invoking this single use of “modules”?