    • CommentRowNumber1.
    • CommentAuthorSam Staton
    • CommentTimeDec 2nd 2018

    Attempt at making a page about defunctionalization. My first new page on the nlab, I hope there are no faux pas. I noticed the resemblance to the adjoint functor theorem a while back, and several people seemed to find it interesting, so I thought I’d make a page.

    v1, current

    • CommentRowNumber2.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 2nd 2018

    Can we have some more context about the setting we are working in? I am guessing this is just simply typed lambda calculus? What is a functional program? What is a program? Can you give some examples?

    • CommentRowNumber3.
    • CommentAuthorSam Staton
    • CommentTimeDec 2nd 2018

    Good point. Linked to functional programming, and included an example, which I think I learnt from Mike Spivey.

    diff, v3, current

    • CommentRowNumber4.
    • CommentAuthorSam Staton
    • CommentTimeDec 2nd 2018
    And I know of no reason why this wouldn't work in type theories more complicated than simple type theory, with dependent products etc., but I am not an expert on the compilers literature.
    • CommentRowNumber5.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 2nd 2018

    OK, so I see what this is doing. You basically look at each abstraction and replace it with a constructor for some inductive type Defun. Then you redefine the original “program” to consume values of the inductive type rather than higher-order functions?

    I’m not completely convinced this has actually gotten rid of any abstractions; rather, it has hidden them in the inductive type. If you expand out the inductive type you will get the abstractions back.
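
    (For concreteness, here is a minimal Haskell sketch of the transformation as I read it, roughly following the factorial example; the names Defun and apply are illustrative, not necessarily the page's:)

    -- Each lambda that would be passed as a continuation becomes
    -- a constructor of a first-order data type.
    data Defun = Identity                 -- stands for \x -> x
               | Multiply Integer Defun   -- stands for \y -> apply k (x * y)

    -- apply interprets the constructors, recovering the behaviour
    -- of the original higher-order functions.
    apply :: Defun -> Integer -> Integer
    apply Identity       x = x
    apply (Multiply x k) y = apply k (x * y)

    factcps :: Integer -> Defun -> Integer
    factcps 0 k = apply k 1
    factcps x k = factcps (x - 1) (Multiply x k)

    fact :: Integer -> Integer
    fact x = factcps x Identity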

    • CommentRowNumber6.
    • CommentAuthorSam Staton
    • CommentTimeDec 2nd 2018
    Thanks! First paragraph: yes, that's right. The second paragraph I don't fully understand. The resulting program can be typed in a first-order language with no true function type; that's the main point as I understand it.

    I used to think it was a syntactic hack, which may be where you are leaning. But several people persuaded me to take it seriously (including Noam Zeilberger, who may have his own take). Now I think it is profound.

    For example, recently we addressed the problem that measurable spaces are not cartesian closed by embedding them in a quasitopos and using adjoint functor theorems. Some programming people said, "why is the lack of function spaces a problem, when we can just defunctionalize?". Now my response is, "that is kind of the same thing as embedding in a bigger category and using the AFT".
    • CommentRowNumber7.
    • CommentAuthorDavid_Corfield
    • CommentTimeDec 2nd 2018

    Re #1, the first of many I hope, Sam!

    • CommentRowNumber8.
    • CommentAuthorAli Caglayan
    • CommentTimeDec 2nd 2018

    I have never heard of “defunctionalizing”; do you have any more references?

    • CommentRowNumber9.
    • CommentAuthoratmacen
    • CommentTimeDec 3rd 2018

    Thanks, Sam; this is cool!

    Could you write something comparing and contrasting with closure conversion? They’re both implementation techniques for first-class functions, so it’d be nice to see right away the important difference(s). Here are some thoughts about it:

    At first glance, it looks like defunctionalization is very inefficient, because of that huge coproduct. But what if you “round up to infinity”? How would you implement an open-ended coproduct of that form? Like a tagged union: elements are tuples that have a tag identifying the lambda, and then all those elements that make up the environment. What should the tag be? It needs to tell eval what code to use, so make it a code pointer. But wait: a code pointer along with an environment is a closure! So from an implementation perspective they’re basically the same, but closure conversion is a more low-level view.

    But to handle the type of closures, it does seem like you need to close off the coproduct, or else the type gets more complicated, not simpler. But what if you’re OK with complicated types? In typed closure conversion, they tend to use an (impredicative) existential type to encode a type of closures. The existential is used to sum up over all types, which includes all types of environments, then they narrow it back down to things you can eval by giving the code pointer a sort of “global” or “external” function type:

    $\exists \Gamma : type,\; [\Gamma, A \vdash B] \times \Gamma$

    This seems even closer to using an adjoint functor theorem!
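
    (A minimal sketch of that closure type in Haskell, assuming the ExistentialQuantification extension; the names Clo and applyClo are illustrative:)

    {-# LANGUAGE ExistentialQuantification #-}

    -- A closure: a code pointer paired with its captured environment,
    -- with the environment type hidden: exists env. (env -> a -> b) * env.
    data Clo a b = forall env. Clo (env -> a -> b) env

    applyClo :: Clo a b -> a -> b
    applyClo (Clo code env) = code env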

    • CommentRowNumber10.
    • CommentAuthorSam Staton
    • CommentTimeDec 5th 2018
    • (edited Dec 5th 2018)

    Sorry not to reply sooner. I wanted to think about it but didn’t get very far. I suspect that closure conversion means different things to different people, but it is quite close to defunctionalization.

    My understanding (and I’m not a compiler hacker) is that traditional closure conversion + lambda lifting would provide, in the factorial example,

    multiply :: Integer -> (Integer -> Integer) -> Integer -> Integer
    multiply x k y = k (x * y)

    identity :: Integer -> Integer
    identity x = x

    But typed closure conversion that you mentioned is a different kettle of fish. People do seem to use the existential type that you wrote, where $[\Gamma, A \vdash B]$ is something very intensional, and so the proper function types disappear (as in defunctionalization).

    Blurring that intensional side of it, Max New wrote a blog post about this a short while ago: blog.

    So I’m not really sure what to put in the nlab page.

    • CommentRowNumber11.
    • CommentAuthorMike Shulman
    • CommentTimeDec 5th 2018

    So what does defunctionalization say at a categorical level? The obvious/naive way to “get function spaces” in a category where you don’t have them already is to apply the Yoneda embedding $y : C \to [C^{op}, Set]$. The function-space $(y B)^{y A}$ is the presheaf defined by $(y B)^{y A}(X) = C(A \times X, B)$. Is that at all related to defunctionalization? Is it saying something about some kind of embedding being fully faithful?

    • CommentRowNumber12.
    • CommentAuthorSam Staton
    • CommentTimeDec 5th 2018
    • (edited Dec 5th 2018)

    Yes, I think so. For $(y B)^{y A}$ is a colimit of representables, and working through the usual formulation of that, we get that $(y B)^{y A}$ is a quotient of $\sum_{X,\, C(A \times X, B)} y X$. This looks quite like defunctionalization to me.
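
    (In coend notation, this is just the usual co-Yoneda presentation of the same quotient, restated for concreteness:)

    $(y B)^{y A} \;\cong\; \int^{X \in C} C(A \times X, B) \cdot y X$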

    This is close to the adjoint functor theorem construction of $(y B)^{y A}$ as a right adjoint, to my mind at least.

    It is maybe another angle on the surprise, which many have pointed out: when we take a cocompletion we get function spaces, even though we didn’t ask for them.

    (Of course it may be that you already had some colimits, in which case you might not need to embed in the free cocompletion.)

    • CommentRowNumber13.
    • CommentAuthorMike Shulman
    • CommentTimeDec 5th 2018

    Thanks. Maybe now I understand #6: it’s related to the dichotomy between nice objects and nice categories. If you want a category of “concrete types” without first-class functions (“nice objects”), then to deal with functions you have to encode them up kind of kludgily by defunctionalizing. But by expanding your category to include first-class function types (“nice category”) you don’t have to deal with this, although the types in question become more abstract. And the point is that the resulting “notion of function” is not very different: defunctionalization is just a sort of intensional beta-reduction of the Yoneda embedding of a not-so-nice category of nice objects into a nice category of less-nice objects.

    • CommentRowNumber14.
    • CommentAuthoratmacen
    • CommentTimeDec 6th 2018

    Re #10, no, that doesn’t work. You need to be able to get the same type of closure from both identity and multiply, since they’re both used as a continuation for factcps.

    Try this:

    multiply (x,k,g) y = k g (x * y)
    identity _ x = x
    
    factcps x (k,g) = if x == 0 then k g 1 else factcps (x-1) (multiply,(x,k,g))
    
    fact :: int -> int
    fact x = factcps x (identity,0)

    (This is not valid Haskell; it’s pseudocode. I don’t know the Haskell for existential types.)

    The type of factcps’s continuation argument is $\exists \Gamma : type,\; [\Gamma, int \vdash int] \times \Gamma$. For identity, we use a dummy environment of type int. See that (identity,0) then has the right type, using an implicit existential intro.

    The environment type for multiply is more complicated, since it includes another continuation: $\exists \Gamma' : type,\; int \times [\Gamma', int \vdash int] \times \Gamma'$. So in factcps, we do an existential elim to get some $\Gamma'$, and the type of g is $\Gamma'$, and so (multiply,(x,k,g)) is the right type of closure (using two existential intros).

    I cheat slightly: actual closure-converted code unpacks closures at the last moment, but I had factcps unpack its closure right away, since it’s notationally convenient.

    But typed closure conversion that you mentioned is a different kettle of fish.

    I wouldn’t say that typed closure conversion is a whole ’nother story. They just figured out a way to assign types to the implementation technique they were already using.

    • CommentRowNumber15.
    • CommentAuthoratmacen
    • CommentTimeDec 6th 2018

    I tested it in Coq with -impredicative-set. I’m a little surprised it passed the termination checker. Here’s the code:

    Set Implicit Arguments.
    Unset Elimination Schemes.
    
    (* Impredicative existential quantifier *)
    Inductive Ex (F:Set->Set) : Set := exI T (x:F T).
    
    Definition exE F (C:Set) (p:Ex F) (k:forall T,F T->C) :=
        match p with exI T x => k T x end.
    
    (* Coq didn't like it when I destructured the nested pair in one step. *)
    Definition multiply (g:Ex (fun G'=>nat * (G'->nat->nat) * G')%type) y :=
        exE g (fun G' g=>let (p,g) := g in let (x,k) := p in k g (x * y)).
    
    Definition identity (g x:nat) := x.
    
    Fixpoint factcps x (g:Ex (fun G=>(G->nat->nat) * G)%type) := exE g (fun G' g=>
    let (k,g) := g in match x with
        O => k g 1 |
        S x' => factcps x' (exI _ _ (multiply,exI _ G' (x,k,g)))
    end).
    
    Definition fact x := factcps x (exI _ _ (identity,O)).
    • CommentRowNumber16.
    • CommentAuthorSam Staton
    • CommentTimeDec 6th 2018
    • (edited Dec 6th 2018)

    Re #13 (Mike). Thanks, nicely put. I agree. I’d even say Set and wCpo are on the “nice categories” side of the fence, whereas categories of games and free CCCs are on the “nice objects” side.

    Re #15. Thanks! It can be done in Haskell too — the examples at haskell wiki are similar.
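
    (For instance, here is one possible transcription of the pseudocode in #14 into actual Haskell, a sketch assuming the ExistentialQuantification extension; Clo and applyClo are illustrative names, and the dummy environment follows the pseudocode:)

    {-# LANGUAGE ExistentialQuantification #-}

    -- exists env. (env -> a -> b) * env: the closure type from #9 and #14.
    data Clo a b = forall env. Clo (env -> a -> b) env

    applyClo :: Clo a b -> a -> b
    applyClo (Clo code env) = code env

    multiply :: (Integer, Clo Integer Integer) -> Integer -> Integer
    multiply (x, k) y = applyClo k (x * y)

    identity :: Integer -> Integer -> Integer
    identity _ x = x   -- dummy environment of type Integer, as in #14

    factcps :: Integer -> Clo Integer Integer -> Integer
    factcps 0 k = applyClo k 1
    factcps x k = factcps (x - 1) (Clo multiply (x, k))

    fact :: Integer -> Integer
    fact x = factcps x (Clo identity 0)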

    Re #14. In #10 I didn’t write out factcps and fact; what I had in mind was simply

    factcps x k = if x == 0 then k 1 else factcps (x-1) (multiply x k)

    fact x = factcps x identity

    which do typecheck. This is what some people seem to call closure conversion + lambda lifting. The existentially-typed closure conversion of Minamide, Morrisett and Harper is quite a bit more than this, and seems close to defunctionalization — that’s all I meant.

    • CommentRowNumber17.
    • CommentAuthorDavid_Corfield
    • CommentTimeDec 6th 2018

    Re #13, it would be good to have a word for the process of shifting from nice objects to nice categories, something like enhancement to a stronger doctrine. Mike’s ideas on n-theories seem very much to the point here.

    • CommentRowNumber18.
    • CommentAuthoratmacen
    • CommentTimeDec 6th 2018

    Re #16, your factcps partially applies multiply, so I don’t see that it’s successfully eliminating first-class functions. So I don’t think that it’s closure conversion.

    • CommentRowNumber19.
    • CommentAuthorSam Staton
    • CommentTimeDec 6th 2018
    Oops, you're right. Perhaps some people would call it "lambda lifting" but I now agree it's not closure conversion.
    • CommentRowNumber20.
    • CommentAuthorSam Staton
    • CommentTimeDec 7th 2018
    • (edited Dec 7th 2018)

    Coming back to #9. atmacen, you said (roughly) that “$B^A \cong \exists \Gamma.\; [\Gamma, A \vdash B] \times \Gamma$” looks like an adjoint functor theorem. I have been trying to understand what you meant.

    Maybe I should understand your sentence as saying that a small complete full subcategory of Set (roughly, one where we can interpret the impredicative $\exists$) is necessarily cartesian closed, by an AFT-like argument? Just as every frame is a Heyting algebra? Is that what you had in mind?

    If we can interpret $\exists$ as a coend we can use Max New’s co-Yoneda argument, but that seems difficult to get right.

    • CommentRowNumber21.
    • CommentAuthoratmacen
    • CommentTimeDec 8th 2018

    It could be that I was being too sloppy. Impredicative encodings had seemed similar to the adjoint functor theorem to me, and then I saw Mike Shulman and Steve Awodey talking about a “dream adjoint functor theorem” using impredicative encodings, so I felt bold enough to say that the existential encoding of function types was similar to an instance of some adjoint functor theorem. I don’t know how to work out the details. (Maybe Mike does.)

    Maybe I should understand your sentence as saying…

    I didn’t mean that much. It seems plausible though, if you additionally require that the subcategory has enough Hom sets (global function types). Oh wait, if it’s a full subcategory of Set, then the Hom sets would already be function types. So that’s probably not a good way to think about typed closure conversion.

    There’s a difference, in programming languages, between the global function types of e.g. C, and the fully-featured function types of e.g. ML. But I don’t know how to explain the difference categorically. The difference in typing rules is basically that global function types have an intro form that works only in the empty context. In any case, it seems like the cartesian closedness of ML implemented using closures is not the result of being a full subcategory of anything that’s already cartesian closed. Typed intermediate languages where you’d use typed closure conversion should not be cartesian closed by assumption. But they have impredicative quantifiers, and global function types by assumption.

    • CommentRowNumber22.
    • CommentAuthorMike Shulman
    • CommentTimeDec 8th 2018

    Let’s see. If cartesian products preserve colimits in each variable (which they do in type theory, because the universal property of colimits applies in any context), then any adjoint functor theorem that applies (in particular, including the ’dream’ one in an impredicative world) implies that the category is cartesian closed. The construction (in the dream world — the ordinary one requires some smallness futzing) is dual to the construction of inductive types as initial objects of categories of algebras: we regard $B^A$ as the terminal object of the category whose objects are types $X$ together with morphisms $X \times A \to B$, and construct such a terminal object as the colimit of the identity functor. To construct such a colimit by coproducts and coequalizers, we would start with the coproduct $\coprod_{X:\mathcal{U}} (X \times A \to B) \times X$ of all objects of the category and then quotient it by an appropriate relation (in an $\infty$-world that “quotient” would be more complex).

    The main caveat is that while an impredicative category of modest types has all limits, and hence also all colimits, unlike the limits, the colimits will not in general coincide with those in the ambient universe of types. That is, the impredicative universe is closed under $\Pi$-types with arbitrary domain, and it “has” $\Sigma$-types with arbitrary domain but is not closed under such $\Sigma$-types in the ambient universe. So I think your intuition is correct, Matt, as long as we are careful about what we mean by that $\exists$ — it’s a sort of “universe-relative” $\Sigma$-type, somewhat analogous to how the (-1)-truncated existential $\Vert \Sigma_{x:X} Y \Vert$ is a “$\Sigma$-type relative to the universe of propositions”, except that in this case the universe is defined by impredicativity rather than by h-level restrictions. Defunctionalization is sort of “approximating” that by an actual $\Sigma$-type in the ambient universe but over a restricted domain (kind of like the way the non-dream AFT restricts to a small generating family!). Of course in either case one is still missing the quotient to make the coproduct into a colimit.

    Comparing these two paragraphs I’m a little worried about whether these universe-relative colimits are also preserved by products in each variable so that we can use the adjoint functor theorem on them, but probably they are since “everything” in type theory can happen in an arbitrary context.
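
    (For concreteness, the “appropriate relation” should be the usual coend identification, so schematically:)

    $B^A \;\simeq\; \Big( \coprod_{X:\mathcal{U}} (X \times A \to B) \times X \Big) \big/ \sim, \qquad (f \circ (g \times A),\; x) \sim (f,\; g\,x)$

    for $g : X \to X'$, $f : X' \times A \to B$, and $x : X$.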

    • CommentRowNumber23.
    • CommentAuthoratmacen
    • CommentTimeDec 9th 2018

    These typed intermediate languages are usually like intensional type theory in that they don’t have equalizers or coequalizers, and universal mapping properties may not be unique. Function extensionality is too much to hope for. If the coproduct is a weak exponential, you’re done. At least from an implementation perspective; extensionality could be handled setoid style. (And you can’t express the setoids in these languages, so you’d be adding a refinement type system.)

    The main caveat is that while an impredicative category of modest types has all limits, and hence also all colimits, unlike the limits, the colimits will not in general coincide with those in the ambient universe of types.

    You and Sam seem to be thinking of modest sets in a realizability topos, but that’s probably not how typed intermediate languages were intended to be interpreted. The impredicative quantifiers can be interpreted as meets and joins in the subtyping order. It’s fine if they’re completely different from any $\Sigma$-types you may or may not have from some ambient universe.

    I’m a little worried about whether these universe-relative colimits are also preserved by products in each variable so that we can use the adjoint functor theorem on them, but probably they are since “everything” in type theory can happen in an arbitrary context.

    The rules for existential types work in an arbitrary context, if that helps.

    • CommentRowNumber24.
    • CommentAuthorSam Staton
    • CommentTimeDec 9th 2018

    Thanks Mike and atmacen. Some very interesting points. I wanted to add something about typed closure conversion to the article, but I can’t quite see how to do it.

    Although the intermediate languages don’t have much structure, presumably one would still like to somehow show that closure conversion is correct, which may amount to giving an AFT-like argument. But I can’t see it immediately in the correctness proof of Minamide, Morrisett and Harper (p 14).

    • CommentRowNumber25.
    • CommentAuthorMike Shulman
    • CommentTimeDec 9th 2018

    Matt, your comment in #9 about AFTs seemed rather informal, so I was only going for an intuitive understanding of a relationship to AFTs.

    • CommentRowNumber26.
    • CommentAuthoratmacen
    • CommentTimeDec 10th 2018

    OK, Mike. I was happy to hear that they seem related to you too.

    Sam, I was originally just thinking you’d compare defunctionalization to “usual”, untyped (or unsoundly-typed) closure conversion. But I guess we can leave that to Wikipedia. I don’t know what you could add to the page about typed closure conversion, either.

    I haven’t read the proofs in that paper, but the introduction makes it sound like they have proofs that the translated programs simulate the source programs. In particular, the translation of a function application (to closure operations) simulates the original beta reduction. So buried in there somewhere is probably the essence of a proof that the existential encoding gets you weak exponentials, if you think of the languages as categories. But it doesn’t look like they’re taking a very categorical perspective on this at all.