I added some discussion at terminal coalgebra that the category of trees (equivalently, the category of forests $Set^{\omega^{op}}$) is a terminal coalgebra for the small-coproduct cocompletion (as endofunctor on $Cat$); this is a special case of Adámek’s theorem. I linked to this discussion by adding a section at tree. (There is of course closely related discussion at pure set as well.)
It seems to me that the nLab is a bit thin on general matters of recursion. I’ve been looking a bit at the discussion in Paul Taylor’s book, and I am becoming partial to the general idea that in some sense coalgebras and corecursion often come first; after that one may base recursive schemata on the notion of well-founded coalgebras. For example, (ill-founded) trees are really simple conceptually, or at least have a very simple uniform description: as presheaves $\omega^{op} \to Set$. This is just a simple-minded application of Adámek’s theorem. Later, one can peek inside and get the initial algebra for the small-coproduct completion as the category of well-founded trees, but this is by no means as simple (one can’t just apply Adámek’s theorem for constructing initial algebras – the hypotheses don’t hold here!).
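The corecursive point of view can be made concrete with a small sketch (my own illustration, not from the thread): a coalgebra $X \to A \times X$ determines an element of the terminal coalgebra of $F(X) = A \times X$, i.e. a stream over $A$, via unfold. Here is a Python version, using generators as lazy streams; the name `unfold` is just illustrative.

```python
# A minimal illustration of corecursion as "unfold": a coalgebra
# c : X -> A x X generates a stream, element by element.
from itertools import islice

def unfold(coalg, seed):
    """Generate the stream determined by the coalgebra `coalg`.

    `coalg` maps a state x to a pair (head, next_state); the resulting
    generator is the anamorphism into the terminal coalgebra of
    F(X) = A x X, i.e. streams over A.
    """
    x = seed
    while True:
        head, x = coalg(x)
        yield head

# Example: the stream of natural numbers, from the coalgebra n |-> (n, n+1).
naturals = unfold(lambda n: (n, n + 1), 0)
print(list(islice(naturals, 5)))  # -> [0, 1, 2, 3, 4]
```

Note that nothing here is well-founded: the generator never terminates, and only laziness (on-demand evaluation) makes it usable, which foreshadows the discussion of thunks below in the thread.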
We don’t even have an article recursion, although I keep linking to it.
It seems to me that the nLab is a bit thin on general matters of recursion.
Indeed! This has also been on my to-do list for a while.
I am becoming partial to the general idea that in some sense coalgebras and corecursion often come first; after that one may base recursive schemata on the notion of well-founded coalgebras
It feels more natural to me to think in terms of recursion and corecursion being dual. Especially from a type-theoretic foundational point of view, where the emphasis is not as much on constructing inductive and coinductive types in set theory. Arguably, from a computational perspective, inductive types are more natural than coinductive ones.
one can’t just apply Adamek’s theorem for constructing initial algebras – the hypotheses don’t hold here
One can’t apply the sequential version, but doesn’t the transfinite version work just as well? It seems to me that you just have to iterate up to the size of the universe.
From a sufficiently abstract perspective, (co)inductive objects are of course the same, just dual. But in $Set$-like categories, it is telling that the elements of the inductive object are merely some of the elements of the coinductive object. This suggests first looking at the coinductive objects and then remarking that each has an inductive subobject of well-founded elements.
(By abstract nonsense, we always have a morphism from the inductive object to the coinductive object. It’s because this is monic in $Set$ that Todd can see the coinductive object as more fundamental. If it were epic, then things would appear to go the other way. This is, by duality, the case in $Set^op$ and perhaps also in categories like $CommRing$.)
Philosophically, it seems easier to justify inductive types. I have read some constructivists with a predicativist bent who seem to believe in inductive types but not coinductive ones. People try to do mathematics in a $\Pi$-$W$-pretopos more often than in a $\Pi$-$M$-pretopos (see W-type).
On the other hand, if by computation we mean what computers do rather than what Church and Turing suggested that they would do, that is $\pi$-calculus more than $\lambda$-calculus, then coinductive objects are just as computational as inductive ones; streams are as important in modern computing as lists are.
If it were epic, then things would appear to go the other way.
Really? The inductive object as a cover of the coinductive one could still make one think that the coinductive one was fundamental. Of course, you mean that the latter is a quotient of the former, but let me play devil’s advocate…
One can’t apply the sequential version, but doesn’t the transfinite version work just as well? It seems to me that you just have to iterate up to the size of the universe.
Mike – sorry, would you mind writing down what you mean more precisely, or telling me where it’s written down?
David, when I learnt about universal covers (probably the first time that anybody said the word “cover” to me in the mathematical sense), I thought «Ah, the universal cover is more fundamental.»
@Todd 6: Adámek’s theorem as you’ve used it here applies to constructing terminal coalgebras of a functor which preserves countable sequential limits, and its dual applies to constructing initial algebras of a functor which preserves countable sequential colimits. But if the functor preserves $\kappa$-filtered colimits for any regular cardinal $\kappa$, then there is a version of the theorem that applies, where instead of merely the sequential colimit
$0 \to T 0 \to T T 0 \to \dots$ you extend this transfinitely, taking colimits at limit ordinals, up until you get to a $\kappa$-filtered ordinal whose colimit the functor preserves. I guess the canonical reference for this sort of thing is Kelly’s monster “A unified treatment of transfinite constructions…”.
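Spelled out, the transfinite iteration described here is the usual one (a sketch; notation follows the standard treatment rather than the thread):

```latex
X_0 = 0, \qquad
X_{\alpha + 1} = T X_\alpha, \qquad
X_\lambda = \operatorname{colim}_{\alpha < \lambda} X_\alpha
\quad \text{($\lambda$ a limit ordinal)} .
```

The chain stabilizes once it reaches a colimit that $T$ preserves, and that colimit then carries the initial algebra structure.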
Anyway, I believe the “free small-coproduct completion” endofunctor of Cat preserves $\infty$-filtered colimits, where $\infty$ is the size of the universe used to define “small”, so this construction should work to produce its initial algebra. Is that sufficiently detailed?
@Toby 4: Maybe what I’m saying is that it feels to me like an accident that the map from the inductive object to the coinductive object happens to be monic in Set-like categories. Is there an abstract reason for it?
I think what I meant about computation is that an element of a coinductive object in Set cannot be represented as a single computational object, since it is infinite. A computer can store an entire list, but not an entire stream. The “coinductive objects” as used in computation are actually “thunk” or “lazy evaluation” objects which generate each next element of the stream as you ask them for it. This is certainly very useful! But it’s not exactly the same as a coinductive object in Set, since two different thunks might end up generating the same stream. In other words, coinductive data types don’t come with decidable extensional equality; to get an extensional equality you need to define another coinductive “bisimulation” data type. So maybe what I’m trying to say is just that inductive data types and inductive sets are more like the same thing than coinductive data types and coinductive sets are.
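The point about thunks can be illustrated concretely (my own sketch, not from the thread; all names are made up): two syntactically different generators can compute the same stream, but all one can check mechanically is agreement up to a finite depth, a bounded approximation of bisimilarity.

```python
from itertools import islice

def zeros_a():
    # Thunk 1: yield 0 forever.
    while True:
        yield 0

def zeros_b():
    # Thunk 2: a different program that happens to compute the same stream.
    n = 0
    while True:
        yield n - n
        n += 1

def bisimilar_upto(s, t, depth):
    """Check that two streams agree on their first `depth` elements.

    This is only a finite approximation of bisimilarity: full extensional
    equality of streams is undecidable, which is the point above.
    """
    return list(islice(s, depth)) == list(islice(t, depth))

print(bisimilar_upto(zeros_a(), zeros_b(), 100))  # -> True
# But the thunks themselves are distinct computational objects:
print(zeros_a() == zeros_b())                     # -> False
```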
Mike, yes, I think that is sufficiently detailed, and the argument seems to work fine. Thanks. But – and I haven’t thought deeply about this yet – the large colimit construction doesn’t appeal to me aesthetically; it’s not a clean piece of categorical algebra so to speak, but seems to lean on some details of the set-theoretic background, such as the existence of class-sized colimits for class-sized categories.
@Toby,
I’m thinking of covers qua coverages/pretopologies. The universal cover is just one of many spaces equipped with a local isomorphism. But seeing as this is a universal arrow, your approach is probably more reasonable.
@ Mike #9
it feels to me like an accident that the map from the inductive object to the coinductive object happens to be monic in Set-like categories
To me, this feels like the sort of thing that can’t be just an accident, although admittedly I don’t know of a good explanation either. (Probably I should examine a proof that this is true for, say, $W$-types and $M$-types in a $\Pi$-pretopos to see what the reason might be.)
two different thunks might end up generating the same stream
One might as well say that functions aren’t computational, since we really compute with algorithms, and two different algorithms may compute the same function (and whether they do is also undecidable, given by another function type).
Actually, one might well say that! Predicativists would put individual functions and ill-founded structures on the same level as entire inductive types. In the philosophical language of a hundred years ago, all of these are uncompleted infinities.
Here’s another argument for why inductive types are as fundamental as coinductive ones. You can characterize the elements of an inductive type as the “well-founded” elements of the corresponding coinductive type – but the notion of “well-foundedness” is an inductive one!
For instance, in Coq, well-foundedness is defined like this. Let A be a type and R a binary relation on A. “Accessibility” or “well-foundedness” is an inductively defined predicate $Acc\colon A \to Prop$, with one constructor which says that if $x\in A$ and $\forall y\in A, R(y,x) \Rightarrow Acc(y)$, then $Acc(x)$. Then A is itself well-founded if every element is accessible, and if A is a coinductive type with a suitable relation, then the subset defined by Acc should be the corresponding inductive type.
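The Coq definition described above can be rendered as a Lean 4 sketch (Lean’s standard library in fact ships this predicate as `Acc`; the primed names here just avoid the clash):

```lean
-- Accessibility as an inductive predicate: x is accessible when all
-- of its R-predecessors are.
inductive Acc' {A : Type} (R : A → A → Prop) : A → Prop where
  | intro (x : A) (h : ∀ y, R y x → Acc' R y) : Acc' R x

-- A relation is well-founded when every element is accessible.
def WellFounded' {A : Type} (R : A → A → Prop) : Prop :=
  ∀ x, Acc' R x
```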
In set theory, we usually define well-foundedness impredicatively by a quantification over all subsets. That is, we say that an element of A is accessible or well-founded if it belongs to every “inductive subset”, i.e. to every $S\subseteq A$ with the property that for any $x\in A$, if $\forall y\in A, R(y,x) \Rightarrow y\in S$, then $x\in S$. This says exactly that the set of accessible elements is the initial algebra for a certain endofunctor of the powerset of A, which is exactly equivalent to the previous definition.
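For a finite relation one can actually compute this least fixed point, which may make the “initial algebra on the powerset” reading vivid. Here is a small illustration (my own sketch, not from the thread), iterating the monotone operator on subsets from the empty set:

```python
def accessible(elements, R):
    """Compute the set of accessible (well-founded) elements.

    `R` is a set of pairs (y, x) meaning y R x ("y is a predecessor of x").
    We iterate the monotone operator
        S |-> { x : every R-predecessor of x lies in S }
    starting from the empty set; its least fixed point is the set of
    accessible elements, i.e. the initial algebra described above.
    """
    preds = {x: {y for (y, z) in R if z == x} for x in elements}
    S = set()
    while True:
        S_next = {x for x in elements if preds[x] <= S}
        if S_next == S:
            return S
        S = S_next

# Example: 0 <- 1 <- 2 is a well-founded chain, but 3 and 4 form a 2-cycle,
# so neither is accessible.
elems = {0, 1, 2, 3, 4}
rel = {(0, 1), (1, 2), (3, 4), (4, 3)}
print(sorted(accessible(elems, rel)))  # -> [0, 1, 2]
```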
So, you can find inductive types inside coinductive types if you use the notion of well-foundedness, which is an inductive notion. It makes sense that well-foundedness is a “most basic” sort of inductive definition that everything else can be reduced to, but given that some inductive notion is still necessary, it seems to me to make more sense to consider inductive and coinductive types on an equal footing to start with.
Todd 10: It’s not surprising to me, or bothersome, that when you’re dealing with a notion like “free small-coproduct completion” which explicitly invokes notions of size, some set-theoretic background may end up playing a role. Sure, maybe in this particular case, the inductive type can be constructed with fewer set-theoretic assumptions by finding it inside the coinductive type. But I wouldn’t extrapolate from that to a general statement that coinduction comes before induction; I think most instances of induction and coinduction happen only in the world of small sets and require no unusual set theory.
Toby 12: Yes, you’re right. I think the idea that individual elements of coinductive types (and function types) are on the same level as entire inductive types is maybe something of what I was trying to get at.
But I wouldn’t extrapolate from that to a general statement that coinduction comes before induction
I didn’t. I said often (which really means, “sometimes”). I didn’t say always.
I was merely reading Paul Taylor’s treatment and found it kinda cute, and thought that some of it could be usefully labbified.
I started the article on recursion. I'd be quite interested to see how it categorifies!
Very nice entry, formatting and all!
I have added instiki-theorem environments.
What would be the right “Context”-TOC for entries like this? I’d like to add one, but am not sure which one or which one to create. Is it foundations - contents?
At this point, we could probably create recursion - contents (putting corecursion in there too).
Hm, okay. I am usually thinking of these floating TOCs as collecting members of a more general context. But of course if there are lots of entries on just recursion and its variants, that should go with a floating TOC, too.
But still: what is the ambient context that you think of “recursion” as being a central concept within. Is it “Foundations” or is it “Universal Algebra”, maybe?
After an email exchange with an anonymous person whose opinion I consider to have weight, I have decided to add the Foundations-TOC to recursion.
@Zhen Lin: thanks for this! Further discussion of this page probably should be carried out in another thread.
I have added more content to terminal coalgebra, completing the proof of Adámek’s theorem, adding the example of a real interval as terminal coalgebra (which, I discovered later, is discussed at coalgebra for an endofunctor, but I added a proof-sketch here – feel free to port or copy it over if it belongs there). Also added a few remarks and references.
Off-hand I don’t know who wrote in the bit about the extension of Adámek’s theorem (with a link to an uncreated page). If it’s just the type of transfinite extension Mike alluded to above, I could probably create a page myself. If it’s something else, it would be good to know what exactly.
I came across a more recent paper of Freyd's a few weeks ago: Algebraic real analysis. It might be worth adding as a reference for the interval-as-a-coalgebra material.
@Zhen Lin: thanks; I’ll add it in.
@ Todd #21:
It looks to me like you added it.
Yes, the discussion at coalgebra for an endofunctor on the real line really should be at terminal coalgebra.
Todd 15: Sorry for jumping on you (again). It was a long time ago that I last looked through Paul Taylor’s book, and I don’t remember the discussion of recursion/corecursion in this light. I’d be interested to see it labbified!
Thanks, Mike (and no worries). Part of my motivation for wanting to record here what Taylor does is that I personally often have trouble reading his book and want to rewrite some things so that I have less trouble. (This by the way should be taken more as self-criticism than criticism of Taylor’s style. Elsewhere Toby has expressed an appreciation for Taylor’s work, and it’s clear that this book in particular is filled with treasures. I’ve dipped into it many a time, so much that just last night I had to repair the binding.)
Don’t worry, I sometimes have trouble with Taylor’s style too. :-)