In computer science, a Monad is always an Applicative Functor, and Applicative is regarded as an intermediate structure between functors and monads.

Applicative is also the programming equivalent of a lax monoidal functor with tensorial strength in category theory. So a Monad in programming is always a lax monoidal functor with tensorial strength.

Exactly how is this possible? What I know about Monads in programming is that they are equivalent to monads on a ’type category’, which has data types as objects and pure functions as morphisms. Since a data type can always be seen as a set of values, the type category is like a subcategory of Set; in fact an arbitrary subcategory, since languages can support whatever types they want.

I got to know that every monad on Set is a strong monad, which has a tensorial strength by definition. I tried to prove that such monads eventually form monoidal functors, but I failed: from the given strength and costrength I could build two different candidates for the coherence maps, but I couldn’t prove that either of them satisfies the coherence conditions.

The only thing I could find was that strong monads are monoidal monads (and thus monoidal functors) if they are commutative. However, monads are not commutative in general, even on Set: there are many non-commutative monads on Set, including the free monoid monad (which I think is equivalent to List in programming).

Just proving that every monad on Set is a monoidal functor may not be that hard. What I would rather know is the more fundamental, or generalizable, reason behind it (if one exists).

If I’m not wrong, it seems that every monad on Set is indeed a monoidal functor even when it is not commutative. What makes this work? What makes every monad, or strong monad, on a given category always a monoidal functor, even when it is not commutative?
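To make the two candidate coherence maps concrete, here is a small sketch in Python (the encoding is my own, using the list monad, i.e. the free monoid monad): the two sequencing orders give two different maps $T A\times T B\to T(A\times B)$, and they disagree precisely because List is not commutative, yet either one alone satisfies the lax monoidal (applicative) laws.

```python
# Sketch: the list monad (free monoid monad) on Set.
def unit(x):
    return [x]

def bind(m, k):
    # Kleisli extension: apply k to each element and concatenate.
    return [y for x in m for y in k(x)]

# Two candidate coherence maps T A x T B -> T(A x B), built from
# the strength/costrength, i.e. the two orders of sequencing.
def phi_left(ma, mb):
    # sequence the left argument first
    return bind(ma, lambda a: bind(mb, lambda b: unit((a, b))))

def phi_right(ma, mb):
    # sequence the right argument first
    return bind(mb, lambda b: bind(ma, lambda a: unit((a, b))))

xs, ys = [1, 2], ['a', 'b']
print(phi_left(xs, ys))   # [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
print(phi_right(xs, ys))  # [(1, 'a'), (2, 'a'), (1, 'b'), (2, 'b')]
# The two maps disagree, witnessing non-commutativity -- yet each
# choice on its own makes the list functor lax monoidal.
```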

I am not a math major and am still new to this area. I just got curious about some of the fundamental ideas behind the tools I have used. I apologize if I said something stupid here.

I’ve realised that in my article recently posted to the cafe, I make some assumptions about the existence of certain pullbacks, right before I show that said pullbacks are isomorphic to a known object $P$ (making that known object the actual pullback). At first glance this seems wrong, but if I think about it in terms of generalised elements (writing them as ordinary elements), I think I’ve actually shown that

$Hom(-,A)\times_{Hom(-,C)}Hom(-,B) \simeq Hom(-,P),$

hence the LHS is representable, and so the pullback exists and is equal to $P$. Is this a valid argument?
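For reference, my understanding of the standard Yoneda-style justification, assuming the isomorphism is natural in the first variable (and writing $c\colon A\to C$, $d\colon B\to C$ for the given maps):

```latex
% Limits of presheaves are computed pointwise, so the left-hand side
% is the pullback of hom-functors in [C^{op}, Set].  Evaluating the
% natural isomorphism at an object X gives a bijection
\[
  \mathrm{Hom}(X,P)
  \;\cong\;
  \{\,(f,g) \mid f\colon X\to A,\ g\colon X\to B,\ c\circ f = d\circ g\,\},
\]
% natural in X.  By the Yoneda lemma this says exactly that P, with
% the projections corresponding to \mathrm{id}_P, has the universal
% property of the pullback of A \to C \leftarrow B.
```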

I’m not sure where to ask this; I thought the n-forum might have the right audience. I’m not sure whether this is how the n-forum is supposed to be used, and I apologize if this question is inappropriate. Anyway, I’m looking for a reference for some version of the following claim:

Claim: The forgetful functor from 2-groups to categories has a (weak) left adjoint and the corresponding weak adjunction is monadic, by which I mean there is a 2-monad on categories whose algebras are precisely the 2-groups.

It seems pretty clear to me that this should be the case, and I am sure it has been thought about extensively already; I just don’t know where to look. I need references. Thanks!

I’m beginning to regret my foray into MO, but I tried asking another question.

If you have any feedback, feel free to keep it here if you prefer. Here’s the question:

Given a category $C$ with two objects and one non-identity morphism

$a\to b$

and another similar category $D$

$x\to y$

we can define two functors $F,G:C\to D$ with

$F:a\mapsto x,\; b\mapsto y$

and

$G:a\mapsto x,\; b\mapsto x$

with morphisms doing the only thing they possibly can.

A natural transformation $\alpha:F\Rightarrow G$ would require a component $\alpha_b:F(b)\to G(b)$, but there is no morphism $y\to x$, so if I understand this correctly, there is no natural transformation from $F$ to $G$.

Is that correct? Is there a clear set of criteria required for there to exist a natural transformation?
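To double-check the reasoning, here is a small brute-force sketch in Python (the encoding of $D$ and the functors is mine): it enumerates the possible components and finds none, because $Hom(y,x)$ is empty.

```python
# Brute-force search for natural transformations F => G between the
# two walking-arrow categories described above.

# Morphisms of D as (source, target, name) triples.
D_morphisms = [('x', 'x', 'id_x'), ('y', 'y', 'id_y'), ('x', 'y', 'g')]

def hom(s, t):
    """Hom-set of D between objects s and t."""
    return [m for m in D_morphisms if m[0] == s and m[1] == t]

# Object parts of the two functors.
F = {'a': 'x', 'b': 'y'}
G = {'a': 'x', 'b': 'x'}

# A natural transformation needs components
#   alpha_a in Hom(F a, G a)  and  alpha_b in Hom(F b, G b);
# naturality would then be a further condition on each such pair.
candidates = [(ca, cb) for ca in hom(F['a'], G['a'])
                       for cb in hom(F['b'], G['b'])]

print(hom(F['b'], G['b']))  # [] -- no morphism y -> x in D
print(candidates)           # [] -- so no natural transformation exists
```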

I was reading MacLane (like a good student), and when I saw the diagram for a universal arrow, it made me think of natural transformations.

It looked like we could describe a comma category $(d\downarrow F)$ as a natural transformation $\tau:\Delta d\to F$, where $\Delta d:C\to D$ is a constant functor.

If so, then the universal arrow, i.e. initial object of the comma category, would be like the “initial component” of a natural transformation.

Does that make any sense?

Lemma: Any left anodyne map in SSet/S is a covariant equivalence.

Proof: We may reduce to the case of a left horn inclusion, since these generate all left anodyne maps.

Then we must show that any map

$i: LeftCone(\Lambda^n_j) \coprod_{\Lambda^n_j} S \to LeftCone(\Delta^n) \coprod_{\Delta^n} S$

is a categorical equivalence. **However, $i$ is a pushout of the map $\Lambda^{n+1}_{j+1} \to \Delta^{n+1}$, which is inner anodyne, so we’re done.**

Question:

How do we show that $i$ is a pushout as described in the bolded sentence?

At Gleason’s theorem we have a result that holds for Hilbert spaces of dimension* $\gt 2$. There is a heuristic explanation that this is due to the difference between the symmetry groups of 2- and 3-dimensional (and presumably higher-dimensional) Hilbert spaces, and the page gives a counterexample using $\mathbb{R}^2$. Can we say this is due to the fact that (the identity component of) $B(\mathbb{R}^2)$ is not 1-connected? I’m assuming $B(\mathbb{R}^2) = O(2)$ here.

(*) Do we need to take real dimension, as the counterexample seems to indicate, or does the theorem also hold for complex dimension? What about other base fields?

I’ve been looking a little bit at diffeological spaces and am becoming a fan of the idea. I’ve got some notes on my personal web too.

Quick question…

Given any two smooth spaces $X,Y$, we can form something of a canonical mapping “T” diagram:

$\array{ X & \stackrel{\pi_1}{\leftarrow} & X\times [X,Y] & \stackrel{ev}{\rightarrow} & Y \\ {} & {} & \mathrlap{\;\;\;\scriptsize{\pi_2}}{\downarrow} & {} & {} \\ {} & {} & [X,Y] & {} & {} }$

which is also discussed on transgression.

I drew this with “plots” in mind, so I had arrows like

$\array{ {} & {} & U & {} & {} \\ {} & \swarrow & \downarrow & \searrow & {} \\ X & \stackrel{\pi_1}{\leftarrow} & X\times [X,Y] & \stackrel{ev}{\rightarrow} & Y \\ {} & {} & \mathrlap{\;\;\;\scriptsize{\pi_2}}{\downarrow} & {} & {} \\ {} & {} & [X,Y] & {} & {} }$

with another arrow (not shown) from $U$ to $[X,Y]$.

This made me think that $U$ was actually the limit of the mapping “T”:

$\array{ X & \stackrel{\pi_1}{\leftarrow} & X\times [X,Y] & \stackrel{ev}{\rightarrow} & Y \\ {} & {} & \mathrlap{\;\;\;\scriptsize{\pi_2}}{\downarrow} & {} & {} \\ {} & {} & [X,Y] & {} & {} }$

So my question is: what is the limit of this diagram? Is it $\mathbb{R}^\infty$ or something?

Thanks for any words of wisdom.
