I would like to add to the entry monad in computer science a more useful Idea-section.
A first version of the kind of exposition that I have in mind is now in the Sandbox – comments are welcome.
I’ll try to polish this up a little more, but it seems I will keep being interrupted a lot today.
added pointer to:
have added pointer to
for substantiation of what in the entry so far remains just a vague comment.
Okay, I have now put in a new Idea-section.
It starts (here) with a slightly polished adjustment of the previous idea-paragraph, which is (and was) really too short to serve as an actual exposition, but which may serve the purpose of quickly showing the impatient reader much of the relevant terminology.
The main addition is a new sub-section Basic Idea: External monads (here), which is meant to be more of an actual exposition for a reader who does not already know the subject.
(This is a little longer than the terse paragraph that we had before, but meant to still be right to the point. I find most introductions in the literature are too long-winded and too mysterious-sounding. My motivation in writing this new subsection was to improve on this. You judge if it worked.)
After this there is now one more subsection, Refined idea: enriched monads (here), which is meant to admit to the technical subtleties while keeping them out of the way of the previous conceptual explanation.
Compared to the rather mysterious-sounding side-remark that we had on this point before, this sub-section should be an improvement already, but I have not been ambitious about this subsection and don’t regard it as finished. But I’ll leave it as is for the time being.
Next, the Examples-section really deserves to be polished up. But I haven’t looked into that yet.
[edit: Have begun to bring at least some structure into the Examples-section.]
added prominent pointer to extension system that had been missing all along
I have prefixed the list of references (here) with pointer to extension systems:
Ernest G. Manes, Sec 3, Ex. 12 (p. 32) of: Algebraic Theories, Springer (1976) [doi:10.1007/978-1-4612-9860-1]
F. Marmolejo, Richard J. Wood, Monads as extension systems – no iteration is necessary TAC 24 4 (2010) 84-113 [24-04]
I have started an analogous Idea-subsection on algebras/modules (here) and began some sketch on the comonad version of it all (here).
In the course of this I changed the notation for the monad from “$T$” to “$\mathcal{E}$” throughout (for “Effect”), so that I can use “$\mathcal{C}$” for the comonads (“Cause”!?).
But I am running out of steam now, and, more importantly, I need to be looking into preparing slides for a talk I give tomorrow, on a completely different subject.
I wonder what the computer science interpretation (in terms of cause and effect on programs) of the cohesive modalities are. Sharp and shape are both monadic, while flat is comonadic.
I believe that for idempotent monads one may think of their effect as that of forgetting or “making invisible” some aspects of the data, or more precisely as projecting out a certain sub-aspect of the data (and in that way, their presence witnesses that the computer system potentially has a richer data structure than usual, so that it is possible to forget some of it, in the first place).
So if a program produces ʃ$D$-data, then that can be thought of as $D$-data but with the extra effect that it becomes impossible to see some aspects usually carried by $D$-data (namely the cohesive aspect).
Similarly, a program reading in $\flat D$-data needs, to run, an agent that strips some cohesive qualities off of full $D$-data.
I guess one can see the effect-picture of the “axiomatic” modalities rather vividly from the fact that existing proof assistants do not implement them: much like IO in Haskell, they are (or will be) an extra feature (effect) that needs to be provided by the ambient computer system.
I have now added a graphics showing all the monad algebra structure and properties in extension/Kleisli form (see here and scroll down a little).
In doing so, I realized that my square-bracket notation for binding, clever as it seemed in itself, is not sustainable as more such operations come into consideration, and so I have rewritten the whole entry (I hope; let me know if you spot remaining glitches) using
$bind^{\mathcal{E}}$ for “binding”
$ret^{\mathcal{E}}$ for “returning”
and now
This way there is now also room for the comonadic versions
etc., and then something like
or maybe
for a comonad co-action. Still undecided which term to best use (and the literature seems not to offer suggestions on this point?)
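For orientation, the monadic pair $bind^{\mathcal{E}}$/$ret^{\mathcal{E}}$ and its comonadic dual correspond in Haskell to `>>=`/`return` and to `extend`/`extract` (the latter from the comonad package). A minimal self-contained sketch, with ad-hoc names rather than the entry's notation:

```haskell
-- "ret" and "bind" for the Maybe monad, in extension-system style:
-- bind extends a Kleisli map  a -> Maybe b  to a map  Maybe a -> Maybe b
bindMaybe :: (a -> Maybe b) -> Maybe a -> Maybe b
bindMaybe f (Just a) = f a
bindMaybe _ Nothing  = Nothing

retMaybe :: a -> Maybe a
retMaybe = Just

-- the dual comonadic operations, for the coreader ("environment") comonad:
data Coreader e a = Coreader e a

-- "co-return": extract the value, discarding the context
extractCR :: Coreader e a -> a
extractCR (Coreader _ a) = a

-- "co-bind": extend a co-Kleisli map  Coreader e a -> b
-- to a map  Coreader e a -> Coreader e b, retaining the context
extendCR :: (Coreader e a -> b) -> Coreader e a -> Coreader e b
extendCR f w@(Coreader e _) = Coreader e (f w)
```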
Added
Perhaps there’s a better way to describe what it does.
added this pointer:
Marco Gaboardi, Shin-ya Katsumata, Dominic Orchard, Flavien Breuvart, Tarmo Uustalu, Combining effects and coeffects via grading, ICFP 2016: Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming (2016) 476–489 [doi:10.1145/2951913.2951939, talk abstract, video rec]
Just a trivial question on how people speak:
What is the terminology, in programming practice, for the dual of monadic effect handling?
I.e. given a comonadic co-effect on a data type system, what is the terminology for the co-action on co-algebras?
I gather the comonadic co-effect itself is addressed as providing “context”. Now a co-effect co-handler could reasonably be said to “produce” or “provide” such a context.
For instance, it seems to make perfect and immediate sense to say that a co-algebra for the co-reader co-monad is a routine/method that “provides a global parameter value”.
But does anyone speak this way, in the actual (functional) programming practice? Or is there other common terminology for this purpose?
Or is there maybe not much of an established tradition of talking about comonadic co-effect co-handlers at all?
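For concreteness, here is the shape of such a coalgebra in Haskell (a hedged sketch; this comonad is usually called `Env` in the comonad library, and the names below are mine): a coalgebra of the coreader comonad is precisely a routine that equips its input with a parameter value, so “context provider” does seem apt.

```haskell
-- the coreader/environment comonad: a value together with a parameter of type e
data Coreader e a = Coreader e a

-- a coalgebra  a -> Coreader e a  equips a-data with a parameter value,
-- i.e. it "provides the context"
type Provider e a = a -> Coreader e a

-- e.g. the provider supplying a fixed global parameter value
provideGlobal :: e -> Provider e a
provideGlobal e a = Coreader e a

-- counit: forget the provided context again
extractCR :: Coreader e a -> a
extractCR (Coreader _ a) = a
```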
added pointer to
re #17:
Uustalu, on slide 9 of part 3 of his 2021 lecture (as linked to in the entry; I can’t easily hyperlink from my phone here), says:
coeffect producer (should we call it a cohandler?)
So with “co-effect” = “context” this is the first of my two suggestions in #17: “context producer”.
Hm, somehow I’d like “context provider” better…
Some thoughts over here. Apparently, they speak of ’capabilities’ being provided.
It seems that
introduces this idea of ’capability’.
Thanks!
I have now glanced over the provided references using the word “capabilities”, but so far I have trouble seeing that these really refer to coalgebras over comonads. Maybe I’ll find time to have a closer look.
But I did find Petricek et al.’s articles using “contexts” for “coeffects”, and they are clear enough. While this is not what I asked (and the interpretation of comonads as “contexts” is already in Uustalu & Vene), I am adding this pointer to the entry now, for completeness:
This is a question on whether the following monad has been considered – as a tool in stochastic programming.
I am considering the following in a context of quantum programming, but the general principle should be more general and I’d be happy to hear that people have considered classical variants.
Namely, to model repeat-until-success algorithms, we need a way to “discard” (forget, ignore, disregard) quantum measurement results (modeled by the quantum reader monad $\bigcirc_B$) without however dropping the quantum state that we are in after the measurement.
For this purpose consider the functor on linear types $\mathcal{H}$ that assigns
$\mathcal{H} \;\; \mapsto \;\; \underset{ B \colon FinType }{\coprod} \bigcirc_B \mathcal{H}$

which boils down to forming one big direct sum

$\cdots \;=\; \underset{ B \colon FinType }{\bigoplus} \oplus_B \mathcal{H} \,.$

This construction should become a monad with multiplication given by

$\underset{B \colon FinType}{\coprod} \,\bigcirc_B\, \underset{B' \colon FinType}{\coprod} \,\bigcirc_{B'}\, \;\simeq\; \underset{B \colon FinType}{\coprod} \underset{B' \colon FinType}{\coprod} \,\bigcirc_B\, \,\bigcirc_{B'}\, \;\simeq\; \underset{B, B' \colon FinType}{\coprod} \,\bigcirc_{B \times B'}\, \;\longrightarrow\; \underset{B'' \colon FinType}{\coprod} \,\bigcirc_{B''}\,$

where the last step maps any summand indexed by $B, B'$ on the left identically onto the summand indexed by $B'' = B \times B'$ on the right.
This monad comes with natural transformations to “absorb” instances of $\bigcirc_B$ in the evident sense. Repeat-until-success algorithms are recursive functions with values in this monad which on a given input produce the infinite tree of all possible sequences of syndrome=fail measurements followed by a last syndrome=success measurement, each parameterizing the corresponding quantum state after that number of measurements.
(If you see what I mean. I am in the process of typing this up and can try to show more details on what I am after, later.)
I have played with other variants before arriving at the above formula, simple as it all is. But I still feel like I may be missing a more profound underlying principle that I should be invoking here.
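For what it’s worth, here is a classical toy model of this construction, just to fix intuitions (my own hedged sketch, not from the literature): representing a finite type $B$ by the list of its elements, a $B$-indexed family of values is a list, so the coproduct $\coprod_B \bigcirc_B(-)$ of reader monads becomes, classically, a list of branches, and the multiplication is the rectangular flattening described above:

```haskell
-- classical shadow of  H |-> coprod_{B : FinType} (B -> H):
-- a summand with |B| = n is an n-tuple of values, so the whole
-- coproduct is (equivalent to) a list of values, the possible branches
newtype Branching a = Branching [a] deriving (Eq, Show)

-- unit: the summand with B the unit type
ret :: a -> Branching a
ret a = Branching [a]

-- multiplication: a B-indexed family of B'-indexed families is
-- reindexed along  B'' = B x B', i.e. flattened by concatenation
joinB :: Branching (Branching a) -> Branching a
joinB (Branching xs) = Branching (concat [ys | Branching ys <- xs])

-- the induced bind, in extension-system style
bindB :: (a -> Branching b) -> Branching a -> Branching b
bindB f = joinB . fmapB f
  where fmapB g (Branching xs) = Branching (map g xs)
```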
added pointer to:
added pointer to:
have added (here) a slide from Overton 2014 tabulating (co)monad-terminology in Haskell
only now discovered this review:
I am going to add a section explaining “do notation”.
Just a question on referencing:
Is there any canonical/citable systematic account of do-notation, such as in a textbook?
I see many webpages and online tutorials on the matter (all of them opt for explanation-by-example). The closest to a textbook account that I am aware of is Milewski 2019, Sec. 20.3.
(NB.: I am not asking for an explanation of the matter, but about how to cite and attribute it.)
Started a paragraph on do-notation (here)
(also found the original references, from HHPW07, p. 25)
all of them opt for explanation-by-example
“do notation” is just a clever syntactic sugar hack and it was introduced by example.
It showed up in Haskell 1.3 (May 1996) which added monadic IO and do-notation.
https://www.haskell.org/definition/from12to13.html#do.
A history of Haskell: being lazy with class (2007)
https://doi.org/10.1145/1238844.1238856
says the sugar was invented in 1994 by Mark Jones.
Moving to take a post-doctoral post at Yale in 1992, [Mark] Jones continued to develop and maintain Gofer, adding support for constructor classes (Section 6.4) in 1992–93 and producing the first implementation of the do-notation in 1994
Thanks! – but we overlapped here. I had just found these references and added them above.
I do think there is room to explain this stuff better. But I understand that in the community it is being picked up by osmosis.
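Indeed, the sugar itself can be stated in two lines: each `x <- e; rest` desugars to `e >>= \x -> rest`. A side-by-side example in standard Haskell (nothing hypothetical here):

```haskell
-- a do-block and its desugaring into >>= are interchangeable:
sugared :: Maybe Int
sugared = do
  x <- Just 3
  y <- Just 4
  return (x + y)

-- each  x <- e; rest  becomes  e >>= \x -> rest:
desugared :: Maybe Int
desugared = Just 3 >>= \x -> (Just 4 >>= \y -> return (x + y))
-- both evaluate to  Just 7
```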
off topic but maybe related. The article currently says:
in general it may be misleading to think of Kleisli composition as being about “reading out” data. What Kleisli composition is really about is acting on data that has generators and defining a program by what it does on generators, hence for a given generator.
In a category of presheaves $Set^{C^{op}}$, morphisms between presheaves have to agree on all of the morphisms of $C$. However, if $C$ has a simple presentation by generators and possibly relations, one only has to check presheaf morphisms for agreement on the generator morphisms: if they agree there then automatically so do the generated ones. E.g. if $M_1$ is the free monoid on one generator $s$, then morphisms in $Set^{M_1^{op}}$ will agree on $s$ and one can safely ignore $s^2$, $s^3$, … as clutter.
Even though this suppression often appears in pictures I can’t find it explicitly stated in the nLab.
Rod, I don’t follow. $C$ is any category, I guess. What do you mean by “morphisms between presheaves have to agree on all of the morphisms of $C$”? The only meaning I can think of for a value of a morphism $\phi$ between presheaves on $C$ at a morphism $f$ of $C$ is the diagonal of a naturality square formed from the data $\phi, f$. But two morphisms $\phi, \psi$ of presheaves don’t have to agree (in this sense of value) at a morphism $f$. You must be trying to say something else.
You must be trying to say something else.
I was trying to concisely express things from a category-of-elements perspective. The category of elements of a presheaf can be regarded as a category with its objects and morphisms labeled by the objects and morphisms of $C$. A morphism of presheaves on $C$ must agree on these labels – it can’t map an $s^2$ element morphism to an $s^1$ one.
Much clearer. You are referring to the functor $el: Set^C \to Cat/C$, where natural transformations $\phi: F \to G$ induce functors fitting in commutative squares
$\array{ el(F) & \stackrel{el(\phi)}{\to} & el(G) \\ \downarrow & & \downarrow \\ C & \stackrel{1_C}{\to} & C}$ where the “agreement” in #34 implicitly refers to $el(\phi)$ lying over the identity functor at the bottom. I believe $el: Set^C \to Cat/C$ is full and faithful, although the article category of elements doesn’t seem to say that. Maybe you meant to suggest that as well: the criterion for when a functor $el(F) \to el(G)$ “is” a presheaf morphism $F \to G$ is this lying over $C$, and maybe you are suggesting a further strengthening of this criterion in terms of generating morphisms.
The question in #34 is about functors (presheaves) on a free category over a graph $\mathcal{G}$, observing that morphisms between them (natural transformations) are fixed as soon as they are fixed on the underlying graph.
This is just the free category adjunction $F \colon Grph \to Cat$ applied to natural transformations understood as functors out of the product with the interval category $I$:
$Mor\Big( Func\big( F(\mathcal{G}) ,\, Set \big) \Big) \;\;=\;\; Hom_{Cat}\Big( F(\mathcal{G}) \times I ,\, Set \Big) \;\;=\;\; Hom_{Cat}\Big( F(\mathcal{G}) ,\, Func(I, Set) \Big) \;\;=\;\; Hom_{Grph}\Big( \mathcal{G} ,\, Func(I, Set) \Big) \,.$$\,$
Now, this question was apparently prompted by me highlighting (here, in a context of discussing appropriate programming syntax for them) that Kleisli morphisms are to be understood as operating on generators — which is of course just the same kind of free/forgetful phenomenon, now in the case of free modules over a given monad.
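In Haskell terms this free/forgetful phenomenon for Kleisli morphisms is visible directly (a small sketch using the list monad; the function names are mine):

```haskell
import Control.Monad ((>=>))

-- a Kleisli morphism is specified only on a single generator
-- (a plain element)...
double :: Int -> [Int]
double x = [x, x]

-- ...and bind is its unique extension to the free algebra (all lists):
doubleAll :: [Int] -> [Int]
doubleAll xs = xs >>= double

-- Kleisli composition likewise needs only the generator-wise data:
quadruple :: Int -> [Int]
quadruple = double >=> double
```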
There is a lot of interesting material in the entry of this thread (which is monads in computer science), that would be fun to discuss.
Further discussion of maps of presheaves should be taken elsewhere. You could take this to the thread category of presheaves or, if really needed, category of elements.
added pointer to:
added what seem to be original references on effect handling via monad modules:
Gordon D. Plotkin, Matija Pretnar, Handling Algebraic Effects, Logical Methods in Computer Science, 9 4 (2013) lmcs:705 [arXiv:1312.1399, doi:10.2168/LMCS-9(4:23)2013]
Ohad Kammar, Sam Lindley, Nicolas Oury, Handlers in action, ACM SIGPLAN Notices 48 9 (2013) 145–158 [doi:10.1145/2544174.2500590]
added pointer to: