I'd like to add a more useful Idea-section to the entry monad in computer science.
A first version of the kind of exposition that I have in mind is now in the Sandbox – comments are welcome.
I’ll try to polish this up a little more, but it seems I will keep being interrupted a lot today.
added pointer to:
have added pointer to
for substantiation of what in the entry so far remains just a vague comment.
Okay, I have now put in a new Idea-section.
It starts (here) with a slightly polished adjustment of the previous idea-paragraph, which is (and was) really too short to serve as an actual exposition, but which may serve the purpose of quickly showing the impatient reader much of the relevant terminology.
The main addition is a new sub-section Basic Idea: External monads (here) which means to be more like an actual exposition to a reader who does not already know.
(This is a little longer than the terse paragraph that we had before, but meant to still be right to the point. I find most introductions in the literature are too long-winded and too mysterious-sounding. My motivation in writing this new subsection was to improve on this. You judge if it worked.)
After this there is now one more subsection Refined idea: enriched monads (here), which means to admit to the technical subtleties while keeping them out of the way of the previous conceptual explanation.
Compared to the rather mysterious-sounding side-remark that we had on this point before, this sub-section should be an improvement already, but I have not been ambitious about this subsection and don’t regard it as finished. But I’ll leave it as is for the time being.
Next, the Examples-section really deserves to be polished up. But I haven’t looked into that yet.
[edit: Have begun to bring at least some structure into the Examples-section.]
added prominent pointer to extension system that had been missing all along
I have prefixed the list of references (here) with pointer to extension systems:
Ernest G. Manes, Sec 3, Ex. 12 (p. 32) of: Algebraic Theories, Springer (1976) [doi:10.1007/978-1-4612-9860-1]
F. Marmolejo, Richard J. Wood, Monads as extension systems – no iteration is necessary TAC 24 4 (2010) 84-113 [24-04]
I have started an analogous Idea-subsection on algebras/modules (here) and began some sketch on the comonad version of it all (here).
In the course of this I changed the notation for the monad from "$T$" to "$\mathcal{E}$" (for "Effect") throughout, so that I can use "$\mathcal{C}$" for the comonads ("Cause"!?).
But I am running out of steam now, and, more importantly, I need to be looking into preparing slides for a talk I give tomorrow, on a completely different subject.
I wonder what the computer science interpretation (in terms of cause and effect on programs) of the cohesive modalities is. Sharp and shape are both monadic, while flat is comonadic.
I believe that for idempotent monads one may think of their effect as that of forgetting or “making invisible” some aspects of the data, or more precisely as projecting out a certain sub-aspect of the data (and in that way, their presence witnesses that the computer system potentially has a richer data structure than usual, so that it is possible to forget some of it, in the first place).
So if a program produces ʃ$D$-data, then that can be thought of as $D$-data but with the extra effect that it becomes impossible to see some aspects usually carried by $D$-data (namely the cohesive aspect).
Similarly, a program reading in $\flat D$-data needs, to run, an agent that strips some cohesive qualities off of full $D$-data.
I guess one can see the effect-picture of the "axiomatic" modalities rather vividly from the fact that existing proof assistants do not implement them: much like IO in Haskell, they are (or will be) an extra feature (effect) that needs to be provided by the ambient computer system.
I have now added a graphics showing all the monad algebra structure and properties in extension/Kleisli form (see here and scroll down a little).
In doing so, I realized that my square-bracket notation for binding, clever as it seemed in itself, is not sustainable as more such operations come into consideration, and so I have rewritten the whole entry (I hope; let me know if you spot remaining glitches) using
$bind^{\mathcal{E}}$ for “binding”
$ret^{\mathcal{E}}$ for “returning”
and now
This way there is now also room for the comonadic versions
etc., and then something like
or maybe
for a comonad co-action. I am still undecided which term is best to use (and the literature seems not to offer suggestions on this point?).
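To fix intuitions, here is a minimal Haskell sketch of the extension-system ("Kleisli triple") presentation and its comonadic dual; the class and method names (`MonadE`, `retE`, `bindE`, `obtE`, `cobindE`) are placeholders of my own, not notation from the entry:

```haskell
-- Monad presented in extension ("Kleisli triple") form: return and bind
class MonadE m where
  retE  :: a -> m a
  bindE :: m a -> (a -> m b) -> m b

-- Maybe as the usual partiality effect
instance MonadE Maybe where
  retE = Just
  bindE Nothing  _ = Nothing
  bindE (Just x) k = k x

-- Dual comonadic data: a counit ("obtain") and a co-bind ("extend")
class ComonadE w where
  obtE    :: w a -> a
  cobindE :: w a -> (w a -> b) -> w b

-- The environment ("co-reader") comonad as a running example
instance ComonadE ((,) e) where
  obtE (_, x)        = x
  cobindE w@(e, _) k = (e, k w)
```

The point of the dual shape is visible in the types: `bindE` consumes an effectful value with a map *out of* the plain type, while `cobindE` extends a map *out of* the co-effectful value.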
Added
Perhaps there’s a better way to describe what it does.
added this pointer:
Marco Gaboardi, Shin-ya Katsumata, Dominic Orchard, Flavien Breuvart, Tarmo Uustalu, Combining effects and coeffects via grading, ICFP 2016: Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming (2016) 476–489 [doi:10.1145/2951913.2951939, talk abstract, video rec]
Just a trivial question on how people speak:
What is the terminology, in programming practice, for the dual of monadic effect handling?
I.e. given a comonadic co-effect on a data type system, what is the terminology for the co-action on co-algebras?
I gather the comonadic co-effect itself is addressed as providing “context”. Now a co-effect co-handler could reasonably be said to “produce” or “provide” such a context.
For instance, it seems to make perfect and immediate sense to say that a co-algebra for the co-reader co-monad is a routine/method that “provides a global parameter value”.
But does anyone speak this way, in the actual (functional) programming practice? Or is there other common terminology for this purpose?
Or is there maybe not much of an established tradition of talking about comonadic co-effect co-handlers at all?
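To make the co-reader example concrete, here is a small Haskell sketch (the names `CoReader`, `provideParam`, etc. are my own placeholders, not established terminology): the counit law forces a coalgebra to keep the carrier fixed, so a coalgebra for the environment comonad amounts exactly to a routine that assigns a parameter value to each datum.

```haskell
-- Environment ("co-reader") comonad: a value paired with a parameter
data CoReader e a = CoReader e a deriving (Eq, Show)

counitC :: CoReader e a -> a
counitC (CoReader _ x) = x

duplicateC :: CoReader e a -> CoReader e (CoReader e a)
duplicateC (CoReader e x) = CoReader e (CoReader e x)

-- A coalgebra  a -> CoReader e a  compatible with counitC must return its
-- input unchanged in the carrier slot, so it is determined by a function
-- a -> e : it "provides a parameter value" for each datum.
provideParam :: (a -> e) -> (a -> CoReader e a)
provideParam f x = CoReader (f x) x
```

One checks directly that `provideParam f` satisfies both coalgebra laws for any `f`, matching the claim that co-reader coalgebras are precisely such "parameter providers".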
added pointer to
re #17:
Uustalu on slide 9 of part 3 of his 2021 lecture (as linked to in the entry, can’t easily hyperlink from my phone here) says:
coeffect producer (should we call it a cohandler?)
So with “co-effect” = “context” this is the first of my two suggestions in #17: “context producer”.
Hm, somehow I’d like “context provider” better…
Some thoughts over here. Apparently, they speak of ’capabilities’ being provided.
It seems that
introduces this idea of ’capability’.
Thanks!
I have now glanced over the provided references using the word "capabilities", but so far I have trouble seeing that these really refer to coalgebras over comonads. Maybe I'll find time to have a closer look.
But I did find Petricek et al’s articles using “contexts” for “coeffects” and they are clear enough. While this is not what I asked (and the interpretation of comonads as “contexts” is already in Uustalu & Vene), I am adding this pointer to the entry now, for completeness:
This is a question on whether the following monad has been considered – as a tool in stochastic programming.
I am considering the following in a context of quantum programming, but the general principle should be more general and I’d be happy to hear that people have considered classical variants.
Namely, to model repeat-until-success algorithms, we need a way to “discard” (forget, ignore, disregard) quantum measurement results (modeled by the quantum reader monad $\bigcirc_B$) without however dropping the quantum state that we are in after the measurement.
For this purpose, consider the functor on linear types that assigns

$\mathcal{H} \;\; \mapsto \;\; \underset{ B \colon FinType }{\coprod} \bigcirc_B \mathcal{H}$

which boils down to forming one big direct sum

$\cdots \;=\; \underset{ B \colon FinType }{\bigoplus} \oplus_B \mathcal{H} \,.$

This construction should become a monad, with multiplication given by
$\underset{B \colon FinType}{\coprod} \,\bigcirc_B\, \underset{B' \colon FinType}{\coprod} \,\bigcirc_{B'}\, \;\simeq\; \underset{B \colon FinType}{\coprod} \underset{B' \colon FinType}{\coprod} \,\bigcirc_B\, \,\bigcirc_{B'}\, \;\simeq\; \underset{B, B' \colon FinType}{\coprod} \,\bigcirc_{B \times B'}\, \;\longrightarrow\; \underset{B'' \colon FinType}{\coprod} \,\bigcirc_{B''}\,$where the last step maps any summand indexed by $B, B'$ on the left identically onto the summand indexed by $B'' = B \times B'$ on the right.
This monad comes with natural transformations to “absorb” instances of $\bigcirc_B$ in the evident sense. Repeat-until-success algorithms are recursive functions with values in this monad which on a given input produce the infinite tree of all possible sequences of syndrome=fail measurements followed by a last syndrome=success measurement, each parameterizing the corresponding quantum state after that number of measurements.
(If you see what I mean. I am in the process of typing this up and can try to show more details on what I am after, later.)
I have played with other variants before arriving at the above formula, simple as it all is. But I still feel like I may be missing a more profound underlying principle that I should be invoking here.
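For what it's worth, here is a toy classical/decategorified sketch in Haskell of the construction I have in mind, under the drastic simplification (my own, purely for illustration) that a finite type $B$ is recorded only by its cardinality and $\bigcirc_B \mathcal{H}$ by a length-$B$ list; the explicit uniformity check stands in for the way the direct-sum structure forces a single inner index type in the linear setting:

```haskell
-- Toy stand-in: a finite type B is just its size n, and  ○_B H  is a
-- length-n list of H-values.  T H = Σ_B ○_B H  becomes a sized list.
data T a = T Int [a] deriving (Eq, Show)  -- invariant: list length == Int

-- unit: the summand indexed by the one-element type
retT :: a -> T a
retT x = T 1 [x]

-- multiplication: the (B, B')-summand goes to the (B × B')-summand.
-- In the linear setting the direct sums make the inner index type
-- uniform automatically; here we simply require it.
joinT :: T (T a) -> T a
joinT (T n inners)
  | all ((== m) . size) inners = T (n * m) (concatMap payload inners)
  | otherwise = error "inner index types must agree (uniformity from direct sums)"
  where
    size    (T k _)  = k
    payload (T _ xs) = xs
    m = case inners of { (T k _) : _ -> k ; [] -> 1 }
```

This is of course not the linear-type construction itself, only its shadow on cardinalities; in particular, the genuinely classical analogue (with a dependent sum of inner index types instead of a product) would behave differently, which is exactly why I suspect there is a more profound principle at work in the linear case.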