updated link to Martin Hyland preprint

Anonymouse

am taking the liberty of adding pointer to:

have improved the first slides-like graphics on bind/return operations (here)

added pointer to:

added pointer to:

- Sergei Winitzki, Section 10 of:
*The Science of Functional Programming – A tutorial, with examples in Scala* (2022) [leanpub:sofp, github:sofp]

added pointer to:

- Eugenio Moggi,
*An Abstract View of Programming Languages*, LFCS report ECS-LFCS-90-113 [web, pdf]

(which considers also monad transformations, Def. 4.0.8)

added what seem to be the original references on effect handling via monad modules:

- Gordon D. Plotkin, Matija Pretnar,
*Handling Algebraic Effects*, Logical Methods in Computer Science **9** 4 (2013) lmcs:705 [arXiv:1312.1399, doi:10.2168/LMCS-9(4:23)2013]

- Ohad Kammar, Sam Lindley, Nicolas Oury,
*Handlers in action*, ACM SIGPLAN Notices **48** 9 (2013) 145–158 [doi:10.1145/2544174.2500590]

added pointer to:

- Dominic Orchard, Alan Mycroft,
*A Notation for Comonads*, in: *Implementation and Application of Functional Languages. IFL 2012*, Lecture Notes in Computer Science **8241** [doi:10.1007/978-3-642-41582-1_1]

The question in #34 is about functors (presheaves) on a free category over a graph $\mathcal{G}$, observing that morphisms between them (natural transformations) are fixed as soon as they are fixed on the underlying graph.

This is just the free category adjunction $F \colon Grph \to Cat$ applied to natural transformations understood as functors out of the product with the interval category $I$:

$Mor\Big( Func\big( F(\mathcal{G}) ,\, Set \big) \Big) \;\;=\;\; Hom_{Cat}\Big( F(\mathcal{G}) \times I ,\, Set \Big) \;\;=\;\; Hom_{Cat}\Big( F(\mathcal{G}) ,\, Func(I, Set) \Big) \;\;=\;\; Hom_{Grph}\Big( \mathcal{G} ,\, Func(I, Set) \Big) \,.$

Now, this question was apparently prompted by me highlighting (here, in a context of discussing appropriate programming syntax for them) that Kleisli morphisms are to be understood as operating on generators — which is of course just the same kind of free/forgetful phenomenon, now in the case of free modules over a given monad.
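To illustrate the point about Kleisli morphisms operating on generators, here is a minimal Haskell sketch (the names `duplicate` and `extend` and the use of the list monad are my choices for illustration, not from the entry): a Kleisli morphism is an ordinary function into the monad, i.e. it is defined only on generators, and bind is its unique extension to a map of free modules.

```haskell
import Control.Monad ((>=>))

-- A Kleisli morphism is an ordinary function landing in the monad;
-- it is defined only on the "generators" a : A of the free module T A.
-- (Hypothetical example, with T the list monad:)
duplicate :: Int -> [Int]
duplicate n = [n, n]

-- Its unique extension to a map of free modules [a] -> [b] is bind;
-- this is the free/forgetful phenomenon in question:
extend :: (a -> [b]) -> [a] -> [b]
extend = (=<<)

main :: IO ()
main = do
  print (extend duplicate [1, 2, 3])   -- [1,1,2,2,3,3]
  -- Kleisli composition composes the generator-wise definitions:
  print ((duplicate >=> duplicate) 7)  -- [7,7,7,7]
```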

There is a lot of interesting material in the entry that this thread is about (*monads in computer science*) that would be fun to discuss.

Further discussion of maps of presheaves should be taken elsewhere. You could take this to the thread *category of presheaves* or, if really needed, *category of elements*.

Much clearer. You are referring to the functor $el: Set^C \to Cat/C$, where natural transformations $\phi: F \to G$ induce functors fitting in commutative squares

$\array{ el(F) & \stackrel{el(\phi)}{\to} & el(G) \\ \downarrow & & \downarrow \\ C & \stackrel{1_C}{\to} & C}$
where the “agreement” in #34 implicitly refers to $el(\phi)$ lying over the identity functor at the bottom. I believe $el \colon Set^C \to Cat/C$ is full and faithful, although the article *category of elements* doesn’t seem to say that. Maybe you meant to suggest that as well: the criterion for when a functor $el(F) \to el(G)$ “is” a presheaf morphism $F \to G$ is this lying over $C$, and maybe you are suggesting a further strengthening of this criterion in terms of generating morphisms.

Re “You must be trying to say something else”:

I was trying to concisely express things from a category-of-elements perspective. The category of elements of a presheaf can be regarded as a category with its objects and morphisms labeled by the objects and morphisms of $C$. A morphism of presheaves on $C$ must agree on these labels – it can’t map an $s^2$-labeled morphism to an $s^1$-labeled one.

Rod, I don’t follow. $C$ is any category, I guess. What do you mean by “morphisms between presheaves have to agree on all of the morphisms of $C$”? The only meaning I can think of for a value of a morphism $\phi$ between presheaves on $C$ at a morphism $f$ of $C$ is the diagonal of a naturality square formed from the data $\phi, f$. But two morphisms $\phi, \psi$ of presheaves don’t have to agree (in this sense of value) at a morphism $f$. You must be trying to say something else.

Off topic, but maybe related. The article currently says:

in general it may be misleading to think of Kleisli composition as being about “reading out” data. What Kleisli composition is really about is acting on data that has generators and defining a program by what it does on generators, hence for a given generator.

In a category of presheaves $Set^{C^{op}}$, morphisms between presheaves have to agree on all of the morphisms of $C$. However, if $C$ has a simple presentation by generators and possibly relations, one only has to check presheaf morphisms for agreement on the generator morphisms – if they agree there, then automatically so do the generated ones. E.g., if $M_1$ is the free monoid on one generator $s$, then morphisms in $Set^{M_1^{op}}$ will agree on $s$ and one can safely ignore $s^2$, $s^3$, … as clutter.

Even though this suppression often appears in pictures, I can’t find it explicitly stated in the nLab.
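The free-monoid example can be sketched in Haskell (all names and the particular actions here are hypothetical illustrations): a presheaf on the free monoid on one generator $s$ is just a set with one endofunction, and naturality checked at $s$ propagates to all powers $s^k$ by iteration.

```haskell
-- A presheaf on the free monoid on one generator s is a set with a
-- single endofunction (the action of s); s^2, s^3, ... act by iteration.
-- (Hypothetical example actions:)
actX, actY :: Int -> Int
actX n = n + 1        -- action of s on presheaf X
actY n = n + 1        -- action of s on presheaf Y

psi :: Int -> Int     -- a candidate presheaf morphism X -> Y
psi n = n + 5

-- iterate the generator action to obtain the action of s^k:
pow :: Int -> (Int -> Int) -> (Int -> Int)
pow k f = foldr (.) id (replicate k f)

-- checking naturality at the generator s (on sample inputs) ...
agreesAtS :: Bool
agreesAtS = all (\n -> psi (actX n) == actY (psi n)) [0 .. 10]

-- ... automatically gives naturality at s^3 (and every other power):
agreesAtS3 :: Bool
agreesAtS3 = all (\n -> psi (pow 3 actX n) == pow 3 actY (psi n)) [0 .. 10]

main :: IO ()
main = print (agreesAtS && agreesAtS3)   -- True
```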

have now expanded the section on Do-notation (here)

Thanks! – but we overlapped here. I had just found these references and added them above.

I do think there is room to explain this stuff better. But I understand that in the community it is being picked up by osmosis.

Re “all of them opt for explanation-by-example”:

“Do notation” is just a clever syntactic-sugar hack, and it was introduced by example.
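For concreteness, a minimal example of the sugar in question (the `Maybe`-monad instance is my choice of illustration): a do-block is translated by the compiler into nested applications of `(>>=)`.

```haskell
-- do-notation is sugar for nested (>>=); these two definitions
-- desugar to the same core expression:
sugared :: Maybe Int
sugared = do
  x <- Just 3
  y <- Just 4
  return (x + y)

desugared :: Maybe Int
desugared =
  Just 3 >>= \x ->
  Just 4 >>= \y ->
  return (x + y)

main :: IO ()
main = print (sugared == desugared)   -- True
```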

It showed up in Haskell 1.3 (May 1996), which added monadic IO and do-notation:

https://www.haskell.org/definition/from12to13.html#do

*A History of Haskell: Being Lazy with Class* (2007)

https://doi.org/10.1145/1238844.1238856

says the sugar was invented in 1994 by Mark Jones.

“Moving to take a post-doctoral post at Yale in 1992, [Mark] Jones continued to develop and maintain Gofer, adding support for constructor classes (Section 6.4) in 1992–93 and producing the first implementation of the do-notation in 1994”

Started a paragraph on do-notation (here)

(also found the original references, from HHPW07, p. 25)

I am going to add a section explaining “*do notation*”.

Just a question on referencing:

Is there any canonical/citable systematic account of *do-notation*, such as in a textbook?

I see many webpages and online tutorials on the matter (all of them opt for explanation-by-example). The closest to a textbook account that I am aware of is Milewski 2019, Sec. 20.3.

(NB.: I am not asking for an explanation of the matter, but about how to cite and attribute it.)

only now discovered this review:

- Nick Benton, John Hughes, Eugenio Moggi,
*Monads and Effects*, in: *Applied Semantics*, Lecture Notes in Computer Science **2395**, Springer (2002) 42-122 [doi:10.1007/3-540-45699-6_2]

finally added to the subsection on dual comonadic contexts (here, scroll down to the end of the subsection) the dual diagram to the very first one in the entry

have added (here) a slide from Overton 2014 tabulating (co)monad-terminology in Haskell

added pointer to:

- Gordon Plotkin, John Power,
*Notions of Computation Determine Monads*, in: *Foundations of Software Science and Computation Structures*, FoSSaCS 2002, Lecture Notes in Computer Science **2303**, Springer (2002) [doi:10.1007/3-540-45931-6_24, era:1842/196]

added pointer to:

- Eugenio Moggi,
*Computational lambda-calculus and monads*, in: *Proceedings of the Fourth Annual Symposium on Logic in Computer Science* (1989) 14-23 [doi:10.1109/LICS.1989.39155]

This is a **question** on whether the following monad has been considered – as a tool in stochastic programming.

I am considering the following in a context of quantum programming, but the general principle should be more general and I’d be happy to hear that people have considered classical variants.

Namely, to model repeat-until-success algorithms, we need a way to “discard” (forget, ignore, disregard) quantum measurement results (modeled by the quantum reader monad $\bigcirc_B$) without however dropping the quantum state that we are in after the measurement.

For this purpose consider the functor on linear types $\mathcal{H}$ that assigns

$\mathcal{H} \;\; \mapsto \;\; \underset{ B \colon FinType }{\coprod} \bigcirc_B \mathcal{H}$
which boils down to forming one big direct sum

$\cdots \;=\; \underset{ B \colon FinType }{\bigoplus} \oplus_B \mathcal{H} \,.$
This construction should become a monad with multiplication given by

$\underset{B \colon FinType}{\coprod} \,\bigcirc_B\, \underset{B' \colon FinType}{\coprod} \,\bigcirc_{B'}\, \;\simeq\; \underset{B \colon FinType}{\coprod} \underset{B' \colon FinType}{\coprod} \,\bigcirc_B\, \,\bigcirc_{B'}\, \;\simeq\; \underset{B, B' \colon FinType}{\coprod} \,\bigcirc_{B \times B'}\, \;\longrightarrow\; \underset{B'' \colon FinType}{\coprod} \,\bigcirc_{B''}\,$
where the last step maps any summand indexed by $B, B'$ on the left identically onto the summand indexed by $B'' = B \times B'$ on the right.

This monad comes with natural transformations to “absorb” instances of $\bigcirc_B$ in the evident sense. Repeat-until-success algorithms are recursive functions with values in this monad which on a given input produce the infinite tree of all possible sequences of syndrome=fail measurements followed by a last syndrome=success measurement, each parameterizing the corresponding quantum state after that number of measurements.

(If you see what I mean. I am in the process of typing this up and can try to show more details on what I am after, later.)

I have played with other variants before arriving at the above formula, simple as it all is. But I still feel like I may be missing a more profound underlying principle that I should be invoking here.
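For what it’s worth, a classical (Set-level) toy model of the formula can be sketched in Haskell; everything here (names included) is my illustration under stated assumptions, not the quantum construction itself. Taking $\bigcirc_B X = (B \to X)$ with $B$ ranging over finite types, an element of $\coprod_B \bigcirc_B X$ amounts to a finite index type together with a tuple of elements of $X$, i.e. essentially a finite list, and the re-indexing of a $(B, B')$-summand over $B \times B'$ then looks like concatenation (of a rectangular list of lists):

```haskell
-- Classical toy model (an assumption, not the quantum construction):
-- an element of  \coprod_B (B -> X)  with B a finite type is
-- essentially a finite list of elements of X.
newtype T x = T [x] deriving (Eq, Show)

-- unit: include X as the summand indexed by the one-element type
unitT :: x -> T x
unitT x = T [x]

-- multiplication: a (B,B')-indexed summand is sent to the summand
-- indexed by B x B'; on (rectangular) lists of lists this is concat
joinT :: T (T x) -> T x
joinT (T ts) = T (concat [xs | T xs <- ts])

main :: IO ()
main = do
  print (joinT (T [T [1, 2], T [3, 4]]))      -- T [1,2,3,4]
  -- the unit laws in this toy model:
  print (joinT (unitT (T [5, 6])) == T [5, 6])
  print (joinT (T (map unitT [5, 6])) == T [5, 6])
```

(Note that on ragged lists of lists `joinT` does more than the $B \times B'$ re-indexing strictly allows; the rectangular case is the one matching the displayed multiplication.)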
