
Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorceciliaflori
    • CommentTimeMar 2nd 2013

    There is a category in which an object is a single-variable probability distribution $P(a)$ (say, finitely supported) and a morphism from $P(a)$ to $P(b)$ is a two-variable distribution $P(a,b)$ which recovers the source and target distributions as marginals: $P(a)=\sum_b P(a,b)$ and $P(b)=\sum_a P(a,b)$.

    These morphisms can be composed as

    $$P(a,c) = \sum_b \frac{P(a,b)\,P(b,c)}{P(b)}$$

    This makes $a$ and $c$ conditionally independent given $b$, and therefore corresponds to the usual composition of conditional distributions $P(c|a)=\sum_b P(c|b)\,P(b|a)$, as it occurs e.g. in Markov processes.

    In this way, we obtain a category of single-variable and two-variable distributions.
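    The composition formula above can be checked numerically; the following sketch is not part of the original discussion, and the concrete matrices in it are made-up illustrative values:

```python
# Illustrative sketch: composing two-variable distributions P(a,b) and Q(b,c)
# that share the marginal P(b), via  P(a,c) = sum_b P(a,b) Q(b,c) / P(b).
# The numbers below are made up for illustration.
import numpy as np

P_ab = np.array([[0.1, 0.3],
                 [0.2, 0.4]])    # rows indexed by a, columns by b
Q_bc = np.array([[0.25, 0.05],
                 [0.10, 0.60]])  # rows indexed by b, columns by c

P_b = P_ab.sum(axis=0)                          # marginal over b: [0.3, 0.7]
assert np.allclose(P_b, Q_bc.sum(axis=1)), "marginals on b must agree"

# Entrywise: P_ac[a, c] = sum_b P(a,b) Q(b,c) / P(b)
P_ac = (P_ab / P_b) @ Q_bc

# The composite is again a distribution, with the expected marginals:
assert np.allclose(P_ac.sum(), 1.0)
assert np.allclose(P_ac.sum(axis=1), P_ab.sum(axis=1))   # recovers P(a)
assert np.allclose(P_ac.sum(axis=0), Q_bc.sum(axis=0))   # recovers P(c)
```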

    Now a natural question is: is it possible to introduce a higher category which contains distributions over any number of variables? In fact, the above data of composable morphisms actually yields a three-variable distribution

    $$P(a,b,c) = \frac{P(a,b)\,P(b,c)}{P(b)}$$

    More generally, probability distributions can be “composed” as follows: let $X$ be a simplicial complex on vertices $\{a_1,\ldots,a_n\}$ with maximal faces $\{C_1,\ldots,C_k\}$. We say that $X$ has the running intersection property (RIP) if the ordering of the $C_j$ can be chosen such that for every $j$ there is an $i < j$ with

    $$S_j := C_j \cap (C_1 \cup \ldots \cup C_{j-1}) \subseteq C_i.$$

    Now suppose we are given distributions $P(C_i)$ over the variables in $C_i$ with compatible marginal distributions. If $X$ has the RIP, then a joint distribution of all variables $\{a_1,\ldots,a_n\}$ can be constructed as

    $$P(a_1,\ldots,a_n) = \frac{\prod_{i=1}^k P(C_i)}{\prod_{i=1}^k P(S_i)}$$

    where $P(S_i)$ represents the marginal distribution on $S_i$. This we interpret as a filler of $X$ to a simplex which represents the composition of the given $P(C_i)$ to a joint distribution.
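    As a minimal sketch (with made-up numbers, not from the discussion), the simplest RIP instance is the path on $\{a,b,c\}$ with maximal faces $C_1=\{a,b\}$, $C_2=\{b,c\}$ and separator $S_2=\{b\}$, where the formula yields the three-variable filler $P(a,b,c)=P(a,b)\,P(b,c)/P(b)$:

```python
# Sketch: gluing P(a,b) and P(b,c) (compatible marginals on b; numbers made up)
# into the joint P(a,b,c) = P(a,b) P(b,c) / P(b) -- the RIP formula for the
# path a - b - c with separator {b}.
import numpy as np

P_ab = np.array([[0.1, 0.3],
                 [0.2, 0.4]])    # axes: (a, b)
P_bc = np.array([[0.25, 0.05],
                 [0.10, 0.60]])  # axes: (b, c)
P_b = P_ab.sum(axis=0)           # common marginal on b

# Broadcast to a three-dimensional array indexed by (a, b, c):
P_abc = P_ab[:, :, None] * P_bc[None, :, :] / P_b[None, :, None]

# The filler recovers the given distributions as marginals:
assert np.allclose(P_abc.sum(axis=2), P_ab)
assert np.allclose(P_abc.sum(axis=0), P_bc)
assert np.allclose(P_abc.sum(), 1.0)
```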

    There are other examples of simplicial complexes, e.g. the three edges of a triangle, for which joint distributions cannot always be constructed.
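    To make the triangle case concrete (an illustrative sketch, not from the discussion): take three binary variables with each pair perfectly anti-correlated, i.e. $P(x,y) = 1/2$ if $x \neq y$ and $0$ otherwise on each edge. The pairwise marginals are compatible (all uniform), yet no joint distribution exists, since any state in its support would have to satisfy $a \neq b$, $b \neq c$ and $a \neq c$ simultaneously:

```python
# Sketch: the three edges of a triangle, each carrying the anti-correlated
# distribution P(x,y) = 1/2 if x != y else 0, admit no joint distribution.
# Any state (a, b, c) with positive joint probability must have positive
# probability under each edge marginal, i.e. must satisfy a != b, b != c
# and a != c at once -- impossible for three bits.
import itertools

states = list(itertools.product([0, 1], repeat=3))
support = [s for s in states
           if s[0] != s[1] and s[1] != s[2] and s[0] != s[2]]

print(support)   # []: no state can carry mass, so no joint sums to 1
```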

    Question: is there a higher categorical structure similar to quasi-categories, but with fillers for simplicial sets with the RIP rather than fillers for inner horns? Does every weak Kan complex have fillers for simplicial sets with the RIP?

    • CommentRowNumber2.
    • CommentAuthorTim_Porter
    • CommentTimeMar 3rd 2013
    • (edited Mar 3rd 2013)

    @Cecilia I think the direct answer to your question is no, not yet, but the sort of structure you are sketching may go in that sort of direction. Some time ago I looked at a related problem that may help you at least to see other possible contexts that could lead to your situation. In the context of the Chu construction one has, to quote from Pratt’s notes,

    A Chu space $A = (A, r, X)$ over a set $\Sigma$, called the alphabet, consists of a set $A$ of points constituting the carrier, a set $X$ of states constituting the cocarrier, and a function $r : A \times X \to \Sigma$ constituting the matrix.

    In the case that $\Sigma$ is $2$ (the two-element set), such a thing is just a relation from $A$ to $X$. Any such relation leads to a simplicial complex using either of the two constructions outlined in Dowker’s theorem. More challenging is to replace $\Sigma = 2$ by some other structure such as $\Sigma = [0,1]$, with conditions on the function $r$ which correspond loosely to probability. The result is interpreted as something like a simplicial complex in which each possible simplex is given a probability of existing, or perhaps better, of being a simplex. If that probability is 1 then the simplex exists! If 0 it does not, but it might exist with probability one half or whatever.
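    A small sketch of the $\Sigma = 2$ case (the relation and names below are made up for illustration): a Chu space over $2$ is just a relation, and one of Dowker's constructions spans a simplex on each set of points related to a common state, closing under subsets:

```python
# Sketch (made-up relation): a Chu space over 2 is a relation r ⊆ A × X.
# Dowker's construction spans a candidate simplex, for each state x, on the
# set of points related to x; closing under nonempty subsets then gives a
# simplicial complex.
from itertools import chain, combinations

A = ["a1", "a2", "a3"]
X = ["x1", "x2"]
r = {("a1", "x1"), ("a2", "x1"), ("a2", "x2"), ("a3", "x2")}

def faces(vertices):
    """All nonempty subsets of a vertex set."""
    return chain.from_iterable(combinations(sorted(vertices), k)
                               for k in range(1, len(vertices) + 1))

top = [{a for a in A if (a, x) in r} for x in X]   # one candidate simplex per state
complex_ = {frozenset(f) for t in top for f in faces(t)}

print(sorted(sorted(f) for f in complex_))
```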

    In this sort of context you can imagine the Kan filler condition being replaced: a horn $h$ (which would exist with probability $p_h$) would have a filler $f$ (with probability $p_f$), and there would be some condition such as $0 < p_f \leq p_h$, so a filler could not be more probable than the horn.

    I thought this sort of setting would arise if $A$ were a set of states, $X$ a (sub)set of $P(A)$, the powerset of $A$, and $r$ assigned a probability to the membership $a \in x$, subject to some reasonable conditions. I also wondered about this as a possible means of generalising formal concept analysis-type ideas, where $A$ is a set of objects, $X$ a set of ’attributes’, and $r(a,x)$ is interpreted as the probability that $a$ has attribute $x$.

    • CommentRowNumber3.
    • CommentAuthorjim_stasheff
    • CommentTimeMar 4th 2013
    This brings to mind higher structures in Homotopy Probability Theory I and II by Drummond-Cole and Terilla, and earlier work by J.S. Park.
    • CommentRowNumber4.
    • CommentAuthorUrs
    • CommentTimeMar 4th 2013
    • (edited Mar 4th 2013)

    A while back I had made a minimal note on that at cumulant, but have no time for more. Maybe somebody feels inspired to expand a bit.

    • CommentRowNumber5.
    • CommentAuthorzskoda
    • CommentTimeMay 12th 2013
    • (edited May 12th 2013)

    I have reworked the entry probability theory, by splitting off much of the Giry’s monad material to a separate entry (most references moved along). I added many links to other entries.

    The Lawvere-Giry monad and its cousins are not the only categorical approach to probability, nor an all-encompassing one, as it involves Markov kernels and the Chapman-Kolmogorov property, which are not always relevant. I moved much of that material to a separate entry in order to include other approaches as well, categorical and not so categorical, including von Neumann algebras, and generalizations like free probability.

    I had an interesting thought today on all of this. Namely, in measure theory one constantly thinks of measures as assigning real values to elements of a sigma-algebra. Modern probability says we do not care about the probability space; we care about the distributions of the important random variables. But this looks to me parallel to two approaches to integration. One is to work with nice functions and nice integration domains, where one postulates the value of the integral and then generalizes to more complicated domains and functions by various approximation procedures. Another is to collect all nice functions and domains at once and then quotient that ring by all “obvious” relations between different domains and functions (like change of variables, adding domains, etc.). The first approach basically tells you that the quotient in the second case boils down to the field of real numbers (if the set of relations is defined carefully enough to capture everything). The second method has been generalized to more exotic integration procedures, but the domains and functions are more limited. For example, one works with constructible sets and functions and gets various Grothendieck rings as quotients; Euler integration and motivic integration are examples. Now let me go back: how much of essential probability theory in the style of distributions of random variables, rather than probability spaces, is actually captured by formally quotienting by (essential) relations, in place of real-valued integration?

    • CommentRowNumber6.
    • CommentAuthorUrs
    • CommentTimeMay 12th 2013

    Thanks, that looks good. Only a minor editorial comment: I have reduced the number of hash signs in your section outline at Giry’s monad all by one. It seems to me that for the table of contents to display correctly, main section headlines need two hash symbols, subsections of those three, and so on. (That’s at least how it is done throughout the nLab.)

    Adding the toc, as I did, also reveals that the section outline may not be optimal as it currently is. But I won’t interfere with that now.

    • CommentRowNumber7.
    • CommentAuthorceciliaflori
    • CommentTimeAug 27th 2013

    Getting back to the original question, we have now found one answer and written a draft on the resulting notion of higher category. The distinctive feature of our “compositories” is that the composition of an $m$-morphism and an $n$-morphism along a common $k$-morphism results in an $(m+n-k)$-morphism.

    Quoting the abstract:

    Sheaves are objects of a local nature: a global section is determined by how it looks like locally. Hence, a sheaf cannot describe mathematical structures which contain global or nonlocal geometric information. To fill this gap, we introduce the theory of “gleaves”, which are presheaves equipped with an additional “gluing operation” of compatible pairs of local sections. Examples include the gleaf of metric spaces and the gleaf of joint probability distributions. A result of Johnstone shows that a category of gleaves may have a subobject classifier despite not being cartesian closed.

    Gleaves over the simplex category $\Delta$, which we call compositories, can be interpreted as a new kind of higher category in which the composition of an $m$-morphism and an $n$-morphism along a common $k$-morphism face results in an $(m+n-k)$-morphism. The distinctive feature of this composition operation is that the original morphisms can be recovered from the composite morphism as initial and final faces. Examples of compositories include nerves of categories and compositories of higher spans.
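    As a toy sketch of this gluing in the nerve of a category (the code and arrow names below are our own illustration, not taken from the draft): an $m$-simplex of the nerve is a string of $m$ composable arrows, and composing along a common $k$-face amounts to overlapping concatenation:

```python
# Sketch: in the nerve of a category, an m-simplex is a string of m composable
# arrows.  Gluing an m-simplex and an n-simplex whose final / initial k-faces
# agree concatenates the strings, yielding an (m + n - k)-simplex that has the
# two inputs back as its initial and final faces.
def glue(sigma, tau, k):
    assert sigma[len(sigma) - k:] == tau[:k], "common k-face must agree"
    return sigma + tau[k:]

composite = glue(["f", "g"], ["g", "h"], 1)   # two 2-simplices along a 1-face
assert composite == ["f", "g", "h"]           # a (2 + 2 - 1) = 3-simplex
assert composite[:2] == ["f", "g"]            # initial face recovers the first
assert composite[-2:] == ["g", "h"]           # final face recovers the second
```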

    Any feedback will be very welcome! We are planning to put this on the arXiv soon.

    • CommentRowNumber8.
    • CommentAuthorUrs
    • CommentTimeAug 28th 2013
    • (edited Aug 28th 2013)

    Trivial comment: several times you have the combination “how it looks like”. But it must be either “how it looks” or “what it looks like”. (I know this from my own bad experience ;-) See here.

    • CommentRowNumber9.
    • CommentAuthorceciliaflori
    • CommentTimeAug 30th 2013

    Thanks, Urs! The arXiv version is now available as well.