
Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorUrs
    • CommentTimeMay 18th 2021
    • (edited May 18th 2021)

    Elsewhere we proved that all fundamental weight systems are quantum states. Now we want to find the decomposition of these quantum states as mixtures of pure states and compute the resulting probability distribution.

    Specifically, I suppose the pure states in the mixture are labelled by Young diagrams, and so we should get a god-given probability distribution on the set of Young diagrams (of $n$ boxes for the fundamental $\mathfrak{gl}(n)$-weight system). It's bound to be something classical (probably the normalized numbers of some sort of Young tableaux), but it will still be interesting to know that this classical notion is in fact the probability distribution secretly encoded by fundamental weight systems.

    I had started to sketch out how to approach this over here in the thread on the Cayley distance kernel. Now I’d like to flesh this out.

    The broad strategy is to first consider the group algebra $\mathbb{C}[Sym(n)]$ as the star-algebra of observables, the Cayley distance kernel in the form $\big(e, [e^{-\beta \cdot d_C}] \cdot (-)\big)$ as a state on this algebra, and then find its convex decomposition into pure states using representation theory.

    Once this is done, the result for the actual weight systems on the algebra of horizontal chord diagrams should follow immediately by pullback along $perm \colon \mathcal{A}^{pb}_n \longrightarrow \mathbb{C}[Sym(n)]$.

    It seems straightforward to get a convex decomposition of the fundamental-weight-system quantum state into “purer states” this way. I am still not sure how to formally prove that the evident purer-states are already pure, but that will probably reveal itself in due time.

    Eventually this should go to its own entry. For the moment I started making unpolished notes in the Sandbox.

    • CommentRowNumber2.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    In fact, the pure states must be labelled not just by a partition/Young diagram $\lambda$, but also by $i \in \{1, \cdots, dim(S^{(\lambda)})\}$, at least:

    The elements

    $$\Big\{ \underset{\sigma \in Sym(n)}{\sum} \bar S^{(\lambda)}(\sigma)_{k i_{\lambda}} \, \sigma \;\in\; \mathbb{C}[Sym(n)] \Big\}_{1 \leq k \leq dim(S^{(\lambda)})}$$

    span a left ideal $P^{(\lambda)}_{i_\lambda}\big(\mathbb{C}[Sym(n)]\big) \subset \mathbb{C}[Sym(n)]$, and the direct sum of all these ideals decomposes $\mathbb{C}[Sym(n)]$, whence the projection operators $P^{(\lambda)}_{i_\lambda}$ onto these ideals commute with all left-multiplication operators in the group algebra. Since they also commute with the Cayley distance kernel (being projections to direct sums of its eigenspaces) and are self-adjoint (this follows by grand Schur orthogonality), the computation shown (currently) in the Sandbox gives that

    $$\big\langle - \big\rangle_{\beta, (\lambda, i_\lambda)} \;\coloneqq\; \frac{1}{\big\langle P^{(\lambda)}_{i_\lambda}(e) \big\rangle_{\beta}} \, \big\langle P^{(\lambda)}_{i_\lambda}(-) \big\rangle_{\beta}$$

    are states on $\mathbb{C}[Sym(n)]$, and that we have a decomposition of the Cayley kernel state

    $$\big\langle - \big\rangle_\beta \;\coloneqq\; \big(e, [e^{-\beta \cdot d_C}] \cdot (-)\big)$$

    as

    $$\big\langle - \big\rangle_{\beta} \;=\; \underset{{\lambda \in Part(n)} \atop {1 \leq i_\lambda \leq dim(S^{(\lambda)})}}{\sum} p_{(\lambda, i_\lambda)} \cdot \big\langle - \big\rangle_{\beta, (\lambda, i_\lambda)}$$

    for non-negative real numbers

    $$p_{(\lambda, i_\lambda)} \;\coloneqq\; \big\langle P^{(\lambda)}_{i_\lambda}(e) \big\rangle_\beta$$

    It feels intuitively clear that the above decomposition

    $$\mathbb{C}[Sym(n)] \;=\; \underset{{\lambda \in Part(n)} \atop {1 \leq i_\lambda \leq dim(S^{(\lambda)})}}{\oplus} P^{(\lambda)}_{i_\lambda}\big(\mathbb{C}[Sym(n)]\big)$$

    into left ideals is the finest possible, and that this implies that the states $\big\langle - \big\rangle_{\beta, (\lambda, i_\lambda)}$ are pure. But I don’t have a formal proof of this yet. This ought to be some elementary argument, though.

    • CommentRowNumber3.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    Only coming in rather briefly here, since my stack of marking is going down very slowly. So to show that these states are pure, we just need to know they can’t be non-extreme convex combinations.

    Not sure if it would help, but does the bijection between the dimension of an irrep and the standard YTs of that Young diagram matter? I.e., instead of labelling by $(\lambda, i)$, label by an sYT.

    • CommentRowNumber4.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    Right. While I don’t see yet how it helps concretely, at the very least it optimizes the notation.

    But now that you say it, it makes me wonder: Maybe that’s what relates to the seminormal basis for those Jucys-Murphy elements, which are also labelled by sYTs?

    Hm, how in fact is that seminormal basis $\{v_U\}_{U \in sYT_n}$ (here) not undercounting the dimension of $\mathbb{C}[Sym(n)]$: for each $\lambda$ there is a $\big(dim(S^{(\lambda)})\big)^2$-dimensional subspace of $\mathbb{C}[Sym(n)]$, not just a $dim(S^{(\lambda)})$-dimensional one.

    So I guess those v Uv_U need to carry a further degeneracy index?
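    The undercounting worry can be made concrete with a short dimension count. A sketch (helper names `partitions` and `n_syt` are mine; `n_syt` computes $dim(S^{(\lambda)})$ by the hook length formula):

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def n_syt(shape):
    """dim S^(lambda) = number of standard Young tableaux (hook length formula)."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = [shape[i] - j + conj[j] - i - 1
             for i, r in enumerate(shape) for j in range(r)]
    return factorial(sum(shape)) // prod(hooks)

# the sum of squared irrep dimensions recovers dim C[Sym(n)] = n!, so a basis
# with a single vector per sYT falls short by one factor of dim S^(lambda)
dim_check = {n: sum(n_syt(lam) ** 2 for lam in partitions(n)) for n in range(1, 7)}
```

    So a basis with one vector per sYT spans only $\sum_\lambda dim(S^{(\lambda)})$ dimensions, short of $n!$.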

    • CommentRowNumber5.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    That undercounting was in the back of my mind too.

    Should we be talking about ’Young symmetrizers’?

    A SYT $T$ of shape $\lambda$ has an associated group algebra element $y_T \in \mathbb{C}[S_n]$, called the Young symmetrizer. $y_T$ has a key role: it is an idempotent, and its principal right ideal $y_T\mathbb{C}[S_n]$ is an irreducible module of $S_n$. (arXiv:1408.4497)

    • CommentRowNumber6.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    Of course that MO question was looking at Young symmetrizers.

    • CommentRowNumber7.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    Wasn’t the answer there enough anyway?

    you can now prove that $\{F_S \,\vert\, S\; \text{standard}\}$ is a complete set of pairwise orthogonal primitive idempotents in $\mathbb{Q} S_n$.

    $F_S$ is a multiple of the Young symmetrizer for $S$.

    • CommentRowNumber8.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    Thanks, right, that’s the first statement we need: that these projectors are “a complete set”, which we need to mean a maximal such set.

    I wonder if the idea is that this is all contained in Okounkov&Vershik.

    • CommentRowNumber9.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    So what would it take to show it’s a maximal set? Isn’t the word ’primitive’ enough? That is, that the idempotent can’t be decomposed into a sum of orthogonal non-zero idempotents.

    • CommentRowNumber10.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    Hmm, this answer suggests Young symmetrizers are not necessarily orthogonal. What I wrote at the end of #7 isn’t right, is it?

    So where do people discuss that $F_S$ construction of #7?

    • CommentRowNumber11.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    Yes, true, “complete set of primitive projectors” means what we need it to mean. Still, it needs a proof.

    • CommentRowNumber12.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    This seems to be from pp. 505-506 of

    • G.E. Murphy, On the representation theory of the symmetric groups and associated Hecke algebras, J. Algebra 152 (1992) 492–513 (pdf)
    • CommentRowNumber13.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    Thanks! I’ll extract that as a note to the nLab now.

    (It’s most curious how star-algebras show up now in the “seminormal” Gelfand-Tsetlin basis! But I still need to absorb the exact role they play here.)

    • CommentRowNumber14.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    In the Intro he points back to earlier work:

    In [9, 10], this last step was simplified by the demonstration that the seminormal basis constitutes a complete set of eigenfunctions of a certain set of commuting elements of the group algebra, denoted $\{L_1, \ldots, L_n\}$; moreover, the corresponding projection operators comprise a complete set of primitive idempotents.

    1. G. E. Murphy, A new construction of Young’s seminormal representation of the symmetric groups, J. Algebra 69 (1981), 287-297.

    2. G. E. Murphy, On the idempotents of the symmetric group and Nakayama’s conjecture, J. Algebra 81 (1983), 258-264.

    • CommentRowNumber15.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    Thanks again. So that settles the question about the completeness of the ideal decomposition: even without going into the details of Murphy’s construction, the fact alone that he finds the same number of ideals for each eigenvector of the Cayley distance kernel (namely $dim(S^{(\lambda)})$ many) implies that the decomposition I gave in the Sandbox must be maximal/primitive, too.

    Now it just remains to see how this implies that the corresponding states $\langle - \rangle_{\beta, (\lambda, i_\lambda)} \coloneqq \frac{1}{\left\langle P^{(\lambda)}_{i_\lambda}(e)\right\rangle_{\beta}} \cdot \left\langle P^{(\lambda)}_{i_\lambda}(-)\right\rangle_\beta$ in the Sandbox are pure! Once we have this, we just need to compute $p_{(\lambda, i_\lambda)} \coloneqq \left\langle P^{(\lambda)}_{i_\lambda}(e)\right\rangle_{\beta}$ and we are done with producing the desired probability distribution.

    For proving that $\langle - \rangle_{\beta, (\lambda, i_\lambda)}$ is a pure state: we probably want to argue that every state defines an ideal, and that pure states correspond to the minimal ideals. Hm…

    • CommentRowNumber16.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    So that settles the question

    Actually, the same conclusion follows more elementarily from the fact that any complex group algebra is the direct sum of the endomorphism algebras of all its irreps. E.g. Fulton & Harris Prop. 3.29; will make a note at group algebra.

    • CommentRowNumber17.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    Is it not the case that

    a primitive idempotent $e$ defines a pure state (https://arxiv.org/abs/1202.4513)?

    • CommentRowNumber18.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    Yes, that must be true. I was hoping to see a proof that directly checks the impossibility of a non-trivial convex combination. But I suppose the standard proof is to invoke the GNS construction and observe that under this correspondence a primitive idempotent becomes a projector onto a 1-dimensional subspace of the Hilbert space.

    Okay, good! So we are reduced now to the last of three steps: we just have to compute the probabilities

    $$p_{(\lambda, i_\lambda)} \;=\; \left\langle P^{(\lambda)}_{i_\lambda}(e) \right\rangle_\beta \,.$$
    • CommentRowNumber19.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    Presumably our Cayley density matrix, $C$, is some sum $\sum_{T \colon sYT} p_T F_T$, and so $C v_T = p_T v_T$ for $v_T$ in the seminormal basis.

    Is this seminormal basis constructed anywhere?

    • CommentRowNumber20.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    So once we have the actual formula for these primitive idempotent elements in group algebra, these probabilities are their exponentiated Cayley distance from the neutral element.

    • CommentRowNumber21.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    But then these are just the eigenvalues of $C$, no?

    • CommentRowNumber22.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    I’d still think it’s that eigenvalue times the coefficient of the neutral element in the idempotent! No?

    • CommentRowNumber23.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    Just reflecting back on here, I see I was grouping sYTs with the same diagram. When separating them out, we’d have $\big(\frac{10}{27}, \frac{8}{27}, \frac{8}{27}, \frac{1}{27}\big)$.

    Shouldn’t be guessing like this, but on borrowed time. So conjecture that for an sYT $S$ of shape $\lambda$ we have $p_S = \frac{\left\vert ssYT_{\lambda}(N)\right\vert}{N^n}$.
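    A quick sanity check of this guess (my own sketch; `n_ssyt` implements the hook-content formula, `n_syt` the hook length formula): for $n = N = 3$ it reproduces the numbers $\big(\frac{10}{27}, \frac{8}{27}, \frac{8}{27}, \frac{1}{27}\big)$ from #23.

```python
from fractions import Fraction
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def hooks_contents(shape):
    """Hook lengths and contents of the cells of a Young diagram."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    hooks = [shape[i] - j + conj[j] - i - 1 for i, j in cells]
    return hooks, [j - i for i, j in cells]

def n_ssyt(shape, N):
    """|ssYT_lambda(N)| by the hook-content formula."""
    hooks, contents = hooks_contents(shape)
    return prod(N + c for c in contents) // prod(hooks)

def n_syt(shape):
    """Number of standard Young tableaux (hook length formula)."""
    hooks, _ = hooks_contents(shape)
    return factorial(sum(shape)) // prod(hooks)

def conjectured_probs(n, N):
    """The guess p_S = |ssYT_lambda(N)| / N^n, one entry per sYT S of shape lambda."""
    ps = []
    for lam in partitions(n):
        p = Fraction(n_ssyt(lam, N), N ** n)
        ps += [p] * n_syt(lam)  # each shape carries n_syt(lam) standard tableaux
    return sorted(ps, reverse=True)
```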

    • CommentRowNumber24.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    That sounds plausible!

    I am thinking now: it should be easy to guess the expansion of the neutral element in our eigenvector basis. It must be some normalized sum over absolutely all indices of $S^{(\lambda)}(\sigma)_{i j}$, by Schur orthogonality. But once we have this, we can read off the component in each eigenspace. Multiplying by the eigenvalue, that’s the answer.

    • CommentRowNumber25.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    Yes, so here is an expansion of the neutral element, by Schur orthogonality

    $$\begin{aligned} \frac{1}{n!} \underset{\lambda, \sigma, i}{\sum} \chi^{(\lambda)}(e)\, S^{(\lambda)}(\sigma)_{i i} \, \sigma & \;=\; \frac{1}{n!} \underset{\lambda, \sigma}{\sum} \chi^{(\lambda)}(e)\, \chi^{(\lambda)}(\sigma) \, \sigma \\ & \;=\; e \end{aligned}$$

    and this implies that

    $$\big(e, P^{(\lambda)}_i(e)\big) \;=\; \frac{1}{n!} \chi^{(\lambda)}(e)\, S^{(\lambda)}(e)_{i i}$$
    • CommentRowNumber26.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021
    $$\cdots \;=\; \frac{\chi^{(\lambda)}(e)}{n!} \,.$$

    Which would mean that

    $$p_{(\lambda, i)} \;=\; \frac{\chi^{(\lambda)}(e)}{n!}\, EigVals[e^{-\beta \cdot d_C}]_\lambda$$
    • CommentRowNumber27.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    As a cross-check: this finally does sum to unity, by the previous discussion here!
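    The cross-check can also be run numerically against the kernel itself. A brute-force sketch (helper names are mine; normalization chosen so that the diagonal entries are $e^{-\beta\, d_C(e)} = 1$, with $e^\beta = N$): for $n = 3$, $e^\beta = 2$ the eigenvalues come out as $3$, $0.75$ (fourfold, matching multiplicity $\chi^{(\lambda)}(e)^2 = 4$) and $0$, and $\frac{1}{n!}$ times their sum is exactly $1$.

```python
from itertools import permutations
import numpy as np

def n_cycles(p):
    """Number of cycles of a permutation given in one-line notation."""
    seen, c = set(), 0
    for i in range(len(p)):
        if i not in seen:
            c += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return c

def cayley_kernel(n, N):
    """The matrix [e^{-beta d_C(s t^{-1})}] for e^beta = N, d_C(s) = n - #cycles(s)."""
    G = list(permutations(range(n)))
    def inv(p):
        q = [0] * n
        for i, pi in enumerate(p):
            q[pi] = i
        return tuple(q)
    def mul(a, b):  # composition: (a b)(i) = a[b[i]]
        return tuple(a[b[i]] for i in range(n))
    return np.array([[N ** (n_cycles(mul(s, inv(t))) - n) for t in G] for s in G],
                    dtype=float)

eigs = sorted(np.linalg.eigvalsh(cayley_kernel(3, 2)))
# summing p_(lambda,i) = (chi(e)/n!) * eigenvalue over the full eigenbasis
# gives trace/n!, and each diagonal entry is e^{-beta d_C(e)} = 1
total = sum(eigs) / 6
```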

    • CommentRowNumber28.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    That ends up the same as my #23, I think.

    Good. So what next, asymptotics?

    • CommentRowNumber29.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    Before we do anything else, let’s compute the entropy as a function of $n$!

    • CommentRowNumber30.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    (Read: “as a function of $n$ – exclamation mark” :-)

    • CommentRowNumber31.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    Do we only care about $n = N$?

    Just need to wield the hook-content formula. Or maybe the easier proof by Google.

    • CommentRowNumber32.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    I’d foremost care about $e^\beta = 2$ and next about $e^{\beta} \in \mathbb{N}_+$.

    What are you going to google for, here?

    • CommentRowNumber33.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    Something closely related is considered here:

    • CommentRowNumber34.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    Also around (2.6) in:

    • M. S. Boyko and N. I. Nessonov, Entropy of the Shift on $II_1$-representations of the Group $S(\infty)$ (pdf)
    • CommentRowNumber35.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    A good search term seems to be “random Young diagram”. I see various probability distributions on sets of Young diagrams being discussed (Plancherel-, Schur-Weyl-, Jack-, and also some without name). Haven’t seen our Cayley distribution discussed yet.

    • CommentRowNumber36.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    Hm, maybe this here subsumes our probability distribution:

    In (1.1) and (1.7) they consider probability distributions on Young diagrams given by the coefficients of the expansion of any given class function into irreducible characters.

    But our Cayley state is a class function (by the non-manifest right-invariance of the Cayley distance, which becomes manifest under Cayley’s formula).

    Might its expansion into pure states that we found be just the expansion of that class function into irreducible characters? I don’t see it right now, but it sounds plausible. If so, we’d connect to the above article.

    • CommentRowNumber37.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    It’s amazing what people get up to. Can’t see anything quite right.

    Perhaps try staring at the hook-content theorem for $N = 2$.

    • CommentRowNumber38.
    • CommentAuthorUrs
    • CommentTimeMay 19th 2021

    You will have to help me: I don’t see what you are getting at here.

    • CommentRowNumber39.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021

    I just added in the other thread that

    $$p_{\lambda, i} \;=\; \tfrac{\left\vert ssYT_{\lambda}(N)\right\vert}{N^n} \,.$$

    And this is given by the hook-content formula.

    • CommentRowNumber40.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 19th 2021
    • (edited May 19th 2021)

    Oh, so your paper in #33 is quite close. It’s just that they have bundled things up so it’s a distribution on Young diagrams rather than Young tableaux. Still, the latter would be a refinement of the former, so I guess the former provides a lower bound.

    • CommentRowNumber41.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021
    • (edited May 20th 2021)

    Re #39; I know that the eigenvalues are given, in one form, by the hook-content formula over $\chi^{(\lambda)}(e)$ this way – after all, we proved this at some length some days ago and made it section 3.4 of the file. But do you see a way how that helps in computing the entropy? I thought your suggestion to “try staring” at the formula in #37 was hinting at some helpful pattern you spotted?

    Namely, to my mind, what we are after now is an idea for a clever way to plug two chosen forms of our various formulas for the eigenvalues, hence for $p_{\lambda,i}$, into the formula for the entropy

    $$S \;=\; -\underset{\lambda, i}{\sum} p_{\lambda,i} \cdot \ln(p_{\lambda,i})$$

    such that the resulting expression may be reduced in some useful way.

    It seems suggestive that for the first $p_{\lambda,i}$ we choose a “sum-expression” for the eigenvalues (such as our original character formula, or maybe the counting formula for tableaux), while for the second $p_{\lambda,i}$ – the one inside the logarithm – we’d use one of the product formulas; because the logarithm will turn that product into a sum, so that we’d be left with one big nested sum.

    But I don’t see yet how to further simplify that big sum. Maybe another approach is needed.
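    Even without a closed form, the exact value of this sum is cheap to evaluate for small $n$ and $N$ from the tableau-counting form of $p_{\lambda,i}$. A sketch (helper names mine; zero-probability outcomes are skipped):

```python
from math import factorial, log, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def hooks_contents(shape):
    """Hook lengths and contents of the cells of a Young diagram."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    return [shape[i] - j + conj[j] - i - 1 for i, j in cells], [j - i for i, j in cells]

def n_ssyt(shape, N):
    """|ssYT_lambda(N)| by the hook-content formula."""
    hooks, contents = hooks_contents(shape)
    return prod(N + c for c in contents) // prod(hooks)

def n_syt(shape):
    """Number of standard Young tableaux (hook length formula)."""
    hooks, _ = hooks_contents(shape)
    return factorial(sum(shape)) // prod(hooks)

def entropy(n, N):
    """S = -sum p ln p over sYT-labelled outcomes, p = |ssYT_lambda(N)| / N^n."""
    S = 0.0
    for lam in partitions(n):
        p = n_ssyt(lam, N) / N ** n
        if p > 0:  # skip outcomes of probability zero
            S -= n_syt(lam) * p * log(p)
    return S
```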

    • CommentRowNumber42.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021
    • (edited May 20th 2021)

    re #40: Yes, our probability distribution is, up to rescaling, pulled back from one on the set of Young diagrams. Since lots of probability distributions on sets of Young diagrams have been studied, it seems worthwhile to check if the one corresponding to ours is among them.

    I still think that the approach in #36 could subsume ours, namely if we take their $\chi$ to be the Cayley state, hence set $\chi \coloneqq \big\langle - \big\rangle_\beta$. For this to work, it would have to be true that our expansion of the Cayley state into pure states actually coincides with its expansion as a class function into irreducible characters. That seems to be an interesting question in itself, worth settling. It would give a tight link between the quantum-state perspective on the Cayley state and classical character theory.

    For #33 to be relevant, we’d need that their probability distribution corresponds (under pullback) to ours. Are you saying you see that this is the case? We’d need to know that the “dimension of the isotypic component of $S^{\lambda}$ in $(\mathbb{C}^N)^{\otimes n}$” is given by the number of semistandard Young tableaux – is that so?

    • CommentRowNumber43.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021
    • (edited May 20th 2021)

    I doubt an exact approach will proceed very far. As you said somewhere, the largest probability is associated with the single-rowed diagram. That might be treatable.

    At $N = 2$, there are $n+1$ ssYTs, so the probability is $\frac{n+1}{2^n}$.

    • CommentRowNumber44.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021
    • (edited May 20th 2021)

    I said this wrong about pullback: it’s pushforward.

    The pushforward of our probability distribution to the set of Young diagrams is the Schur-Weyl measure!

    That’s more explicit in the way it’s written on slide 76 here

    • CommentRowNumber45.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    Re #42, yes, slide 76 of those notes you mentioned:

    $$N^n \;=\; \sum_{\lambda} \#SYT(\lambda) \cdot \#SSYT(\lambda) \,.$$
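    This identity is easy to confirm mechanically from the hook length and hook-content formulas (helper names mine):

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def hooks_contents(shape):
    """Hook lengths and contents of the cells of a Young diagram."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    return [shape[i] - j + conj[j] - i - 1 for i, j in cells], [j - i for i, j in cells]

def n_ssyt(shape, N):
    """#SSYT(lambda) with entries bounded by N (hook-content formula)."""
    hooks, contents = hooks_contents(shape)
    return prod(N + c for c in contents) // prod(hooks)

def n_syt(shape):
    """#SYT(lambda) (hook length formula)."""
    hooks, _ = hooks_contents(shape)
    return factorial(sum(shape)) // prod(hooks)

# Schur-Weyl dimension count: N^n = sum over diagrams of #SYT * #SSYT
identity_holds = all(
    N ** n == sum(n_syt(lam) * n_ssyt(lam, N) for lam in partitions(n))
    for n in range(1, 7) for N in range(1, 6)
)
```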
    • CommentRowNumber46.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    Messages crossing.

    • CommentRowNumber47.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    Great! I have added a brief note on this to the kernel entry here.

    Now I need to be doing something else this morning. But this is excellent, let’s keep pushing.

    Yes, as you say, this now means that our entropy is bounded below by the entropy of the Schur-Weyl measure. Which seems to have received some attention.

    Maybe we should next ask how the entropy scales asymptotically with $n$ (i.e. with $Sym(n)$).

    • CommentRowNumber48.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021
    • (edited May 20th 2021)

    In the case of #43, the min-entropy, as the negative logarithm of the probability of the most likely outcome, is $-\log\big(\frac{n+1}{2^n}\big)$.

    We could probably do this for general $N$. Ways of arranging up to $N-1$ changes in $n$ boxes. So $\binom{n+N-1}{N-1}$.

    • CommentRowNumber49.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    Do we know if the min-entropy dominates the entropy for large $n$?

    From its title, one would think that the theorem in Mkrtchyan 14 is highly relevant, but I can’t make out yet what the take-home message is. It’s not even clear what it says about actual entropy, since about the constant $H$ that is actually being computed we only learn that (p. 3)

    By analogy [...] suggested to call the constant $H$ the entropy

    • CommentRowNumber50.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    What do you mean by ’dominates’?

    The min-entropy is never greater than the ordinary or Shannon entropy (wikipedia)

    We can see how this probability of the most likely outcome varies: $\frac{1}{N^n} \cdot \binom{n+N-1}{N-1}$.
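    The binomial here is the single-row instance of the hook-content formula (stars and bars), which can be confirmed directly (helper names mine); note that whether the single-row diagram really maximizes the probability is questioned later, in #68:

```python
from math import comb, prod

def hooks_contents(shape):
    """Hook lengths and contents of the cells of a Young diagram."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    return [shape[i] - j + conj[j] - i - 1 for i, j in cells], [j - i for i, j in cells]

def n_ssyt(shape, N):
    """|ssYT_lambda(N)| by the hook-content formula."""
    hooks, contents = hooks_contents(shape)
    return prod(N + c for c in contents) // prod(hooks)

# for the single-row shape (n) the hook-content formula reduces to stars and bars
single_row_ok = all(
    n_ssyt((n,), N) == comb(n + N - 1, N - 1)
    for n in range(1, 9) for N in range(1, 6)
)
```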

    • CommentRowNumber51.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    I mean whether for large $n$ the entropy equals the min-entropy up to sub-leading powers of $n$.

    That’s anyway what I would like to know next: the dominant scaling of the entropy with $n$.

    • CommentRowNumber52.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    By the way, I just see why elsewhere you may have asked me whether I am looking at $N = n$. I had this wrong in the file: the eigenvalues are expressed in terms of $dim(V^{(\lambda)})$ only for $N = n$, I suppose. Have fixed this now (p. 17).

    • CommentRowNumber53.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    … and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the number of outcomes with nonzero probability.

    Number of sYTs with at most $N$ rows.

    • CommentRowNumber54.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    Sounds good; let’s record these facts about our min- and max-entropies!

    But I have to dash now, do something else. Will come back later today.

    (BTW we overlapped, check out a side remark in #52.)

    • CommentRowNumber55.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021
    • (edited May 20th 2021)

    So bounds either side from #50 and #53. But how tight are they?

    For $N = 2$, the min-entropy is $n \log 2 - \log(n+1)$.

    The max-entropy is log of the number of sYTs with at most 2 rows, so hook length formula for that case.

    • CommentRowNumber56.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    Have to dash too. Will check on #52. I keep swapping nn and NN, so easy to get confused.

    • CommentRowNumber57.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    Funny how ideas pop into your head when doing something else. Just to note that if we have $n = N$ and see what happens as this grows, then the max-entropy is easy, as there’s no restriction on the sYTs, so max-entropy $= \log n!$.

    • CommentRowNumber58.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    Which is $n \log n$ to leading order.

    • CommentRowNumber59.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    Interesting. So that actually holds for all $N \geq n$, right?

    • CommentRowNumber60.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    Right.

    • CommentRowNumber61.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    Could do a Stirling approximation on #50, too.

    • CommentRowNumber62.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    I have seen $n \log(n)$-scaling of entropy for black holes in the “microscopic” realm where the geometric description breaks down: very last paragraph here. And in that case it’s also two different parameters that need to scale jointly: the number of branes, and the energy they carry (KK-momentum).

    That’s very vague a relation, of course. But maybe we can spot some such relation, as we proceed.

    • CommentRowNumber63.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021

    The central binomial coefficient $\binom{n}{n/2}$ is approximately $\sqrt{2/(\pi n)} \cdot 2^n$ (here, p. 4).

    So $\log\big(\binom{2n}{n}\big)$ is approximately $2n \log 2$. So the min-entropy for $N = n$ is approximately $n \log n - 2n \log 2$ (the shift to $\binom{2n-1}{n-1}$ can be neglected).

    It looks like the leading order is $n \log n$.
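    That approximation is easy to sanity-check numerically; a sketch (the error of $\log\binom{2n}{n} \approx 2n\log 2 - \tfrac{1}{2}\log(\pi n)$ decays like $1/(8n)$, so it cannot affect the $n \log n$ leading order):

```python
from math import comb, log, pi

def log_error(n):
    """Absolute error of the approximation log C(2n, n) ~ 2n log 2 - (1/2) log(pi n)."""
    exact = log(comb(2 * n, n))  # math.log accepts arbitrarily large integers
    approx = 2 * n * log(2) - 0.5 * log(pi * n)
    return abs(exact - approx)

# the error shrinks roughly like 1/(8n)
errors = {n: log_error(n) for n in (10, 100, 1000)}
```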

    • CommentRowNumber64.
    • CommentAuthorUrs
    • CommentTimeMay 20th 2021

    Thanks! Interesting. Later when I have some time, I’ll try to record these facts on the Cayley kernel page.

    • CommentRowNumber65.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 20th 2021
    • (edited May 21st 2021)

    Perhaps then, working up to order of $\log n$, the entropy for $n = N$ is between $n \log n - 2n \log 2$ and $n \log n - n$.
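    Independently of the asymptotics, the sandwich min-entropy $\leq S \leq$ max-entropy can be verified exactly for small $n = N$ on the tableau-counting distribution. A sketch (helper names mine; the min-entropy is computed from the true maximal probability rather than from an assumed maximizer):

```python
from math import factorial, log, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def hooks_contents(shape):
    """Hook lengths and contents of the cells of a Young diagram."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    return [shape[i] - j + conj[j] - i - 1 for i, j in cells], [j - i for i, j in cells]

def n_ssyt(shape, N):
    """|ssYT_lambda(N)| by the hook-content formula."""
    hooks, contents = hooks_contents(shape)
    return prod(N + c for c in contents) // prod(hooks)

def n_syt(shape):
    """Number of standard Young tableaux (hook length formula)."""
    hooks, _ = hooks_contents(shape)
    return factorial(sum(shape)) // prod(hooks)

def entropies(n, N):
    """(min-entropy, Shannon entropy, max-entropy) of p = |ssYT_lambda(N)|/N^n per sYT."""
    ps = []
    for lam in partitions(n):
        p = n_ssyt(lam, N) / N ** n
        if p > 0:  # only outcomes of non-zero probability count for max-entropy
            ps += [p] * n_syt(lam)
    return -log(max(ps)), -sum(p * log(p) for p in ps), log(len(ps))

sandwich_ok = all(lo <= s <= hi for lo, s, hi in (entropies(n, n) for n in range(2, 6)))
```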

    • CommentRowNumber66.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021

    What I wrote in #57 is wrong. The number of sYTs of size $n$ is not $n!$. There’s an OEIS sequence for this.

    And why was I so sure in #43 that the most likely outcome corresponds to the single-rowed Young diagram?

    Still, either change makes the bounds for Shannon entropy tighter.

    • CommentRowNumber67.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    I, too, thought that it is evident that the Young diagram $\lambda = (n)$ poses clearly the least constraints on its ssYT labels and hence has the largest number of associated ssYTs. It seems pretty clear, but we should think of giving a real proof. (Maybe next week, though, busy now.)

    • CommentRowNumber68.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021
    • (edited May 21st 2021)

    Applying the hook-content formula for diagrams of size 4, with $N = 4$, the diagram $(4)$ has 35 ssYTs, while the diagram $(3,1)$ has 45 ssYTs.
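    These counts are reproduced by a direct implementation of the hook-content formula (helper names mine):

```python
from math import prod

def hooks_contents(shape):
    """Hook lengths and contents of the cells of a Young diagram."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    return [shape[i] - j + conj[j] - i - 1 for i, j in cells], [j - i for i, j in cells]

def n_ssyt(shape, N):
    """|ssYT_lambda(N)| by the hook-content formula."""
    hooks, contents = hooks_contents(shape)
    return prod(N + c for c in contents) // prod(hooks)

# all diagrams of size 4, entries bounded by N = 4
counts = {lam: n_ssyt(lam, 4)
          for lam in [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]}
```

    For size 4 with $N = 4$ the counts come out as 35, 45, 20, 15, 1, so $(3,1)$ indeed beats the single row.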

    So the lower bound in #63 can perhaps be improved.

    As for the max-entropy, OEIS gives a plot of the log of the sequence. Looks suspiciously close to linear in $n$. Strange.

    But marking beckons.

    • CommentRowNumber69.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021

    Of course, the greater the max #ssYTs, the lower the lower bound. Hmm, how to find the $\lambda$ for which #ssYTs peaks?

    • CommentRowNumber70.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    Applying the hook-content formula for diagrams of size 4, with $N = 4$, the diagram $(4)$ has 35 ssYTs, while the diagram $(3,1)$ has 45 ssYTs.

    Hm, okay, thanks, I stand corrected. Apparently I am not thinking about this the right way then.

    I see that a similar question on MO is here.

    • CommentRowNumber71.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    For standard Young tableaux, the question is addressed on the bottom of p. 57 here.

    • CommentRowNumber72.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021

    That upper bound (#68) is probably simplest to work on. That’s the case where $N$ is always large enough.

    Someone might also have figured out #SYTs with at most two rows, for $N = 2$.

    • CommentRowNumber73.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021

    Ah, regarding the $N = 2$ case,

    The total number of SYT of size $n$ and at most $2$ rows is $\binom{n}{\lfloor n/2 \rfloor}$. (Cor. 3.4, arXiv:1408.4497)

    In which case, max-entropy would be approximately $n \log 2$.

    Max-entropy linear in $n$ tallies with the third line in #68.
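    The quoted Cor. 3.4 is easy to confirm numerically, by summing the hook length formula over two-row shapes. A sketch (helper names mine):

```python
# Check: #SYTs of size n with at most 2 rows == binomial(n, floor(n/2)).
# syt_count uses the hook length formula; names are mine.
from math import comb, factorial

def syt_count(lam):
    """Number of standard Young tableaux of shape lam (hook length formula)."""
    conj = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(sum(lam)) // hooks

def syt_at_most_2_rows(n):
    # two-row shapes of n are (n - k, k) for 0 <= k <= n/2
    return sum(syt_count((n - k, k) if k else (n,)) for k in range(n // 2 + 1))

for n in range(1, 13):
    assert syt_at_most_2_rows(n) == comb(n, n // 2)
```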

    • CommentRowNumber74.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    Do you want to be looking at SYTs instead of SSYTs here?

    • CommentRowNumber75.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021

    For max-entropy you need to know the number of outcomes with non-zero probability. Outcomes are given by SYTs.

    If $N = 2$, any SYT with at most 2 rows will have a non-zero number of SSYTs with the same underlying diagram.

    • CommentRowNumber76.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    I see. I’d better shut up until I have more time to concentrate and/or you have notes on what you are puzzling out.

    • CommentRowNumber77.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021
    • (edited May 21st 2021)

    I’m just throwing out things in breaks between marking.

    So we know that for any $n$ and $N$ there’s a distribution over SYTs on diagrams of size $n$, given by the number of SSYTs with entries up to $N$ on that diagram.

    And we know min-entropy is the negative logarithm of the probability of the most likely outcome, and is a lower bound on ordinary entropy.

    Max-entropy is the logarithm of the number of outcomes with non-zero probability, and is an upper bound for ordinary entropy.

    And I’ve been giving some asymptotic values for $n \to \infty$, in the case $N = n$ and the case $N = 2$.

    Hopefully the min- and max-entropy bounds will be sufficiently indicative.

    Are there particular limits of interest?

    • CommentRowNumber78.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    I have been wondering if we might be seeing the chordal version of holographic entanglement entropy discussed here:

    This says that given a round chord diagram and a choice of interval of its boundary, the entanglement entropy represented by this chord diagram is

    $$ \tfrac{1}{2} n \ln(2) \,, $$

    where $n$ is the number of chords with one endpoint in that interval and the other endpoint in its complement.

    I have the vague idea that we are relating to this as follows:

    The single permutations with the least expectation value in the Cayley state are the cyclic permutations.

    We may say that the Cayley state regards these as horizontal chord diagrams with no chords and then closes these to a round chord diagram by attaching all the strands to each other in the form of a circle.

    So let’s regard the cyclic permutations as ground “states” of sorts. From that perspective, every other permutation should be thought of as a cyclic permutation times some permutation. With that picture in mind, we may think of the Cayley state applied to any permutation as producing the round chord diagram which has one chord for every transposition in the corresponding permutation-relative-cyclic permutation.

    In conclusion, there is a perspective where our state actually evaluates on round chord diagrams.

    And maybe we want to think of these as divided into two semicircles.

    So far, all this is just a certain change of perspective. Now comes a speculation:

    Maybe the average number (in some sense) of chords in these round chord diagrams that cross from one half-circle to the other turns out to be $2n$, where $n$ is the number of strands in the horizontal chord diagram.

    If that were the case, and if you find that the entropy of the Cayley state is $\sim n \ln(2)$, then we might be able to interpret that entropy as holographic entanglement entropy.

    Just an idea.

    • CommentRowNumber79.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021
    • (edited May 21st 2021)

    Will try to absorb. Just to check: what role does $N$ play in this account?

    Just continuing with the $N = 2$ picture. Then of course in this situation the intuition that $(n)$ is the most likely SYT is correct, with probability $(n+1)/2^n$.

    So min-entropy is $n \log 2 - \log(n+1)$. So that’s working nicely with the max-entropy of $n \log 2$ (or more precisely $n \log 2 - \frac{1}{2} \log n$) in #73.

    So, ignoring $\log n$ terms, it looks as though $n \log 2$ is the entropy of the Cayley state for $N = 2$.
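    The $N = 2$ picture can be made concrete: a SYT of two-row shape $(n-k, k)$ has $\#SSYT = n - 2k + 1$ (the dimension of the corresponding $\mathfrak{sl}(2)$-irrep), so each such SYT has probability $(n - 2k + 1)/2^n$, and shapes with more than two rows have probability 0. A sketch of the resulting Shannon entropy (helper names mine):

```python
# Shannon entropy (in nats) of the N = 2 distribution over SYTs of size n,
# using f^{(n-k,k)} = C(n,k) - C(n,k-1) (ballot numbers) and
# #SSYT((n-k,k), 2) = n - 2k + 1. Helper names are mine.
from math import comb, log

def entropy_N2(n):
    """Shannon entropy of the distribution over SYTs for N = 2."""
    H = 0.0
    for k in range(n // 2 + 1):
        f = comb(n, k) - (comb(n, k - 1) if k else 0)  # #SYTs of shape (n-k, k)
        p = (n - 2 * k + 1) / 2 ** n                   # probability of each one
        H -= f * p * log(p)
    return H

n = 20
print(entropy_N2(n), n * log(2))  # entropy vs. the conjectured n*log(2) growth
```

    By construction this sits between the min-entropy $n \log 2 - \log(n+1)$ and the max-entropy $\log \binom{n}{\lfloor n/2 \rfloor}$ discussed above.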

    • CommentRowNumber80.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    In all of the interpretations offered in our arXiv:1912.10425 we always find (what in our Cayley notation reads) $e^{\beta} = 2$, nothing else.

    In the discussion there, this keeps coming back to the fact that the Nahm equations which govern brane intersections pick out $\mathfrak{su}(2)$/$\mathfrak{gl}(2)$ Lie algebra structure.

    You may remember that last year, when we embarked (in hindsight) on this project, I was only conjecturing that $w_{(\mathfrak{gl}(2), \mathbf{2})}$ is a quantum state, and didn’t even consider the more general $w_{(\mathfrak{gl}(n), \mathbf{n})}$, until suddenly we had mathematical tools in place to say something about these.

    The actual further parameter that does appear in these stringy considerations is not $e^\beta$ but the dimension of the irrep of $\mathfrak{su}(2)$/$\mathfrak{gl}(2)$. Which means that if we do want to speak about more coincident M5-branes here, we would need to understand the quantum-state nature of the non-fundamental $\mathfrak{gl}(2)$-weight systems, about which we currently have nothing to say.

    • CommentRowNumber81.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    it looks as though $n \log 2$ is the entropy of the Cayley state for $N = 2$.

    That’s really interesting.

    Are you saying in #73 that this is actually specific to $N = 2$, i.e. that you don’t expect the general result to be $n \log(N)$?

    • CommentRowNumber82.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021

    We are in luck

    The total number of SYT of size $n$ and at most 3 rows is… the $n$-th Motzkin number

    Turn to A001006. Looks like a power from the log graph and

    a(n)/a(n-1) tends to 3.0 as N->infinity

    Right, so it’s looking like $n \log 3$ for the max-entropy bound. Would it be disappointing if it just results in $n \log(N)$? Does this perhaps follow from the dimension of the Schur–Weyl set-up, $N^n$?

    By the way

    Motzkin numbers: number of ways of drawing any number of nonintersecting chords joining $n$ (labeled) points on a circle.
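    The quoted fact (A001006: #SYTs of size $n$ with at most 3 rows is the $n$-th Motzkin number) checks out numerically. A sketch (helper names mine):

```python
# Check: #SYTs of size n with at most 3 rows == n-th Motzkin number.
# Helper names are mine, not from any library.
from math import factorial

def syt_count(lam):
    """Number of SYTs of shape lam, by the hook length formula."""
    conj = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(sum(lam)) // hooks

def partitions_max_rows(n, rows, largest=None):
    """Partitions of n into at most `rows` parts, largest part first."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    if rows == 0:
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions_max_rows(n - first, rows - 1, first):
            yield (first,) + rest

def motzkin(n):
    """Motzkin numbers via the standard convolution recurrence."""
    M = [1]
    for m in range(n):
        M.append(M[m] + sum(M[k] * M[m - 1 - k] for k in range(m)))
    return M[n]

for n in range(1, 11):
    assert sum(syt_count(l) for l in partitions_max_rows(n, 3)) == motzkin(n)
```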

    • CommentRowNumber83.
    • CommentAuthorUrs
    • CommentTimeMay 21st 2021

    Fascinating. Also that you spotted the clause “tends to…” on that page!

    Incidentally, since $\beta = 1/T$, this would mean that $S = n/T$.

    • CommentRowNumber84.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 21st 2021
    • (edited May 21st 2021)

    Just noting this:

    $S^4_{2n-1} = C_n \cdot C_n$, with $C_n = (2n)!/(n!\,(n+1)!)$ the $n$-th Catalan number. Think that suggests $S^4 \sim n \log 4$. Needs checking.
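    The noted identity (with $S^4_m$ the number of SYTs of size $m$ with at most 4 rows) can be checked for small $n$. A sketch (helper names mine):

```python
# Check: S^4_{2n-1} == C_n * C_n, where S^4_m counts SYTs of size m with
# at most 4 rows and C_n is the n-th Catalan number. Helper names mine.
from math import comb, factorial

def syt_count(lam):
    """Hook length formula for the number of SYTs of shape lam."""
    conj = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(sum(lam)) // hooks

def partitions_max_rows(n, rows, largest=None):
    """Partitions of n into at most `rows` parts."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    if rows == 0:
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions_max_rows(n - first, rows - 1, first):
            yield (first,) + rest

def S(N, m):
    """Number of SYTs of size m with at most N rows."""
    return sum(syt_count(lam) for lam in partitions_max_rows(m, N))

catalan = lambda n: comb(2 * n, n) // (n + 1)

for n in range(1, 7):
    assert S(4, 2 * n - 1) == catalan(n) ** 2
```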

    • CommentRowNumber85.
    • CommentAuthorUrs
    • CommentTimeMay 22nd 2021

    Looks like you discovered a new pattern in the counting of SYTs.

    It’s remarkable that people seem to have entirely different formulas/approaches for each $N$ for counting SYTs with at most $N$ rows.

    • CommentRowNumber86.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 22nd 2021

    I’m thinking maybe ultimately it’s just a question of a coarsening of the uniform distribution over the set of size

    $$ N^n = \sum_{\lambda} \#SYT(\lambda) \cdot \#SSYT(\lambda) \,. $$

    And the coarsening’s not enough to take one far from that distribution, so that the entropy is approximately $\log(N^n)$.

    • CommentRowNumber87.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 23rd 2021
    • (edited May 23rd 2021)

    Further max-entropy values, from SYTs of height $\leq N = 5, 6, 7$:

    a(n) ~ 3 * 5^(n+5)/(8 * Pi *n^5) A049401

    a(n) ~ 3/4 * 6^(n+15/2)/(Pi^(3/2)*n^(15/2)) A007579

    a(n) ~ 45/32 * 7^(n+21/2)/(Pi^(3/2)*n^(21/2)) A007578

    So, reading off the exponents $5$, $15/2$, $21/2$ above, it looks like approximately $n \log(N) - \frac{N(N-1)}{4} \log(n)$. (Tallies also with the $N = 2$ case in #79.)

    • CommentRowNumber88.
    • CommentAuthorDavidRoberts
    • CommentTimeMay 23rd 2021
    • (edited May 23rd 2021)

    (deleted)

    • CommentRowNumber89.
    • CommentAuthorUrs
    • CommentTimeMay 23rd 2021

    Oh, so you see a leading $\mathcal{O}(N^2)$-term at fixed $n$?!

    That would be excellent.

    Would love to dive into this now, but am absorbed otherwise. I hope to be able to get back to this next week.

    • CommentRowNumber90.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 23rd 2021

    Well, those are approximations of max-entropy for large $n$ at fixed values of $N$. I don’t think that tells us about the other limit direction.

    • CommentRowNumber91.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 24th 2021
    • (edited May 24th 2021)

    Maybe I’m confusing you with my $n$ and $N$. I’ve been using $n$ for the size of the Young diagram and $N$ for the highest number to be used in filling in an SSYT.

    To sum up, regarding max-entropy derived from the logarithm of the number of SYTs, fixing $N$, it’s looking like this is approximately $n \log(N) - \frac{N(N-1)}{4} \log(n)$.

    If we want a result for $n = N$, then we use A000085 (where it is claimed that a(n) ~ sqrt(2)/2 * exp(sqrt(n) - n/2 - 1/4) * n^(n/2) * (1 + 7/(24*sqrt(n))), suggesting $(n/2) \log n$).

    As for min-entropy, we need, for each $n$, to find the $\lambda$ at which #SSYT peaks. We know this typically won’t be the diagram $(n)$, although it is for $N = 2$, where min-entropy is $n \log 2 - \log(n+1)$.
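    As a cross-check on invoking A000085 here: by the RSK correspondence, the involution numbers also count all SYTs of size $n$, which is what makes them usable for the $n = N$ max-entropy bound. A sketch (helper names mine):

```python
# Check: sum of #SYT(lam) over all partitions lam of n equals A000085(n),
# the number of involutions, via a(n) = a(n-1) + (n-1)*a(n-2).
# Helper names are mine.
from math import factorial

def syt_count(lam):
    """Hook length formula for the number of SYTs of shape lam."""
    conj = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(sum(lam)) // hooks

def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def involutions(n):
    """A000085 via the recurrence a(n) = a(n-1) + (n-1)*a(n-2)."""
    a, b = 1, 1  # a(0), a(1)
    for m in range(2, n + 1):
        a, b = b, b + (m - 1) * a
    return b if n >= 1 else a

for n in range(1, 9):
    assert sum(syt_count(lam) for lam in partitions(n)) == involutions(n)
```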

    • CommentRowNumber92.
    • CommentAuthorUrs
    • CommentTimeMay 24th 2021

    You are not confusing me. I just said that it’s interesting to see a square term now. That would be interesting in either of the two variables. But the way it comes out maybe gives us a hint now as to the interpretation (e.g. D-brane entropy goes with the square of their numbers). I am looking forward to investigating further, but still occupied otherwise for the time being…

    • CommentRowNumber93.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 24th 2021

    For a fixed $n$, the number of SYTs limited to $N$ rows of course reaches its limit at $N = n$, but perhaps we learn something from the term.

    • CommentRowNumber94.
    • CommentAuthorUrs
    • CommentTimeMay 25th 2021

    Here is an observation (from slide 7 here):

    The “topological” zero-point contribution to entropy due to “topological order”/”long-range entanglement of the ground state” (references now here) of 2d quantum materials with

    • $n$ disconnected boundaries

    • $\mathbb{Z}/N$ gauge symmetry

    is thought to go as:

    $$ - n \ln(N) \,. $$

    More generally, their entanglement entropy with respect to a subregion bounded by a curve of length $L$ is thought to go as

    $$ \alpha L \;-\; n \ln(N) \,. $$

    • CommentRowNumber95.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 25th 2021

    Does that $L$ make any sense in the context of chord diagrams?

    • CommentRowNumber96.
    • CommentAuthorUrs
    • CommentTimeMay 25th 2021

    It seems to naturally suggest itself as the length of the Ryu–Takayanagi curve (called $\gamma_A$ in the two graphics here).

    But to sort this out more unambiguously we would probably need to pass from the plain entropy of the Cayley state to its more general entanglement entropy with respect to choices of “subsystems”.

    From the chord diagrammatics for the holographic entanglement entropy (still as in the discussion there) it would seem suggestive that a “subsystem” here should be a subset of the set $\{1, \cdots, n\}$ of strands, hence of elements that the symmetric group acts on, and hence should correspond to a choice of subgroup inclusion

    $$ Sym(n-k) \hookrightarrow Sym(n) \,. $$

    That, in turn, would clearly be natural also from the representation-theoretic perspective.

    So maybe we should ask for a generalization of the entropy of the Cayley state that depends on such subgroup inclusions.

    • CommentRowNumber97.
    • CommentAuthorUrs
    • CommentTimeMay 25th 2021

    By the way, speaking of confusion: I admit needing to sort out the sign in $S = \pm n \ln(N) + \cdots$ in the above, which matters.

    • CommentRowNumber98.
    • CommentAuthorDavidRoberts
    • CommentTimeMay 25th 2021
    • CommentRowNumber99.
    • CommentAuthorDavid_Corfield
    • CommentTimeMay 25th 2021

    With the equivalent of series A003040 for $GL(n)$ rather than $S_n$, it would be possible to say something about max-entropy.

    • CommentRowNumber100.
    • CommentAuthorUrs
    • CommentTimeMay 25th 2021

    re #98: I’d think it really should be entanglement entropy: instead of computing the entropy of the Cayley state, we’d first “trace out” the span of the subgroup $Sym(n-k) \hookrightarrow Sym(n)$ and then compute the vN entropy of what remains.