Elsewhere we proved that all fundamental weight systems are quantum states. Now we want to find the decomposition of these quantum states as mixtures of pure states and compute the resulting probability distribution.
Specifically, I suppose the pure states in the mixture are labelled by Young diagrams, and so we should get a god-given probability distribution on the set of Young diagrams (of n boxes, for the fundamental gl(N)-weight system). It’s bound to be something classical (probably the normalized numbers of some sort of Young tableaux), but it will still be interesting to know that this classical notion is in fact the probability distribution secretly encoded by fundamental weight systems.
I had started to sketch out how to approach this over here in the thread on the Cayley distance kernel. Now I’d like to flesh this out.
The broad strategy is to first consider the group algebra as the star-algebra of observables, with the Cayley distance kernel regarded as a state on this algebra, and then find its convex decomposition into pure states using representation theory.
Once this is done, the result for the actual weight systems on the algebra of horizontal chord diagrams should follow immediately by pullback along .
It seems straightforward to get a convex decomposition of the fundamental-weight-system quantum state into “purer states” this way. I am still not sure how to formally prove that the evident purer-states are already pure, but that will probably reveal itself in due time.
Eventually this should go to its own entry. For the moment I started making unpolished notes in the Sandbox.
In fact, the pure states must be labelled not just by a partition/Young diagram , but also by , at least:
The elements
span a left ideal , and the direct sum of all these ideals decomposes , whence the projection operators onto these ideals commute with all left multiplication operators in the group algebra. Since they also commute with the Cayley distance kernel (being projections onto direct sums of its eigenspaces) and are self-adjoint (which follows by grand Schur orthogonality), the computation shown (currently) in the Sandbox gives that
are states on , and that we have a decomposition of the Cayley kernel state
as
for non-negative real numbers
It feels intuitively clear that the above decomposition
into left ideals is the finest possible, and that this implies that the states are pure. But I don’t have a formal proof of this yet. This ought to be some elementary argument, though.
Only coming in rather briefly here, since my stack of marking is going down very slowly. So to show that these states are pure, we just need to know they can’t be non-extreme convex combinations.
Not sure if it would help, but does the fact that the dimension of an irrep equals the number of standard YTs of that Young diagram matter? I.e., instead of labelling by , label by an sYT.
Right. While I don’t see yet how it helps concretely, at the very least it optimizes the notation.
But now that you say it, it makes me wonder: Maybe that’s what relates to the seminormal basis for those Jucys-Murphy elements, which are also labelled by sYTs?
Hm, how in fact is that seminormal basis (here) not undercounting the dimension of : For each there is a -dimensional subspace of , not just -dimensional.
So I guess those need to carry a further degeneracy index?
That undercounting was in the back of my mind too.
Should we be talking about ’Young symmetrizers’?
A SYT of shape has an associated group algebra element , called the Young symmetrizer. It has a key role: it is an idempotent, and its principal right ideal is an irreducible module of . (arXiv:1408.4497)
Of course that MO question was looking at Young symmetrizers.
Wasn’t the answer there enough anyway?
you can now prove that is a complete set of pairwise orthogonal primitive idempotents in .
is a multiple of the Young symmetrizer for .
Thanks, right, that’s the first statement we need: that these projectors are “a complete set”, which we need to mean a maximal such set.
I wonder if the idea is that this is all contained in Okounkov&Vershik.
So what would it take to show it’s a maximal set? Isn’t the word ’primitive’ enough? That means the idempotent can’t be decomposed into a sum of orthogonal non-zero idempotents.
Hmm, this answer suggests Young symmetrizers are not necessarily orthogonal. What I wrote at the end of #7 isn’t right, is it.
So where do people discuss that construction of #7?
Yes, true, “complete set of primitive projectors” means what we need it to mean. Still, it needs a proof.
This seems to be from pp. 505-506 of
Thanks! I’ll extract that as a note to the Lab now.
(It’s most curious how star-algebras show up now in the “seminormal” Gelfand-Tsetlin basis! But still need to absorb the exact role they play here.)
In the Intro he points back to earlier work:
In [9, 10], this last step was simplified by the demonstration that the seminormal basis constitutes a complete set of eigenfunctions of a certain set of commuting elements of the group algebra, denoted {L,,}; moreover, the corresponding projection operators comprise a complete set of primitive idempotents.
G. E. Murphy, A new construction of Young’s seminormal representation of the symmetric groups, J. Algebra 69 (1981), 287–297.
G. E. Murphy, On the idempotents of the symmetric group and Nakayama’s conjecture, J. Algebra 81 (1983), 258–264.
Thanks again. So that settles the question about the completeness of the ideal decomposition: Even without going into the details of Murphy’s construction, the fact alone that he finds the same number of ideals for each eigenvector of the Cayley distance kernel (namely many) implies that the decomposition I gave in the Sandbox must be maximal/primitive, too.
Now it just remains to see how this implies that the corresponding states in the Sandbox are pure! Once we have this we just need to compute and we are done with producing the desired probability distribution.
For proving that is a pure state: We probably want to argue that every state defines an ideal, and that pure states correspond to the minimal ideals. Hm…
So that settles the question
Actually, the same conclusion follows more elementarily with the fact that any complex group algebra is the direct sum of endomorphism algebras of all irreps. E.g. Fulton&Harris Prop. 3.29; will make a note at group algebra.
Is it not the case that
a primitive idempotent defines a pure state (https://arxiv.org/abs/1202.4513) ?
Yes, must be true. I was hoping to see a proof that directly checks the impossibility of non-trivial convex combination. But I suppose the standard proof is to invoke the GNS construction and observe that under this correspondence a primitive idempotent becomes a projector onto a 1-dimensional subspace of the Hilbert space.
Okay, good!, so we are reduced now to the last of three steps: We just have to compute the probabilities
Presumably our Cayley density matrix, C, is some sum , and so , for in the seminormal basis.
Is this seminormal basis constructed anywhere?
So once we have the actual formula for these primitive idempotent elements in group algebra, these probabilities are their exponentiated Cayley distance from the neutral element.
But then these are just the eigenvalues of , no?
I’d still think it’s that eigenvalue times the coefficient of the neutral element in the idempotent! No?
Just reflecting back on here, I see I was grouping sYTs with the same diagram. When separating them out, we’d have .
Shouldn’t be guessing like this, but on borrowed time. So conjecture that for sYT, , of shape , then .
That sounds plausible!
I am thinking now: it should be easy to guess the expansion of the neutral element in our eigenvector basis. Must be some normalized sum over absolutely all indices of , by Schur orthogonality. But once we have this, we can read off the component in each eigenspace. Multiplying by the eigenvalue, that’s the answer.
Yes, so here is an expansion of the neutral element, by Schur orthogonality
and this implies that
Which would mean that
As a cross-check: this finally does sum to unity, by the previous discussion here!
That ends up the same as my #23, I think.
Good. So what next, asymptotics?
Before we do anything else, let’s compute the entropy as a function of !
(Read “as a function of – exclamation mark” :-)
Do we only care about ?
Just need to wield the hook-content formula. Or maybe the easier proof by Google.
I’d foremost care about and next about .
What are you going to google for, here?
Something closely related is considered here:
Also around (2.6) in:
A good search term seems to be “random Young diagram”. I see various probability distributions on sets of Young diagrams being discussed (Plancherel-, Schur-Weyl-, Jack-, and also some without name). I haven’t seen our Cayley distribution discussed yet.
Hm, maybe this here subsumes our probability distribution:
In (1.1) and (1.7) they consider probability distributions on Young diagrams given by the coefficients of the expansion of any given class function into irreducible characters.
But our Cayley state is a class function (by the non-manifest right-invariance of the Cayley distance, which becomes manifest under Cayley’s formula).
Might its expansion into pure states that we found be just the expansion of that class function into irreducible characters? I don’t see it right now, but it sounds plausible. If so, we’d connect to the above article.
It’s amazing what people get up to. Can’t see anything quite right.
Perhaps try staring at the hook-content theorem for .
You will have to help me: I don’t see what you are getting at here.
I just added other thread that
And this is given by the hook-content formula.
Oh, so your paper in #33 is quite close. It’s just that they have bundled things up so it’s a distribution on Young diagrams rather than Young tableaux. Still, the latter would be a refinement of the former, so I guess the former provides a lower bound.
Re #39: I know that the eigenvalues are given, in one form, by the hook-content formula over this way – after all, we proved this at some length some days ago and made it section 3.4 of the file. But do you see a way that this helps in computing the entropy? I thought your suggestion to “try staring” at the formula in #37 was hinting at some helpful pattern you spotted?
Namely, to my mind, what we are after now is an idea for a clever way to plug in two chosen forms of our various formulas for the eigenvalues, hence for , into the formula for the entropy
such that the resulting expression may be reduced in some useful way.
It seems suggestive that for the first we choose a “sum-expression” for the eigenvalues (such as our original character formula, or maybe the counting formula for tableaux) while for the second – the one inside the logarithm – we’d use one of the product formulas; because the logarithm will turn that product into a sum so that we’d be left with one big nested sum.
But I don’t see yet how to further simplify that big sum. Maybe another approach is needed.
re #40: Yes, our probability distribution is, up to rescaling, pulled back from one on the set of Young diagrams. Since lots of probability distributions on sets of Young diagrams have been studied, it seems worthwhile to check if the one corresponding to ours is among them.
I still think that the approach in #36 could subsume ours, namely if we take their to be the Cayley state, hence set . For this to work it would have to be true that our expansion of the Cayley state into pure states actually coincides with its expansion as a class function into irreducible characters. That seems to be an interesting question in itself, worth settling. It would give a tight link between the quantum-state-perspective on the Cayley state and classical character theory.
For #33 to be relevant, we’d need that their probability distribution corresponds (under pullback) to ours. Are you saying you see that this is the case? We’d need to know that the “dimension of the isotypic component of in ” is given by the number of semistandard Young tableaux – is that so?
I doubt an exact approach will proceed very far. As you said somewhere, the largest probability is associated with the single-rowed diagram. That might be treatable.
At N = 2, there are n+1 ssYTs, so the probability is (n+1)/2^n.
I said this wrong about pullback: it’s pushforward.
The pushforward of our probability distribution to the set of Young diagrams is the Schur-Weyl measure!
That’s more explicit in the way it’s written on slide 76 here
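If it is indeed the Schur-Weyl measure, this is easy to check numerically: with f^λ the number of standard and s_λ(1^N) the number of semistandard tableaux (hook length and hook-content formulas), the assignment λ ↦ f^λ · s_λ(1^N) / N^n should sum to unity over the partitions of n with at most N rows. A minimal sketch in Python (the function names are my own, not from any of the references):

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def cells_hooks(lam):
    """List of ((i, j), hook length) over the cells of the diagram lam."""
    conj = [sum(1 for r in lam if r > c) for c in range(lam[0])]
    return [((i, j), lam[i] - j + conj[j] - i - 1)
            for i in range(len(lam)) for j in range(lam[i])]

def num_syt(lam):
    """Hook length formula: number of standard Young tableaux of shape lam."""
    prod = 1
    for _, h in cells_hooks(lam):
        prod *= h
    return factorial(sum(lam)) // prod

def num_ssyt(lam, N):
    """Hook-content formula: semistandard tableaux with entries <= N."""
    num = den = 1
    for (i, j), h in cells_hooks(lam):
        num *= N + j - i   # N + content of the cell
        den *= h
    return num // den      # always an exact division

def schur_weyl_measure(n, N):
    """P(lam) = #SYT(lam) * #SSYT_N(lam) / N^n over partitions with <= N rows."""
    return {lam: Fraction(num_syt(lam) * num_ssyt(lam, N), N ** n)
            for lam in partitions(n) if len(lam) <= N}
```

For n = 3, N = 2 this gives probability 1/2 each to (3) and (2,1), with (1,1,1) excluded for having more than N rows.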
Re #42, yes slide 76 of those notes you mentioned:
Messages crossing.
Great! I have added a brief note on this to the kernel entry here.
Now I need to be doing something else this morning. But this is excellent, let’s keep pushing.
Yes, as you say, this now means that our entropy is bounded below by the entropy of the Schur-Weyl measure. Which seems to have received some attention.
Maybe we should next ask for how the entropy scales asymptotically with (i.e. ).
In the case of #43, min-entropy, as negative logarithm of the probability of the most likely outcome, is .
We could probably do this for general . Ways of arranging up to changes in boxes. So .
Do we know if the min-entropy dominates the entropy for large ?
From its title, one would think that the theorem in Mkrtchyan 14 is highly relevant, but I can’t make out yet what the take-home message is. It’s not even clear what it says about actual entropy, since about the constant that is actually being computed we only learn that (p. 3)
By analogy suggested to call the constant the entropy
What do you mean by ’dominates’?
The min-entropy is never greater than the ordinary or Shannon entropy (wikipedia)
We can see how this probability of the most likely outcome varies: .
I mean whether for large the entropy equals the min-entropy up to sub-leading powers of .
That’s anyways what I would like to know next: The dominant scaling of the entropy with .
By the way, I just see why elsewhere you may have asked me whether I am looking at . I had this wrong in the file: The eigenvalues are expressed in terms of only for , I suppose. Have fixed this now (p. 17).
… and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the number of outcomes with nonzero probability.
Number of sYTs with at most N rows.
Sounds good; let’s record these facts about our min- and max-entropies!
But I have to dash now, do something else. Will come back later today.
(BTW we overlapped, check out a side remark in #52.)
So bounds either side from #50 and #53. But how tight are they?
For N= 2, the min-entropy is .
The max-entropy is log of the number of sYTs with at most 2 rows, so hook length formula for that case.
Have to dash too. Will check on #52. I keep swapping and , so easy to get confused.
Funny how ideas pop into your head when doing something else. Just to note that if we have N = n and see what happens as this grows, then the max-entropy is easy, as there’s no restriction on the sYTs, so max-entropy .
Which is to leading order.
Interesting. So that actually holds for all , right?
Right.
Could do a Stirling approximation on #50, too.
I have seen -scaling of entropy for black holes in the “microscopic” realm where the geometric description breaks down: very last paragraph here. And in that case it’s also two different parameters that need to scale jointly: the number of branes, and the energy they carry (KK-momentum)
That’s very vague a relation, of course. But maybe we can spot some such relation, as we proceed.
The central binomial coefficient is approximately (here, p.4).
So is approximately . So min-entropy for N = n, is approximately (the shift to can be neglected).
It looks like the leading order is .
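For reference, the approximation being quoted (the link is elided above) is presumably the standard central-binomial asymptotic C(2n, n) ≈ 4^n / √(πn), with relative error of order 1/(8n); a quick numerical sanity check:

```python
from math import comb, pi, sqrt

# Standard asymptotic for the central binomial coefficient (an assumption
# about which formula the elided reference states):
#   C(2n, n) ~ 4^n / sqrt(pi * n), relative error about 1/(8n)
n = 200
exact = comb(2 * n, n)
approx = 4 ** n / sqrt(pi * n)
assert abs(exact / approx - 1) < 1 / (4 * n)
```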
Thanks! Interesting. Later when I have some time, I’ll try to record these facts on the Cayley kernel page.
Perhaps then working up to order of , the entropy for is between and .
What I wrote in #57 is wrong. The number of sYTs of size is not . There’s an OEIS sequence for this.
And why was I so sure in #43 that the most likely outcome corresponds to the single-rowed Young diagram?
Still, either change makes the bounds for Shannon entropy tighter.
I, too, thought it evident that the single-row Young diagram clearly poses the least constraints on its ssYT labels and hence has the largest number of associated ssYTs. It seems pretty clear, but we should think of giving a real proof. (Maybe next week, though, busy now.)
Applying the hook-content formula for diagrams of size 4, with N = 4, the diagram (4) has 35 ssYTs, while the diagram (3,1) has 45 ssYTs.
So the lower bound in #63 can perhaps be improved.
As for the max-entropy, OEIS gives a plot of the log of the sequence. Looks suspiciously close to linear in . Strange.
But marking beckons.
Of course, the greater max #ssYTs, the lower the lower bound. Hmm, how to find the for which #ssYTs peaks?
Hm, okay, thanks, I stand corrected. Apparently I am not thinking about this the right way then.
I see that a similar question on MO is here.
For standard Young tableaux, the question is addressed on the bottom of p. 57 here.
That upper bound (#68) is probably simplest to work on. That’s the case where is always large enough.
Someone might also have figured out #sYTs with at most two rows, for .
Ah, regarding the case,
The total number of SYT of size n and at most 2 rows is the central binomial coefficient C(n, ⌊n/2⌋). (Cor 3.4, arXiv:1408.4497)
In which case, max-entropy would be approximately .
Max-entropy linear in n tallies with the third line in #68.
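That count is easy to verify for small n by a direct ballot-sequence walk (a sketch; the function name is mine): a SYT with at most 2 rows is built by inserting 1, …, n one at a time, with row 2 never overtaking row 1.

```python
from math import comb

def syt_at_most_2_rows(n):
    """Count SYTs of size n with at most 2 rows by a ballot-sequence walk.

    State: d = (length of row 1) - (length of row 2). Each new entry goes
    into row 1 (d -> d+1) or, provided row 2 is strictly shorter (d > 0),
    into row 2 (d -> d-1).
    """
    counts = {0: 1}
    for _ in range(n):
        new = {}
        for d, c in counts.items():
            new[d + 1] = new.get(d + 1, 0) + c
            if d > 0:
                new[d - 1] = new.get(d - 1, 0) + c
        counts = new
    return sum(counts.values())

# cross-check against the central binomial coefficient C(n, floor(n/2))
assert all(syt_at_most_2_rows(n) == comb(n, n // 2) for n in range(1, 15))
```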
Do you want to be looking at SYTs instead of SSYTs here?
For max-entropy you need to know the number of outcomes with non-zero probability. Outcomes are given by SYTs.
If N = 2, any SYT with at most 2 rows will have a non-zero number of SSYTs with the same underlying diagram.
I see. I’ll better shut up until I have more time to concentrate and/or you have notes on what you are puzzling out.
I’m just throwing out things in breaks between marking.
So we know that for any n and N there’s a distribution over SYTs of diagrams of size n, given by the number of SSYTs with entries up to N on that diagram.
And we know min-entropy is the negative logarithm of the most likely outcome, and is a lower bound of ordinary entropy.
Max-entropy is logarithm of the number of outcomes with non-zero probability, and is an upper bound for ordinary entropy.
And I’ve been giving some asymptotic values for in the case that and the case that .
Hopefully the min- and max-entropy bounds will be sufficiently indicative.
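For definiteness, here are the three entropies just listed, in natural logarithms, together with the sandwich inequality min ≤ Shannon ≤ max on a sample distribution (a hypothetical sketch; the function names are mine):

```python
from math import log

def shannon_entropy(p):
    """Ordinary (Shannon) entropy of a probability vector."""
    return -sum(x * log(x) for x in p if x > 0)

def min_entropy(p):
    """Negative logarithm of the probability of the most likely outcome."""
    return -log(max(p))

def hartley_entropy(p):
    """Max- (Hartley) entropy: log of the number of outcomes with p > 0."""
    return log(sum(1 for x in p if x > 0))

# sample distribution: the sandwich min <= Shannon <= max should hold
p = [8 / 15, 4 / 15, 2 / 15, 1 / 15]
assert min_entropy(p) <= shannon_entropy(p) <= hartley_entropy(p)
```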
Are there particular limits of interest?
I have been wondering if we might be seeing the chordial version of holographic entanglement entropy discussed here:
This says that given a round chord diagram and a choice of interval of its boundary, the entanglement entropy represented by this chord diagram is
where is the number of chords with one endpoint in that interval, and the other endpoint in its complement.
I have the vague idea that we are relating to this as follows:
The single permutations with the least expectation value in the Cayley state are the cyclic permutations.
We may say that the Cayley state regards these as horizontal chord diagrams with no chords and then closes these to a round chord diagram by attaching all the strands to each other in the form of a circle.
So let’s regard the cyclic permutations as ground “states” of sorts. From that perspective, every other permutation should be thought of as a cyclic permutation times some permutation. With that picture in mind, we may think of the Cayley state applied to any permutation as producing the round chord diagram which has one chord for every transposition in the corresponding permutation-relative-cyclic permutation.
In conclusion, there is a perspective where our state actually evaluates on round chord diagrams.
And maybe we want to think of these as divided into two semicircles.
So far, all this is just a certain change of perspective. Now comes a speculation:
Maybe the average number (in some sense) of chords in these round chord diagrams that cross from one half-circle to the other turns out to be , where is the number of strands in the horizontal chord diagram.
If that were the case, and if you find that the entropy of the Cayley state is , then we might be able to interpret that entropy as holographic entanglement entropy.
Just an idea.
Will try to absorb. Just to check, what role does play in this account?
Just continuing with the picture. Then of course in this situation the intuition that is the most likely SYT is correct, with probability .
So min-entropy is . So that’s working nicely with the max-entropy of (or more precisely ) in #73.
So ignoring , it looks as though is the entropy of the Cayley state for .
In all of the interpretations offered in our arXiv:1912.10425 we always find (what in our Cayley notation reads) , nothing else.
In the discussion there, this keeps coming back to the fact that the Nahm equations which govern brane intersections pick / Lie algebra structure.
You may remember that last year, when we embarked (in hindsight) on this project, I was only conjecturing that is a quantum state, didn’t even consider the more general , until suddenly we had mathematical tools in place to say something about these.
The actual further parameter that does appear in these stringy considerations is not but the dimension of the irrep of . Which means that if we do want to speak about more coincident M5-branes here, we would need to understand the quantum-state nature of the non-fundamental -weight systems – which currently we have nothing to say about.
it looks as though is the entropy of the Cayley state for .
That’s really interesting.
Are you saying in #73 that this is actually specific to , i.e. that you don’t expect the general result to be ?
We are in luck
The total number of SYT of size n and at most 3 rows is… the n-th Motzkin number
Turn to A001006. Looks like a power from the log graph and
a(n)/a(n-1) tends to 3.0 as N->infinity
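Both claims are easy to check numerically: the ≤3-row SYT count can be computed by a walk on the pair of row-length differences, the Motzkin numbers by counting up/flat/down lattice paths, and the growth ratio does approach 3 (a sketch, with my own function names):

```python
def motzkin(n):
    """n-th Motzkin number via up/flat/down paths that never go below 0."""
    row = [1]  # counts of paths by current height
    for _ in range(n):
        new = [0] * (len(row) + 1)
        for h, c in enumerate(row):
            new[h] += c          # flat step
            new[h + 1] += c      # up step
            if h > 0:
                new[h - 1] += c  # down step
        row = new
    return row[0]  # paths returning to height 0

def syt_at_most_3_rows(n):
    """Count SYTs of size n with <= 3 rows; state = (row1-row2, row2-row3)."""
    counts = {(0, 0): 1}
    for _ in range(n):
        new = {}
        for (d1, d2), c in counts.items():
            moves = [(d1 + 1, d2)]                # append to row 1
            if d1 > 0:
                moves.append((d1 - 1, d2 + 1))    # append to row 2
            if d2 > 0:
                moves.append((d1, d2 - 1))        # append to row 3
            for s in moves:
                new[s] = new.get(s, 0) + c
        counts = new
    return sum(counts.values())

assert all(syt_at_most_3_rows(n) == motzkin(n) for n in range(1, 12))
# the growth ratio approaches 3, as the OEIS A001006 comment says
assert 2.85 < motzkin(60) / motzkin(59) < 3.0
```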
Right, so looking like for the max-entropy bound. Would that be disappointing if it just results in ? Does this perhaps follow from the dimension of the Schur-Weyl set-up, ?
By the way
Motzkin numbers: number of ways of drawing any number of nonintersecting chords joining (labeled) points on a circle.
Fascinating. Also that you spotted the clause “tends to…” on that page!
Incidentally, since this would mean that .
Just noting this:
, . Think that suggests . Needs checking.
Looks like you discovered a new pattern in counting of SYTs.
It’s remarkable that people seem to have entirely different formulas/approaches for each for counting SYTs with at most rows.
I’m thinking maybe ultimately it’s just a question of a coarsening of the uniform distribution over the set of size
And the coarsening’s not enough to take one far from that distribution, so that entropy is approximately .
(deleted)
Oh, so you see a leading -term at fixed ?!
That would be excellent.
Would love to dive into this now, but am absorbed otherwise. I hope to be able to get back to this next week.
Well those are approximations of max-entropy for large at fixed values of . Don’t think it tells us about the other limit direction.
Maybe I’m confusing you with my and . I’ve been using for the size of the Young diagram and for the highest number to be used filling in a SSYT.
To sum up, regarding max-entropy derived from the logarithm of the number of SYTs, fixing , it’s looking like this is approximately .
If we want a result for , then we use A000085, (where it is claimed that a(n) ~ sqrt(2)/2 * exp(sqrt(n)-n/2-1/4) * n^(n/2) * (1 + 7/(24*sqrt(n))), suggesting ).
As for min-entropy, we need for to find the where #SSYT peaks. We know this typically won’t be the diagram , although it is for , where min-entropy is .
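For that unrestricted case: A000085 counts involutions, which by RSK equals the total number of SYTs of size n, and the recurrence a(n) = a(n-1) + (n-1)·a(n-2) makes it cheap to compute; it can be cross-checked by brute force for small n (a sketch; the function names are mine):

```python
from itertools import permutations

def num_syt_total(n):
    """A000085: total number of SYTs of size n = number of involutions,
    via the recurrence a(n) = a(n-1) + (n-1) * a(n-2)."""
    a, b = 1, 1  # a(0), a(1)
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b

def brute_involutions(n):
    """Directly count permutations p of {0,...,n-1} with p o p = identity."""
    return sum(all(p[p[i]] == i for i in range(n))
               for p in permutations(range(n)))

assert all(num_syt_total(n) == brute_involutions(n) for n in range(1, 8))
```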
You are not confusing me. I just said that it’s interesting to see a square term now. That would be interesting in either of the two variables. But the way it comes out maybe gives us a hint now as to the interpretation (e.g. D-brane entropy goes with the square of their numbers). I am looking forward to investigating further, but still occupied otherwise for the time being…
For a fixed , the number of SYTs limited to rows of course reaches its limit at , but perhaps we learn something from the term.
Here is an observation (from slide 7 here):
The “topological” zero-point contribution to entropy due to “topological order”/”long-range entanglement of the ground state” (references now here) of 2d quantum materials with
disconnected boundaries
-gauge symmetry
is thought to go as:
More generally, their entanglement entropy with a subregion bounded by a curve of length is thought to go as
Does that make any sense in the context of chord diagrams?
It naturally suggests itself as the length of the Ryu-Takayanagi curve (called in the two graphics here).
But to sort this out more unambiguously we would probably need to pass from the plain entropy of the Cayley state to its more general entanglement entropy with respect to choices of “subsystems”.
From the chord diagrammatics for the holographic entanglement entropy (still as in the discussion there) it would seem suggestive that a “subsystem” here should be a subset of the set of strands, hence of the elements that the symmetric group acts on, hence should correspond to a choice of subgroup inclusion
That, in turn, would clearly be natural also from the representation-theoretic perspective.
So maybe we should ask for a generalization of the entropy of the Cayley state that depends on such subgroup inclusions.
By the way, speaking of confusion: I admit needing to sort out the sign in in the above, which matters.
Re #96 something like https://en.wikipedia.org/wiki/Quantum_relative_entropy, or even just the classical https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence? Just a random guess…
With the equivalent of series A003040 for rather than , it would be possible to say something about max-entropy.
re #98: I’d think it really should be entanglement entropy: Instead of computing the entropy of the Cayley state, we’d first “trace out” the span of the subgroup and then compute the vN entropy of what remains.