Are we entering the world of skew tableaux?
Aha, that’s a plausible possibility!
Checking this is essentially straightforward, though it requires some work: using our discussion at Cayley state on the group algebra we have its “density matrix” realization, and we “just” need to work out a tractable expression for the partial trace of that over the span of a Sym-subgroup inclusion.
Maybe next week…
Finally finding time to look into some of these counting formulas for the number $\left\vert sYT_n(N)\right\vert$ of standard Young tableaux with $n$ boxes and at most $N$ rows. So coming back to around #87:
Looking now at the literature, I see that an asymptotic formula for $\left\vert sYT_n(N)\right\vert$ is the main result of Regev 81, whose theorem 2.10 says that:
$\begin{aligned} \left\vert sYT_n(N) \right\vert & \; \overset{ n \to \infty }{\sim} \; \underset{ \gamma_N }{ \underbrace{ (2 \pi)^{ -(N-1)/2 } \cdot N^{ N^2 / 2 } } } \cdot n^{ - (N - 1)(N + 2)/4 } \cdot N^{ n } \cdot n^{ (N - 1)/2 } \cdot \underset{ \mathclap{ x_1 \geq \cdots \geq x_N } }{\int} \;\;\;\;\; \underset{ D(x_1, \cdots, x_N) }{ \underbrace{ \underset{i \lt j}{\prod} \big( x_i - x_j \big) } } e^{ - N \left\vert x\right\vert^2 /2 } d x_1 \cdots d x_{N} \\ & \;=\; (2 \pi)^{ -\tfrac{1}{2}( N - 1 ) } \cdot N^{ \tfrac{1}{2} N^2 + n} \cdot n^{ - \tfrac{1}{4}( N^2 - N ) } \cdot \underset{ \mathclap{ x_1 \geq \cdots \geq x_N } }{\int} \;\;\;\;\; \underset{i \lt j}{\prod} \big( x_i - x_j \big) e^{ - N \left\vert x\right\vert^2 /2 } d x_1 \cdots d x_{N} \end{aligned} \,.$Here under the braces in the first line I am indicating Regev’s notation, which I have expanded out (using the definitions given on the bottom of his p. 3), and in the second line I have collected exponents (the combined power of $n$ being $-\tfrac{1}{4}(N^2 + N - 2) + \tfrac{1}{2}(N-1) = -\tfrac{1}{4}(N^2 - N)$).
This would mean that the leading contributions in $N$ to the max-entropy of the Cayley state in the limit of large $n$ should be
$S_0\big(p^{Cayley}\big) \;\;\underset{n \to \infty}{\sim}\;\; \frac{1}{2} N^2 \ln(N) - \frac{1}{4}N^2 \ln(n) + \mathcal{O}(N)$Ah, the last conclusion is not quite correct, as there is $N$-dependence also in the integration domain of the last factor. Since ${\vert x \vert}^2 \;=\; \sum_{i = 1}^N x_i^2 \;\sim\; \mathcal{O}(N)$, this last term probably gives one more contribution at order $N^2$ and independent of $n$.
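As a cross-check on what is being counted here, $\left\vert sYT_n(N)\right\vert$ can be computed exactly for small $n$ by summing the hook length formula over partitions of $n$ with at most $N$ parts. A minimal Python sketch (function names are my own); for $N = 3$ this reproduces the Motzkin numbers, and for $N \geq n$ the number of involutions in $Sym(n)$:

```python
from math import factorial

def partitions(n, max_parts, max_val=None):
    """Yield the partitions of n with at most max_parts parts, weakly decreasing."""
    if max_val is None:
        max_val = n
    if n == 0:
        yield ()
        return
    if max_parts == 0:
        return
    for first in range(min(n, max_val), 0, -1):
        for rest in partitions(n - first, max_parts - 1, first):
            yield (first,) + rest

def hook_count(la):
    """Number of standard Young tableaux of shape la, by the hook length formula."""
    n = sum(la)
    conj = [sum(1 for r in la if r > j) for j in range(la[0])]  # conjugate partition
    prod = 1
    for i, row in enumerate(la):
        for j in range(row):
            prod *= (row - j) + (conj[j] - i) - 1  # hook length of the cell (i, j)
    return factorial(n) // prod

def sYT(n, N):
    """|sYT_n(N)|: standard Young tableaux with n boxes and at most N rows."""
    return sum(hook_count(la) for la in partitions(n, N))
```

For instance, `sYT(4, 2)` gives 6 and `sYT(4, 4)` gives 10, the number of involutions in $Sym(4)$.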
We may compute the order of the integral term in #103 by decomposing it into an integral over a sector of a unit sphere (which gives a constant) times a Gaussian moment:
$\int_0^{\infty} R^{ N(N-1)/2 } e^{ - N R^2 /2 } \, d R \;\propto\; (N^{-1/2})^{ N(N - 1)/2 } \;=\; N^{ - N(N - 1)/4 }$Hope I have the factors right.
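For what it’s worth, this half-line Gaussian moment has the closed form $\int_0^{\infty} R^k e^{-N R^2/2}\, d R = \tfrac{1}{2}\, (2/N)^{(k+1)/2}\, \Gamma\big(\tfrac{k+1}{2}\big)$, so with $k = N(N-1)/2$ it scales as $N^{-N(N-1)/4}$ up to a subleading factor $N^{-1/2}$, which doesn’t affect the $N^2$-order exponents being tracked here. A quick numerical sketch (helper names are mine) confirming the closed form and the scaling:

```python
from math import gamma, exp, sqrt

def radial_moment(k, N, dr=1e-4):
    """Midpoint-rule estimate of the integral of R^k * exp(-N R^2 / 2) over R >= 0."""
    rmax = 10.0 / sqrt(N)  # the integrand is negligible beyond ~10 standard deviations
    total = 0.0
    r = 0.5 * dr
    while r < rmax:
        total += r ** k * exp(-N * r * r / 2)
        r += dr
    return total * dr

def radial_moment_exact(k, N):
    """Closed form: (1/2) * (2/N)^((k+1)/2) * Gamma((k+1)/2), so proportional to N^(-(k+1)/2)."""
    return 0.5 * (2 / N) ** ((k + 1) / 2) * gamma((k + 1) / 2)
```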
Therefore, it looks like we get a max-entropy at large $n$ of the form
$S_0(p^{Cayley}) \;\;\underset{n \to \infty}{\sim}\;\; a_0 + a_2(n) \cdot N + a_4(n) \cdot N^2 + a'_4(n) \cdot N^2 \ln(N) \,,$Curiously, this would be the form of the entropy of Yang-Mills theory with $N$ colors (e.g. (1.1) on p. 3 in arXiv:2105.02101) – the point being that besides plain powers of $N$, the leading contribution $\sim N^2 \ln(N)$ is, in both cases, the second power of $N$ times the logarithm of $N$.
I’ll stop doing incremental computations here in the comments, and instead put the computation into the entry (here).
Following through along the above lines, I now get
$\begin{aligned} \left\vert sYT_n(N) \right\vert & \; \overset{ n \to \infty }{\sim} \; const \cdot (2 \pi)^{ -\tfrac{1}{2}( N - 1 ) } \cdot N^{ \tfrac{1}{4} (N^2 + N) + \tfrac{1}{2} n} \cdot n^{ - \tfrac{1}{4}( N^2 - N ) } \end{aligned}$Adding one more to the list
a(n) ~ 3 * 5^(n+5)/(8 * Pi *n^5) A049401
a(n) ~ 3/4 * 6^(n+15/2)/(Pi^(3/2)*n^(15/2)) A007579
a(n) ~ 45/32 * 7^(n+21/2)/(Pi^(3/2)*n^(21/2)) A007578
a(n) ~ 135/16 * 8^(n+14)/(Pi^2*n^14) A007580
So you’d imagine it was
a(n) ~ $const \cdot N^{\,n+\frac{N(N-1)}{4}} \big/ \big(\pi^{c(N)}\cdot n^{\frac{N(N-1)}{4}}\big)$
So $\log a(n) \;\sim\; n \log N + \frac{N(N-1)}{4} \log N - \frac{N(N-1)}{4} \log n$.
Hmm, not quite tallying with yours.
That $\tfrac{1}{2} n$ in your exponent of $N$ should just be $n$. Fixed that on the page.
Getting closer.
So why do you have $\tfrac{1}{4} (N^2 + N)$ where I have $\tfrac{1}{4} (N^2 - N)$?
Is it that my ’const’ is $N$-dependent?
And my powers of $\pi$ are going up by $1/2$ every other step. That will be because there’s a gamma function contributing a $\sqrt{\pi}$ for odd arguments.
Those estimates in #107 are due to Václav Kotěšovec who conjectures:
$a_N(n) \sim \frac{N^n}{\pi^{N/2}} \cdot \biggl( \frac{N}{n} \biggr)^{\frac{N(N-1)}{4}} \cdot \prod_{j = 1}^N \Gamma \biggl( \frac{j}{2} \biggr) .$But F.4.5.1 of the paper by Regev you mention in #105 concerns this value and looks slightly different. He also has a product of gamma function values at half-integers.
$a_N(n) \sim \frac{N^n}{\pi^{N/2}} \cdot \biggl( \frac{N}{n} \biggr)^{\frac{N(N-1)}{4}} \cdot \prod_{j = 1}^N \Gamma \biggl(1+ \frac{j}{2} \biggr) \cdot 2^N \cdot \frac{1}{N!}.$Might be the same. (The shift in those two gamma terms would make for an extra $\frac{N!}{2^N}$.)
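They should indeed be the same: a quick numerical sketch (Python, function names mine) checks that the two prefactors agree, and that they reproduce the constants in the Kotěšovec asymptotics quoted in #107:

```python
from math import gamma, pi, factorial

def prefactor_kotesovec(N):
    """Prefactor in a_N(n) ~ pref * N^n / n^(N(N-1)/4):
    pi^(-N/2) * N^(N(N-1)/4) * prod_{j=1}^N Gamma(j/2)."""
    prod = 1.0
    for j in range(1, N + 1):
        prod *= gamma(j / 2)
    return pi ** (-N / 2) * N ** (N * (N - 1) / 4) * prod

def prefactor_regev(N):
    """Same, but with the shifted product Gamma(1 + j/2) and the extra 2^N / N!."""
    prod = 1.0
    for j in range(1, N + 1):
        prod *= gamma(1 + j / 2)
    return pi ** (-N / 2) * N ** (N * (N - 1) / 4) * prod * 2 ** N / factorial(N)
```

For $N = 5$ the first prefactor evaluates to $3 \cdot 5^5/(8\pi)$, matching the A049401 line above, and similarly for $N = 6, 7, 8$.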
In any case we’d need to know how the product of gamma functions varies with $N$. Perhaps the hint to use the Barnes G-function helps. I got to some multiple of $N^2 \ln N$ with it.
Continuing #109, Kotěšovec writes that
$\prod_{j = 1}^N \Gamma \biggl( \frac{j}{2} \biggr) = \frac{G(N/2 + 1)G(N/2 + 1/2)}{G(1/2)},$where $G$ is the Barnes G-function.
(14) there suggests that the greatest term in $\log G(1+z)$ is $\frac{1}{2} z^2 \log z$.
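The quoted Barnes-G identity can be checked numerically using only the recursion $G(z+1) = \Gamma(z)\, G(z)$ with $G(1) = 1$; the constant $G(1/2)$ cancels out of the ratio, so it never needs to be evaluated. A sketch, with helper names my own:

```python
from math import gamma

def G_shift(z0, m):
    """G(z0 + m) / G(z0), by iterating the recursion G(z + 1) = Gamma(z) * G(z)."""
    prod = 1.0
    for k in range(m):
        prod *= gamma(z0 + k)
    return prod

def gamma_half_product(N):
    """Left-hand side: prod_{j=1}^N Gamma(j/2)."""
    prod = 1.0
    for j in range(1, N + 1):
        prod *= gamma(j / 2)
    return prod

def barnes_rhs(N):
    """Right-hand side: G(N/2 + 1) * G(N/2 + 1/2) / G(1/2), using G(1) = 1."""
    m = N // 2
    if N % 2 == 0:
        # G(m+1)/G(1) times G(m+1/2)/G(1/2)
        return G_shift(1, m) * G_shift(0.5, m)
    # N odd: G(N/2 + 1) = G(m + 3/2) and G(N/2 + 1/2) = G(m + 1)
    return G_shift(1, m) * G_shift(0.5, m + 1)
```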
Thanks for the replies! And for spotting F.4.5.1 in Regev, I hadn’t seen that.
So I must have been making a mistake in extracting the powers of $N$ ($= l$) from Regev’s F.2.10. But I don’t see my mistake yet – this here was my reasoning:
There are $N(N-1)/2$ factors in $D = \underset{i \lt j}{\prod} (\cdots)$. With each factor proportional to $R$, these give a global factor of $R^{N (N-1)/2}$ in the integrand of a Gaussian integral over $R \in \mathbb{R}_{\geq 0}$ with standard deviation $\sigma = N^{-1/2}$, which hence yields “half” a Gaussian moment $\propto (N^{-1/2})^{N(N-1)/2} = N^{-\tfrac{1}{4}(N^2 - N)}$. Multiplied with the $N^{\tfrac{1}{2}N^2}$ hidden in Regev’s “$\gamma$”, this yields $N^{ \tfrac{1}{4}( N^2 + N ) }$. Or so it seems. (?)
Well aren’t I seeing (#111) an extra $\sim \frac{1}{2}(N/2)^2 \log(N/2) + \frac{1}{2}((N-1)/2)^2 \log((N-1)/2) \sim \frac{1}{4} N^2 \log N$?
We would need it to be twice this.
Oh, I see, right. (On the other hand, what “we need” would be a term $\tfrac{1}{2} N \ln(N)$, not $\tfrac{1}{2} N^2 \ln(N)$, no?)
But maybe better to go with Regev’s theorem than with Kotěšovec’s conjecture: To Regev’s product of Gamma functions the Legendre relation applies, which should be helpful.
I was going on your
Multiplied with the $N^{\tfrac{1}{2}N^2}$ hidden in Regev’s “$\gamma$”…
so adding on $\tfrac{1}{2}N^2 \ln(N)$.
But maybe better to go with Regev’s theorem than with Kotěšovec’s conjecture
Regev’s F 4.5.1 is equal to Kotěšovec’s conjecture, right? The former’s product is shifted so we lose a $\Gamma(1/2)$ and $\Gamma(1)$ and gain $\Gamma(1+ N/2)$ and $\Gamma(1 + (N-1)/2)$. The first two make $\sqrt{\pi}$. The latter two are this multiplied by a product of half-integers to $N/2$ which is $N!/2^N$.
The latter fact is just an instance of the Legendre relation you just mentioned, for $z = (N+1)/2$.
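That instance of the Legendre relation, $\Gamma\big(1 + \tfrac{N}{2}\big)\, \Gamma\big(1 + \tfrac{N-1}{2}\big) = \sqrt{\pi}\, N! / 2^N$, is easy to confirm numerically (a minimal sketch, names mine):

```python
from math import gamma, sqrt, pi, factorial

def gained_gammas(N):
    """Gamma(1 + N/2) * Gamma(1 + (N-1)/2): the two factors gained by the index shift."""
    return gamma(1 + N / 2) * gamma(1 + (N - 1) / 2)

def legendre_value(N):
    """sqrt(pi) * N! / 2^N, from Gamma(z) Gamma(z + 1/2) = 2^(1-2z) sqrt(pi) Gamma(2z)
    at z = (N+1)/2."""
    return sqrt(pi) * factorial(N) / 2 ** N
```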
This product of $\Gamma$ functions at half-integers is a product of these products taken at odd and then even half-integers. That’s why Kotěšovec has them in terms of the Barnes $G$-function, which is just an extended form of the superfactorial.
I just meant that this “hidden” factor made the $N^2 \ln(N)$-term come out as expected, but left the $N \ln(N)$-term with the wrong sign. Anyways, as you observed, all this is beside the point, as there are more $N^k \ln(N)$-terms hidden in the “combinatorial prefactors”, which I hadn’t appreciated.
Regev’s F 4.5.1 is equal to Kotěšovec’s conjecture, right?
Oh, okay, I hadn’t seen this yet.
Maybe in Regev’s form the Gauss multiplication formula lends itself more naturally than the Barnes G-function to getting all the terms, but I haven’t worked it out yet. Need to go offline now for a bit.
Vague idea: I was wondering whether the ’Lorentzian’ qualifier in Lorentzian polynomial should point us to something physics-related, when I recalled this thread from last year.
Since we were looking at probability distributions over Young tableaux, it’s interesting to see that Lorentzian polynomials (which are used to establish log-concavity of sequences, $a_k^2 \geq a_{k+1}a_{k-1}$, and so unimodality of sequences, as explained here) are cropping up in this area, as in
They’re touching on some physics directly in