I have a question on that categorified Gram-Schmidt process here.
From an exchange with Simon Burton, I get the impression that we’d like to compute the first matrix there by using the multiplication table of the virtual permutation reps. I suppose one argues as follows:
The representation category presumably is compact closed for all fields (all fields?).
This means that we may identify the tensor product of a dual permutation rep with another permutation rep, hence their Burnside ring product, as the internal hom between them;
what we really want is the external hom, because that’s the “2-inner product”. But the external hom is the space of G-invariants in the internal hom.
So what one can do to get the matrix of 2-inner products of permutation reps via the Burnside ring multiplication table is the following:
find for all transitive permutation reps their dual rep. Maybe they are always self-dual?
form the table of Burnside products of the (dualized) transitive $G$-actions and think of it as the table of their internal homs, by the above,
find the space of invariants in there by observing that every transitive $G$-action contributes precisely a 1d space of invariants (namely the span of the sum of all basis elements – is that right?), so that we get a new table whose entries are the numbers of copies of transitive $G$-actions appearing in the previous table.
Then we apply row-reduction to that.
Is that right?
I hope you don’t mind if I try to simplify the description of what is going on.
Yes, you are right that we are trying to get our hands on the external hom $\hom_G(V, W)$ between two left $k[G]$-modules $V, W$, just as a vector space. I’m assuming $G$ is a finite group. First, if $char(k) = 0$, then by Maschke’s theorem all exact sequences in the category of left $k[G]$-modules split, so that every module is projective. If $V$ is in addition finite-dimensional, then $\hom_G(V, -)$ preserves all colimits, so that by general free cocompletion nonsense, we have

$$\hom_G(V, W) \;\cong\; \hom_G(V, k[G]) \otimes_{k[G]} W$$

where $\hom_G(V, k[G])$ is the right module whose structure comes from the right action of $k[G]$ on itself (here $k[G]$ is short-hand for the group algebra). With a little work one can show $\hom_G(V, k[G])$ is the linear dual $V^*$ with the right module structure inherited from the left module structure on $V$.
Diagrammatically speaking, I find it helpful to situate ourselves in the bicategory of groups and bimodules/profunctors between them, so that a left $G$-module is an arrow $1 \to G$ and a right module is an arrow $G \to 1$. Then the tensor product over $k[G]$ of a right module $G \to 1$ with a left module $1 \to G$ is just the arrow composite $1 \to G \to 1$, a bimodule over the trivial group $1$, i.e., an ordinary vector space. The external hom can be viewed as a right Kan extension of $W \colon 1 \to G$ along $V \colon 1 \to G$, yielding an arrow $1 \to 1$. In particular, $\hom_G(V, W) \cong V^* \otimes_{k[G]} W$.
Before linearizing, $G\,Set$ is a locally connected topos, so each object is a coproduct of connected objects which are given by left coset spaces $G/H$, where $H$ is a subgroup of $G$. Linearization is a left adjoint, and so preserves coproducts, so for finite-dimensional representations, it suffices to understand $\hom_G(k[G/H], k[G/K])$. I claim that $k[G/H]^* \cong k[H\backslash G]$, where we linearize the right coset space $H\backslash G$ with the right $G$-action. A nice way to look at this is to write $G/H \cong G \otimes_H 1$, where we view the set $G$ as a bimodule $H \to G$ and the terminal set $1$ as a bimodule $1 \to H$, and we compose these arrows. Thus

$$k[G/H]^* \;\cong\; \big(k[G] \otimes_{k[H]} k\big)^* \;\cong\; k \otimes_{k[H]} k[G]\,,$$

which is $k[H\backslash G]$.
Thus the calculations boil down to looking at $k[H\backslash G] \otimes_{k[G]} k[G/K]$. The right side is the linearization applied to an evident coequalizer in $Set$:

$$H\backslash G \otimes_G G/K \;=\; \mathrm{coeq}\Big( H\backslash G \times G \times G/K \;\rightrightarrows\; H\backslash G \times G/K \Big)\,,$$

where the coequalizer is the space $H\backslash G/K$ of double cosets. So really the calculations on the Gram-Schmidt page come down to combinatorial (set-theoretic) counting of such double cosets.
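Since the dimension counts thus reduce to counting double cosets, this is easy to put on a machine. Here is a minimal Python sketch (an illustration only, not the program mentioned later in this thread), with $G = S_3$ encoded concretely as permutations and with one choice of subgroups $H$, $K$:

```python
from itertools import permutations

# Counting double cosets H\G/K, which by the above computes
# dim hom_G(k[G/H], k[G/K]). Here G = S_3, with elements encoded
# as tuples of images of (0, 1, 2).

G = list(permutations(range(3)))          # all 6 elements of S_3
H = [(0, 1, 2), (1, 0, 2)]                # a subgroup of order 2
K = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]     # the subgroup of order 3

def compose(p, q):
    """(p o q)(x) = p(q(x))"""
    return tuple(p[q[x]] for x in range(len(p)))

def num_double_cosets(H, G, K):
    seen, count = set(), 0
    for g in G:
        if g not in seen:
            count += 1          # g represents a new double coset HgK
            seen.update(compose(h, compose(g, k)) for h in H for k in K)
    return count

print(num_double_cosets(H, G, K))   # 1, matching dim hom_{S_3}(k[S_3/C_2], k[S_3/C_3])
```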
Thanks, Todd! That’s nice.
Especially nice that only characteristic zero is needed, not algebraic closedness.
Actually, part of your question in #2
find for all transitive permutation reps their dual rep. Maybe they are always self-dual?
caught my interest. In the sorts of calculations I’m describing, I have generally found it “hygienic” to maintain the distinction between left and right modules, but of course their categories are equivalent (even isomorphic) by the group inversion isomorphism $g \mapsto g^{-1} \colon G \to G^{op}$, and so indeed we can always ask whether modules are self-dual.
It seems undeniable that group inversion sends right cosets to left cosets, so that we have a group inversion isomorphism $k[H\backslash G] \cong k[G/H]$, and we can say that $k[G/H]$ is indeed self-dual. As every permutation representation is a coproduct of reps of this type, we can say every permutation representation is self-dual.
But perhaps more interestingly, I think this gives a necessary criterion for when $\beta \colon A(G) \to R(G)$ is surjective: every $G$-rep must be self-dual. That’s not true for all finite groups, but it’s true for a good class of them. In the case of symmetric groups, if we consider the character $\chi$ of a rep, it follows from basic character theory that the character of the dual rep is the complex conjugate $\bar{\chi}$, where we have $\bar{\chi}(g) = \chi(g^{-1})$, and since $g, g^{-1}$ belong to the same conjugacy class in $S_n$ (have the same cycle type), the characters are the same, and we get self-duality.
Thanks again, Todd!
So this was the core of my question in #2: I was trying to understand if, instead of starting with the matrix of dims of external homs, as you do, we could start
1) with the Burnside ring product matrix, whose $(i,j)$-entry is

$$X_i \otimes X_j \;\simeq\; \underset{k}{\sqcup}\, n_{i j}{}^k \cdot X_k$$

(where the tensor product is that of $G\,Set$, i.e. the Cartesian product of underlying $G$-sets, hence where the $n_{i j}{}^k \in \mathbb{N}$ are the structure constants of the Burnside ring in the basis $\{X_k\}$ of transitive $G$-sets),

2) or rather with the resulting matrix of total multiplicities, whose $(i,j)$-entry is

$$M_{i j} \;\coloneqq\; \underset{k}{\sum}\, n_{i j}{}^k\,.$$
This matrix would coincide with “your” matrix if

1) all $k[X_i]$ are self-dual in $Rep_k(G)$,

2) $Rep_k(G)$ is compact closed,

because then we would have

$$\dim \hom_G\big(k[X_i],\, k[X_j]\big) \;=\; \dim\Big( \big(k[X_i] \otimes k[X_j]\big)^G \Big) \;=\; \underset{k}{\sum}\, n_{i j}{}^k \;=\; M_{i j}\,.$$
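For a concrete instance: take $G = S_3$ and $X_i = X_j = S_3/C_2$. In $G\,Set$ the diagonal and its complement give

$$S_3/C_2 \,\times\, S_3/C_2 \;\simeq\; S_3/C_2 \,\sqcup\, S_3/e\,,$$

so the total multiplicity is $M_{i j} = 1 + 1 = 2$; and indeed (in characteristic zero) $k[S_3/C_2]$ is the direct sum of the trivial and the 2-dimensional irrep, so that $\dim\hom_{S_3}\big(k[S_3/C_2], k[S_3/C_2]\big) = 2$, matching the count $\vert C_2 \backslash S_3 / C_2 \vert = 2$ of double cosets.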
So by $Rep_k(G)$ I guess you mean the smc category where the tensor product is the ordinary tensor product of the underlying vector spaces, where we use the diagonal comultiplication on $k[G]$ to define the $G$-action on the tensor product:

$$g \cdot (v \otimes w) \;=\; g v \otimes g w\,.$$
This is surely compact closed: the forgetful functor $Rep_k(G) \to Vect_k$ reflects isomorphisms, so all one has to do is check that the canonical linear map $V^* \otimes W \to \hom(V, W)$ respects the $G$-actions (and it surely does).
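Spelled out, for the record: the canonical map is $c \colon \phi \otimes w \mapsto (v \mapsto \phi(v)\, w)$, and the equivariance check is the one-line computation

$$c\big(g \cdot (\phi \otimes w)\big)(v) \;=\; \phi(g^{-1} v)\; g w \;=\; \big(g \cdot c(\phi \otimes w)\big)(v)\,,$$

using that $g$ acts on $V^*$ by $(g \cdot \phi)(v) = \phi(g^{-1} v)$ and on $\hom(V, W)$ by $(g \cdot f)(v) = g\, f(g^{-1} v)$.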
And then of course $\hom_G(k, -)$ is the representable functor for the $G$-invariants functor $(-)^G$, so that allows us to go from internal homs to external homs: $\hom_G(V, W) \cong \hom(V, W)^G$.
But I guess I’m a little confused by the role played by self-duality in your question. If you drop the self-duality assumption but start your general construction instead with

$$M_{i j} \;=\; \dim\Big( \big( k[X_i]^* \otimes k[X_j] \big)^G \Big)\,,$$
then you and I still wind up at the same place.
ah, great! Thanks.
Right, I could of course just add in the duality by hand. Right now my motivation was this:
Simon Burton has a computer program that reads in a finite group, and then spits out the multiplication table of its Burnside ring, then its associated matrix of total multiplicities, and then performs the row-reduction, all by itself. From eyeballing the output of that program in various examples, it seems clear that it computes exactly that categorified Gram-Schmidt thing. So I wanted to understand why that works!
(But if indeed all permutation reps are self-dual, that is good to know and make use of.)
The only remaining question I have then is: What happens when we are not working over an algebraically closed field? The algorithm still applies (row reduction etc.), but are we guaranteed that we still read off exact multiplicities of irreps inside permutation reps then?
(Need to go offline now, thanks for chatting about this.)
Quick comment from my phone:
I see now that the self-duality of permutation reps is just what is being exhibited by the Hecke operators: they are the units of the (self-)duality.
That should be nice.
And with those keywords in hand (self-dual, Hecke) things fall into place, e.g. https://qchu.wordpress.com/2015/11/07/hecke-operators-are-also-relative-positions/#more-19764
But now to bed…
Yes. Normally though I think of “Hecke operator” as just referring to maps in $\hom_G(k[G/H], k[G/K])$. It’s all good though. :-)
(I remember when you and I first met, I was trying to talk about these things in light of the groupoidification project. I learned a lot of it through conversation with Jim Dolan, and there is indeed some lovely mathematics here. I ought to try and record more of it in the nLab.)
All right, thanks.
For the longest time the “groupoidification” program looked like idle entertainment to me, but now that I understand the importance for fundamental physics of computing the image of equivariant stable cohomotopy in equivariant complex K-theory, I have a different feeling for it.
But now to bed… :-)
Hi,
if I may, I have another question on the “categorified Gram-Schmidt process”.
So I would like to mechanically compute, as much as possible, the image of beta
To start with, I’ll be content with having an algorithm that just outputs “yes” if beta is surjective, and “no” otherwise.
(In the following, I’ll assume for the moment that the ground field is algebraically closed.)
As we discussed, we have an algorithm that, given $G$, outputs the square matrix $M$ over $\mathbb{N}$
of “total multiplicities” of the Burnside product, equivalently of Schur inner products, of the corresponding permutation reps.
Now, we want to apply row reduction to this, to extract the multiplicities of irreps. But just saying “row reduction” is not quite sufficient, is it, because we also want to satisfy the constraint that the result still has coefficients in $\mathbb{N}$.(?)
After looking around a bit, I am thinking that we really want to say that we bring $M$ to its Hermite normal form

$$H \;=\; U \cdot M$$

via an invertible integer matrix $U$. That’s our row reduction.
In general, I gather that Hermite normal form only ensures that the “pivot” entries are positive, not that all non-zero entries are non-negative. But since Hermite normal form is unique, and since we do know that our permutation reps must have non-negative inner product with the irreps, it must be that the Hermite normal form of matrices as considered here is guaranteed to have entries in $\mathbb{N}$.
If this is right so far, then it seems we can continue as follows:
Next, strip off the block of zero rows from $H$. Call the result $\hat{H}$.
This $\hat{H}$ should now actually be the matrix that represents $\beta$, with respect to the canonical bases on both sides.
Finally, pass to the Smith normal form of $\hat{H}$, the closest approximation to representing $\beta$ by a diagonal integer matrix.
If I didn’t get myself mixed up here, then $\beta$ is surjective precisely if all the non-vanishing entries of this Smith normal form are equal to $1$.
Does that sound right?
In summary, I am thinking that the following algorithm should work (for a code sketch see below):

0) start with the matrix $M$ of Burnside multiplicities;

1) compute the Hermite normal form $H$ of $M$;

2) call $\hat{H}$ the result of deleting the zero-rows from $H$;

3) compute the Smith normal form $S$ of $\hat{H}$;

4) $\beta$ is surjective iff all non-vanishing entries of $S$ are $1$.
Does that sound right?
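For concreteness, here is a minimal Python sketch of steps 0)–4), assuming sympy for the normal forms and hard-coding $M$ for the example $G = S_3$ (rather than computing it from the group); since sympy’s `hermite_normal_form` works by column operations, we apply it to the transpose:

```python
from sympy import Matrix
from sympy.matrices.normalforms import hermite_normal_form, smith_normal_form

# Step 0): the matrix M of Burnside multiplicities. Hard-coded example:
# G = S_3, transitive G-sets ordered as S_3/S_3, S_3/C_3, S_3/C_2, S_3/e.
M = Matrix([
    [1, 1, 1, 1],
    [1, 2, 1, 2],
    [1, 1, 2, 3],
    [1, 2, 3, 6],
])

# Step 1): Hermite normal form. sympy's routine reduces by column
# operations, so transpose, reduce, and transpose back for row reduction.
H = hermite_normal_form(M.T).T

# Step 2): delete zero rows (defensively; sympy may already drop them).
H_hat = Matrix([row for row in H.tolist() if any(row)])

# Step 3): Smith normal form of the stripped matrix.
S = smith_normal_form(H_hat)

# Step 4): beta is surjective iff all non-vanishing entries of S equal 1.
print(all(x == 1 for x in S if x != 0))   # True for G = S_3
```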
Sorry: before we go down this road too far, I want to check some things with you.
I was under the impression that $A(G)$ and $R(G)$ were rings, not just rigs. (They are both obtained by applying the ringification functor to rigs.) Not so?
If so, then I’m not sure why we’re worried about coefficients being in $\mathbb{N}$ (where you put a question mark).
Sure, they are rings. But we are after expressing beta not in any bases, but in bases such that it is guaranteed to have matrix coefficients in $\mathbb{N}$, specifically in the bases of transitive $G$-sets and linear irreps.
If we start with $M$ and apply just any row reduction, we would end up with a matrix over $\mathbb{Z}$, signalling that we failed to discover the expansion of the transitive $G$-sets in terms of their irrep multiplicities.
But we know from the nature of $M$ that there must be one way to row-reduce that produces a matrix in $\mathbb{N}$. Furthermore, from uniqueness of Hermite forms, this must be essentially unique. And so that must be how we deduce that we really transformed from the basis of transitive $G$-sets to the basis of linear irreps, instead of to some other basis.
It looks from what I said in #14 that for just determining the surjectivity or not of beta, one could just compute the Smith normal form of $M$ directly. But the intermediate Hermite normal form will determine the dimensions of the irreps, thus will help to identify the image of beta in case that it is not surjective.
Urs, let me put it a different way. Looking at Hermite normal form, I get a sense we might have the same thing in mind, but there’s a slight communication gap.
Row reduction in linear algebra boils down to starting with a matrix and left-multiplying by elementary matrices in a certain way until we reach an echelon form. If we are working over any commutative ring $R$, then “elementary matrices” are defined to be matrices of the following form:
$I + \lambda e_{i j}$ (with $i \gt j$), where $\lambda e_{i j}$ is a strictly lower triangular matrix with a single nonzero entry $\lambda$, whose left-multiplying effect is to add $\lambda$ times the $j$th row to the $i$th row,
a permutation matrix, which we could take to be a transposition matrix whose effect is to swap two adjacent rows,
a diagonal matrix all of whose entries are invertible in $R$ (WLOG, assume all but one is $1$).
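For instance, over $R = \mathbb{Z}$ with $2 \times 2$ matrices, the three types are exemplified by

$$\begin{pmatrix} 1 & 0 \\ \lambda & 1 \end{pmatrix}\,, \qquad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\,, \qquad \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\,,$$

which respectively add $\lambda$ times the first row to the second, swap the two rows, and rescale the first row by the unit $-1$ (over $\mathbb{Z}$ the only units are $\pm 1$).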
In the case $R = \mathbb{Z}$, we’re never going to pop out of $\mathbb{Z}$ to $\mathbb{Q}$ entries this way (so I didn’t understand why you brought this up). The third type of elementary matrix has just $1$’s and $-1$’s down the diagonal. In general, if you start with a matrix over $R$ and left-multiply by elementary matrices defined this way, you will end with a matrix over $R$.
Now it could be that you wish to disallow the third type of elementary matrix, the diagonal ones. It’s true that whenever I’ve tried to apply the categorified Gram-Schmidt process, I’ve never had to make use of that third type. But if you want to disallow it, then I’d like to understand the theoretical grounds for why. Or, to put it more positively, why we don’t actually need it (which I have some small faith is the case, but I don’t know the reason).
Sorry if I’m being dense.
I am happy to admit that I haven’t thought about Diophantine linear algebra before, and need a moment of reflection on the gcd-techniques needed to force row reduction to stay integral. Once one thinks about it, it’s obvious (e.g. here), as it goes.
But is there really an integer version of Gram-Schmidt? Above I tried to convince myself, by appeal to uniqueness of Hermite normal forms, that applied to the particular case of Burnside multiplicity matrices there is a general reason why we are guaranteed to end up reading off multiplicities of irreps in permutation reps from the matrix entries. But I am not sure if this really works.
In the entry you say at the point where the irreps come in: “…will turn out to be…”. Is that meant to be by inspection in that particular example? Or is there a general argument that the integrally row-reduced matrix exhibits the inner products of our permutation reps against irreps?
Oh, oh, I see your concern; thanks for spelling it out. Let me think about it some more before trying to put together a response.
Thanks, Todd.
Maybe to make it more concrete:
For $M$ our multiplicities matrix,
$H$ its Hermite normal form, and
$\hat{H}$ the result of deleting the zero-rows in $H$, it seems to happen in all examples checked that the new basis elements $v_i$ given by the rows of $\hat{H}$ satisfy

$$\langle v_i, v_j \rangle \;=\; n_i \, \delta_{i j}\,,$$

where $n_i$ is the categorified norm square of the $i$th rational irrep, hence that $\hat{H}$ happens to be the basis transformation onto the rational irreps, hence to be as close to orthonormalization as is possible over the rationals.
That’s fantastic, that’s what I want to have. But this is not manifest in the way $\hat{H}$ was constructed. So why does it work??
I’ll voice some more thoughts, please feel free to ignore.
The new basis elements $v_i$ obtained from passage of $M$ to Hermite normal form satisfy

$$\langle v_i \,,\, v_j \rangle \;=\; n_i \, \delta_{i j}\,,$$

where the right hand side is independent of the ground field.
In all the examples that I looked at in detail, the $v_i$ happen to be the irreps if the ground field is taken to be $\mathbb{Q}$.
From this and Schur’s lemma I deduce that the number $n_i$ is the number of $\bar{k}$-irreps into which the rational irrep decomposes when tensored with an algebraically closed field $\bar{k}$. Again, this must be a basic fact of representation theory.(?)
Finally something irritating:
When we compute the image of $\beta$ explicitly for the example $G = \mathbb{Z}/3$, we currently still seem to get that… $\beta$ is surjective over $\mathbb{C}$. But this example is that one counterexample for surjectivity of $\beta$ over $\mathbb{C}$ that everyone cites, apparently due to Serre 77, p. 104 (haven’t actually looked at that source yet).
So something is going on. I’ll carefully check for mistakes in my logic now. Otherwise I’ll have to ring-up Serre :-)
I think I see the resolution of the apparent contradiction to Serre’s counter-example to surjectivity of $\beta$ over $\mathbb{C}$ in the case that $G = \mathbb{Z}/3$:
While in that case the Hermite normal form of $M$ does have as many rows as there are rational irreps, it must be that one of these rows corresponds to a multiple of an irrep by a non-invertible integer (and there are indeed two rows where this could be the case).
This will of course be detected by the Smith normal form, where that non-invertible factor will show up on the diagonal. Still need to compute that…
So I think I was right with the strategy in #13 and #16: Computing just the Hermite normal form is not sufficient, in general, we also need the Smith normal form.
[edit: Ah, no, the Smith normal form of $M$ itself won’t help, since that computes not the image of $\beta$, but the image of the corestriction of $\beta$ to its image! Which is a bit pointless.]
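For the record, in the case $G = \mathbb{Z}/3$ the matrices are small enough to display (ordering the transitive $G$-sets as $G/G$, $G/e$):

$$M \;=\; \begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}\,, \qquad H \;=\; \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}\,.$$

Over $\mathbb{C}$ the first row of $H$ is $\beta[G/G] = \chi_0$ (the trivial character) and the second is $\beta[G/e] - \beta[G/G] = \chi_1 + \chi_2$ (the sum of the two nontrivial characters), so a $\mathbb{Z}$-basis of the image of $\beta$ is $\{\chi_0,\, \chi_1 + \chi_2\}$, which misses $\chi_1$ itself: $\beta$ is indeed not surjective over $\mathbb{C}$, and the factor $2$ in the second row is where the non-invertibility shows up.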
added a section “Via Gaussian elimination” (here) with a quick digest of Pursell-Trimble 91.
In the process I turned the paragraph that used to be its own subsection “Application to non-bases” to a Remark inside the section “Gram-Schmidt process on Hilbert spaces”.
Since the factorization depends smoothly on the parameters, the Gram–Schmidt procedure enables the reduction of the structure group of an inner product vector bundle (e.g., the tangent bundle of a Riemannian manifold or a Kähler manifold) from $GL(n)$ to the orthogonal group $O(n)$ (or the unitary group $U(n)$).
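A quick numerical illustration of the pointwise statement (the family `frame` below is a made-up example of a smoothly varying basis, not anything from the entry):

```python
import numpy as np

def frame(t):
    """A made-up smooth family of frames of R^2 (its determinant is 1 for all t)."""
    return np.array([[1.0, t], [t, 1.0 + t**2]])

# Pointwise QR factorization (the factorization Gram-Schmidt produces):
# Q(t) is an orthonormal frame depending continuously on t, exhibiting
# the reduction of the structure group from GL(2,R) to O(2).
for t in np.linspace(0.0, 1.0, 5):
    Q, R = np.linalg.qr(frame(t))
    assert np.allclose(Q @ R, frame(t))      # the factorization
    assert np.allclose(Q.T @ Q, np.eye(2))   # Q lies in O(2)
```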
Aren’t the assumptions superfluous? I mean, for any continuous vector bundle over a paracompact Hausdorff space one chooses the inner products by a partition of unity and hence can reduce the structure group. Thus the inner product does not need to be there at the start, it is just an auxiliary/intermediate construction (unless one wants a canonical choice of the reduction).
added pointer to
for the (evident) generalization of the Gram-Schmidt-process to indefinite inner product spaces.
Relation of the QR decomposition (and some other matrix decompositions) to flows of integrable systems is in
Inserted redirects for QR decomposition and QR factorization.
I’ll make a stand-alone entry for QR decomposition…