Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorTodd_Trimble
    • CommentTimeApr 5th 2018

    Corrected an arithmetic error in the last section.

    diff, v14, current

    • CommentRowNumber2.
    • CommentAuthorUrs
    • CommentTimeSep 15th 2018
    • (edited Sep 15th 2018)

    I have a question on that categorified Gram-Schmidt process here

    From an exchange with Simon Burton, I get the impression that we’d like to compute the first matrix there by using the multiplication table of the virtual permutation reps. I suppose one argues as follows:

    1. The representation category $Rep_k(G)$ presumably is compact closed for all fields $k$ (all fields?).

    2. This means that we may identify the tensor product of a dual permutation rep with another permutation rep, hence their Burnside ring product, as the internal hom between them;

    3. What we really want is the external hom, because that’s the “2-inner product”. But the external hom is the space of $G$-invariants in the internal hom.

    So what one can do to get the matrix of 2-inner products of permutation reps via the Burnside ring multiplication table is the following:

    1. find for all transitive permutation reps their dual rep. Maybe they are always self-dual?

    2. form the table of Burnside products of the (dualized) transitive G-actions, think of it as their internal hom, by the above,

    3. find the space of invariants in there by observing that every transitive G-action contributes precisely a 1d space of invariants (namely that of equal coefficient sums of all basis elements – is that right?), so that we get a new table whose entries are the numbers of copies of transitive G-actions appearing in the previous table.

    4. Then we apply row-reduction to that.

    Is that right?
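    Steps 2–4 of the recipe can be sketched numerically. The following is a hedged illustration, not anything from the thread: for $G = S_3$ the matrix of total multiplicities is hard-coded from orbit counts done by hand, and the row reduction is a naive single-pass integer reduction (my own simplification; it happens to suffice for this matrix, while a general integer matrix needs the gcd-based Hermite reduction that comes up later in the thread).

```python
# Hedged worked example for G = S_3 (not from the thread; everything below
# is hand-rolled).  Transitive S_3-sets, ordered [S3/S3, S3/C3, S3/C2, S3/e];
# M[i][j] = number of S_3-orbits on X_i x X_j = total Burnside multiplicity,
# with the entries computed by hand.
M = [[1, 1, 1, 1],
     [1, 2, 1, 2],
     [1, 1, 2, 3],
     [1, 2, 3, 6]]

def integer_row_reduce(mat):
    """Naive integer row reduction: subtract integer multiples of earlier rows.
    A single pass per column suffices for this matrix; general integer
    matrices need gcd-based (Hermite) reduction."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, rows):
            q = m[i][c] // m[r][c]
            m[i] = [a - q * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

H = integer_row_reduce(M)
```

    The three nonzero rows of `H` come out as $(1,1,1,1)$, $(0,1,0,1)$, $(0,0,1,2)$, matching the known multiplicities of the trivial, sign, and standard irreps of $S_3$ in the four transitive permutation reps.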

    • CommentRowNumber3.
    • CommentAuthorTodd_Trimble
    • CommentTimeSep 16th 2018

    I hope you don’t mind if I try to simplify the description of what is going on.

    Yes, you are right that we are trying to get our hands on the external hom $\hom_G(M, N)$ between two left $G$-modules $M, N$, just as a vector space. I’m assuming $G$ is a finite group. First, if $char(k) = 0$, then by Maschke’s theorem all exact sequences in the category of left modules $Vect^G$ split, so that every module is projective. If $M$ is in addition finite-dimensional, then $\hom_G(M, -): Vect^G \to Vect$ preserves all colimits, so that by general free cocompletion nonsense, we have

    $$\hom_G(M, N) \cong M^\ast \otimes_G N$$

    where $M^\ast$ is the right module given by the composite $G^{op} \stackrel{y}{\to} Vect^G \to Vect$ (here $G$ is short-hand for the group algebra). Thus $M^\ast = \hom_G(M, k G)$; with a little work one can show $M^\ast$ is the linear dual $\hom(M, k)$ with the right module structure inherited from the left module structure on $M$.

    Diagrammatically speaking, I find it helpful to situate ourselves in the bicategory of groups and bimodules/profunctors between them, so that a left $G$-module is an arrow $G \to 1$ and a right module is an arrow $1 \to G$. Then the tensor product over $G$ of $M^\ast: 1 \to G$ and $N: G \to 1$ is just the arrow composite $1 \to G \to 1$, a bimodule over the trivial group $1$, i.e., an ordinary vector space. The external hom $\hom_G(M, N)$ can be viewed as a right Kan extension of $N: G \to 1$ along $M: G \to 1$, yielding an arrow $1 \to 1$. In particular, $M^\ast = Ran_M 1_G$.

    Before linearizing, $Set^G$ is a locally connected topos, so each object is a coproduct of connected objects, which are given by left coset spaces $G/H$, where $H$ is a subgroup of $G$. Linearization is a left adjoint, and so preserves coproducts, so for finite-dimensional representations it suffices to understand $\hom_G(k G/H, k G/J) \cong (k G/H)^\ast \otimes_{k G} (k G/J)$. I claim that $(k G/H)^\ast = k H\backslash G$, where we linearize the right coset space with the right action. A nice way to look at this is to write $G/H = [G] \otimes_H [1]$, where we view the set $G$ (here denoted $[G]$) as a bimodule $[G]: G \to H$ and the terminal set $[1]$ as a bimodule $[1]: H \to 1$, and we compose these arrows. Thus

    $$\begin{aligned} (k G/H)^\ast & = Ran_{k G/H} 1_G \\ & = Ran_{[1][G]} 1_G \\ & \cong Ran_{[1]} Ran_{[G]} 1_G \\ & \cong [1]^\ast \otimes_H [G]^\ast \\ & = k \otimes_{k H} k G \end{aligned}$$

    which is $k H\backslash G$.

    Thus the calculations boil down to looking at $\hom_G(k G/H, k G/J) \cong (k H \backslash G) \otimes_{k G} (k G/J)$. The right side is the linearization applied to an evident coequalizer in $Set$:

    $$(H \backslash G) \times G \times (G/J) \rightrightarrows (H \backslash G) \times (G/J) \to H \backslash G/J$$

    where the coequalizer is the space of double cosets. So really the calculations on the Gram-Schmidt page come down to combinatorial (set-theoretic) counting of such double cosets.
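    The double-coset counting can be made concrete in a few lines. The following is a hedged brute-force sketch for $G = S_3$; the setup and names (`compose`, `double_cosets`, the subgroup lists) are mine, not from the thread.

```python
from itertools import permutations

# Hedged brute-force illustration for G = S_3: count double cosets H\G/J,
# which by the argument above computes dim hom_G(k G/H, k G/J).
def compose(p, q):
    """(p o q)(i) = p[q[i]], permutations as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))       # the 6 elements of S_3
e = (0, 1, 2)
C2 = [e, (1, 0, 2)]                    # a copy of Z/2
C3 = [e, (1, 2, 0), (2, 0, 1)]         # the cyclic subgroup A_3

def double_cosets(H, J):
    """Number of double cosets H\\G/J, by partitioning G into the sets HgJ."""
    seen, count = set(), 0
    for g in G:
        if g in seen:
            continue
        count += 1
        for h in H:
            for j in J:
                seen.add(compose(h, compose(g, j)))
    return count
```

    For instance `double_cosets(C2, C2)` returns 2, matching $\dim \hom_{S_3}(k S_3/C_2, k S_3/C_2) = 2$: the permutation rep on 3 points contains the trivial rep once and the standard rep once.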

    • CommentRowNumber4.
    • CommentAuthorUrs
    • CommentTimeSep 16th 2018

    Thanks, Todd! That’s nice.

    Especially nice that only characteristic zero is needed, not algebraic closedness.

    • CommentRowNumber5.
    • CommentAuthorTodd_Trimble
    • CommentTimeSep 16th 2018

    Actually, part of your question in #2

    find for all transitive permutation reps their dual rep. Maybe they are always self-dual?

    caught my interest. In the sorts of calculations I’m describing, I have generally found it “hygienic” to maintain the distinction between left and right modules, but of course their categories $Vect^G, Vect^{G^{op}}$ are equivalent (even isomorphic) by the group inversion isomorphism $G^{op} \to G$, and so indeed we can always ask whether modules are self-dual.

    It seems undeniable that group inversion sends right cosets to left cosets, so that we have a group inversion isomorphism $H \backslash G \to G/H$, and we can say that $k G/H$ is indeed self-dual. As every permutation representation is a coproduct of reps of this type, we can say every permutation representation is self-dual.

    But perhaps more interestingly, I think this gives a necessary criterion for when $\beta: A(G) \to R(G)$ is surjective: every $G$-rep must be self-dual. That’s not true for all finite groups, but it’s true for a good class of them. In the case of symmetric groups, if we consider the character $\chi: G \to \mathbb{C}$ of a rep, it follows from basic character theory that the character of the dual rep is the complex conjugate $\overline{\chi}$, where $\overline{\chi(g)} = \chi(g^{-1})$; and since $g, g^{-1}$ belong to the same conjugacy class in $S_n$ (they have the same cycle type), the characters are the same, and we get self-duality.
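    The cycle-type step is easy to check by machine. A hedged brute-force sketch (the helper names are mine):

```python
from itertools import permutations

# Hedged brute-force check: g and g^{-1} have the same cycle type,
# hence are conjugate in S_n.
def inverse(p):
    """Inverse of a permutation given as a tuple."""
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def cycle_type(p):
    """Sorted tuple of cycle lengths of a permutation given as a tuple."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i in seen:
            continue
        j, n = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            n += 1
        lengths.append(n)
    return tuple(sorted(lengths))

same_type = all(cycle_type(p) == cycle_type(inverse(p))
                for p in permutations(range(5)))
```

    `same_type` comes out `True`: every element of $S_5$ shares its cycle type with its inverse, as the character argument requires.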

    • CommentRowNumber6.
    • CommentAuthorUrs
    • CommentTimeSep 16th 2018
    • (edited Sep 16th 2018)

    Thanks again, Todd!

    So this was the core of my question in #2: I was trying to understand if, instead of starting with the matrix of dims of external homs, as you do, we could start

    1) with the Burnside ring product matrix, whose $(i,j)$-entry is

    $$V_i \otimes V_j = \underset{k}{\oplus} n_{i j}^{k} V_k$$

    (where the tensor product is that of $G Rep$, i.e. the Cartesian product of underlying $G$-sets, hence where the $n_{i j}^{k}$ are the structure constants of the Burnside ring in the basis of transitive $G$-sets)

    2) or rather with the resulting matrix of total multiplicities, whose $(i,j)$-entry is

    $$\underset{k}{\sum} n_{i j}^{k} \;=\; dim\left(\left( \underset{k}{\oplus} n_{i j}^{k} V_k \right)^G\right) \;=\; dim\left( (V_i \otimes V_j)^G \right) \;\in\; \mathbb{N}$$

    This matrix would coincide with “your” matrix if

    1) all $V_i$ are self-dual in $(G Rep, \otimes)$

    2) $(G Rep, \otimes)$ is compact closed

    because then we would have

    $$\begin{aligned} (V_i \otimes V_j )^G & = (V_i^\ast \otimes V_j )^G \\ & = \left( InternalHom(V_i, V_j) \right)^G \\ & = ExternalHom(V_i, V_j) \end{aligned}$$
    • CommentRowNumber7.
    • CommentAuthorTodd_Trimble
    • CommentTimeSep 16th 2018

    So by $G Rep$ I guess you mean the smc category where the tensor product is the ordinary tensor product of the underlying vector spaces, where we use the diagonal comultiplication on $k G$ to define the $G$-action on the tensor product:

    $$k G \otimes V \otimes W \stackrel{\Delta \otimes 1 \otimes 1}{\to} k G \otimes k G \otimes V \otimes W \stackrel{1 \otimes \sigma \otimes 1}{\to} (k G \otimes V) \otimes (k G \otimes W) \to V \otimes W.$$

    This is surely compact closed: the forgetful functor $G Rep \to Vect$ reflects isomorphisms, so all one has to do is check that the canonical linear map $V^\ast \otimes W \to Hom(V, W)$ respects the $G$-actions (and it surely does).

    And then of course $\hom_G(k, -)$ is the representable functor for the $G$-invariants functor $G Rep \to Vect$, so that $\hom_G(k, Hom(V, W)) \cong \hom_G(V, W)$ allows us to go from internal homs to external homs.

    But I guess I’m a little confused by the role played by self-duality in your question. If you drop the self-duality assumption but start your general construction instead with

    $$V_i^\ast \otimes V_j = \bigoplus_k n_{i j}^{k} V_k$$

    then you and I still wind up at the same place.

    • CommentRowNumber8.
    • CommentAuthorUrs
    • CommentTimeSep 16th 2018
    • (edited Sep 16th 2018)

    ah, great! Thanks.

    Right, I could of course just add in the duality by hand. Right now my motivation was this:

    Simon Burton has a computer program that reads in a finite group, and then spits out the multiplication table of its Burnside ring, then its associated matrix of total multiplicities, and then performs the row-reduction, all by itself. From eyeballing the output of that program in various examples, it seems clear that it computes exactly that categorified Gram-Schmidt thing. So I wanted to understand why that works!

    (But if indeed all permutation reps are self-dual, that is good to know and make use of.)

    The only remaining question I have then is: what if we are not working over an algebraically closed field? The algorithm still applies (row reduction etc.), but are we then guaranteed that we still read off exact multiplicities of irreps inside permutation reps?

    (Need to go offline now, thanks for chatting about this.)

    • CommentRowNumber9.
    • CommentAuthorUrs
    • CommentTimeSep 16th 2018
    • (edited Sep 16th 2018)

    Quick comment from my phone:

    I see now that the self-duality of permutation reps is just what is being exhibited by the Hecke operators: they are the units of the (self-)duality.

    That should be nice.

    • CommentRowNumber10.
    • CommentAuthorUrs
    • CommentTimeSep 16th 2018

    And with those keywords in hand (self-dual, Hecke) things fall into place, e.g. https://qchu.wordpress.com/2015/11/07/hecke-operators-are-also-relative-positions/#more-19764

    But now to bed…

    • CommentRowNumber11.
    • CommentAuthorTodd_Trimble
    • CommentTimeSep 16th 2018

    Yes. Normally, though, I think of “Hecke operator” as just referring to maps $k G/H \to k G/J$ in $G Rep$. It’s all good though. :-)

    (I remember when you and I first met, I was trying to talk about these things in light of the groupoidification project. I learned a lot of it through conversation with Jim Dolan, and there is indeed some lovely mathematics here. I ought to try and record more of it in the Lab.)

    • CommentRowNumber12.
    • CommentAuthorUrs
    • CommentTimeSep 16th 2018
    • (edited Sep 16th 2018)

    All right, thanks.

    For the longest time the “groupoidification” program looked like idle entertainment to me, but now that I understand the importance for fundamental physics of computing the image of equivariant stable cohomotopy in equivariant complex K-theory, I have a different feeling for it.

    But now to bed… :-)

    • CommentRowNumber13.
    • CommentAuthorUrs
    • CommentTimeSep 22nd 2018
    • (edited Sep 22nd 2018)

    Hi,

    if I may, I have another question on the “categorified Gram-Schmidt process”.

    So I would like to mechanically compute, as much as possible, the image of beta

    $$A(G) \overset{\beta}{\longrightarrow} R(G)$$

    To start with, I’ll be content with having an algorithm that just outputs “yes” if beta is surjective, and “no” otherwise.

    (In the following, I’ll assume for the moment that the ground field is algebraically closed.)

    As we discussed, we have an algorithm that, given $G$, outputs the square matrix $M$ over $\mathbb{N}$

    $$M_{i j} \coloneqq \underset{\ell}{\sum} n_{i j}^\ell$$

    of “total multiplicities” of the Burnside product, equivalently of Schur-inner products, of the corresponding permutation reps.

    Now, we want to apply row reduction to this, to extract the multiplicities of irreps. But just saying “row reduction” is not quite sufficient, is it, because we also want to satisfy the constraint that the result still has coefficients in $\mathbb{N}$. (?)

    After looking around a bit, I am thinking that what we really want to say is that we bring $M$ to its Hermite normal form

    $$H = U M$$

    via an invertible integer matrix $U$. That’s our row reduction.

    In general, I gather that Hermite normal form only ensures that the “pivot” entries are positive, not that all non-zero entries are non-negative. But since the Hermite normal form is unique, and since we do know that our permutation reps must have non-negative inner product with the irreps, it must be that the Hermite normal form of the matrices $M$ considered here is guaranteed to have entries in $\mathbb{N}$.

    If this is right so far, then it seems we can continue as follows:

    Next, strip off the block of zero rows from $H$. Call the result $\tilde H$.

    This $\tilde H$ should now actually be the matrix that represents $\beta$, with respect to the canonical bases on both sides.

    Finally, pass to the Smith normal form $S$ of $\tilde H$, the closest approximation to representing $\beta$ by a diagonal integer matrix.

    If I didn’t get myself mixed up here, then

    • $\beta$ is surjective if and only if the total number of $\pm 1$-s in $S$ is maximal, hence equal to the number of rows, hence equal to the number of irreps.

    Does that sound right?

    In summary, I am thinking that the following algorithm should work:

    0) start with the matrix $M$ of Burnside multiplicities;

    1) compute the Hermite normal form $H$ of $M$;

    2) let $\tilde H$ be the result of deleting the zero rows from $H$;

    3) compute the Smith normal form $S$ of $\tilde H$;

    4) $\beta$ is surjective iff all non-vanishing entries of $S$ are $\pm 1$.

    Does that sound right?
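    As a sanity check of step 4, here is a hedged Python sketch (my own minimal implementation, not Simon Burton’s program) of the Smith-form surjectivity test, applied directly to the total-multiplicity matrix of $S_3$, hard-coded from orbit counts done by hand. The diagonalization below skips the final divisibility normalization of the true Smith normal form, which the $\pm 1$ test does not need.

```python
def diagonalize_over_Z(mat):
    """Diagonalize an integer matrix by unimodular row and column operations
    (Smith-style reduction, without the final divisibility normalization)."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    t = 0
    while t < min(rows, cols):
        if not any(m[i][j] for i in range(t, rows) for j in range(t, cols)):
            break
        while True:
            i0, j0 = min(((i, j) for i in range(t, rows) for j in range(t, cols)
                          if m[i][j] != 0), key=lambda ij: abs(m[ij[0]][ij[1]]))
            m[t], m[i0] = m[i0], m[t]          # move a minimal entry to (t, t)
            for row in m:
                row[t], row[j0] = row[j0], row[t]
            done = True
            for i in range(t + 1, rows):       # clear column t below the pivot
                if m[i][t]:
                    q = m[i][t] // m[t][t]
                    m[i] = [a - q * b for a, b in zip(m[i], m[t])]
                    done = done and m[i][t] == 0
            for j in range(t + 1, cols):       # clear row t right of the pivot
                if m[t][j]:
                    q = m[t][j] // m[t][t]
                    for i in range(rows):
                        m[i][j] -= q * m[i][t]
                    done = done and m[t][j] == 0
            if done:
                break
        t += 1
    return m

# Total-multiplicity matrix for the transitive S_3-sets (computed by hand):
M = [[1, 1, 1, 1],
     [1, 2, 1, 2],
     [1, 1, 2, 3],
     [1, 2, 3, 6]]
S = diagonalize_over_Z(M)
diagonal = [abs(S[i][i]) for i in range(4)]
beta_surjective = all(d in (0, 1) for d in diagonal)
```

    For $S_3$ the diagonal comes out as $(1, 1, 1, 0)$ and the test reports surjectivity, consistent with the fact that all irreps of $S_3$ are realized by virtual permutation representations.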

    • CommentRowNumber14.
    • CommentAuthorTodd_Trimble
    • CommentTimeSep 23rd 2018

    Sorry: before we go down this road too far, I want to check some things with you.

    I was under the impression that $R(G)$ and $A(G)$ were rings, not just rigs. (They are both obtained by applying the ringification functor $\mathbb{Z} \otimes_{\mathbb{N}} -$ to rigs.) Not so?

    If so, then I’m not sure why we’re worried about coefficients being in $\mathbb{N}$ (where you put a question mark).

    • CommentRowNumber15.
    • CommentAuthorUrs
    • CommentTimeSep 23rd 2018
    • (edited Sep 23rd 2018)

    Sure, they are rings. But we are after expressing beta not in arbitrary bases, but in bases such that it is guaranteed to have matrix coefficients in $\mathbb{N}$, specifically in the bases of transitive $G$-sets and linear irreps.

    If we start with $M$ and apply just any row reduction, we would end up with a matrix over $\mathbb{Q}$, signalling that we failed to discover the expansion of the transitive $G$-sets in terms of their irrep multiplicities.

    But we know from the nature of $M$ that there must be one way to row-reduce that produces a matrix over $\mathbb{N}$. Furthermore, by uniqueness of Hermite normal forms, this must be essentially unique. And so that must be how we deduce that we really transformed from the basis of transitive $G$-sets to the basis of linear irreps, instead of to some other basis.

    • CommentRowNumber16.
    • CommentAuthorUrs
    • CommentTimeSep 23rd 2018

    It looks, from what I said in #13, as though for just determining the surjectivity or not of beta one could just compute the Smith normal form of $M$ directly. But the intermediate Hermite normal form will determine the dimensions of the irreps, and thus will help to identify the image of beta in case it is not surjective.

    • CommentRowNumber17.
    • CommentAuthorTodd_Trimble
    • CommentTimeSep 23rd 2018

    Urs, let me put it a different way. Looking at Hermite normal form, I get a sense we might have the same thing in mind, but there’s a slight communication gap.

    Row reduction in linear algebra boils down to starting with a matrix $M$ and left-multiplying by elementary matrices in a certain way until we reach an echelon form. If we are working over any commutative ring $R$, then “elementary matrices” are defined to be matrices of the following forms:

    • $1 + E$, where $E$ is a strictly lower triangular matrix with a single nonzero entry $e_{i j}$, whose left-multiplying effect is to add $e_{i j}$ times the $j^{th}$ row to the $i^{th}$ row;

    • a permutation matrix, which we could take to be a transposition matrix whose effect is to swap two adjacent rows;

    • a diagonal matrix all of whose entries are invertible in $R$ (WLOG, all but one equal to $1$).

    In the case $R = \mathbb{Z}$, we’re never going to pop out of $\mathbb{Z}$ into $\mathbb{Q}$ this way (so I didn’t understand why you brought this up). The third type of elementary matrix has just $1$’s and $-1$’s down the diagonal. In general, if you start with a matrix $M$ over $R$ and left-multiply by elementary matrices defined this way, you will end with a matrix over $R$.

    Now it could be that you wish to disallow the third type of elementary matrix, the diagonal ones. It’s true that whenever I’ve tried to apply the categorified Gram-Schmidt process, I’ve never had to make use of that third type. But if you want to disallow it, then I’d like to understand the theoretical grounds for why. Or, to put it more positively, why we don’t actually need it (which I have some small faith is the case, but I don’t know the reason).

    Sorry if I’m being dense.

    • CommentRowNumber18.
    • CommentAuthorUrs
    • CommentTimeSep 23rd 2018
    • (edited Sep 23rd 2018)

    I am happy to admit that I haven’t thought about Diophantine linear algebra before, and need a moment of reflection on the gcd techniques needed to force row reduction to stay integral. Once one thinks about it, it’s obvious (e.g. here), as it goes.

    But is there really an integer version of Gram-Schmidt? Above I tried to convince myself, by appeal to the uniqueness of Hermite normal forms, that in the particular case of Burnside multiplicity matrices there is a general reason why we are guaranteed to end up reading off multiplicities of irreps in permutation reps from the matrix entries. But I am not sure this really works.

    In the entry you say at the point where the irreps come in: “…will turn out to be…”. Is that meant to be by inspection in that particular example? Or is there a general argument that the integrally row-reduced matrix exhibits the inner products of our permutation reps against irreps?
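    The gcd technique alluded to above can be sketched in a few lines (a hedged illustration; the helper names are mine): the Bézout coefficients assemble into a determinant-$1$ integer matrix that replaces a pair of column entries $(a, b)$ by $(\gcd(a, b), 0)$, so integer row reduction never needs to leave $\mathbb{Z}$.

```python
# Hedged sketch of the gcd step in integer row reduction (names are mine).
def bezout(a, b):
    """Extended Euclid: (g, x, y) with x*a + y*b = g = gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = bezout(b, a % b)
    return (g, y, x - (a // b) * y)

def gcd_step(a, b):
    """A determinant-1 integer 2x2 matrix U with U (a, b)^T = (gcd(a, b), 0)^T.
    Assumes (a, b) != (0, 0).  Applying U to a pair of rows clears one of two
    column entries while keeping everything integral."""
    g, x, y = bezout(a, b)
    return [[x, y], [-b // g, a // g]]
```

    For instance, `gcd_step(12, 8)` is `[[1, -1], [-2, 3]]`, which has determinant $1$ and sends the column $(12, 8)$ to $(4, 0)$.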

    • CommentRowNumber19.
    • CommentAuthorTodd_Trimble
    • CommentTimeSep 23rd 2018

    Oh, oh, I see your concern; thanks for spelling it out. Let me think about it some more before trying to put together a response.

    • CommentRowNumber20.
    • CommentAuthorUrs
    • CommentTimeSep 23rd 2018
    • (edited Sep 23rd 2018)

    Thanks, Todd.

    Maybe to make it more concrete:

    For $M$ our multiplicities matrix,

    $$H = U \cdot M$$

    its Hermite normal form, and

    $$\tilde H = \tilde U \cdot M$$

    the result of deleting the zero rows in $H$, it seems to happen in all examples checked that

    $$\tilde U \cdot M \cdot \tilde U^t = diag(n_i)$$

    where $n_i \in \mathbb{N}$ is the categorified norm square of the $i$th rational irrep, hence that $\tilde U$ happens to be the basis transformation onto the rational irreps, hence to be as close to orthonormalization as is possible over the rationals.

    That’s fantastic, that’s what I want to have. But this is not manifest in the way UU was constructed. So why does it work??
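    The observation can at least be confirmed numerically. Here is a hedged check (my own code and example, not from the thread) for $G = S_3$, where every irrep is rational with endomorphism ring $\mathbb{Q}$, so all $n_i$ should equal $1$.

```python
# Hedged numerical check for G = S_3.  M is the total-multiplicity matrix of
# the transitive S_3-sets [S3/S3, S3/C3, S3/C2, S3/e], hard-coded from orbit
# counts done by hand.
M = [[1, 1, 1, 1],
     [1, 2, 1, 2],
     [1, 1, 2, 3],
     [1, 2, 3, 6]]

def row_reduce_with_transform(mat):
    """Integer row echelon form H = U * mat, tracking the unimodular U.
    A naive single-pass reduction: enough for this example, though a
    general Hermite reduction would need gcd steps."""
    m = [row[:] for row in mat]
    n = len(m)
    U = [[int(i == j) for j in range(n)] for i in range(n)]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, n) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        U[r], U[piv] = U[piv], U[r]
        for i in range(r + 1, n):
            q = m[i][c] // m[r][c]
            m[i] = [a - q * b for a, b in zip(m[i], m[r])]
            U[i] = [a - q * b for a, b in zip(U[i], U[r])]
        r += 1
    return m, U

H, U = row_reduce_with_transform(M)
rank = sum(1 for row in H if any(row))
U_tilde = U[:rank]   # zero rows end up at the bottom here, so these give H-tilde

# D = U_tilde * M * U_tilde^T, expected to be diag(n_i)
UM = [[sum(u[k] * M[k][j] for k in range(4)) for j in range(4)] for u in U_tilde]
D = [[sum(row[k] * v[k] for k in range(4)) for v in U_tilde] for row in UM]
```

    Here $D$ comes out as the $3 \times 3$ identity, i.e. $n_i = 1$ for the three rational irreps of $S_3$, consistent with the observation above.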

    • CommentRowNumber21.
    • CommentAuthorUrs
    • CommentTimeSep 24th 2018
    • (edited Sep 24th 2018)

    I’ll voice some more thoughts, please feel free to ignore.

    1. The new basis elements $U_i$ obtained from the passage of $M$ to Hermite normal form satisfy

      $$n_i \;=\; dim_k\left( hom_G( U_i, U_i ) \right)$$

      where the right-hand side is independent of the ground field.

      In all the examples that I looked at in detail, the $U_i$ happen to be the irreps if the ground field is taken to be $k = \mathbb{Q}$.

      From this and Schur’s lemma I deduce that the number $n_i$ is the number of $\hat k$-irreps into which the rational irrep $U_i$ decomposes when tensored with an algebraically closed field $\hat k$. Again, this must be a basic fact of representation theory. (?)

    2. Finally something irritating:

      When we compute the image of $\beta$ explicitly for the example $G = \mathbb{Z}/3 \times Q_8$, we currently still seem to get that… $\beta$ is surjective over $\mathbb{Q}$. But this example is the one counterexample to surjectivity of $\beta$ over $\mathbb{Q}$ that everyone cites, apparently due to Serre 77, p. 104 (haven’t actually looked at that source yet).

      So something is going on. I’ll carefully check for mistakes in my logic now. Otherwise I’ll have to ring-up Serre :-)

    • CommentRowNumber22.
    • CommentAuthorUrs
    • CommentTimeSep 24th 2018
    • (edited Sep 24th 2018)

    I think I see the resolution of the apparent contradiction with Serre’s counterexample to surjectivity of $\beta$ over $\mathbb{Q}$ in the case $G = \mathbb{Z}/3 \times Q_8$:

    While in that case the Hermite normal form of $M$ does have as many rows as there are rational irreps, it must be that one of these rows corresponds to a multiple of an irrep by a non-invertible integer (and there are indeed two rows where this could be the case).

    This will of course be detected by the Smith normal form, where that non-invertible factor will show up on the diagonal. Still need to compute that…

    So I think I was right with the strategy in #13 and #16: Computing just the Hermite normal form is not sufficient, in general, we also need the Smith normal form.

    [edit: Ah, no, the Smith normal form of $\tilde H$ itself won’t help, since that computes not the image of $\beta$, but the image of the corestriction of $\beta$ to its image! Which is a bit pointless.]

    • CommentRowNumber23.
    • CommentAuthorUrs
    • CommentTimeDec 31st 2018
    • (edited Dec 31st 2018)

    added a section “Via Gaussian elimination” (here) with a quick digest of Pursell-Trimble 91.

    In the process I turned the paragraph that used to be its own subsection “Application to non-bases” to a Remark inside the section “Gram-Schmidt process on Hilbert spaces”.

    diff, v24, current