
Welcome to nForum
    • CommentRowNumber1.
    • CommentAuthorTom Leinster
    • CommentTimeNov 11th 2011

    I hope this is an appropriate use of the forum. I’d like to find out what proportion of mathematicians are aware of a certain fact, and I’m not feeling in the mood for asking at the Café, so I thought I’d try here.

    Let $V$ and $W$ be finite-dimensional vector spaces over an algebraically closed field. Let $f : V \to W$ and $g : W \to V$ be linear maps. Then $g f$ and $f g$ have the same eigenvalues, with the same algebraic multiplicities, with the possible exception of 0.
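    The fact is easy to test numerically. Here is a quick NumPy sketch (the dimensions, random seed, and variable names are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
f = rng.standard_normal((m, n))  # f : V -> W, with dim V = 5, dim W = 3
g = rng.standard_normal((n, m))  # g : W -> V

# g f is 5x5 and f g is 3x3; their nonzero eigenvalues should agree.
ev_gf = np.linalg.eigvals(g @ f)
ev_fg = np.linalg.eigvals(f @ g)
nz_gf = np.sort_complex(ev_gf[np.abs(ev_gf) > 1e-9])
nz_fg = np.sort_complex(ev_fg[np.abs(ev_fg) > 1e-9])
assert nz_gf.shape == nz_fg.shape and np.allclose(nz_gf, nz_fg)
```

    The extra eigenvalues of the larger product are all (numerically) zero, which is exactly the "possible exception of 0" in the statement.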

    It’s not terribly hard to prove this fact. But the question is: did you already know this, off the top of your head?

    (I can say something about why I’m asking, but I’d rather wait until I’ve got a few answers.)

    Thanks!

    • CommentRowNumber2.
    • CommentAuthorTim_Porter
    • CommentTimeNov 11th 2011

    I think I knew this once… last century! But I can’t be sure.

    • CommentRowNumber3.
    • CommentAuthorMark Meckes
    • CommentTimeNov 11th 2011

    I did.

    • CommentRowNumber4.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 11th 2011

    I didn’t.

    • CommentRowNumber5.
    • CommentAuthorDavid_Corfield
    • CommentTimeNov 11th 2011

    I knew that, at least I recall it now you mention it. I can’t remember where from. Maybe looking over a third year undergraduate essay my nephew wrote.

    • CommentRowNumber6.
    • CommentAuthorzskoda
    • CommentTimeNov 11th 2011

    The first part (eigenvalues the same) yes, but not the second about the multiplicities (that part even surprises me!).

    • CommentRowNumber7.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 11th 2011

    It’s sort of fun watching you pursue themes, Tom. You seem to be on something like a fixed-point kick these days. Or is that putting it too reductionistically? :-)

    • CommentRowNumber8.
    • CommentAuthorTom Leinster
    • CommentTimeNov 11th 2011

    Todd, frankly I have little idea what I’m doing. But it’s nice to know I’m creating fun :-)

    • CommentRowNumber9.
    • CommentAuthorMike Shulman
    • CommentTimeNov 11th 2011

    I might have heard that once before, but I certainly didn’t remember it.

    • CommentRowNumber10.
    • CommentAuthorAndrew Stacey
    • CommentTimeNov 11th 2011

    I hope this is an appropriate use of the forum.

    Absolutely.

    As for your actual question, I knew this (or maybe I know that I know it, if you see what I mean) but the version that springs to mind concerns compact operators on a Banach space.

    I can even remember how you prove it (in particular, I remember the proof - I’m not figuring out the proof for myself).

    • CommentRowNumber11.
    • CommentAuthorTom Leinster
    • CommentTimeNov 12th 2011

    Thanks for your answers, all. Here’s the story. A few years ago I became aware of Sheldon Axler’s book Linear Algebra Done Right (which in fact prompted my very first Café post). Ever since then I’ve been thinking on and off about the “dynamical” approach, as I think of it, to linear algebra. There, a central role is played by the eventual kernel of an operator $T$, which is the union of the chain of subspaces

    $$\ker(T) \subseteq \ker(T^2) \subseteq \cdots.$$

    In this approach, different questions suggest themselves and lots of things become easy. One of the things I learned was this fact about eigenvalues. It struck me that I hadn’t known this fact before, given how elementary it is. And when I’ve mentioned it to people in conversation since, I’ve discovered that others have been equally unaware. With the thought at the back of my mind that I might, some day, write something about this, I was wondering how widespread that ignorance was.
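    The stabilizing chain can be watched numerically. A small sketch, where the operator $T$ below is a made-up example with a nilpotent Jordan block alongside an invertible part:

```python
import numpy as np

# Example operator: a nilpotent 2x2 Jordan block plus an invertible 1x1 block.
T = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 2.]])

dims, M = [], np.eye(3)
for _ in range(4):
    M = M @ T                                  # M = T^k
    dims.append(3 - np.linalg.matrix_rank(M))  # dim ker(T^k)

# The chain ker(T), ker(T^2), ... grows and then stabilizes at the eventual kernel.
assert dims == [1, 2, 2, 2]
```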

    One interesting thing about this result, I think, is the “nonzero” condition. It suggests that the nonzero spectrum (I mean, the set of nonzero eigenvalues, with their algebraic multiplicities) might have some importance as an object in its own right. Write $Spec^\times(T)$ for the nonzero spectrum of an operator $T$ on a finite-dimensional vector space. Then $Spec^\times$ has the trace-like property $Spec^\times(f g) = Spec^\times(g f)$. Of course, the trace is the sum of the nonzero spectrum, so the identity $tr(f g) = tr(g f)$ follows — and everyone knows that identity. Maybe Mark or Andrew have some thoughts about all this; I’m aware that I don’t know how the story goes in more sophisticated functional-analytic settings.
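    For what it’s worth, the trace identity itself is a one-liner to check numerically (random rectangular matrices, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((4, 6))
g = rng.standard_normal((6, 4))

# tr(f g) = tr(g f): both equal the sum of the shared nonzero spectrum.
assert np.isclose(np.trace(f @ g), np.trace(g @ f))
```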

    • CommentRowNumber12.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 12th 2011
    • (edited Nov 12th 2011)

    One reason I thought you might have been thinking about fixed-point theory is that it gives a really easy way of seeing the truth of this result. Namely, the result holds true for the eigenvalue 1, because we obviously have a linear isomorphism

    $$Fix(f g) \;\overset{f}{\underset{g}{\leftrightarrows}}\; Fix(g f)$$

    but then the same result must hold true for any nonzero eigenvalue $\lambda$, by replacing, say, $f$ by $f/\lambda$.
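    This fixed-point isomorphism can be illustrated with a concrete pair of rectangular matrices, contrived here so that the eigenvalue 1 has geometric multiplicity 2 (the particular $f$ and $g$ below are made up for the purpose):

```python
import numpy as np

D = np.diag([1., 1., 2., 0.])
f = np.hstack([D, np.ones((4, 1))])            # f : R^5 -> R^4
g = np.vstack([np.eye(4), np.zeros((1, 4))])   # g : R^4 -> R^5

def fix_dim(A):
    """dim Fix(A) = dim ker(A - 1)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - np.eye(n))

# f g is 4x4 with a 2-dimensional fixed space; g f is 5x5 with the same.
assert fix_dim(f @ g) == fix_dim(g @ f) == 2
```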

    Edit: It occurs to me that maybe I was misinterpreting what Tom is saying; perhaps by ’algebraic multiplicity’, he means the dimension of the generalized eigenspace attached to the eigenvalue $\lambda$, not the dimension of the eigenspace itself. In other words, the dimension of the eventual kernel of $T - \lambda$.

    • CommentRowNumber13.
    • CommentAuthorTobyBartels
    • CommentTimeNov 12th 2011

    I don’t think that I’ve ever heard this. Your explanation in terms of the nonzero spectrum is quite interesting.

    • CommentRowNumber14.
    • CommentAuthorTom Leinster
    • CommentTimeNov 12th 2011
    • (edited Nov 12th 2011)

    Todd, yes, by the algebraic multiplicity of $T$ at $\lambda$ I did mean the dimension of the appropriate generalized eigenspace (what I called above the “eventual kernel” of $T - \lambda$). As I’m sure you know, this is the same as the power of $x - \lambda$ appearing in the characteristic polynomial of $T$.

    (I was brought up to say geometric multiplicity for the dimension of $\ker(T - \lambda)$ and algebraic multiplicity for the quantity just described. Maybe that’s not universal. The inequality “geom mult $\leq$ alg mult” is obvious if you define the alg mult as the dimension of the generalized eigenspace, but not so obvious if you define it as the power in the characteristic polynomial.)

    But anyway, Todd’s proof needs to be adapted only slightly to get a proof of the fact I mentioned. We have the following linear maps between eventual kernels:

    $$evKer(g f - \lambda) \;\overset{f}{\underset{g}{\rightleftarrows}}\; evKer(f g - \lambda).$$

    They’re not mutually inverse. However, it’s a general fact that $T - \mu$ acts as an automorphism of the subspace $evKer(T - \lambda)$ whenever $T$ is an operator and $\lambda \neq \mu$. So $g f$ acts as an automorphism of the subspace $evKer(g f - \lambda)$, and similarly $f g$, as long as $\lambda \neq 0$. Hence both composites of the two maps in the display above are automorphisms, and it follows that the two maps themselves are invertible. Thus $evKer(g f - \lambda) \cong evKer(f g - \lambda)$.

    As Todd points out, a similar but slightly easier argument says that $f g$ and $g f$ also have the same nonzero eigenvalues with the same geometric multiplicities.
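    The equality of generalized-eigenspace dimensions can also be checked numerically, computing $\dim evKer(T - \lambda)$ as $n - rank((T - \lambda)^n)$. A sketch, with matrices contrived so that $\lambda = 1$ has algebraic multiplicity 2:

```python
import numpy as np

D = np.diag([1., 1., 2., 0.])
f = np.hstack([D, np.ones((4, 1))])            # f : R^5 -> R^4
g = np.vstack([np.eye(4), np.zeros((1, 4))])   # g : R^4 -> R^5

def ev_ker_dim(T, lam):
    """dim evKer(T - lam) = n - rank((T - lam)^n)."""
    n = T.shape[0]
    P = np.linalg.matrix_power(T - lam * np.eye(n), n)
    return n - np.linalg.matrix_rank(P)

# The algebraic multiplicity of 1 is 2 for both f g (4x4) and g f (5x5).
assert ev_ker_dim(f @ g, 1.0) == ev_ker_dim(g @ f, 1.0) == 2
```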

    • CommentRowNumber15.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 12th 2011

    Yes, I had a similar proof. Taking $\lambda = 1$ for example, I observed that in the diagram you drew, we have that $f g$ is an automorphism because it’s of the form $1 + N$ where $N = f g - 1$ is nilpotent, and $1 + N$ has an inverse given by a finite geometric series $1 - N + N^2 - \cdots$. Similarly $g f$ is an automorphism. Thus both $f$ and $g$ are invertible. The case for general nonzero $\lambda$ follows from the case $\lambda = 1$ by the same trick I used before (replace $f$ by $f/\lambda$).
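    The finite geometric series can be seen concretely. A minimal sketch with a 3×3 nilpotent $N$ (chosen arbitrarily, with $N^3 = 0$):

```python
import numpy as np

# Strictly upper triangular, hence nilpotent: N^3 = 0.
N = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])

# The series terminates: (1 + N)^{-1} = 1 - N + N^2, since N^3 = 0.
inv = np.eye(3) - N + N @ N
assert np.allclose(inv @ (np.eye(3) + N), np.eye(3))
assert np.allclose((np.eye(3) + N) @ inv, np.eye(3))
```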

    Strangely enough, I never had a course in linear algebra. So I may not know all the standard terminology. :-)

    • CommentRowNumber16.
    • CommentAuthorTobyBartels
    • CommentTimeNov 13th 2011

    So the ‘algebraic’ multiplicity is just as geometric as the ‘geometric’ one!

    • CommentRowNumber17.
    • CommentAuthorTom Leinster
    • CommentTimeNov 13th 2011

    @Toby: I want to rename it the dynamic multiplicity. The eventual kernel of an operator $T$ says something about the long-term behaviour of $T$ under iteration. For example, $evKer(T^r)$ is the same for all $r \geq 1$.
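    That invariance under taking powers can be checked on a small made-up example (a dimension check only, though the subspaces themselves also coincide):

```python
import numpy as np

# Nilpotent 2x2 Jordan block plus an invertible 1x1 block.
T = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 2.]])

def ev_ker_dim(A):
    n = A.shape[0]
    return n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n))

# evKer(T^r) has the same dimension for every r >= 1.
assert ev_ker_dim(T) == ev_ker_dim(T @ T) == ev_ker_dim(np.linalg.matrix_power(T, 3))
```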

    I’ve never understood this usage of ’algebraic’ and ’geometric’, anyway.

    • CommentRowNumber18.
    • CommentAuthorTobyBartels
    • CommentTimeNov 13th 2011

    I think that it’s just that the algebraic multiplicity appears in the characteristic polynomial (which is algebraic) while the geometric multiplicity appears as the dimension of a space (which is geometric). So what I learnt today is that the algebraic multiplicity is also the dimension of an interesting space.