I hope this is an appropriate use of the forum. I’d like to find out what proportion of mathematicians are aware of a certain fact, and I’m not feeling in the mood for asking at the Café, so I thought I’d try here.
Let $V$ and $W$ be finite-dimensional vector spaces over an algebraically closed field. Let $f: V \to W$ and $g: W \to V$ be linear maps. Then $g f$ and $f g$ have the same eigenvalues, with the same algebraic multiplicities, with the possible exception of 0.
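A quick numerical sanity check of this claim (my own example, not part of the post): over the rationals, the characteristic polynomials of $g f$ and $f g$ agree up to a power of $x$, so the nonzero eigenvalues and their algebraic multiplicities coincide. The matrices below are arbitrary choices, and the characteristic polynomial is computed by the Faddeev–LeVerrier recursion.

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def charpoly(A):
    """Coefficients [1, c1, ..., cn] of det(xI - A), via Faddeev-LeVerrier:
    M_k = A M_{k-1} + c_{k-1} I,  c_k = -tr(A M_k) / k."""
    n = len(A)
    coeffs = [Fraction(1)]
    M = [[Fraction(0)] * n for _ in range(n)]  # M_0 = 0
    for k in range(1, n + 1):
        M = matmul(A, M)
        for i in range(n):
            M[i][i] += coeffs[-1]
        c = -sum(matmul(A, M)[i][i] for i in range(n)) / k
        coeffs.append(c)
    return coeffs

# Arbitrary f: R^2 -> R^3 and g: R^3 -> R^2 (hypothetical example).
f = [[1, 2], [0, 1], [3, 1]]
g = [[2, 1, 0], [1, 0, 1]]

p_gf = charpoly(matmul(g, f))  # degree 2
p_fg = charpoly(matmul(f, g))  # degree 3: same coefficients, padded by a zero
print(p_gf, p_fg)
```

Here $\det(xI - f g) = x \cdot \det(xI - g f)$, so the two coefficient lists differ only by a trailing zero, i.e. the extra eigenvalue is $0$.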
It’s not terribly hard to prove this fact. But the question is: did you already know this, off the top of your head?
(I can say something about why I’m asking, but I’d rather wait until I’ve got a few answers.)
Thanks!
I think I knew this once… last century! But I can’t be sure.
I did.
I didn’t.
I knew that, or at least I recall it now that you mention it. I can’t remember where from. Maybe from looking over a third-year undergraduate essay my nephew wrote.
The first part (eigenvalues the same) yes, but not the second about the multiplicities (that part even surprises me!).
It’s sort of fun watching you pursue themes, Tom. You seem to be on something like a fixed-point kick these days. Or is that putting it too reductionistically? :-)
Todd, frankly I have little idea what I’m doing. But it’s nice to know I’m creating fun :-)
I might have heard that once before, but I certainly didn’t remember it.
I hope this is an appropriate use of the forum.
Absolutely.
As for your actual question, I knew this (or maybe I know that I know it, if you see what I mean) but the version that springs to mind concerns compact operators on a Banach space.
I can even remember how you prove it (in particular, I remember the proof; I’m not figuring out the proof for myself).
Thanks for your answers, all. Here’s the story. A few years ago I became aware of Sheldon Axler’s book Linear Algebra Done Right (which in fact prompted my very first Café post). Ever since then I’ve been thinking on and off about the “dynamical” approach, as I think of it, to linear algebra. There, a central role is played by the eventual kernel of an operator $T$, which is the union of the chain of subspaces
$ker(T) \subseteq ker(T^2) \subseteq \cdots.$

In this approach, different questions suggest themselves and lots of things become easy. One of the things I learned was this fact about eigenvalues. It struck me that I hadn’t known this fact before, given how elementary it is. And when I’ve mentioned it to people in conversation since, I’ve discovered that others have been equally unaware. With the thought at the back of my mind that I might, some day, write something about this, I was wondering how widespread that ignorance was.
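To illustrate the eventual kernel concretely (a sketch with a hypothetical matrix, not taken from the discussion): the chain $ker(T) \subseteq ker(T^2) \subseteq \cdots$ stabilises after at most $\dim V$ steps, and its union is the eventual kernel. The dimensions can be read off as $\dim V - rank(T^k)$, with the rank computed exactly over the rationals.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(A):
    """Row-reduce over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# T with a 2-step nilpotent part and an invertible part (made-up example).
T = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 2]]
n = len(T)
P = T
dims = []
for k in range(1, n + 2):
    dims.append(n - rank(P))  # dim ker(T^k)
    P = matmul(P, T)
print(dims)  # the chain of kernel dimensions stabilises
```

Here $\dim ker(T) = 1$, and the chain stabilises at dimension 2 from $k = 2$ on, so the eventual kernel is 2-dimensional.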
One interesting thing about this result, I think, is the “nonzero” condition. It suggests that the nonzero spectrum (I mean, the set of nonzero eigenvalues, with their algebraic multiplicities) might have some importance as an object in its own right. Write $Spec^\times(T)$ for the nonzero spectrum of an operator $T$ on a finite-dimensional vector space. Then $Spec^\times$ has the trace-like property $Spec^\times(f g) = Spec^\times(g f)$. Of course, the trace is the sum of the nonzero spectrum, so the identity $tr(f g) = tr(g f)$ follows — and everyone knows that identity. Maybe Mark or Andrew have some thoughts about all this; I’m aware that I don’t know how the story goes in more sophisticated functional-analytic settings.
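The trace identity mentioned at the end is easy to check directly; here is a minimal pure-Python verification with arbitrarily chosen rectangular matrices (my example, not the poster’s).

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Arbitrary f: R^2 -> R^3 and g: R^3 -> R^2.
f = [[1, 2], [0, 1], [3, 1]]
g = [[2, 1, 0], [1, 0, 1]]

# tr(fg) = tr(gf), even though fg is 3x3 and gf is 2x2.
print(trace(matmul(f, g)), trace(matmul(g, f)))
```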
One reason I thought you might have been thinking about fixed-point theory is that it gives a really easy way of seeing the truth of this result. Namely, the result holds true for the eigenvalue 1, because we obviously have a linear isomorphism
$Fix(f g) \overset{f}{\underset{g}{\leftrightarrows}} Fix(g f)$

but then the same result must hold true for any non-zero eigenvalue $\lambda$, by replacing say $f$ by $f/\lambda$.
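Spelling out the mutual-inverse check behind that display (my elaboration, not part of the original comment):

```latex
% f and g restrict to mutually inverse maps between the fixed-point spaces:
% if v \in Fix(g f), i.e. (g f)(v) = v, then
(f g)(f v) = f\bigl((g f)(v)\bigr) = f v,
\qquad \text{so } f v \in Fix(f g),
% and the composite g \circ f is the identity on Fix(g f):
g(f v) = (g f)(v) = v.
% By symmetry, f \circ g is the identity on Fix(f g).
```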
Edit: It occurs to me that maybe I was misinterpreting what Tom is saying; perhaps by ’algebraic multiplicity’, he means the dimension of the generalized eigenspace attached to the eigenvalue $\lambda$, not the dimension of the eigenspace itself. In other words, the dimension of the eventual kernel of $T - \lambda$.
I don’t think that I’ve ever heard this. Your explanation in terms of the nonzero spectrum is quite interesting.
Todd, yes, by the algebraic multiplicity of $T$ at $\lambda$ I did mean the dimension of the appropriate generalized eigenspace (what I called above the “eventual kernel” of $T - \lambda$). As I’m sure you know, this is the same as the power of $x - \lambda$ appearing in the characteristic polynomial of $T$.
(I was brought up to say geometric multiplicity for the dimension of $Ker(T - \lambda)$ and algebraic multiplicity for the quantity just described. Maybe that’s not universal. The inequality “geom mult $\leq$ alg mult” is obvious if you define the alg mult as the dimension of the generalized eigenspace, but not so obvious if you define it as the power in the characteristic polynomial.)
But anyway, Todd’s proof needs to be adapted only slightly to get a proof of the fact I mentioned. We have the following linear maps between eventual kernels:
$evKer(g f - \lambda) \overset{f}{\underset{g}{\rightleftarrows}} evKer(f g - \lambda).$

They’re not mutually inverse. However, it’s a general fact that $T - \mu$ acts as an automorphism of the subspace $evKer(T - \lambda)$ whenever $T$ is an operator and $\lambda \neq \mu$. So $g f$ acts as an automorphism of the subspace $evKer(g f - \lambda)$, and similarly $f g$, as long as $\lambda \neq 0$. Hence both composites of the two maps in the display above are automorphisms, and it follows that the two maps themselves are invertible. Thus $evKer(g f - \lambda) \cong evKer(f g - \lambda)$.
As Todd points out, a similar but slightly easier argument says that $f g$ and $g f$ also have the same nonzero eigenvalues with the same geometric multiplicities.
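A small numerical check of the statement about algebraic multiplicities (a hypothetical example of mine, not from the thread): the matrices below are chosen so that $g f$ has eigenvalue 1 with algebraic multiplicity 2 but geometric multiplicity 1, and the generalized eigenspace dimension is computed as $\dim ker((T - \lambda)^n)$.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(A):
    """Row-reduce over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def gen_eig_dim(T, lam):
    """dim of the generalized eigenspace evKer(T - lam) = ker((T - lam I)^n)."""
    n = len(T)
    M = [[T[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for _ in range(n):
        P = matmul(P, M)
    return n - rank(P)

# Chosen so g f = [[1,1],[0,1]], a Jordan block for eigenvalue 1.
f = [[1, 1], [0, 1], [0, 0]]   # f: R^2 -> R^3
g = [[1, 0, 0], [0, 1, 0]]     # g: R^3 -> R^2

print(gen_eig_dim(matmul(g, f), 1), gen_eig_dim(matmul(f, g), 1))
```

Both generalized eigenspaces at $\lambda = 1$ come out 2-dimensional, matching the claim, even though the ambient spaces have different dimensions.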
Yes, I had a similar proof. Taking $\lambda = 1$ for example, I observed that in the diagram you drew, $f g$ acts as an automorphism of $evKer(f g - 1)$, because there it’s of the form $1 + N$ where $N = f g - 1$ is nilpotent, and $1 + N$ has an inverse given by a finite geometric series $1 - N + N^2 - \ldots$. Similarly $g f$ acts as an automorphism of $evKer(g f - 1)$. Thus the two maps $f$ and $g$ between these subspaces are invertible. The case of general nonzero $\lambda$ follows from the case $\lambda = 1$ by the same trick I used before (replace $f$ by $f/\lambda$).
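The finite geometric series inverse is easy to verify directly (a sketch with a made-up nilpotent matrix): for strictly upper triangular $N$ of size 3, $N^3 = 0$, so $(1+N)^{-1} = 1 - N + N^2$ exactly.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

n = 3
I = [[int(i == j) for j in range(n)] for i in range(n)]
N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]   # strictly upper triangular, so N^3 = 0

N2 = matmul(N, N)
# (I + N)^{-1} = I - N + N^2: the geometric series terminates at N^2 here.
inv = [[I[i][j] - N[i][j] + N2[i][j] for j in range(n)] for i in range(n)]
I_plus_N = [[I[i][j] + N[i][j] for j in range(n)] for i in range(n)]

print(matmul(I_plus_N, inv))  # the identity matrix
```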
Strangely enough, I never had a course in linear algebra. So I may not know all the standard terminology. :-)
So the ‘algebraic’ multiplicity is just as geometric as the ‘geometric’ one!
@Toby: I want to rename it the dynamic multiplicity. The eventual kernel of an operator $T$ says something about the long-term behaviour of $T$ under iteration. For example, $evKer(T^r)$ is the same for all $r \geq 1$.
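The invariance under iteration can also be checked numerically (reusing a made-up example of mine): the eventual kernel of $T^r$, computed as $ker((T^r)^n)$, has the same dimension for $r = 1, 2, 3$.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mat_pow(T, r):
    P = T
    for _ in range(r - 1):
        P = matmul(P, T)
    return P

def ev_ker_dim(T):
    """dim of the eventual kernel, i.e. dim ker(T^n) for n = dim of the space."""
    n = len(T)
    return n - rank(mat_pow(T, n))

T = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 2]]   # same hypothetical operator as before

print([ev_ker_dim(mat_pow(T, r)) for r in (1, 2, 3)])
```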
I’ve never understood this usage of ’algebraic’ and ’geometric’, anyway.
I think that it’s just that the algebraic multiplicity appears in the characteristic polynomial (which is algebraic) while the geometric multiplicity appears as the dimension of a space (which is geometric). So what I learnt today is that the algebraic multiplicity is also the dimension of an interesting space.