I’m prepared to embarrass myself by writing about something that should have been clear to me a long time ago.

This is regarding something about the minimal polynomials of linear transformations that has always confused me. Let $T: V \to V$, where $V$ is an $n$-dimensional vector space. Let us also assume that $T$ has $n$ distinct eigenvectors, but the corresponding eigenvalues may not be distinct. If the distinct eigenvalues are $\lambda_1, \lambda_2, \dots, \lambda_k$ where $k \leq n$, it is then known that $(x - \lambda_1)(x - \lambda_2)\dots(x - \lambda_k)$ is the minimal polynomial of $T$.

We know that, as polynomials, $(x - \lambda_1)(x - \lambda_2)$ and $(x - \lambda_2)(x - \lambda_1)$ are the same (note that I’ve exchanged the places of $\lambda_1$ and $\lambda_2$). However, when we substitute $x = T$, are $(T - \lambda_1 I)(T - \lambda_2 I)$ and $(T - \lambda_2 I)(T - \lambda_1 I)$ also the same? Remember that matrix multiplication is in general not commutative. In fact, if for matrices $A$ and $B$ we have $AB = 0$, then it is not necessary that $BA = 0$ too.
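A quick numerical illustration of that last point (a sketch using NumPy; the two matrices are my own toy example, not from the book):

```python
import numpy as np

# Two matrices with AB = 0 but BA != 0.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

AB = A @ B  # the zero matrix
BA = B @ A  # NOT the zero matrix

print(AB)
print(BA)
```

Here `AB` is the zero matrix while `BA` has a 1 in its top-right entry, so the order of multiplication genuinely matters in general.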

An earlier exercise in the book “Linear Algebra” by Curtis says that for polynomials with $f(x) = g(x)h(x)$, we have $f(T) = g(T)h(T)$. Why is this? Because when we expand the product in either order, we ultimately get the same polynomial in $T$: for example, $(T - \lambda_1 I)(T - \lambda_2 I) = T^2 - (\lambda_1 + \lambda_2)T + \lambda_1\lambda_2 I = (T - \lambda_2 I)(T - \lambda_1 I)$. My mental block came from the fact that I was imagining $T - \lambda I$ to be a matrix which I didn’t know much about. I forgot that $T - \lambda I$ is a decomposition of a single matrix into two, and matrix multiplication, like the multiplication of complex numbers, is distributive. Hence everything works out as planned.
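To see this concretely, here is a small check that factors of the form $T - \lambda I$ commute with each other and expand, by distributivity, to the same polynomial in $T$ (my own illustration, assuming NumPy; the matrix and scalars are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # an arbitrary matrix
I = np.eye(3)
l1, l2 = 2.0, -5.0               # any two scalars

# Multiply the factors in both orders.
left  = (T - l1 * I) @ (T - l2 * I)
right = (T - l2 * I) @ (T - l1 * I)

# Distributivity expands either order to the same polynomial in T.
expanded = T @ T - (l1 + l2) * T + (l1 * l2) * I

print(np.allclose(left, right))     # True
print(np.allclose(left, expanded))  # True
```

The point is that both orderings expand to $T^2 - (\lambda_1 + \lambda_2)T + \lambda_1\lambda_2 I$, so substituting a matrix into the factored polynomial is unambiguous.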


In the last line of the second paragraph, “If the eigenvectors …”, you mean eigenvalues and not eigenvectors. There is another typo where you write instead of .

Thanks, Anurag! I’ve made the required changes.