
Nakayama’s lemma

Nakayama's lemma is present throughout Commutative Algebra. And truth be told, learning it is not easy. The proof contains a small trick that is deceptively simple, but throws off many people. It is also easy to dismiss this lemma as unimportant; as one surely finds out later, that would be an error in judgement. I am going to discuss this theorem and its proof in detail.

The statement of the theorem, as given in Matsumura, is:

Let I be an ideal in R, and M be a finitely generated module over R. If IM=M, then there exists r\in R such that r\equiv 1\mod I, and rM=0.

What does this statement even mean? Why is it so important? Why are the conditions given this way? Are these conditions necessary conditions? These are some questions that we can ask. We will try and discuss as many of them as we can.

M is presumably required to be finitely generated so that we can build a matrix out of its generators, and a matrix, by definition, has to be finite dimensional. Where the matrix comes in will become clear when we discuss the proof. What does IM=M imply? This is a highly unusual situation. For instance, if M=\Bbb{Z} and I=(2), then (2)\Bbb{Z}\neq\Bbb{Z}. Examples in which I\neq (1) and IM=M are harder to come by, but they do exist, as shown below. What does it mean for r\equiv 1\mod I? It just means that r=1+i for some i\in I. That was fairly simple! Now let's get on with the proof.
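
One small example (my own, not from Matsumura): take R=\Bbb{Z}, I=(2), and M=\Bbb{Z}/3\Bbb{Z}. Multiplication by 2 is invertible mod 3, so IM=M, and the element r promised by the lemma can be taken to be 3:

\[
r = 3 \equiv 1 \pmod{(2)}, \qquad 3\cdot\left(\Bbb{Z}/3\Bbb{Z}\right) = 0.
\]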

Let M be generated by the elements \{a_1,a_2,\dots,a_n\}. If IM=M, then for each generator a_i, we have a_i=b_{i1}a_1+b_{i2}a_2+\dots+b_{in}a_n, where all the b_{ij}\in I. We then have b_{i1}a_1+b_{i2}a_2+\dots+(b_{ii}-1)a_i+\dots+b_{in}a_n=0. Let us now create the matrix B-\mathrm{Id} of these n equations in the natural way, in which the rows are indexed by the i's and B=(b_{ij}); it kills the column vector of generators. The determinant of this matrix annihilates M: multiplying on the left by the adjugate matrix converts the n equations into \det(B-\mathrm{Id})\,a_j=0 for every j. (This is the small, deceptively simple trick mentioned above; the determinant need not be 0 in R, but it kills every generator.) On expanding this determinant, we get an expression of the form (-1)^n+i, where i\in I: every term of the expansion involves some b_{ij}\in I, except for the product of the -1's on the diagonal, which contributes (-1)^n. If n is odd, then just multiply the expression by -1. In either case, you get 1+i', where i'\in I (i'=i or i'=-i).
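
Written out, the matrix equation and the adjugate trick look like this:

\[
\begin{pmatrix}
b_{11}-1 & b_{12} & \cdots & b_{1n}\\
b_{21} & b_{22}-1 & \cdots & b_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
b_{n1} & b_{n2} & \cdots & b_{nn}-1
\end{pmatrix}
\begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_n \end{pmatrix}
=
\begin{pmatrix} 0\\ 0\\ \vdots\\ 0 \end{pmatrix},
\]

and multiplying on the left by \operatorname{adj}(B-\mathrm{Id}), using the identity \operatorname{adj}(X)\,X=\det(X)\,\mathrm{Id}, gives \det(B-\mathrm{Id})\,a_j=0 for each j.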

Now, as 1+i'=\pm\det(B-\mathrm{Id}) annihilates every generator, we have (1+i')M=0. Hence r=1+i' satisfies r\equiv 1\mod I and rM=0.

The reason why the proof is generally slightly confusing is that it is usually done in greater generality. It is first assumed that there exists a morphism \phi:M\to M such that \phi(M)\subset IM. Cayley-Hamilton is then used to produce a determinant identity in terms of \phi, and only afterwards is \phi specialized to 1 (the identity). Here I have directly assumed that \phi=1, which made matters much simpler.
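
For the record, the more general statement (I am quoting it from memory, so check Matsumura for the precise form) says that such a \phi satisfies an integral equation

\[
\phi^n + c_1\phi^{n-1} + \dots + c_n = 0, \qquad c_k\in I^k,
\]

as an endomorphism of M. Putting \phi=1 gives (1+c_1+\dots+c_n)M=0 with r=1+c_1+\dots+c_n\equiv 1\mod I, which is exactly the version proved above.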

Spectral Theorem

This post is on the Spectral Theorem. This is something that I should have been clear on a long time ago, but for reasons unknown to me, I was not. I hope to be able to rectify that now. The proof was discussed today in class. I am only recording my thoughts on it.

The spectral theorem states that a self-adjoint operator on an n-dimensional inner product space has n mutually orthogonal eigenvectors, and all of its n eigenvalues (counted with multiplicity) are real.

Let V be the n-dimensional vector space under consideration, and let a,b\in V. A self-adjoint operator is one that satisfies the following condition: \langle Ta,b\rangle=\langle a,Tb\rangle for all a,b\in V. If the inner product is defined in the conventional way in this setting, namely as the standard sesquilinear product on \Bbb{C}^n, then T has to be a Hermitian matrix. For concreteness, we're going to assume this inner product for the rest of the proof.
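
To spell this out, write the standard sesquilinear product as \langle x,y\rangle=y^{*}x. Then

\[
\langle Tx,y\rangle = y^{*}(Tx) = (T^{*}y)^{*}x = \langle x,T^{*}y\rangle,
\]

so \langle Tx,y\rangle=\langle x,Ty\rangle holds for all x,y exactly when T^{*}=T, i.e. when T is Hermitian.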

As we're working over \Bbb{C}, the characteristic polynomial of T has at least one root, so T has at least one eigenvalue, and consequently an eigenvector. Let Tv=\lambda v, where \lambda is the eigenvalue and v is the eigenvector. By self-adjointness, \langle Tv,v\rangle=\langle v,Tv\rangle. The left hand side is \langle \lambda v,v\rangle=\lambda\langle v,v\rangle, and the right hand side is \langle v,\lambda v\rangle=\overline{\lambda}\langle v,v\rangle. We know that \langle v,v\rangle\in\Bbb{R} (it is in fact greater than 0, as v\neq 0). Hence \lambda=\overline{\lambda}, which shows that \lambda is real valued.

How do we construct the basis of orthogonal eigenvectors though? We start with one eigenvector v. Now consider the orthogonal complement of v. Let this be A. We claim that T(A)\subset A. This is because for a\in A, \langle Ta,v\rangle=\langle a,Tv\rangle=\langle a,\lambda v\rangle=\overline{\lambda}\langle a,v\rangle=\lambda\langle a,v\rangle=0 (remember that \lambda=\overline{\lambda}). Hence, if we write T in terms of a new basis consisting of v and a basis of its orthogonal complement, then the first row and column will be all 0's except for the top left position, which will have \lambda.
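
In that basis, T has the block form (with T' denoting the restriction of T to A):

\[
T = \begin{pmatrix} \lambda & 0\\ 0 & T' \end{pmatrix},
\]

and T' is again self-adjoint, since for a,b\in A we have \langle T'a,b\rangle=\langle Ta,b\rangle=\langle a,Tb\rangle=\langle a,T'b\rangle.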

Now the action of T on the orthogonal complement A is the same as the action of the (n-1)\times(n-1) matrix obtained by deleting the first row and first column. This restricted operator is still self-adjoint, and its characteristic polynomial again has at least one root over \Bbb{C}, which ensures that we have an eigenvalue, and hence an eigenvector orthogonal to v, to work with. In this way, through an iterative process which ends after n iterations, we can generate n mutually orthogonal eigenvectors.
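
Normalizing the eigenvectors v_1,\dots,v_n obtained this way gives an orthonormal basis, and the theorem can be packaged as the familiar spectral decomposition (stated for the matrix setting we have been assuming):

\[
T = \sum_{k=1}^{n}\lambda_k\, v_k v_k^{*}, \qquad \lambda_k\in\Bbb{R},\quad \langle v_j,v_k\rangle=\delta_{jk}.
\]

Equivalently, T=U\Lambda U^{*}, where U=(v_1\ \cdots\ v_n) is unitary and \Lambda=\mathrm{diag}(\lambda_1,\dots,\lambda_n).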