

### Sheaf (Čech) Cohomology: A glimpse

This is a blog post on sheaf cohomology. We shall be following this article.

From the word cohomology, we can guess that we shall be talking about a complex of abelian groups connected by boundary operators. Let us specify what these abelian groups are.

Given an open cover $\mathcal{U}=(U_i)_{i\in I}$ of a topological space and a sheaf $\mathcal{F}$ on it, we define the $0^{th}$ cochain group $C^0(\mathcal{U}, \mathcal{F})=\prod_{i\in I}\mathcal{F}(U_i)$. Note that we are not assuming that the sections over the individual $U_i$‘s agree on the intersections: an element of $C^0(\mathcal{U}, \mathcal{F})$ is simply a tuple in which each coordinate is a section over one open set. We are interested in whether we can glue these sections together to get a global section, which is possible precisely when they agree on all pairwise intersections.

We now define $C^1(\mathcal{U}, \mathcal{F})=\prod_{i,j\in I}\mathcal{F}(U_i\cap U_j)$. Here we are considering tuples of sections defined on the pairwise intersections. Note that these intersections need not cover the whole topological space, so a $1$-cochain is no longer a candidate for gluing into a global section.

Similarly, we define $C^2(\mathcal{U}, \mathcal{F})=\prod_{i,j,k\in I}\mathcal{F}(U_i\cap U_j\cap U_k)$.

Now, we come to the boundary maps. $\delta: C^0(\mathcal{U}, \mathcal{F})\to C^1(\mathcal{U}, \mathcal{F})$ is defined in the following way: $\delta((f_i))=(g_{i,j})$, where $g_{i,j}= f_j|_{U_i\cap U_j}-f_i|_{U_i\cap U_j}$. What we’re doing is taking a tuple of sections and mapping it to another tuple: for each pair of indices $i,j$, we restrict $f_i$ and $f_j$ to the overlap $U_i\cap U_j$ and record their difference at the $(i,j)$ coordinate. Note that $\delta((f_i))=0$ precisely when the sections agree on all overlaps, i.e. precisely when they glue to a global section.

Now we define the second boundary map. $\delta: C^1(\mathcal{U}, \mathcal{F})\to C^2(\mathcal{U}, \mathcal{F})$ is defined in the following way: $\delta((f_{i,j}))=(g_{i,j,k})$, where $g_{i,j,k}= f_{j,k}|_{U_i\cap U_j\cap U_k}-f_{i,k}|_{U_i\cap U_j\cap U_k}+f_{i,j}|_{U_i\cap U_j\cap U_k}$. What does this seemingly arbitrary definition signify? The first thing to notice is that if $(f_{i,j})$ is the image of an element of $C^0(\mathcal{U}, \mathcal{F})$, say $f_{i,j}=f_j-f_i$ on each overlap, then $g_{i,j,k}=(f_k-f_j)-(f_k-f_i)+(f_j-f_i)=0$. Hence, at the very least, this definition of a boundary map gives us a complex on our hands. Maybe that is all that it signifies. We’re looking for definitions of $C^i(\mathcal{U},\mathcal{F})$ that keep giving us sections over smaller and smaller intersections, and definitions of $\delta$ on these $C^i(\mathcal{U},\mathcal{F})$ that satisfy $\delta\circ\delta=0$, i.e. that keep mapping images from $C^{i-1}(\mathcal{U},\mathcal{F})$ to $0$.
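The two boundary maps can be sanity-checked on a toy example. The sketch below (plain Python; a hypothetical cover by three open sets and the constant sheaf, so that sections are just numbers and all restriction maps are identities) verifies that $\delta\circ\delta=0$, using the sign convention $g_{i,j,k}=f_{j,k}-f_{i,k}+f_{i,j}$:

```python
# Toy check that the Cech boundary maps compose to zero.
# Hypothetical setup: a cover by 3 open sets and the constant sheaf,
# so sections are numbers and all restriction maps are identities.

def d0(f):
    """delta: C^0 -> C^1, with g_{ij} = f_j - f_i on the overlap."""
    n = len(f)
    return {(i, j): f[j] - f[i] for i in range(n) for j in range(n)}

def d1(g):
    """delta: C^1 -> C^2, with h_{ijk} = g_{jk} - g_{ik} + g_{ij}."""
    n = max(i for i, _ in g) + 1
    return {(i, j, k): g[(j, k)] - g[(i, k)] + g[(i, j)]
            for i in range(n) for j in range(n) for k in range(n)}

f = [1.0, 4.0, -2.0]  # a 0-cochain: one constant section per open set
h = d1(d0(f))
assert all(v == 0 for v in h.values())  # delta(delta(f)) = 0
```

Since every coordinate of $\delta(\delta(f))$ vanishes, the cochain groups and boundary maps really do form a complex.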

Predictably, $H^i(\mathcal{U},\mathcal{F})=Z^i(\mathcal{U},\mathcal{F})/B^i(\mathcal{U},\mathcal{F})$, where $Z^i(\mathcal{U},\mathcal{F})$ is the kernel of $\delta$ acting on $C^i(\mathcal{U},\mathcal{F})$ and $B^i(\mathcal{U},\mathcal{F})$ is the image of $\delta$ acting on $C^{i-1}(\mathcal{U},\mathcal{F})$. Sheaf cohomology measures the extent to which tuples of sections over an open cover fail to come from global sections. The longer the non-zero tail of the cohomology complex, the farther the sections of this sheaf lie from gluing together amicably. In other words, the length of the non-zero tail measures how “complex” the topological space and the sheaf on it are. However, there is still hope. By Grothendieck’s vanishing theorem, $H^i(X,\mathcal{F})=0$ for all $i$ greater than the dimension of the (noetherian) topological space $X$.

### Nakayama’s lemma

Nakayama’s lemma is present throughout commutative algebra. And truth be told, learning it is not easy. The proof contains a small trick that is deceptively simple, but throws off many people. It is also easy to dismiss this lemma as unimportant; as one surely finds out later, that would be an error in judgement. I am going to discuss this theorem and its proof in detail.

The statement of the theorem, as stated in Matsumura, is:

Let $R$ be a ring, $I$ an ideal of $R$, and $M$ a finitely generated $R$-module. If $IM=M$, then there exists $r\in R$ such that $r\equiv 1\mod I$ and $rM=0$.

What does this statement even mean? Why is it so important? Why are the conditions given this way? Are they necessary? These are some questions that we can ask. We will try to discuss as many of them as we can.

$M$ is required to be finitely generated so that the relations among its generators can be encoded in a finite matrix. Where the matrix comes in will become clear when we discuss the proof. What does $IM=M$ imply? This is a highly unusual situation. For instance, if $M=\Bbb{Z}$ and $I=(2)$, then $(2)\Bbb{Z}\neq\Bbb{Z}$. At first it is hard to think of examples with $I\neq (1)$ and $IM=M$, but they do exist: take $M=\Bbb{Z}/3\Bbb{Z}$ as a $\Bbb{Z}$-module and $I=(2)$; since $2$ is invertible mod $3$, we have $(2)M=M$. What does it mean for $r\equiv 1\mod I$? It just means that $r=1+i$ for some $i\in I$. That was fairly simple! Now let’s get on with the proof.

Let $M$ be generated by the elements $\{a_1,a_2,\dots,a_n\}$. If $IM=M$, then for each generator $a_i$, we have $a_i=b_{i1}a_1+b_{i2}a_2+\dots+b_{in}a_n$, where all the $b_{ij}\in I$. We then have $b_{i1}a_1+b_{i2}a_2+\dots+(b_{ii}-1)a_i+\dots+b_{in}a_n=0$. Let us write these $n$ equations in matrix form as $(B-\mathrm{Id})\mathbf{a}=0$, where $B=(b_{ij})$, $\mathrm{Id}$ is the $n\times n$ identity matrix, and $\mathbf{a}$ is the column vector of generators. Multiplying on the left by the adjugate matrix of $B-\mathrm{Id}$ gives $\det(B-\mathrm{Id})\,a_i=0$ for every $i$, so $\det(B-\mathrm{Id})$ annihilates $M$. On expanding this determinant, every term containing at least one $b_{ij}$ lies in $I$, so the determinant has the form $(-1)^n+i$, where $i\in I$. If $n$ is odd, just multiply the expression by $-1$. In either case, we get an element $1+i'$, where $i'\in I$ ($i'=i$ or $i'=-i$), that annihilates $M$.

Setting $r=1+i'$, we have $r\equiv 1\mod I$ and $rM=0$, as required.
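As a quick numerical sanity check of the determinant trick (a hypothetical concrete instance, using sympy): take $R=\Bbb{Z}$, $I=(2)$, and $M=(\Bbb{Z}/5\Bbb{Z})^2$ with the standard generators. Since $2$ is invertible mod $5$, $IM=M$; indeed each generator $e_i$ equals $6e_i$, with $6\in(2)$.

```python
from sympy import Matrix, eye

# Hypothetical instance: R = Z, I = (2), M = (Z/5Z)^2, generators e1, e2.
# Each generator satisfies e_i = 6*e_i, with 6 in I = (2).
B = Matrix([[6, 0], [0, 6]])   # the matrix (b_ij), entries in I

r = (B - eye(2)).det()         # det(B - Id) annihilates M
assert r % 2 == 1              # r = 1 + i' with i' in (2), i.e. r = 1 mod I
assert r % 5 == 0              # r kills every element of (Z/5Z)^2
print(r)                       # 25
```

Here $r=\det\begin{pmatrix}5&0\\0&5\end{pmatrix}=25$, which is indeed $\equiv 1\bmod (2)$ and annihilates $(\Bbb{Z}/5\Bbb{Z})^2$.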

The reason the proof is generally slightly confusing is that it is usually done in greater generality. One first assumes that there exists a morphism $\phi:M\to M$ such that $\phi(M)\subset IM$. The Cayley–Hamilton trick is then used to produce a determinant in terms of $\phi$, and finally one specialises to $\phi=\mathrm{id}$. Here I have directly taken $\phi=\mathrm{id}$, which makes matters much simpler.

### Algebraic Geometry 4: A short note on Projective Varieties

What is a variety? It is the set of common zeroes of a set of polynomials. For example, for the set of polynomials $\{x+y,x-y\}\subset\Bbb{R}[x,y]$, the variety is the single point $\{(0,0)\}$.
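One can check this tiny example with a computer algebra system; the sketch below assumes sympy:

```python
from sympy import symbols, solve

x, y = symbols("x y")

# Common zeroes of {x + y, x - y}: adding and subtracting the two
# polynomials forces x = 0 and y = 0, so the variety is the origin.
solutions = solve([x + y, x - y], [x, y], dict=True)
assert solutions == [{x: 0, y: 0}]
```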

Now what is a projective variety? Simply put, it is the common zero set of polynomials in a setting where each one-dimensional subspace (each line through the origin) is considered a single point. Hence, for the variety to be well-defined, if one nonzero point of a line through the origin lies on the variety, every point of that line has to lie on it; this is what happens when the defining polynomials are homogeneous. Confused?

Take the polynomial $x+y+z\in\Bbb{C}[x,y,z]$. The point $(1,-1,0)$ satisfies this polynomial. Now note that the points $\lambda(1,-1,0)$ also satisfy it for every $\lambda\in\Bbb{C}$; the same holds for every other zero, since the polynomial is homogeneous. Hence its zero set is a projective variety. Now take $x+y+z-1\in\Bbb{C}[x,y,z]$. Here $\lambda(1,0,0)$ satisfies the polynomial only for $\lambda=1$. Hence, its zero set is not a projective variety.
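The difference between the two polynomials comes down to homogeneity, which is easy to verify symbolically (sympy assumed; the variable names are mine):

```python
from sympy import symbols, simplify

x, y, z, lam = symbols("x y z lam")

homogeneous = x + y + z         # every zero stays a zero after scaling
inhomogeneous = x + y + z - 1   # zeroes are not preserved by scaling

# Scale the zero (1, -1, 0) of x + y + z by an arbitrary lam:
scaled = homogeneous.subs({x: lam, y: -lam, z: 0})
assert simplify(scaled) == 0    # vanishes for every lam

# Scale the zero (1, 0, 0) of x + y + z - 1 the same way:
scaled_bad = inhomogeneous.subs({x: lam, y: 0, z: 0})
assert simplify(scaled_bad) != 0  # lam - 1 vanishes only at lam = 1
```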

But why? Why would you want to consider a whole line as one point? When you watch the world from your little nest, every line running along your line of sight becomes a point. Hence, although it may be a line in “reality” (whatever this means), for you it is a point. This is the origin of projective geometry, although things have gotten slightly more complicated since then.