The Hurewicz Theorem

Here we talk about the Hurewicz theorem. Let X be a path connected CW complex with \pi_n(X)=0 for n\geq 2. Then X is determined up to homotopy equivalence by \pi_1(X).

What does “determined up to homotopy” mean? It means that all the spaces satisfying the condition above are homotopy equivalent to each other. And what does it mean for two spaces to be homotopy equivalent? Say A and B are homotopy equivalent. This means that there exist maps f:A\to B and g:B\to A such that g\circ f\simeq id_A and f\circ g\simeq id_B. Can we think of examples of spaces that are not homotopy equivalent? Yes: take a disconnected space and a connected space, for instance two disjoint discs and a single disc. A homotopy equivalence preserves the same “kind” of connectivity; two disjoint discs are homotopy equivalent to two disjoint discs (or to two points), but not to three disjoint discs.

Why do we need the Hurewicz theorem? Because it is often difficult to calculate the homotopy groups of a space, but much easier to calculate its homology groups. Hence, knowing that there is a map from the homotopy groups to the homology groups, we can discover properties of the homotopy groups that we couldn’t get at directly.

We state and prove the Hurewicz Theorem for the n=1 case. Let h:\pi_1(X,x_0)\to H_1(X) be defined by sending (the class of) a loop \gamma to \gamma. How is \gamma a member of the homology group? Because it is a map from [0,1], which is a 1-simplex, to X, and since \gamma(0)=\gamma(1)=x_0 its boundary is \gamma(1)-\gamma(0)=0, so it is a 1-cycle. Now for the map to be well-defined, if \gamma'\sim\gamma, then we need h(\gamma')=h(\gamma). Why is this true? We shall find out later.

Anyway, we have a map h:\pi_1(X,x_0)\to H_1(X), and H_1(X) is an abelian group. Hence, assuming h is a homomorphism, it induces a homomorphism h':\pi_1(X,x_0)_{ab}\to H_1(X), where \pi_1(X,x_0)_{ab} is just the group \pi_1(X,x_0) modulo its commutator subgroup. We have abelianized the fundamental group. The Hurewicz Theorem says that the map h' is an isomorphism. How do we see this? Note that so far we have not even proven that h is a homomorphism, or even a well-defined map.
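
Before the proof, a quick sanity check (an example I’m adding, not part of the original argument): for X=S^1 we have \pi_1(S^1)=\Bbb{Z}, which is already abelian, and H_1(S^1)=\Bbb{Z}, with h' the identity. For the figure eight X=S^1\vee S^1, \pi_1(X) is the free group F_2 on two generators, which is far from abelian; its abelianization is \Bbb{Z}^2, and indeed H_1(S^1\vee S^1)=\Bbb{Z}\oplus\Bbb{Z}. So the theorem matches the standard computations.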

We need to prove four things: that h is well defined, that it is a homomorphism, that it is surjective, and that its kernel is exactly the commutator subgroup. Together, these say that h' is an isomorphism.

1. Well-defined: Consider two homotopic loops \gamma\sim\gamma'. We need to prove that \gamma and \gamma' belong to the same homology class. There exists a map H:I\times I\to X such that H(0,t)=\gamma(t) and H(1,t)=\gamma'(t); since this is a homotopy of loops based at x_0, H is constant (equal to x_0) on the other two sides of the square. The solid square I\times I can be thought of as the union of two 2-simplices \sigma_1 and \sigma_2 glued along a diagonal. We’ll orient the boundary edges in a compatible fashion, and consider the restriction of H to \partial \sigma_1 and \partial \sigma_2.

On a completely different but related topic, note that the constant 1-simplex f_{x_0} (the map of a 1-simplex to the point x_0) is just the boundary of the constant 2-simplex at x_0: that boundary is f_{x_0}-f_{x_0}+f_{x_0}=f_{x_0}. So constant 1-simplices are boundaries. Coming back to the above argument, we note that H(\partial\sigma_1-\partial\sigma_2) works out to \gamma'-\gamma, possibly together with some constant 1-simplices f_{x_0} (the exact terms depend on how the triangles are oriented); the restriction of H to the diagonal edge appears in both boundaries and cancels.

So what exactly is happening here? How do we know that \gamma'-\gamma is a boundary, i.e. lies in the image of \partial:C_2(X)\to C_1(X)? Because \gamma'-\gamma equals H(\partial\sigma_1)-H(\partial\sigma_2) up to constant 1-simplices, and H(\partial\sigma_1), H(\partial\sigma_2) and f_{x_0} are all boundaries of singular 2-chains. Hence \gamma and \gamma' represent the same class in H_1(X), and we’ve proved that h is well-defined.
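
To make the bookkeeping concrete, here is one particular choice of triangles (my own unpacking; other orientation choices leave surviving f_{x_0} terms, which are boundaries anyway). Label the corners of the square v_0=(0,0), v_1=(1,0), v_2=(0,1), v_3=(1,1), and take \sigma_1=[v_0v_1v_3] and \sigma_2=[v_0v_2v_3], which share the diagonal d from v_0 to v_3. Since H is a homotopy of loops, it is constant on the top and bottom edges, so \partial(H\circ\sigma_1)=\gamma'-H(d)+f_{x_0} and \partial(H\circ\sigma_2)=f_{x_0}-H(d)+\gamma. Subtracting, the diagonal and constant terms cancel, and \partial(H\circ\sigma_1-H\circ\sigma_2)=\gamma'-\gamma, exhibiting \gamma'-\gamma as a boundary.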

2. Homomorphism: Let [\gamma] and [\delta] be elements of the fundamental group. Consider the concatenation \gamma*\delta:I\to X. Moreover, let \Delta^2 be [v_0v_1v_2] and let \sigma:\Delta^2\to X be a singular 2-simplex chosen as follows: on the [v_0v_2] edge, the restriction of \sigma is \gamma*\delta, while on the edges [v_0v_1] and [v_1v_2] the restrictions of \sigma are \gamma and \delta respectively.

First we need to note that reparametrizing a path does not change its homotopy class. Why’s that? The homotopy lies in continuously deforming one parametrization of the interval to the other, and then composing with the path (say f). This homotopy is clear. Why does reparametrization come up? One way to build the \sigma above is to compose the orthogonal projection of \Delta^2 onto the edge [v_0v_2] with \gamma*\delta; the restrictions of \sigma to [v_0v_1] and [v_1v_2] are then reparametrizations of \gamma and \delta, which is good enough, since homotopic loops are homologous by step 1. With this \sigma, \partial\sigma is \gamma-\gamma*\delta+\delta (up to these reparametrizations).

Anyway, we have h([\gamma])+h([\delta])-\partial\sigma=\gamma+\delta-\partial\sigma=\gamma*\delta=h([\gamma][\delta]) at the level of chains, so in H_1(X) we get h([\gamma][\delta])=h([\gamma])+h([\delta]). Hence, h is a homomorphism.
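
Written out with the edge conventions above (my reading of the construction), \partial\sigma=\sigma|_{[v_1v_2]}-\sigma|_{[v_0v_2]}+\sigma|_{[v_0v_1]}=\delta-\gamma*\delta+\gamma, which is exactly the statement that \gamma*\delta and \gamma+\delta differ by a boundary. As a sanity check, take X=S^1 and let \gamma=\delta be the standard degree one loop: \gamma*\delta is the degree two loop, and in H_1(S^1)\cong\Bbb{Z} it is indeed 1+1=2.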

3. Surjectivity: Let us take a 1-cycle \sigma=\sum_i\sigma_i (here it is possible that \sigma_i=\sigma_j for i\neq j, which lets us assume every coefficient is \pm 1). Since \partial\sigma=0, the endpoints of the \sigma_i cancel in pairs, so we can rewrite this sum, up to homology, as a sum of loops by concatenating the non-loop simplices whose endpoints match. These loops need not be based at x_0, and they may be far apart from each other; we deal with that next.
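
As a toy example of this regrouping (my own illustration): if \sigma_1 is a path from a to b and \sigma_2 is a path from b back to a, then \sigma_1+\sigma_2 is a 1-cycle, since \partial(\sigma_1+\sigma_2)=(b-a)+(a-b)=0, and by the boundary argument of step 2 it is homologous to the single loop \sigma_1*\sigma_2.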

Now let us deal with loops that are not based at x_0. Let \gamma_i be a path from x_0 to the base point of \sigma_i (such a path exists because X is path-connected). Then \gamma_i\sigma_i\overline{\gamma_i} is homologous to \gamma_i+\sigma_i+\overline{\gamma_i}. What does this mean? It means that their difference is a boundary. Why are they homologous? This is essentially the computation from step 2, applied to composable paths instead of loops: the concatenation of composable paths is homologous to their sum. Now \overline{\gamma_i} is homologous to -\gamma_i, because \gamma_i+\overline{\gamma_i} is homologous to \gamma_i*\overline{\gamma_i}, which is homotopic to the constant loop, itself a boundary. Hence \sigma_i is homologous to \gamma_i\sigma_i\overline{\gamma_i}, a loop based at x_0. By doing this, we can base all our loops at x_0. Now any sum of loops based at x_0 is in the image of h, since it is h of the product of the corresponding classes in \pi_1(X,x_0). Hence, we’ve proven that h is surjective.
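
Putting the pieces together in one line (my own summary): once the cycle has been regrouped into loops \sigma_i, we have in H_1(X) that [\sigma]=\sum_i[\sigma_i]=\sum_i[\gamma_i\sigma_i\overline{\gamma_i}]=h\left(\prod_i[\gamma_i\sigma_i\overline{\gamma_i}]\right), where the middle equality uses \overline{\gamma_i}\sim-\gamma_i and the last equality is the homomorphism property of h.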

4. Kernel: We need to prove that the kernel of h is the commutator subgroup. We can see why a commutator \gamma\gamma'\overline{\gamma}\overline{\gamma'} belongs to the kernel: h([\gamma][\gamma'][\overline{\gamma}][\overline{\gamma'}]) is homologous to \gamma+\gamma'-\gamma-\gamma'=0, using again that \overline{\gamma} is homologous to -\gamma. Now we need to prove the harder inclusion: that the kernel is a subset of the commutator subgroup.
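
For a concrete picture of the kernel (my example again): in the figure eight S^1\vee S^1, the commutator aba^{-1}b^{-1} of the two generating loops is a nontrivial element of \pi_1=F_2, yet it maps to a+b-a-b=0 in H_1\cong\Bbb{Z}^2. So h can have a large kernel even while being surjective, and the theorem says that this kernel is always exactly the commutator subgroup.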

I’m not typing up the inclusion of the kernel in the commutator subgroup, but it can be found here.

A foray into Algebraic Combinatorics

I’m trying to understand this paper by Alexander Postnikov. This post is mainly a summary of some of the concepts that I do not yet understand. Here are some examples.

  • Grassmannian- A Grassmannian G(r,V) of a vector space V is a space that parametrizes all the r dimensional subspaces of V. For instance, G(1,\Bbb{C}^n) would be \Bbb{P}^{n-1}. Why do we need Grassmannians? Because we want a continuous, and hopefully smooth, way to parametrize the r dimensional subspaces of V. An example comes from the tangent spaces of a real m-manifold M. Suppose, for easy visualization, that M is embedded in some \Bbb{R}^N with N bigger than m, so that each tangent space (translated to the origin) is an m dimensional subspace of \Bbb{R}^N; note that the tangent space at any point of a manifold has dimension equal to the dimension of the manifold. Then the map \phi which sends x\in M to its tangent space is a map \phi:M\to G(m,\Bbb{R}^N). What we’re doing here is mapping each x\in M to the point of the Grassmannian that parametrizes the tangent space at x. In general this map need not be surjective. But since, as x changes slightly, the tangent space at x also changes only slightly, we get a feel for why this map should be continuous.
  • Plücker coordinates- This is a way to assign six homogeneous coordinates to each line in \Bbb{P}^3. How does one go about doing this, and why is it useful? A brilliant explanation is given on the Wikipedia page for Plücker coordinates. Say we take a line in \Bbb{R}^3. It is uniquely determined by two points on it (say x and y). However, is it uniquely determined by the vector x-y between those two points? No: this vector can be translated and placed anywhere. Hence, we need both the vector between the two points and some indication of where the line lies with respect to the origin. One such indication is the cross product x\times y (the “moment” of the line). Its direction is normal to the plane containing 0, x and y, and its magnitude, divided by |x-y|, gives the distance of the line from the origin. Hence, we get six coordinates: three for the direction x-y, and three for the moment x\times y. Do these six coordinates uniquely describe the line? Yes: the direction of x\times y specifies the plane through the origin containing the line, x-y gives the line’s direction within that plane, and the magnitude of x\times y pins down how far from the origin the line sits. (They determine the line itself, though not the particular points x and y on it.) A small worked example follows right after this list.
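
Here is the promised worked example of the geometric picture (my own numbers). Take the line in \Bbb{R}^3 through x=(1,0,0) and y=(1,1,0), i.e. the line parallel to the y-axis at distance 1 from the origin. Then x-y=(0,-1,0) records the direction, and x\times y=(0,0,1) records the moment: its direction is normal to the plane z=0 containing 0, x and y, and its magnitude 1 equals |x-y| times the distance (1) of the line from the origin.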

Now we shall talk a little about the formal definition of Plücker coordinates. In \Bbb{P}^3, let (x_0,x_1,x_2,x_3) and (y_0,y_1,y_2,y_3) be the homogeneous coordinates of two distinct points on a line. Let p_{ij}=\begin{vmatrix} x_i&y_i\\ x_j&y_j\end{vmatrix}.

There are {4\choose 2}=6 ways of selecting two elements from \{0,1,2,3\}. Why do we need i\neq j? Because if i=j, then p_{ij}=0, since the second row would just be the same as the first row. Also, p_{ij}=-p_{ji}, because we’d be exchanging the two rows. Hence, there are only {4\choose 2} independent coordinates here. This matches the assertion that we need just 6 homogeneous coordinates to specify a line in \Bbb{P}^3.
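
As a quick worked example (my own): take the two points [1:0:0:0] and [0:1:0:0] in \Bbb{P}^3, which span the line \{[s:t:0:0]\}. Then p_{01}=\begin{vmatrix}1&0\\0&1\end{vmatrix}=1, and every other p_{ij} vanishes, so the Plücker coordinates of this line are (p_{01},p_{02},p_{03},p_{12},p_{13},p_{23})=(1,0,0,0,0,0).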

  • Matroid- A matroid is a structure that generalizes linear independence in vector spaces. More formally, it is a pair (E,I), where E is a finite set, and I is a collection of subsets of E, the “independent” sets. The first property is that \emptyset is an independent set. Secondly, if A is independent and A'\subset A, then A' is independent too. This is called the hereditary property. Third, if A and B are independent sets, and A contains more elements than B, then there is an element of A, not already in B, that can be added to B to give a larger independent set. This is called the exchange property.

The first two properties carry over smoothly from our intuition of linearly independent sets of vectors. The third property seems strange, but on a little thinking becomes clear. Think of the two independent sets \{i\} and \{i+j,i-j\} in \Bbb{R}^2. We can add either of i+j or i-j to \{i\} to create a larger linearly independent set. However, what if the smaller set were contained within the bigger set, i.e. what if the two sets were \{i\} and \{i,j\}? We could still add j to \{i\} to create a bigger linearly independent set. On a little experimentation, you will be able to convince yourself that this is a natural property of linearly independent sets of vectors.
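
To make the definition concrete, here is a small example (mine, not from the paper). Take E=\{i,\ j,\ i+j\}\subset\Bbb{R}^2 and let I consist of all subsets of E that are linearly independent: the empty set, the three singletons, and the three two-element subsets. The only dependent subset is E itself, since i+j is the sum of i and j. The exchange property can be checked directly: for instance, with B=\{i+j\} and A=\{i,j\}, we can add either i or j to B and stay independent.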

Now we discuss some more properties of matroids. A subset that is not independent is called, you guessed correctly, a dependent set. A maximal independent set, one that becomes dependent on the addition of any element outside of it, is called a basis. A minimal dependent set, which becomes independent on the removal of any element, is called a circuit. Does a basis, on addition of an element, become a circuit? I don’t know. But I intend to find out.