Exterior Algebra and Differential Forms I

This is going to be a post about exterior algebra and differential forms. I have studied these concepts multiple times in the past, and feel that I have an idea of what’s going on. However, it would be good to iron out the kinks, of which there are many, once and for all.

For a vector space V, a p-tensor is a multilinear function from V^p\to\Bbb{R} (or maybe \Bbb{C}, depending upon the context). For example, a 1-tensor is a linear functional. The determinant of an n\times n matrix, viewed as a function of its n columns, is a famous example of an n-tensor. Here, the vector space V has to be n-dimensional too. The space of p-tensors is denoted \mathfrak{J}^p(V^*). For p=1 this is just V^*, the space of linear functionals on the vector space.
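To see the determinant as a multilinear function, take n=2 and feed it its columns v and w:

\det(v,w)=\det\begin{pmatrix} v_1 & w_1\\ v_2 & w_2\end{pmatrix}=v_1w_2-v_2w_1.

This is linear in v when w is held fixed, and vice versa, so it is a 2-tensor. It is in fact alternating, which will matter below.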

Let \{\phi_1,\dots,\phi_k\} be a basis for V^*. Then the p-tensors \{\phi_{i_1}\otimes\dots\otimes \phi_{i_p}:1\leq i_1,\dots,i_p\leq k\} form a basis for \mathfrak{J}^p(V^*). Consequently, \dim \mathfrak{J}^p(V^*)=k^p. Why is this? Why should every p-tensor be a linear combination of tensor products of 1-tensors?
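Here the tensor product of 1-tensors is defined pointwise:

(\phi_{i_1}\otimes\dots\otimes\phi_{i_p})(v_1,\dots,v_p)=\phi_{i_1}(v_1)\phi_{i_2}(v_2)\dots\phi_{i_p}(v_p).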

Vectors here are n-dimensional, if we assume V to be an n-dimensional vector space (so k=n above). We’re taking a p-tuple of these n-dimensional vectors. Does this tuple have to consist of basis vectors of the vector space? Or can it contain any vectors in the vector space? It can contain any vectors; but by multilinearity, a p-tensor is completely determined by its values on the ordered p-tuples of basis vectors. The basis vectors may be arranged in any order whatsoever, and may repeat, so we want to be able to determine the image of every p-tuple of basis vectors, arranged in any order whatsoever. Then we’ll have uniquely determined the multilinear map, the p-tensor. How does this generate a basis? We just want to be able to express that a certain p-tuple goes to a certain value. That is it. So we construct a basis by designing tensors such that a given tuple of basis vectors is non-zero on only one of them: the product \phi_{i_1}\otimes\dots\otimes\phi_{i_p} takes the value 1 on the tuple (e_{i_1},\dots,e_{i_p}) of basis vectors and 0 on every other tuple of basis vectors, and then you multiply that component by the appropriate scalar. Every possible ordered tuple of the basis vectors needs to be assigned a value, not just the basis vectors themselves. This is how we cover all possibilities. Basically, the indices of the basis tuple and of the 1-tensors in the product have to be exactly the same.
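Written out, this is the expansion behind the claim, where e_1,\dots,e_n is the basis of V dual to \phi_1,\dots,\phi_n:

T=\sum_{1\leq i_1,\dots,i_p\leq n} T(e_{i_1},\dots,e_{i_p})\,\phi_{i_1}\otimes\dots\otimes\phi_{i_p}.

Both sides are multilinear and agree on every tuple of basis vectors, so they agree everywhere.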

Now let us define an alternating tensor from a regular tensor. An alternating p-tensor is one for which applying a permutation \sigma to the p-tuple that the tensor is acting on multiplies the value of the tensor by the sign (-1)^\sigma. In general, the action of a p-tensor on a tuple has no relation to its action on a permutation of the tuple; hence, this is a special kind of tensor. Any tensor can be mapped to an alternating tensor. This is done by taking all permutations of the p-tuple you’re given, and adding up the actions of the tensor on these permuted tuples, each multiplied by the sign of the permutation. But shouldn’t this recipe be universal for all p-tensors? It is. There is no fixed or “first” initial configuration of the input: you break any p-tuple up the same way, into all possible permutations. The input comes later, yes, but the input is treated the same way it always is.
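Concretely, this is the operator \text{Alt}, defined by

\text{Alt}(T)(v_1,\dots,v_p)=\frac{1}{p!}\sum_{\sigma\in S_p}(-1)^\sigma\, T(v_{\sigma(1)},\dots,v_{\sigma(p)}).

The division by p! makes \text{Alt} act as the identity on tensors that are already alternating.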

Now let us think about the tensor product of two alternating forms, a p-tensor and a q-tensor, say. The tensor product itself need not be alternating, so we apply \text{Alt} to it; this is what gives the wedge product. Why the division by p! and q!? So that we can eliminate needless repetition: every permutation that only shuffles the first p slots among themselves, or the last q slots among themselves, has already been accounted for by the two alternating factors. The division seemed needless at first, but we need to be consistent about how we count the permutations of the (p+q)-tuple.
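Conventions differ here; one standard choice (with \text{Alt} normalized by 1/(p+q)! as above) is

\omega\wedge\theta=\frac{(p+q)!}{p!\,q!}\,\text{Alt}(\omega\otimes\theta).

For two 1-tensors this gives (\phi\wedge\psi)(v,w)=\phi(v)\psi(w)-\phi(w)\psi(v).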

The space of p-tensors that are also alternating forms is denoted \bigwedge^p(V^*). Its dimension is, predictably, {n\choose p}. Given any p-tuple of basis elements of V, we can permute it so that the indices are in “increasing order”, at the cost of a sign; so the value of an alternating tensor on any tuple of basis vectors is determined by its values on the tuples with strictly increasing indices, and there are {n\choose p} of those.
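A basis, then, consists of the wedge products with strictly increasing indices:

\{\phi_{i_1}\wedge\dots\wedge\phi_{i_p} : 1\leq i_1<i_2<\dots<i_p\leq n\},

which is exactly why \dim\bigwedge^p(V^*)={n\choose p}.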

Now we shall talk about p-forms, which are just a specific case of differential forms. Let X be a smooth manifold with or without boundary. A p-form on X is a function \omega that assigns to each point x\in X an alternating p-tensor \omega(x) on the tangent space of X at x; \omega(x)\in \bigwedge^p[T_x(X)^*]. It’s just an alternating p-tensor at each point! It has a basis that is smaller in cardinality than the basis of the space of general p-tensors, and it can be constructed from wedge products of the basis elements of the dual space. What makes it so intimidating? The layers of new machinery. Think of a bunch of honey traps that will take care of all the pieces, and give you exactly what you want. The honey traps are the basis elements of the differential form, and the “pieces” are the input vectors split in terms of the basis vectors. All these operations are happening on vectors of the tangent space T_x(X) at the point x.
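As a concrete example of my own: on X=\Bbb{R}^3 with coordinates x_1,x_2,x_3,

\omega = x_1\, dx_2\wedge dx_3

is a 2-form; at each point it eats two tangent vectors v,w and, with the wedge convention above, returns x_1(v_2w_3-v_3w_2).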

What are 0-forms? They don’t take in any vectors: a 0-form assigns to each point x an element of \bigwedge^0[T_x(X)^*]=\Bbb{R}, i.e. just a number. This implies that 0-forms are just real-valued functions on X.

What about 1-forms? They take in one vector from the tangent space, and map it linearly to the codomain field (which is generally \Bbb{R}). It turns out many examples of 1-forms can be manufactured from smooth functions. If \phi:X\to \Bbb{R} is a smooth function, where X is the smooth manifold, then d\phi_x: T_x(X)\to\Bbb{R} is a linear map at each point x. Thus the assignment x\mapsto d\phi_x defines a 1-form d\phi on X.
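For instance (a standard computation, not one from the text I’m following): take X=\Bbb{R}^2 and \phi(x,y)=x^2+y. Then

d\phi_{(x,y)} = 2x\,dx + dy,

so at the point (3,0) the 1-form d\phi sends the tangent vector (v_1,v_2) to 6v_1+v_2.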

This discussion on differential forms will be continued in the next post.

Sweating out the homology

This is going to be a rather long post on homology. I hope I do manage to understand it. It will ultimately go up in a polished form on my blog. The reason why it is difficult to understand homology and cohomology without typing it all out is that the information given is so little: one has to construe so much from relatively dry language. I think that is where writing things out helps tremendously.

If two spaces X and Y are homotopy equivalent, then their homology groups are isomorphic. If the homology groups differ, then the spaces cannot be homotopy equivalent, and in particular cannot be homeomorphic.

A chain map f:C_*\to D_* is a collection of maps f_n: C_n\to D_n between the chain groups, one for each n. Hence, a chain map encodes information about an infinite number of maps between those chain groups. Also, these maps commute with the boundary maps. What does it really mean for a map to commute? It means that the maps \partial and f literally commute: \partial\circ f_n=f_{n-1}\circ\partial on C_n, for all n. Hence the name commutative diagrams. This is honestly the first time that I have thought of this, in spite of having read about commutative diagrams all this bloody while. Maybe typing does have its benefits.
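Here is the square that commutes, for each n:

\begin{array}{ccc} C_n & \xrightarrow{\ \partial_n\ } & C_{n-1}\\ f_n\downarrow & & \downarrow f_{n-1}\\ D_n & \xrightarrow{\ \partial_n\ } & D_{n-1} \end{array}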

What kinds of maps take cycles to cycles and boundaries to boundaries? Chain maps definitely do, precisely because the commuting structure is present throughout, above and below. Could other kinds of maps have similar properties? Possibly. But in such a commuting structure we always get a natural map between homology groups, and it is only because of the commutativity that the map is well defined; i.e. that boundaries in C_* go to boundaries in D_*.
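The check is a one-liner in each case. If z\in C_n is a cycle, \partial z=0, then

\partial f_n(z)=f_{n-1}(\partial z)=0,

so f_n(z) is a cycle; and if b=\partial c is a boundary, with c\in C_n, then

f_{n-1}(b)=f_{n-1}(\partial c)=\partial f_n(c),

so f_{n-1}(b) is a boundary.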

We now construct a functor: for a chain map f: C_*\to D_*, the functor H_n gives a map H_n(f): H_n(C_*)\to H_n(D_*). As far as the mapping of objects goes, H_n sends the chain complex C_* to its nth homology group H_n(C_*). Do we have to have an infinite number of functors to be able to successfully create all maps between the images of chains? Yes, in the sense that we need a separate functor H_n for each n. H_n is known as the nth homology functor.
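Spelled out, with \partial the boundary maps:

H_n(C_*)=\ker\partial_n/\text{im}\,\partial_{n+1}, \qquad H_n(f)([z])=[f_n(z)].

The previous paragraph is exactly what makes H_n(f) well defined: f_n takes cycles to cycles, so [f_n(z)] makes sense, and it takes boundaries to boundaries, so the class [f_n(z)] does not depend on the chosen representative z.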

Now we talk about chain homotopic maps. Two chain maps f,g: C_*\to D_* are chain homotopic if there exist maps h_n: C_n\to D_{n+1} such that f_n-g_n=h_{n-1}\partial_n+\partial_{n+1}h_n. What does it mean for two maps to be chain homotopic? And why should such h_n‘s exist? In understanding this, this answer came in handy. Essentially, two chain homotopic maps induce the same maps between homology groups. Why is that? This is because f(z)-g(z), for z\in\ker\partial_n, belongs to B_n(D_*). Hence, f and g induce the same maps between the homology groups.
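The computation: if \partial_n z=0, then

f_n(z)-g_n(z)=h_{n-1}(\partial_n z)+\partial_{n+1}(h_n(z))=\partial_{n+1}(h_n(z))\in B_n(D_*),

so [f_n(z)]=[g_n(z)] in H_n(D_*).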

Just a quick note: the kernels and the cokernels in the snake lemma and related lemmas are all with respect to the boundary maps, and not the chain maps. Why is that? Because the chain maps just act like connecting and commuting linkages; the main action happens with the sets attached to the boundary maps, like the kernels and the cokernels.

Now I shall study the Snake Lemma. With this lemma, there is one thing that has always confused me: how is the connecting map well-defined? What if the difference of two choices goes to 0? What does all this mean? OK. First of all, in whichever direction we go in the process of diagram chasing, we might or might not have a well-defined choice. We always have to check for well-definedness, and checking is easier in some instances than in others. In this case, everything works out because the lower left map is injective and the whole diagram is commutative. Sorry, this is not a completely rigorous or complete explanation, but it has made me understand something that I was at a loss to understand for far too long.

Now we shall talk about the long exact homology sequence. Say you have a short exact sequence of chain complexes 0\to C_*\to D_*\to E_*\to 0. How is the long exact homology sequence induced? The main issue here is the construction of the connecting map H_n(E_*)\to H_{n-1}(C_*). This is done by applying the Snake Lemma to the diagram below.
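As I understand the construction, the relevant diagram has the two short exact sequences in degrees n and n-1 as rows, and the boundary maps as the vertical arrows:

\begin{array}{ccccccccc} 0 & \to & C_n & \to & D_n & \to & E_n & \to & 0\\ & & \partial\downarrow & & \partial\downarrow & & \partial\downarrow & & \\ 0 & \to & C_{n-1} & \to & D_{n-1} & \to & E_{n-1} & \to & 0 \end{array}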