### A foray into Algebraic Combinatorics

I’m trying to understand this paper by Alexander Postnikov. This post is mainly a summary of some of the concepts that I did not understand at first, along with some examples.

- Grassmannian- The Grassmannian $Gr(k, V)$ of a vector space $V$ is a space that parametrizes all the $k$-dimensional subspaces of $V$. For instance, $Gr(1, \mathbb{R}^2)$ would be $\mathbb{RP}^1$, the space of lines through the origin in the plane. Why do we need Grassmannians? Because we need a continuous, and hopefully smooth, way to characterize the $k$-dimensional subspaces of $V$. An example would be the tangent spaces on a real $n$-manifold $M$. The map which maps $p \in M$ to the tangent space at $p$ is $p \mapsto T_pM$. Some interesting things to note here. First, the tangent space of any manifold has a dimension that is equal to the dimension of the manifold. This much we know. Let us assume for easy visualization that the space $\mathbb{R}^N$ in which the manifold and its tangent spaces have been embedded has a dimension $N$ bigger than $n$. What we’re doing here is that we’re mapping each $p \in M$ to the parameter in $Gr(n, \mathbb{R}^N)$ corresponding to the tangent space at $p$. In general this map may not even be surjective. Because as $p$ changes slightly, so does the parameter corresponding to the tangent space at that point, we have a feel for why this map may be continuous.
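One concrete way to get a feel for how the Grassmannian parametrizes subspaces is to represent a $k$-dimensional subspace of $\mathbb{R}^n$ by a $k \times n$ matrix whose rows span it. Row-reducing to reduced row echelon form gives a canonical representative: two matrices span the same subspace exactly when their RREFs agree. Here is a minimal sketch in Python (the `rref` helper is my own, not from the paper):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals.

    Two k x n matrices span the same subspace of R^n iff they have the
    same RREF, so the RREF is a canonical 'coordinate' for a point of
    the Grassmannian Gr(k, R^n).
    """
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == len(m):
            break
    return m

# Two different bases for the same plane in R^3:
basis1 = [[1, 0, 1], [0, 1, 1]]
basis2 = [[1, 1, 2], [1, -1, 0]]   # sum and difference of the rows above
print(rref(basis1) == rref(basis2))  # True: the same point of Gr(2, R^3)
```

This is the "matrix modulo row operations" picture of the Grassmannian, which is also the picture Postnikov's paper works with.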

- Plücker coordinates- This is a way to assign six homogeneous coordinates to each line in projective $3$-space $\mathbb{P}^3$. How does one go about doing this, and why is it useful? A brilliant explanation is given on the Wikipedia page for Plücker coordinates. Say we take a line in $\mathbb{R}^3$. It is uniquely determined by two points on it (say $a$ and $b$). However, is it uniquely determined by the vector $d = b - a$ between those two points? No. This vector can be translated and placed anywhere. Hence, we need both the vector between the two points and some sort of an indication as to where this vector lies with respect to the origin $O$. One such indication would be the cross product $m = a \times b$ of the two points $a$ and $b$. The direction of $m$ gives the orientation of the plane containing $O$, $a$ and $b$, and the magnitude of $m$ encodes how far from the origin $a$ and $b$ lie. Hence, we need six coordinates- three for the vector $d = b - a$, and three for $m = a \times b$. Will these six coordinates uniquely describe the line? Yes. The direction of $m$ specifies the plane in which the three points $O$, $a$ and $b$ lie, the magnitude of $m$ specifies how far from the origin, within that plane, the line lies, and the vector $d$ then pins down the direction of the line within that plane.
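The $(d, m)$ description above is easy to test numerically. A small sketch (the helper names are mine): given two points $a, b$ on a line, take $d = b - a$ and $m = a \times b$; choosing two *different* points on the same line only rescales the pair $(d, m)$, which is why the six numbers are homogeneous coordinates.

```python
def cross(u, v):
    """Cross product of two vectors in R^3."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def plucker(a, b):
    """Six homogeneous coordinates (d, m) of the line through a and b."""
    d = tuple(bi - ai for ai, bi in zip(a, b))
    m = cross(a, b)   # the 'moment' of the line about the origin
    return d + m

def proportional(p, q):
    """True if p and q differ by a scalar factor (all cross-ratios agree)."""
    return all(p[i]*q[j] == p[j]*q[i] for i in range(6) for j in range(6))

a, b = (1, 2, 3), (4, 5, 7)
d = tuple(bi - ai for ai, bi in zip(a, b))       # direction (3, 3, 4)
# two other points on the same line: a + 2d and a + 5d
a2 = tuple(ai + 2*di for ai, di in zip(a, d))
b2 = tuple(ai + 5*di for ai, di in zip(a, d))
print(proportional(plucker(a, b), plucker(a2, b2)))  # True
```

A short calculation shows why: if $a' = a + sd$ and $b' = a + td$, then $b' - a' = (t-s)d$ and $a' \times b' = (t-s)(a \times d) = (t-s)(a \times b)$, so the whole six-tuple is scaled by $t - s$.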

Now we shall talk a little about the formal definition of Plücker coordinates. In $\mathbb{P}^3$, let $x = (x_0, x_1, x_2, x_3)$ and $y = (y_0, y_1, y_2, y_3)$ be the homogeneous coordinates of two points on a line. Let $p_{ij} = x_i y_j - x_j y_i$, the determinant of the $2 \times 2$ matrix with rows $(x_i, y_i)$ and $(x_j, y_j)$.

There are $\binom{4}{2} = 6$ ways of selecting two elements from $\{0, 1, 2, 3\}$. Why do we need $i \neq j$? Because if $i = j$, then $p_{ii} = 0$. This is because the second row of the determinant would just be the same as the first row. Also, $p_{ij} = -p_{ji}$. This is because we’d be exchanging the two rows. Hence, there are only $6$ independent coordinates here. This ratifies the assertion that we need just $6$ homogeneous coordinates to specify a line in $\mathbb{P}^3$.
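These properties can be verified directly. A short sketch (the names are mine), taking two points of $\mathbb{P}^3$ in homogeneous coordinates and forming all the $p_{ij}$:

```python
from itertools import combinations

def p(x, y, i, j):
    """The 2x2 minor p_ij = x_i*y_j - x_j*y_i."""
    return x[i]*y[j] - x[j]*y[i]

# homogeneous coordinates of two (arbitrary, illustrative) points in P^3
x = (1, 2, 3, 4)
y = (0, 1, 1, 2)

# p_ii = 0: the 2x2 determinant has two equal rows
print(all(p(x, y, i, i) == 0 for i in range(4)))           # True
# p_ij = -p_ji: swapping the two rows flips the sign
print(all(p(x, y, i, j) == -p(x, y, j, i)
          for i in range(4) for j in range(4)))            # True
# so only the pairs i < j are needed: C(4, 2) = 6 coordinates
print(len(list(combinations(range(4), 2))))                # 6
```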

- Matroid- A matroid is a structure that generalizes linear independence in vector spaces. More formally, it is a pair $(E, \mathcal{I})$, where $E$ is a finite set, and $\mathcal{I}$ is a set of subsets of $E$ which are “linearly independent”. The first property is that the empty set $\emptyset$ is a linearly independent set. Secondly, if $B$ is linearly independent, and $A \subseteq B$, then $A$ is linearly independent too. This is called the hereditary property. Third, if $A$ and $B$ are linearly independent sets, and if $A$ contains more elements than $B$, then there is an element in $A \setminus B$ that can be added to $B$ to give a larger linearly independent set than $B$. This is called the exchange property.

The first two properties of linearly independent sets carry over smoothly from our intuition of what linearly independent sets are. The third property seems strange, but on a little thinking becomes clear. Think of the two independent sets $\{(1, 0, 0)\}$ and $\{(0, 1, 0), (0, 0, 1)\}$ in $\mathbb{R}^3$. We can add either of $(0, 1, 0)$ or $(0, 0, 1)$ to $\{(1, 0, 0)\}$ to create a larger linearly independent set. However, what if the smaller set was contained within the bigger set; i.e. what if the two sets were $\{(1, 0, 0)\}$ and $\{(1, 0, 0), (0, 1, 0)\}$? We could still find $(0, 1, 0)$ to add to $\{(1, 0, 0)\}$ to create a bigger linearly independent set. On a little experimentation, you will be able to convince yourself that this is a natural property of linearly independent sets in vector spaces.
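The three axioms can be checked mechanically for the matroid of a small set of vectors. Below is a brute-force sketch (the helper names are mine); independence of a subset is tested by asking whether the rank of the chosen vectors equals their number.

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Rank of a list of vectors, via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pr = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            if m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f*b for a, b in zip(m[i], m[r])]
        r += 1
    return r

E = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0)]   # ground set of vectors
indep = lambda S: rank([E[i] for i in S]) == len(S)
subsets = [frozenset(c) for n in range(len(E) + 1)
           for c in combinations(range(len(E)), n)]
I = [S for S in subsets if indep(S)]

# Axiom 1: the empty set is independent
print(frozenset() in I)                                            # True
# Axiom 2 (hereditary): every subset of an independent set is independent
print(all(T in I for S in I for T in subsets if T <= S))           # True
# Axiom 3 (exchange): if |B| > |S|, some x in B - S extends S
print(all(any(S | {x} in I for x in B - S)
          for S in I for B in I if len(B) > len(S)))               # True
```

Running the same check on an arbitrary collection of vectors will always print three `True`s; that is exactly the statement that linearly independent subsets of a vector space form a matroid.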

Now we discuss some more properties of matroids. A subset that is not independent is called, you guessed correctly, a dependent set. A maximal independent set, one that becomes dependent on the addition of any element outside of it, is called a basis. A minimal dependent set, which becomes independent on the removal of any element, is called a circuit. Does a basis, on addition of an element, become a circuit? I don’t know. But I intend to find out.
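These definitions can be explored by brute force on a small linear matroid. The sketch below (helper names are mine) lists the bases and circuits of four vectors in $\mathbb{R}^2$, which is one way to experiment with the closing question.

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Rank of a list of vectors, via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pr = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            if m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f*b for a, b in zip(m[i], m[r])]
        r += 1
    return r

E = [(1, 0), (0, 1), (1, 1), (2, 0)]   # four vectors in R^2; 0 and 3 are parallel
ground = range(len(E))
indep = lambda S: rank([E[i] for i in S]) == len(S)
subsets = [set(c) for n in range(len(E) + 1)
           for c in combinations(ground, n)]

# basis: maximal independent set; circuit: minimal dependent set
bases = [S for S in subsets if indep(S)
         and all(not indep(S | {x}) for x in ground if x not in S)]
circuits = [S for S in subsets if not indep(S)
            and all(indep(S - {x}) for x in S)]

print(sorted(map(sorted, bases)))     # every basis has 2 elements
print(sorted(map(sorted, circuits)))  # includes {0, 3}: the parallel pair
```

For instance, $\{0, 1\}$ is a basis here, and adding element $3$ gives the dependent set $\{0, 1, 3\}$, which is *not* a circuit (it properly contains the circuit $\{0, 3\}$). So, at least in this example, a basis plus an element need not be a circuit; it merely contains one.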