Fruits of procrastination

Month: February, 2014

Ah. Primes, irreducible elements, and Number Theory

Proving that primes are irreducible in any integral domain is simple.

Assume that a prime p is not irreducible. Then p=ab, where neither a nor b is a unit. Clearly p|p, so p|ab. Since p is prime, it divides one of the factors; without loss of generality, let p|a. Then a=pc for some c\in R. Now we have p=pcb, and cancelling p (which is valid in an integral domain) gives cb=1. This makes b a unit, a contradiction.

But proving that every irreducible is also a prime requires more specialized conditions. Say a is an irreducible such that a|bc, and assume a\nmid b. If Euclidean division is valid in the ring, then (a,b)=R. Likewise, if every ideal is principal (that is, the ring is a PID), then (a,b)=(d) for some d; as d|a and a is irreducible, d is either a unit or an associate of a, and the latter would force a|b. So d is a unit and (a,b)=R.

This might not be an exhaustive list of conditions under which (a,b)=R when a and b are coprime, but it is all we have right now.
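That some condition really is needed can be seen in Z[\sqrt{-5}], the classic example of a domain where irreducibles fail to be prime. Here is a small sanity check in Python (my own illustration, not from any library; the `divides` helper is hypothetical): the element 2 is irreducible there, yet it divides 6=(1+\sqrt{-5})(1-\sqrt{-5}) without dividing either factor.

```python
# Elements of Z[sqrt(-5)] are pairs (p, q) representing p + q*sqrt(-5).
# (a + b*s)(p + q*s) = (ap - 5bq) + (aq + bp)s, since s^2 = -5.

def divides(d, x):
    """Does d divide x in Z[sqrt(-5)]?  Solve d*z = x for integer z
    via Cramer's rule on the 2x2 linear system."""
    a, b = d
    c, e = x
    det = a * a + 5 * b * b          # the norm of d; zero only if d == 0
    if det == 0:
        return x == (0, 0)
    p_num = a * c + 5 * b * e        # numerator of p in Cramer's rule
    q_num = a * e - b * c            # numerator of q
    return p_num % det == 0 and q_num % det == 0

two = (2, 0)
u = (1, 1)    # 1 + sqrt(-5)
v = (1, -1)   # 1 - sqrt(-5)
# u * v = 6, and 2 | 6, but 2 divides neither factor:
print(divides(two, (6, 0)))  # True
print(divides(two, u))       # False
print(divides(two, v))       # False
```

So 2 is not prime in Z[\sqrt{-5}] even though it is irreducible; Z[\sqrt{-5}] is of course neither Euclidean nor a PID.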

Anyway, if (a,b)=R and a|bc, then ax+by=1 for some x,y\in R. Multiplying both sides by c gives acx+bcy=c. Since a divides both acx and bcy (recall a|bc), we get a|c; equivalently, (c)\subset (a). This is valid for both PIDs and Euclidean domains. Note that the commutativity of the domain makes all this possible.
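The Bezout identity ax+by=1 in the argument above can be computed concretely in Z with the extended Euclidean algorithm; a minimal sketch (the numbers 7 and 15 are my own example):

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
def extended_gcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

# a = 7 is prime in Z and does not divide b = 15, so (7, 15) = Z:
g, x, y = extended_gcd(7, 15)
print(g, x, y)   # 1 -2 1, since 7*(-2) + 15*1 = 1
# Multiplying the identity by c: c = 7*(c*x) + (15*c)*y,
# so if 7 | 15*c then 7 divides both terms, hence 7 | c.
```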

Something that has long confused me is the condition for injectivity of a homomorphism: the kernel should contain only the identity element.

I used to think that maybe this condition for injectivity applies to all mappings, and wondered why I hadn’t come across this earlier.

No. This condition applies only to homomorphisms f:(G,*)\to (H,\cdot) due to their special property: f(a*b)=f(a)\cdot f(b). Indeed, f(a)=f(b) if and only if f(a*b^{-1})=e_H, so a trivial kernel forces a=b. This condition would no longer be valid if the mapping were defined by, say, f(a*b)=c\cdot f(a)\cdot f(b), where c\neq e_H.
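The kernel criterion can be checked on small cyclic groups; a quick sketch (the maps f and g below are my own examples, both genuine homomorphisms of additive groups):

```python
# For a group homomorphism, trivial kernel <=> injective.
def kernel(f, domain, identity=0):
    return [g for g in domain if f(g) == identity]

def is_injective(f, domain):
    return len({f(g) for g in domain}) == len(domain)

Z12 = range(12)
f = lambda x: x % 4          # homomorphism Z_12 -> Z_4
print(kernel(f, Z12))        # [0, 4, 8] -- nontrivial kernel
print(is_injective(f, Z12))  # False

Z4 = range(4)
g = lambda x: (2 * x) % 8    # homomorphism Z_4 -> Z_8
print(kernel(g, Z4))         # [0] -- trivial kernel
print(is_injective(g, Z4))   # True
```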

Note to self.

f^{-1}(f(U)) may not be equal to U. Topology.
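A quick concrete instance (my own example, not from the post): with f(x)=x^2 on the integers, the preimage of the image strictly contains the original set whenever f is not injective on it.

```python
X = range(-5, 6)
f = lambda x: x * x

U = {1, 2}
fU = {f(x) for x in U}                    # {1, 4}
preimage = {x for x in X if f(x) in fU}   # f^{-1}(f(U))
print(sorted(preimage))                   # [-2, -1, 1, 2]
print(preimage == U)                      # False: U is a proper subset
```

Equality f^{-1}(f(U))=U for all U is exactly injectivity of f.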

Gauss’s lemma (polynomials)

I have long interpreted Gauss’s lemma to mean that if a polynomial with integral coefficients has a rational root, that root has to be an integer.

This is incorrect.

For example, take the polynomial (2x-1)(x^2+1)=2x^3-x^2+2x-1. It has integral coefficients. However, it does not have an integral root. It has a rational root, namely \frac{1}{2}. (The interpretation above is correct only for monic polynomials, where the leading coefficient is 1.)

Gauss’s lemma just states that if a polynomial with integral coefficients can be factored into polynomials with rational coefficients, then it can be factored into polynomials with integral coefficients (of the same degrees).

The rational roots theorem simply follows from this.

The proof has been omitted, but it is quite simple.
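The rational roots theorem says any rational root p/q (in lowest terms) of an integer polynomial has p dividing the constant term and q dividing the leading coefficient. A small sketch testing this on the example above (`rational_roots` is my own helper, assuming a nonzero constant term):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """coeffs[i] is the integer coefficient of x^i; constant term nonzero."""
    const, lead = coeffs[0], coeffs[-1]
    candidates = {Fraction(s * p, q)
                  for p in divisors(const) for q in divisors(lead)
                  for s in (1, -1)}
    value = lambda x: sum(c * x ** i for i, c in enumerate(coeffs))
    return sorted(x for x in candidates if value(x) == 0)

# (2x - 1)(x^2 + 1) = 2x^3 - x^2 + 2x - 1, coefficients listed from x^0 up:
print(rational_roots([-1, 2, -1, 2]))  # [Fraction(1, 2)]
```

The candidates here are ±1 and ±1/2 (p|1, q|2), and only 1/2 survives; for a monic polynomial q|1 forces every rational root to be an integer, recovering the interpretation above.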

A generalisation of the Gram-Schmidt orthogonalisation process

I just read up about the Gram-Schmidt orthogonalization process.

Say we have \{b_1,b_2,\dots,b_r\} as an orthonormal basis for a subspace. Now let w=a-(a,b_1)b_1-(a,b_2)b_2-\ldots-(a,b_r)b_r for ANY a\in X, where X is the vector space. Provided w\neq 0, the set \{b_1,b_2,\dots,b_r,\frac{w}{\|w\|}\} is orthonormal.

a needn’t be a vector from X\setminus S, where S is the subspace spanned by \{b_1,b_2,\dots,b_r\}. (If a\in S, though, then w=0 and there is nothing to normalise.)

We require a to be from X\setminus S only when we want the enlarged orthonormal set to span a strictly larger subspace.
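The extension step above can be checked numerically; a small sketch using NumPy (the helper `extend_orthonormal` is my own, not a library function):

```python
import numpy as np

def extend_orthonormal(basis, a):
    """basis: list of orthonormal vectors; a: any vector.
    Subtract the projections of a onto the basis and normalise."""
    w = a - sum(np.dot(a, b) * b for b in basis)
    norm = np.linalg.norm(w)
    if norm < 1e-12:          # a was (numerically) in the span already
        return basis
    return basis + [w / norm]

b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
a = np.array([1.0, 2.0, 3.0])

extended = extend_orthonormal([b1, b2], a)
# Gram matrix of the extended set should be the identity:
G = np.array([[np.dot(u, v) for v in extended] for u in extended])
print(np.allclose(G, np.eye(3)))  # True
```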

Small note

One might have wondered why B(X,Y) contains only bounded linear operators, and not linear operators of any and every kind. This has a very specific reason. Unless we fix x\in X, we cannot construct a Cauchy sequence \{T_1,T_2,\dots\} of linear operators. And we do not want to fix x\in X, as we want to define the limit \lim\limits_{n\to\infty} T_n x for every x\in X.

Now if we allow ourselves to take all x\in X, then knowing that \|T_i(\alpha x)\|=|\alpha|\,\|T_i(x)\|, we could make the values \|T_i(x)\| of any sequence of nonzero linear operators blow up just by scaling x. Hence, we need to work with scale-invariant quantities of the kind \|T_i(x)\|/\|x\|, i.e. with the operator norm \sup_{x\neq 0}\|T_i(x)\|/\|x\|. This requires that we use bounded linear operators, for which this supremum is finite.
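The scale-invariance of \|Tx\|/\|x\| and its domination by the operator norm can be seen in finite dimensions, where every linear operator is bounded; a small sketch with a random matrix (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # a linear operator on R^3
x = rng.standard_normal(3)

# The ratio ||T x|| / ||x|| is unchanged when x is scaled:
ratio = np.linalg.norm(T @ x) / np.linalg.norm(x)
ratio_scaled = np.linalg.norm(T @ (5 * x)) / np.linalg.norm(5 * x)
print(np.isclose(ratio, ratio_scaled))   # True

# The operator norm (largest singular value) dominates every such ratio:
op_norm = np.linalg.norm(T, ord=2)
print(ratio <= op_norm + 1e-12)          # True
```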

The strange difference between “divergent sequences” in real analysis and abstract algebra

I have been working on Commutative Algebra. A lot of the initial proofs that I’ve come across use Zorn’s lemma. The statement of Zorn’s lemma is simple enough (which I have blogged about before):

Suppose a partially ordered set P has the property that every chain (i.e. totally ordered subset) has an upper bound in P. Then the set P contains at least one maximal element.

I came across the proof of the fact that every field has an algebraic closure, which also uses Zorn’s lemma. The argument was: keep adjoining field extensions, generating a chain F_1\leq F_2\leq \dots (ad infinitum). Now \bigcup_{i=1}^{\infty} F_i is also a field (or rather can be made into a field), and it is an upper bound of the chain — its “limit”. Since every such chain of algebraic extensions has an upper bound, Zorn’s lemma gives a maximal element of the partially ordered set of algebraic extensions, and one then argues that this maximal element is the algebraic closure.

An analogy in real analysis would be: take a divergent sequence. Can we say anything about its limit except for the fact that it is \infty (or -\infty)? There’s really not that much to say. Say we have two divergent sequences \{n\} and \{n^2\} for n\in\Bbb{N}. All we can say is \{n^2\} approaches \infty faster than \{n\}. Nothing else.

But here, we’ve taken something like a divergent sequence, and said something intelligent about it. This has to do with the fact that the limit of the divergent sequence is still a field, and satisfies all the axioms of a field, while the limit of a divergent sequence in real analysis does not act like a real number by any stretch of the imagination. This is a rather strange fact, and should be noted for a full appreciation of the argument.

Also, we did not know right away that the limit of the sequence of field extensions would be a field. We had to make a minor argument that the limit is exactly the union of all fields in the sequence.

So ya.