
I’ve always found the construction of the quotient field of a domain very arduous and time-consuming. Most people get lost somewhere in the proof. I am going to try and make it more transparent.

What are we trying to do? We’re trying to convert a ring into a field (please forgive my language). How are we doing it? We’re creating fractions out of ring elements. Hence, consider the set of ordered pairs $(a,b)$, where $a,b\in R$ and $b\neq 0$. Here, $R$ is the ring.

Why do we need $R$ to be an integral domain? Let us break this down:

1. We (sort of) know that $(a,b)=\frac{a}{b}$. Now consider $\frac{a}{b}\times\frac{c}{d}=\frac{ac}{bd}$. By definition, $b,d\neq 0$. However, what if $bd=0$? Hence we can’t have zero-divisors.

2. We are making equivalence classes of elements, where $(a,b)\equiv (c,d)$ if $ad=bc$. This relation is transitive only if multiplication in the ring is commutative. Hence, we need commutativity. Try constructing a counter-example in a non-commutative ring, such as a ring of $2\times 2$ matrices.

3. We also need $1\in R$, as a field must contain a multiplicative identity.

A ring with the three properties listed above is an integral domain. Hence, we need an integral domain.

Now just define addition and multiplication, and ensure that the operations are well-defined.

The thing that is tricky about this proof (for beginners) is that the construction of equivalence classes and the verification of well-definedness is not intuitive. People generally just want to define addition and multiplication in the obvious way, and be done with it. As long as one remembers that checking whether the operations are well-defined is important, one is fine. Note that if $a\equiv b$ and $c\equiv d$, but $a+c\not\equiv b+d$, then the operation is nonsensical: the same element, written in two different ways, would have two different sums. To make it clearer for the beginner, this is equivalent to saying $x+y\neq x+y$.
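To make the construction concrete, here is a minimal Python sketch (the function names are my own) of unreduced fractions over $\Bbb{Z}$, with the equivalence $(a,b)\equiv(c,d)\iff ad=bc$ and the usual operations. The final check is exactly the well-definedness issue above: adding equivalent inputs must give equivalent outputs.

```python
# Sketch of the quotient-field construction over the integers.
# A "fraction" is an ordered pair (a, b) with b != 0; we never reduce,
# so equality must be tested through the equivalence relation.

def equiv(p, q):
    """(a, b) ~ (c, d) iff a*d == b*c."""
    (a, b), (c, d) = p, q
    return a * d == b * c

def add(p, q):
    """(a, b) + (c, d) = (a*d + b*c, b*d); b*d != 0 since Z has no zero divisors."""
    (a, b), (c, d) = p, q
    return (a * d + b * c, b * d)

def mul(p, q):
    """(a, b) * (c, d) = (a*c, b*d)."""
    (a, b), (c, d) = p, q
    return (a * c, b * d)

# Well-definedness check: (1, 2) ~ (2, 4), so adding either to (1, 3)
# must give equivalent (though not identical) results.
s1 = add((1, 2), (1, 3))   # (5, 6)
s2 = add((2, 4), (1, 3))   # (10, 12)
assert s1 != s2 and equiv(s1, s2)
```

Of course, a handful of examples is not a proof; the point is only that the two representatives give *equivalent* pairs, not equal ones, which is why the equivalence relation cannot be skipped.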

I have been part of the VSRP program for about 10 days. I have solved some problems from Atiyah-Macdonald and some from a couple of other books. I have also brushed up a little on topology and other parts of algebra.

In spite of having solved problems on prime ideals and the Zariski topology, there is nothing that I am taking away from doing these exercises. I only have a vague picture of the underlying mathematical structure, but nothing much more. Hence, unless I am really intellectually handicapped, I feel the practice (followed by most places across the world) of teaching Mathematics by solving problems is inherently flawed. The focus, I feel, is more on somehow being able to solve the problem, and then moving on to the next one. There is little reward in generalizing the problem and thinking about the motivation behind it: the generality that the problem is hiding in the form of a “trick”. I don’t think I am a big fan of this method.

I think being able to describe a mathematical structure in words is an important component of the learning process. Hence, I will continue blogging.

Something that has confused me for a long time is the condition for injectivity of a homomorphism. The condition is that the kernel should contain just the identity element.

I used to think that maybe this condition for injectivity applies to all mappings, and wondered why I hadn’t come across this earlier.

No. This condition applies only to homomorphisms $f:(G,*)\to (H,.)$ due to their special property: $f(a*b)=f(a).f(b)$. This condition would no longer be valid if the mapping were defined so that $f(a*b)=c.f(a).f(b)$, where $c\neq e_H$.
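For completeness, here is the standard one-line argument, which uses exactly the homomorphism property above at the crucial step:

```latex
% Trivial kernel implies injectivity, for a homomorphism f: (G,*) -> (H,.)
\begin{aligned}
f(a) = f(b)
  &\implies f(a).f(b)^{-1} = e_H \\
  &\implies f(a).f(b^{-1}) = e_H   && \text{(homomorphisms preserve inverses)} \\
  &\implies f(a*b^{-1}) = e_H      && \text{(the property } f(x*y)=f(x).f(y)\text{)} \\
  &\implies a*b^{-1} \in \ker f = \{e_G\} \implies a = b.
\end{aligned}
```

Without the property $f(a*b)=f(a).f(b)$, the third implication fails, and the kernel tells us nothing about injectivity.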

### Integral domains and characteristics

Today we shall talk about the characteristic of an integral domain, concentrating mainly on misconceptions and important points.

An integral domain is a commutative ring with unity with the property that if $a\neq 0$ and $b\neq 0$, then $ab\neq 0$. Equivalently, if $ab=0$, then $a=0$ or $b=0$ (or both).

The characteristic of an integral domain is the lowest positive integer $c$ such that $\underbrace{1+1+\dots +1}_{\text{c times}}=0$. If no such integer exists, the characteristic is defined to be $0$.

Let $a\in R$. Then $\underbrace{a+a+\dots +a}_{\text{c times}}=a\underbrace{(1+1+\dots +1)}_{\text{c times}}=a\cdot 0=0$.

If $\underbrace{a+a+\dots +a}_{\text{d times}}=0$ for some positive integer $d<c$, then we have $a\underbrace{(1+1+\dots +1)}_{\text{d times}}=0$. This is obvious for $a=0$. If $a\neq 0$, then, as there are no zero-divisors, this implies $\underbrace{1+1+\dots +1}_{\text{d times}}=0$, which contradicts the fact that $c$ is the lowest positive integer such that $1$ added $c$ times to itself is equal to $0$. Hence, if $c$ is the characteristic of the integral domain $D$, then it is the lowest positive integer such that any non-zero member of $D$, added $c$ times to itself, gives $0$. No non-zero member of $D$ can be added a smaller positive number of times to itself to give $0$.

Sometimes $\underbrace{a+a+\dots +a}_{\text{c times}}$ is written as $ca$. One should remember that this has nothing to do with the multiplication operator in the ring. In other words, this does not imply that $\underbrace{a+a+\dots +a}_{\text{c times}}=c\cdot a$, where $c$ is a member of the domain. In fact, $c$ does NOT have to be a member of the domain. It is just an arbitrary positive integer.

Now on to an important point: something that is not emphasized, but should be. For any positive integers $m$ and $n$, we have

$\underbrace{\underbrace{a+a+\dots +a}_{\text{m times}}+\underbrace{a+a+\dots +a}_{\text{m times}}+\dots +\underbrace{a+a+\dots +a}_{\text{m times}}}_{\text{n times}}=\underbrace{(a+a+\dots +a)}_{\text{m times}}(\underbrace{1+1+\dots +1}_{\text{n times}})$.

Now use this knowledge to prove that the characteristic of an integral domain is either $0$ or a prime.
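As a sanity check, here is a small Python sketch (my own, not from the text) that computes the characteristic of $\Bbb{Z}/n\Bbb{Z}$ by literal repeated addition of $1$, and searches for zero-divisors. Composite $n$ always yields zero-divisors, which is exactly why an integral domain forces a nonzero characteristic to be prime.

```python
def characteristic(n):
    """Smallest c > 0 with 1 + 1 + ... + 1 (c times) == 0 in Z/nZ."""
    total, c = 0, 0
    while True:
        total = (total + 1) % n
        c += 1
        if total == 0:
            return c

def zero_divisors(n):
    """All pairs of nonzero a, b in Z/nZ with a*b == 0."""
    return [(a, b) for a in range(1, n) for b in range(1, n) if (a * b) % n == 0]

assert characteristic(7) == 7
assert zero_divisors(7) == []          # Z/7Z is an integral domain
assert characteristic(6) == 6
assert len(zero_divisors(6)) > 0       # 2 * 3 == 0 in Z/6Z, so not a domain
```

The exercise is to turn this observation into a proof: if $c=mn$ with $1<m,n<c$, the identity above factors $0$ as a product of two nonzero elements.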

Today we will discuss compactness in the metric setting. Why metric? Because metric spaces lend themselves more easily to visualisation than other spaces.

Let us imagine a metric space $X$ with points scattered all over it. If we can find an infinite number of points that are pairwise at least some fixed distance $\epsilon>0$ apart, and hence construct disjoint open balls of radius $\epsilon/2$ centred on them, then $X$ cannot be compact: such a sequence of points has no convergent subsequence.

Hence what does it mean to be compact in a metric setting?

Compactness implies that an infinite number of points can’t all be ‘far’ away from each other. There can only be a finite number of “clumps” of points such that each neighbourhood, however small, contains an infinite number of such “clumped-together” points. So should you peer at one clump through a microscope, however strongly you magnify it, you will not see discrete points. You will see an impossibly dense patch that remains a solid, continuous clump of points.

Today I plan to write a treatise on $\ell_p^n$ spaces. An $\ell_p^n$ space is $\Bbb{R}^n$ equipped with the $p$-norm $\|\cdot\|_p$.

Say we have the $\ell_p^n$ space over $\Bbb{R}^n$. This just means that $\|x\|_p=\left( |x_1|^p + |x_2|^p+\dots +|x_n|^p\right)^{\frac{1}{p}}$, where $x\in \Bbb{R}^n$. That $\|\cdot\|_p$ is a norm is proved using standard arguments (the triangle inequality being Minkowski’s inequality, which is non-trivial).

Now we have a metric in $\ell_p^n$ spaces: $d(x,y)=\|x-y\|=\left( |x_1-y_1|^p + |x_2-y_2|^p+\dots +|x_n-y_n|^p\right)^{\frac{1}{p}}$.

Now we prove that every $\ell_p^n$ space is complete. Say we have a Cauchy sequence $\{x_1,x_2,x_3,\dots\}$ of vectors, where $x_i=(x_i^{(1)},x_i^{(2)},\dots,x_i^{(n)})$. This means that for every $\epsilon>0$, there exists an $N\in\Bbb{N}$ such that for $i,j>N$, $\|x_i-x_j\|_p<\epsilon$. This implies that for each coordinate $e\in\{1,2,\dots,n\}$, $|x_i^{(e)}-x_j^{(e)}|<\epsilon$. As $\Bbb{R}$ is complete, each coordinate sequence has a limit. Using standard arguments from here, we can prove that $\ell_p^n$ spaces are complete.

$\ell_p^\infty$ spaces are also complete.
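The $p$-norm and its induced metric translate directly into code. The following Python sketch (function names are mine) computes both, and checks the two facts used above: the $p=2$ case recovers Euclidean distance, and the triangle (Minkowski) inequality holds on a sample pair.

```python
def p_norm(x, p):
    """||x||_p = (sum |x_i|^p)^(1/p) on R^n."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def p_dist(x, y, p):
    """Metric induced by the p-norm: d(x, y) = ||x - y||_p."""
    return p_norm([xi - yi for xi, yi in zip(x, y)], p)

# p = 2 recovers the familiar Euclidean distance.
assert abs(p_dist((0, 0), (3, 4), 2) - 5.0) < 1e-12

# Minkowski's inequality (the triangle inequality for the p-norm), e.g. p = 3.
x, y = (1.0, 2.0, -1.0), (0.5, -1.0, 2.0)
assert p_norm([a + b for a, b in zip(x, y)], 3) <= p_norm(x, 3) + p_norm(y, 3)
```

A numerical check is not a proof of Minkowski's inequality, of course, but it makes the statement tangible for specific vectors.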

Let $f:X\to Y$ be a mapping. We will prove that $f^{-1}(Y-f(X-W))\subseteq W$, with equality when $f$ is injective. Note that $f$ does not have to be closed, open, or even continuous for this to be true. It can be any mapping.

Let $W\subseteq X$. The image of $W$ in $Y$ is $f(W)$. As for $f(X-W)$, it may overlap with $f(W)$ if the mapping is not injective. Hence, $Y-f(X-W)\subseteq f(W)$.

> Taking $f^{-1}$ on both sides, we get $f^{-1}(Y-f(X-W))\subseteq W$.

How can we take the inverse on both sides and determine this fact? Is the reasoning valid? Yes. All the points in $X$ that map into $Y-f(X-W)$ lie in $W$. However, there may be some points in $W$ that do not map into $Y-f(X-W)$.

Are there other analogous facts about mappings in general? In $Y$, select two sets $A$ and $B$ such that $A\subseteq B$. Then $f^{-1}(A)\subseteq f^{-1}(B)$.
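These containments are easy to test on finite sets. The following Python sketch (the particular sets and maps are arbitrary choices of mine) checks $f^{-1}(Y-f(X-W))\subseteq W$ for a non-injective map, where the containment is strict, and for an injective map, where it becomes equality.

```python
def preimage(f, X, A):
    """f^{-1}(A) = {x in X : f(x) in A}, with f given as a dict."""
    return {x for x in X if f[x] in A}

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c', 'd'}
W = {1, 2}

# Non-injective map: 2 and 3 collide, so the containment is strict.
f = {1: 'a', 2: 'b', 3: 'b', 4: 'c'}
lhs = preimage(f, X, Y - {f[x] for x in X - W})
assert lhs <= W and lhs != W   # lhs == {1}: the point 2 is lost because f(2) is also in f(X - W)

# Injective map: the containment becomes equality.
g = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
assert preimage(g, X, Y - {g[x] for x in X - W}) == W
```

The failing point in the non-injective case shows exactly the mechanism described above: a point of $W$ whose image also lies in $f(X-W)$.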

Today, I will discuss this research paper by Javed Ali, Professor of Topology and Analysis, BITS Pilani.

What exactly is a proximinal set? It is a subset $K$ of a metric space $X$ such that for any $x\in X$, you can find the nearest point(s) to it in $K$. More formally, for each $x\in X$, $\exists y\in K$ such that $d(x,y)=\inf \{d(x,k):k\in K\}=d(x,K)$.

This article says a Banach space is a complete vector space with a norm. One might find the difference between a complete metric space and a complete vector space to be minimal. However, the crucial difference here is that not every metric space is a vector space. For example, let $(X,d)$ be a metric space satisfying the relevant axioms; for $x,y\in X$, the sum $x+y$ need not even be defined. However, if $X$ is a vector space, then $x+y\in X$. Hence, every normed vector space is a metric space if one defines $d(x,y)=\|x-y\|$, but the converse is not necessarily true.

What is a convex set? This article says a convex set is one in which the line segment joining any two points in the set lies entirely inside the set. But how can a set containing points contain a line? Essentially, the convex property implies that every point the line segment passes through is contained within the convex set. Convexity introduces a geometrical flavor to Banach spaces. It is difficult to imagine what such a line segment would be in a Banach space of matrices (with a suitable matrix norm).

What is a uniformly convex space? This paper defines it thus: for all $\epsilon$ with $0<\epsilon\leq 2$, $\exists \delta(\epsilon)>0$ such that for $\|x\|=\|y\|=1$ and $\|x-y\|\geq \epsilon$, $\left\|\frac{x+y}{2}\right\|\leq 1-\delta(\epsilon)$. Rearranging, we get $1-\left\|\frac{x+y}{2}\right\|\geq \delta(\epsilon)$. What does this actually mean? The conditions imply that $x$ and $y$ cannot lie in the same direction. Hence, $\|x+y\|<2$. As a result, we get $\left\|\frac{x+y}{2}\right\|<1$, or $1-\left\|\frac{x+y}{2}\right\|>0$. As $\delta(\epsilon)>0$, the definition demands that $\delta(\epsilon)$ be a positive lower bound for $1-\left\|\frac{x+y}{2}\right\|$ over all such pairs $x,y$.

But what is uniform about this condition? It is the fact that $\delta(\epsilon)$ does not change with the unit vector being considered, and depends only on $\epsilon$.
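For the Euclidean norm on $\Bbb{R}^2$ the modulus of convexity has the standard closed form $\delta(\epsilon)=1-\sqrt{1-(\epsilon/2)^2}$ (this follows from the parallelogram law: $\|x+y\|^2+\|x-y\|^2=2\|x\|^2+2\|y\|^2$). A quick numerical sketch of mine, not from the paper, confirms that $1-\left\|\frac{x+y}{2}\right\|\geq\delta(\epsilon)$ for random unit vectors at distance at least $\epsilon$:

```python
import math
import random

def delta(eps):
    """Modulus of convexity of the Euclidean norm: 1 - sqrt(1 - (eps/2)^2)."""
    return 1.0 - math.sqrt(1.0 - (eps / 2.0) ** 2)

random.seed(0)
eps = 0.5
for _ in range(1000):
    # Two random unit vectors in R^2, parametrised by angle.
    s, t = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    x, y = (math.cos(s), math.sin(s)), (math.cos(t), math.sin(t))
    if math.dist(x, y) >= eps:
        midpoint_norm = math.hypot((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)
        # 1 - ||(x+y)/2|| >= delta(eps), with the same delta for every pair:
        # that sameness is what "uniform" means.
        assert 1.0 - midpoint_norm >= delta(eps) - 1e-12
```

Note that the same $\delta(\epsilon)$ works for every pair sampled; only $\epsilon$ enters the bound, which is exactly the uniformity discussed above.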

Now we go on to prove that every closed convex set of every uniformly convex Banach space is proximinal.
