cozilikethinking

A more intuitive way of constructing bump functions

This is a short note on constructing bump functions: smooth, compactly supported functions that are identically 1 on a prescribed set. I will be working in one dimension; all of these results can be generalized to higher dimensions by using polar coordinates.

As we know, the function f(x)= e^{-\frac{1}{x}} for x>0 and f(x)=0 for x\leq 0 is a smooth function. Hence, it is an ideal building block for constructing smooth, compactly supported functions. If we want a smooth function that is supported on [a,b], then f(x-a)f(b-x) is one such function: the first factor vanishes for x\leq a and the second for x\geq b.

However, the main difficulty is in constructing a bump function of the desired shape. How do we construct a bump function that is \equiv 1 on [c,d]\subset [a,b]? The idea that I had, which is different from the literature I have consulted (including Lee's "Smooth Manifolds"), is to consider integrals of such functions.

Consider F(t)=\int_{-\infty}^{t}\big(f(x-a)f(c-x)-\lambda\, f(x-d)f(b-x)\big)\,dx, where the constant \lambda>0 is chosen so that the two terms have the same total integral.

In other words, we are integrating a function that is positive and supported on [a,c], and adding to that the integral of a (suitably rescaled) negative copy of the same kind of function, now supported on [d,b].

The resulting function F vanishes for t\leq a, increases on [a,c], is constant on [c,d], decreases back to 0 on [d,b], and vanishes for t\geq b. On rescaling (multiplying by a constant), we obtain a smooth bump function supported on [a,b] that is identically 1 on [c,d].
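
As a quick sanity check (my own addition, not part of the construction above), here is a short numerical sketch in Python. The endpoints a, c, d, b are just illustrative choices, and the normalization of the two lobes plays the role of the constant \lambda above.

```python
import math
from scipy.integrate import quad

def f(x):
    """The smooth function e^{-1/x} for x > 0, and 0 for x <= 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

a, c, d, b = 0.0, 1.0, 2.0, 3.0   # illustrative endpoints with a < c < d < b

def lobe(x, left, right):
    """Smooth, non-negative, supported exactly on [left, right]."""
    return f(x - left) * f(right - x)

# Normalize both lobes to unit area, so the running integral climbs from
# 0 to 1 on [a, c], stays at 1 on [c, d], and falls back to 0 on [d, b].
area_up, _ = quad(lambda x: lobe(x, a, c), a, c)
area_down, _ = quad(lambda x: lobe(x, d, b), d, b)

def bump(t):
    integrand = lambda x: lobe(x, a, c) / area_up - lobe(x, d, b) / area_down
    value, _ = quad(integrand, a, t)
    return value

print(round(bump((c + d) / 2), 6))  # ~1.0 on the flat part [c, d]
print(round(bump(b + 1.0), 6))      # ~0.0 once we are past b
```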

A derivation of the Taylor expansion formula

I have tried for a long time to prove Taylor's theorem on my own. The only way in which this proof differs from the hundreds of proofs online is that I write \int_0^a f'(x)dx as \int_0^a f'(a-x)dx. This resolves most of the difficulties I was facing in developing the Taylor expansion.

Let f\in C^{k+1}[0,a]. Then we have

f(a)-f(0)=\int_0^a{f'(x)dx}=\int_0^a{f'(a-x)dx}=xf'(a-x)|_0^a+\int_0^axf''(a-x)dx.

We can easily calculate that xf'(a-x)|_0^a=af'(0). Also,

\int_0^axf''(a-x)dx=\frac{x^2}{2}f''(a-x)|_0^a+\int_0^a \frac{x^2}{2}f'''(a-x)dx.

Clearly, \frac{x^2}{2}f''(a-x)|_0^a=\frac{a^2}{2}f''(0).

Continuing in this fashion, we get that

f(a)-f(0)=af'(0)+\frac{a^2}{2!}f''(0)+\dots+\frac{a^k}{k!}f^{(k)}(0)+\int_0^a\frac{x^k}{k!}f^{(k+1)}(a-x)dx.

Assuming that f is smooth, if \int_0^a \frac{x^k}{k!}f^{(k+1)}(a-x)dx\to 0 as k\to\infty, then we recover the standard Taylor expansion formula f(a)=\sum_{n=0}^{\infty}\frac{a^n}{n!}f^{(n)}(0).
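
As an illustration (my own numerical check, not part of the derivation), the identity above can be verified for a concrete smooth function. I take f(x)=e^x, so that every derivative is again e^x:

```python
import math
from scipy.integrate import quad

def taylor_with_remainder(a, k):
    """Right-hand side of the identity above for f(x) = e^x, for which
    f^{(n)}(0) = 1 and f^{(k+1)}(a - x) = e^{a - x}."""
    polynomial = sum(a**n / math.factorial(n) for n in range(k + 1))
    remainder, _ = quad(lambda x: x**k / math.factorial(k) * math.exp(a - x), 0, a)
    return polynomial + remainder

a, k = 2.0, 3
print(math.exp(a))                  # f(a) computed directly
print(taylor_with_remainder(a, k))  # polynomial part plus the integral remainder
```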

IMO 2011, Problem 3

IMO Problem 3: Let f:\Bbb{R}\to \Bbb{R} satisfy the relation f(x+y)\leq -yf(x)+f(f(x)) for all x,y. Prove that f(x)=0 for all x\leq 0.

Let x+y=f(x), i.e. y=f(x)-x. Then we have f(f(x))\leq f(x)(x-f(x))+f(f(x)). Cancelling f(f(x)) on both sides, we get f(x)(x-f(x))\geq 0.

For x\leq 0, if f(x)>0, then the above gives a contradiction: f(x)>0 and x-f(x)<0, so the product f(x)(x-f(x)) would be negative. Hence f(x)\leq 0 for all x\leq 0.

Now we see that f(0)=0. This is because the above inequality gives f(0)(0-f(0))\geq 0, i.e. -f(0)^2\geq 0, which is possible only if f(0)=0.

Hence, so far we have that f(x)\leq 0 for x\leq 0. Now we shall prove that f(y)\geq 0 for all y\in\Bbb{R}. This will give us the desired conclusion.

We have yf(x)\geq f(f(x))-f(x+y). Let x=0. Then we get -f(y)\leq 0, i.e. f(y)\geq 0. Coupled with the fact that f(y)\leq 0 for y\leq 0, this gives f(y)=0 for all y\leq 0.

I am attaching the receipt of my donation to Effective Altruism below:

[Screenshot: donation receipt, dated 2020-02-03]

I also donated $50 for the treatment of students recently affected by violence in India, and $20 to Ben Wideman’s fundraiser on Facebook.

Moreover, like every other month, I donated $20 to ArXiv. That brings the total to $270 for this month.

I recently read a book on cancer research, for which I wrote a review. I’m attaching it below:

Is cancer a disease that's as old as human civilization, or is it a fairly recent affliction? Can mobile phones cause cancer? How exactly do cancer drugs work? Why are intuitive surgical procedures like excising cancerous tumors largely unsuccessful in curing cancer? The author, Siddhartha Mukherjee, answers all these questions and more in his page-turner, "The Emperor of All Maladies".

Cancer is not an external disease. It is written in our very genetic code. Whenever cells split in two, errors or mutations in genes almost always creep in. As cells keep dividing, the number of mutations slowly builds up. Mutations may also be caused by external carcinogens like tar, radioactive materials, etc. Eventually, when we have mutations in certain genes (which are around 13 in number on average), the body is afflicted with cancer. If we find a way to stop these mutated genes from wreaking havoc, probably by "blocking" their protein pathways, we can cure cancer. Simple enough, right?

No. This discovery was thousands of years in the making. The first written records of cancer that we have are from the Egyptian civilization. Imhotep, a famous Egyptian medical practitioner, wrote down a classification of medical afflictions. All such afflictions had cures written beside them. Breast cancer was the only one with "no known cure". A famous Persian queen, Atossa, had her cancerous breast removed. Galen, a well-known doctor, thought that cancer was caused by an excess of "dark humours" in the blood, and that it could be cured by bleeding patients out. These explanations were characteristically misguided. But that was because people in ancient times didn't really understand biology, and modern practitioners would do much better, right? No. Modern practitioners caused their own modern havoc, which ended up taking the lives of perhaps hundreds of thousands of cancer patients. All because of misguided science.

Around the turn of the 20th century, "radical mastectomy" was suggested as the ultimate cure for breast cancer. Remove breast cancer by removing the breasts themselves. Later proponents of this school became even more deranged, and started removing large parts of the chest cavity from beneath the breasts. It sounded pretty convincing- if you excise the cancerous region, you've healed the patient! However, these patients, already disfigured for life, would almost always relapse. Despite this, radical mastectomy was the modus operandi for treating breast cancer for more than 50 years.

Another approach- poison the cancerous cells. This approach became known as chemotherapy. Pump enough poison, in the form of X-rays, mustard gas derivatives, etc., into the body, and you can kill the cancer cells. Simple. However, how will these poisons differentiate between cancerous and regular cells? Doctors invented ad hoc mechanisms to avoid killing regular cells- focus the X-rays and poisons only on the cancer cells, transplant external bone marrow into the patient after chemotherapy so that new healthy cells can be regenerated after the indiscriminate killing caused by the treatment, etc. However, although this technique continues to the present day, on its own it has seen very little success. Coupled with a "cocktail" of other drugs, chemotherapy can be successful in patients whose cancers are not very advanced, but it reeks of being a "makeshift" cure instead of an actual, permanent one.

Physically remove cancer cells. Poison cancer cells. Patients still relapse and die. What are we missing? Is cancer a disease caused by viruses that can be cured by the right vaccine? Medical research has had a lot of success curing diseases caused by viruses- think smallpox, polio, etc. Hence, if cancer were virus-caused, we would have a shot (pun intended). There was also plenty of evidence for a cancer virus: a particular type of cancer cell always had a certain virus in it. Correlation is obviously causation. Any attempt to challenge the virus theory of cancer was met with this simple demand: explain why the virus is always there. This stalled cancer research for decades.

However, it was eventually discovered that these viruses had merely picked up already-mutated genes from the cells they infected: the cancer-causing genes were of cellular, not viral, origin. These viruses are called retroviruses, and their discovery overturned "facts" that had been taught in medical schools for decades. This led us to understand that cancer has a genetic cause, and that we have to build molecules that bind to the products of aberrant genes, rendering them ineffective. This is the current direction that cancer research has taken, and we've had a lot of success treating certain kinds of cancer. Other kinds of cancer are still, however, violently lethal. This hints at the fact that cancer is not one disease, but a variety of wildly varying diseases, erroneously classified under one umbrella.

One aspect of scientific research that was indeed revelatory for me was that in the 1950s, the American government pumped millions of dollars into cancer research, although we didn't really have a fundamental understanding of the biology of the cancer cell. Scientific labs were expected to work like factories, with strict deadlines, accountability, fixed hours, etc. However, despite the money, resources and manpower allotted, most of this research was misguided. The truly useful insights were obtained by researchers working in isolation, outside of this "industry", who were not necessarily trying to cure cancer, but just trying to discover cool facts about the human body. This throws shade on the Indian government's scientific policy in recent years, which has reduced funding for all kinds of "useless" research, like math and physics, and pumped most of the available funding into things like medical research, development of weapons, etc. As history tells us again and again, most scientific achievements of mankind stem from the ability to do "directionless", curiosity-driven research, and not research with a pre-defined agenda. Governments without an understanding of this often get in the way of scientific achievement.

Mukherjee ends the book on a fairly sombre note. Although we've had a lot of success in fighting cancer, cancerous genes sometimes mutate, and the drugs that were being used to attack them become useless. This constant mutation and ability to survive comes from evolution- the thirst that organisms have to survive despite all kinds of odds. Hence, "we need to keep running to stay in the same place", i.e., keep discovering new drugs to fight never-ending battles with constantly mutating genes. The battle with cancer may never really be won. Our cures, however, may successfully prolong life, and that has to be thought of as a victory in itself.

"The Emperor of All Maladies" is much more than a "Biography of Cancer". It explains, in full gory, disheartening and sometimes uplifting detail, why scientific research is hard, and why civilization has not been able to solve its most pressing problems for thousands of years. And how a focus on experimentation, instead of untested "intuitive" hypotheses, paved the way for substantial scientific achievement in the last century. It is a highly recommended book on science.

Effective Altruism- January

Attaching the receipt of my donation for this month below:

[Screenshot: donation receipt, dated 2020-01-16]

Effective Altruism- December

Contrary to the title, I decided to not donate to Effective Altruism this December. Instead, I donated $250 to a fundraiser to help a friend’s father, who needed the money for his cancer treatment.

[Screenshot: donation receipt, dated 2019-11-25]

I recently read "Poor Economics" by Abhijit Banerjee and Esther Duflo, in which they outline their Nobel Prize winning work. Their style of rigorously tested, evidence-based aid was an early inspiration for the Effective Altruism movement. If I was not convinced about the impact of this organization before, I am now!

Effective Altruism- November

[Screenshot: donation receipt, dated 2019-11-01]

For my readings this month, I will try and read a survey on the work of this year’s Nobel prize winners in economics. I will mostly follow this survey by the Nobel Prize committee.

Edit: It turns out that I had already read their basic arguments in this slatestarcodex post. The author of that post mainly wants to refute some of Banerjee and Duflo's arguments, and at the time of reading I found those refutations convincing.

Edit Edit: I ended up reading the book "Poor Economics" by Banerjee and Duflo, which outlines their Nobel prize winning work. I also wrote a review of the book on Goodreads, which I am copying here:

Poor Economics

This is a book of hope.

“..if we listen to poor people themselves and force ourselves to understand the logic of their choices; if we accept the possibility of error and subject every idea…to empirical testing…(then we’ll) better understand why people live the way they do.”

This is the closest that economics can get to science. The authors take apparently commonsensical claims, and perform randomized trials to evaluate whether those claims are true. And the results are often surprising.

What is the best way to ensure that more parents vaccinate their children? You might think that spreading information about the merits of vaccination would make parents line up in front of vaccination centers. This hypothesis was tested, and lots of resources were invested in spreading information about vaccination. Moreover, randomized surveys concluded that most parents in villages are indeed well informed about the advantages of vaccinating their children. However, less than a quarter of the parents queued up in front of vaccination centers. This contradicts the commonsensical view that parents who know that vaccination is good would inevitably get their kids vaccinated.

The authors suggest that the reason why so few parents came was that most of these people would have to travel long distances, and stand for hours in the sun, to vaccinate their children. Faced with these hurdles, they made a split-second decision to procrastinate, and possibly wait for the next time that these vaccination camps would be set up. What saved the day was providing a small gift for parents who came to these camps- maybe some utensils, or a couple of kilograms of rice. This small bribe helped parents overcome their procrastination and come get their kids vaccinated.

It might seem wrong to bribe people to do what is good for themselves and their kids. However, the authors suggest that in a complex world in which we have multiple issues demanding our attention and effort, we tend to procrastinate on doing things that are undeniably good for us, if they are slightly hard to do. Providing an incentive to do these things, or making these things slightly easier to do, can often lead to staggering results.

Another example is the following: chlorinated water is much safer to drink, and can prevent life threatening diseases like dysentery. It is cheaply available in India, and well within the means of most families to buy. However, very few families in Indian villages buy chlorine, despite the awareness that doing so would drastically reduce their chances of contracting a life threatening disease. This could also be attributed to the above hypothesis- procrastination. What changed things? Some villages installed chlorine dispensers right next to the village wells. Hence, availing of chlorine became easier (although not necessarily cheaper), which led to a drastic reduction in water-borne diseases in those villages.

Perhaps one way of summarizing this issue would be the following: people living in the developed world, or in large cities in the developing world, don’t have as many issues that demand their attention. The water is already chlorinated. Good schools and hospitals are available nearby. In such an environment, people can devote their full attention to issues that would further improve their lives. However, in villages, people have to take care of a lot more things. This causes them to procrastinate on all of them, causing their condition to only worsen over time.

One of the most fascinating sections of the book was on teenage pregnancy- randomized controlled trials suggested that teenage girls are well aware of the fact that getting pregnant at a young age would only make life more difficult for them. However, the absence of schools or colleges in the area left open only the option of finding a husband and getting married. This process would sometimes lead to unplanned pregnancies (this was more in the Mexican context than the Indian). Although it was thought that sex education would improve the situation, it only made things worse. What saved the day was providing free uniforms and books to girls, ensuring that they could remain in school longer. This made them less likely to actively pursue marriage, and led to a drastic drop in teenage pregnancy.

A large number of countries try to promote birth control to slow the rate of population growth. It was thought that the easy availability of contraception would solve the problem. This did not help at all. What was not considered was that women are almost never in control of the timing and number of pregnancies- the male patriarchs decide that. In randomized controlled trials in Bangladesh, women volunteers would visit women in the afternoon, when their husbands were away at work, and inform them about contraception. This lowered the birth rate of that district by 60%!

Moreover, what was most surprising to me was that the factor that led to the sharpest drop in birth rate in Brazil was the popularization of telenovelas. In these telenovelas, the female characters would have only one or two children. This normalized the prospect of having fewer children, and led to a substantial drop in birth rates in just a decade!

Basically, many of our "obvious" and "common sense" ideas are wrong. We do not know what will make the poor less poor. So let us deploy our most successful weapon- science. We can conduct experiments, and determine the factors that actually make a difference. And then turn those findings into policy- at the grassroots level or the national one. The authors won a well-deserved Nobel prize for introducing experiments of this kind into economics. These experiments are relatively easy to perform, and can very tangibly make the poor better off.

That is why this is a book of hope.

EVM hacking

Just a small note on this article. It claims to prove "mathematically" that it is almost impossible for EVM hacking in India to go undetected.

Its argument is the following: there are approximately 3002 EVMs in each constituency in India. After the electronic votes are polled, 5 EVMs are selected at random, and the number of votes recorded on them is compared with the corresponding paper ballots. Only if there is 100% agreement between these EVMs and the paper ballots is the electronic count in that constituency considered valid.

Let us suppose that a party hacks only 1% of the EVMs, and does so in only 50 constituencies. These numbers are conservative, and I find them plausible. 1% of 3002 is around 30, so there would be 2972 unhacked EVMs in each of those 50 constituencies. If 5% of the EVMs are selected randomly for checking, then 5% of 3002 turns out to be around 38. Hence, the probability of selecting only unhacked EVMs in one such constituency is {2972\choose 38}/{3002\choose 38}\approx 0.68, and the probability of selecting only unhacked EVMs in every one of those 50 constituencies is (0.68)^{50}\approx 0.
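
Here is a quick numerical check of this arithmetic (my own sketch; the 3002 / 30 / 38 / 50 figures are just the ones assumed above):

```python
from math import comb

total_evms = 3002        # EVMs per constituency (figure assumed above)
hacked = 30              # ~1% of the EVMs hacked
checked = 38             # ~5% of the EVMs audited against paper ballots
constituencies = 50      # constituencies in which hacking happens

# Probability that a random sample of `checked` machines in one
# constituency contains no hacked machine at all.
p_single = comb(total_evms - hacked, checked) / comb(total_evms, checked)

# Probability that this happens in every one of the 50 constituencies,
# i.e. that the hacking goes completely undetected.
p_all = p_single ** constituencies

print(p_single)  # ~0.68
print(p_all)     # ~5e-9, essentially zero
```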

This would suggest that it is almost impossible for EVM hacking to go undetected.

However, there are a couple of assumptions made in this argument that are simply untrue:

  • If a political party has enough influence to hack EVMs, can it not also influence the EVMs selected for testing?!
  • Discrepancies between EVM counts and paper ballots are actually common. See this news article for instance. Hence, the statement that a discrepancy in even a single machine in any constituency renders the whole election void is simply untrue.

I don’t know if EVM hacking is a reality in India. However, it is most definitely a possibility (at least mathematically).

An attempt to understand the Nobel Prize winning Science of 2019

Every year, I would read that Nobel prizes have been awarded to certain distinguished individuals at some of the top research institutes. On further reading, I would realize that their research is almost completely incomprehensible to all but a few people across the world. This year, I have tried to read and blog about their research, if only to convey in layman’s terms what these individuals have achieved.

Chemistry Prize

The Nobel Prize for Chemistry this year was awarded to John Goodenough, M. Stanley Whittingham, and Akira Yoshino for the development of a safe and efficient lithium-ion battery. I shall be following this article for the exposition.

Batteries have a simple enough principle- one element (element A) gives away electrons, and another (element B) collects electrons. This giving and collecting should happen naturally (without any external input). The electrons then travel from element A to element B, forming an electric current in the process. The giving away of electrons happens at the negative end of the battery, the anode, and the collection of electrons happens at the positive end of the battery, the cathode.

One potential problem to avoid is the following: say that we need to light a bulb that lies on the path between the anode and the cathode. Then we need to ensure that the electrons only pass through the bulb. If the anode and cathode come into direct physical contact, we get a short circuit. Short-circuiting is in fact a major problem in the manufacture of batteries, and Akira Yoshino solved this problem for lithium batteries, amongst other things, in his Nobel prize winning research.

The Voltaic cell, the first battery ever produced, was made up of alternating layers of tin/zinc and copper plates. These plates were exposed to air.

[Image: the Voltaic pile]

But wait. All of these are metals, and we know that metals have a propensity to lose electrons. What will make one of them gain electrons? As one might remember from a chemistry class, it depends on the relative reduction (or oxidation) potentials of the elements. As zinc/tin have a greater propensity to lose electrons than copper, they will do so. Copper, on exposure to air, forms CuO; in this compound, copper is in the Cu^{+2} state. On receiving the excess electrons at the cathode, the copper gains those electrons to again form Cu. This completes the circuit, and we have a current. In fact, Napoleon was so impressed by an early demonstration of the Voltaic cell that he made Volta a count! This battery created a voltage of 1.1 V.

We now come to the ubiquitous lead-acid battery, both electrodes of which are lead-based, with an electrolyte containing sulfuric acid (H_2SO_4). As both electrodes contain lead, it is not immediately clear which side should see the loss of electrons and which the gain! It turns out that one side has oxidized lead, PbO_2 (so lead is in the Pb^{+4} state). The non-oxidized side sees Pb lose two electrons to form Pb^{+2} (and then PbSO_4), whilst the other side sees Pb^{+4} gain two electrons to form Pb^{+2}. This battery creates a voltage of 2 V.
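
As a rough back-of-the-envelope check (my own addition, using approximate textbook standard reduction potentials quoted from memory, so treat the numbers as illustrative), the quoted cell voltages are simply differences of electrode potentials:

```python
# Approximate standard reduction potentials in volts (textbook values,
# quoted from memory; illustrative rather than authoritative).
E_red = {
    "Zn2+/Zn": -0.76,
    "Cu2+/Cu": +0.34,
    "PbSO4/Pb": -0.36,        # negative plate of the lead-acid cell
    "PbO2/PbSO4": +1.69,      # positive plate of the lead-acid cell
}

def cell_voltage(cathode, anode):
    """E_cell = E_red(cathode) - E_red(anode)."""
    return E_red[cathode] - E_red[anode]

print(cell_voltage("Cu2+/Cu", "Zn2+/Zn"))      # ~1.10 V, the zinc-copper Voltaic cell
print(cell_voltage("PbO2/PbSO4", "PbSO4/Pb"))  # ~2.05 V, the lead-acid cell
```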

In the quest to make batteries that are lighter and produce higher voltages (the Voltaic battery was huge), scientists inevitably stumbled upon lithium. It has a density of 0.53 g/cm^3, which makes it ideal for batteries for watches, phones, etc. However, it is extremely reactive with water and air (as opposed to the Voltaic cell, which in fact worked only when exposed to air). This turned out to be a major problem that would take decades to solve effectively.

Two important developments happened in the 50s and 60s. One was that propylene carbonate was discovered to be an effective solvent for alkali metals (like lithium). The other was that Kummer started studying ion transfer in solids. Note that atoms in solids are relatively rigidly fixed, and not free to move around very much. However, he noticed that sodium ions could move as easily within certain solids as within salt melts (salt melts have a less rigid structure than solids, and hence facilitate an easier transport of ions). The phenomenon of ion transfer would become important in the development of lithium batteries.

Now here's the important difference for lithium batteries- we don't want lithium to lose electrons at the negative electrode only for the lithium ions to then form compounds with the electrolyte. Lithium, being extremely reactive, would make such reactions difficult to control. We want these lithium ions to float over to the other (positive) side, and simply settle in between the atoms of that electrode. This process is called intercalation. Hence, we want a cathode which allows lithium ions to settle in, move about easily (easy ion transfer), gain back electrons if so desired, etc. Metal chalcogenides MX_2 (where M is a metal and X an electronegative element such as sulfur) were considered to be among such options.

One of the first such metal chalcogenides (MX_2) to be considered was titanium sulfide, TiS_2. A voltage of 2.5 V could be obtained in such lithium batteries, and Exxon started manufacturing them. However, remember the problem of ensuring that the cathode and anode are not in physical contact? Dendrites of lithium started forming at the negative side (lithium ions returning to the negative electrode during charging get deposited there as lithium metal), which would eventually touch the positive side, short-circuiting the battery. This was a huge setback for lithium battery development.

Scientists eventually hit upon this idea- what if reactive lithium metal was not needed at the negative electrode at all? What if lithium ions could instead settle inside the negative electrode itself? They then started searching for materials that would make this possible. Akira Yoshino solved this problem by considering heat-treated petroleum coke. This could form the anode, and allow lithium ions to settle in through intercalation at the anode (negative end) itself.

John Goodenough, on the other hand, found a material for the cathode that would increase the voltage of the cell from 2.5 V to 4-5 V. Instead of TiS_2, he considered another metal chalcogenide- CoO_2. Oxygen atoms are smaller than sulfur atoms (as in TiS_2), and would allow lithium ions to move about more easily. This would allow an easier gain of electrons by these ions, and hence a higher voltage. Moreover, lithium ions are especially mobile in close-packed arrays- and CoO_2 had exactly that structure. If the cathode and anode are both only meant to “house” the lithium ions, where would the lithium ions come from (if not from the anode plate)? The electrolyte- containing LiBF_4 in propylene carbonate, along with lithium metal.

Hence, Yoshino and Goodenough, together, produced a much more powerful and stable lithium battery, and their research truly changed the world. The computer or mobile phone you might be reading this on is evidence enough.

 

Physics Prize

The Physics prize for this year was awarded to James Peebles, Michel Mayor and Didier Queloz. I will be referring to this article by the Nobel Prize committee.

James Peebles

Punchline: Peebles is the man behind the mathematical foundations of dark matter and dark energy!
We shall now begin with some background.

Cosmic Microwave Background (CMB) radiation- At the moment of the Big Bang (approximately 13.8 billion years ago), the universe was insanely hot, as one might expect. Electrons and nuclei were too excited (literally!) to combine to form elements. Charged particles would interact with photons (light), and hence light would not be able to travel long distances without being interfered with. 400,000 years of this madness, and then things cooled down (to around 3000 K). Charged particles no longer interacted with photons, allowing light to travel intergalactic distances, and telling us earthlings of galaxies far away in space (and possibly time). Electrons and nuclei could combine to form elements. Note that the energy of light travelling across an expanding universe decreases because of redshift (this has caused the temperature of this radiation to drop from 3000 K to 2.7 K). The word "redshift" refers to a shift of frequency/energy from a higher (violet) region to a lower (red) region. In this case, the frequency has shifted to even lower than red- the microwave region. This, friends, is called the cosmic microwave background radiation- the radiation that has been travelling since 400,000 years after the big bang. And it is not the same in all directions!!

Let me try and explain the last line of the previous paragraph. Suppose you had an instrument with which you could measure the intensity, frequency, etc. of the cosmic microwave background radiation (CMB radiation is easily detectable on Earth, and its first detection also won a Nobel prize). Then if you turn the instrument around in all directions, you will find a slight change in intensity, frequency, etc. This property, of not being the same in all directions, is called anisotropy.

We shall now derive some basic equations that are relevant to an expanding universe. We know that the universe is expanding in every direction. But what is the mechanism of this expansion? Expanding relative to what? These are some common questions that often trip up the budding scientist. Let us, for purposes of illustration, imagine that the whole universe is an expanding balloon of radius R- not just the rubber boundary, but the air inside too. Consider a mass m on the boundary of the balloon. Then the energy of this mass is \frac{m\dot{R}^2}{2}-\frac{GMm}{R}. Clearly, the first term is the kinetic energy and the second term the potential energy. Here M=\frac{4\pi}{3}\rho R^3 is the mass enclosed within the balloon, with \rho the density.

A little rearrangement of this gives \dot{R}^2=\frac{8\pi G}{3}\rho R^2-kc^2. Here k=-\frac{2E}{mc^2}, and may be interpreted as curvature. A fundamental question in cosmology has been: does the universe have positive curvature (is it shaped like a ball), or negative curvature (is it shaped like a horse's saddle at each point)? Or is it flat (zero curvature)? It turns out that it is very nearly flat. However, arriving at this answer was not easy, and took decades of cutting-edge scientific work. Peebles was instrumental in arriving at this answer, which rested on understanding that the universe is at critical energy density (and not more or less, which would correspond to positive and negative curvature respectively).

One of the fundamental properties of the universe that is studied in cosmology is energy density- how much energy does the universe pack into a given volume (a given ball), and how does this density change when that volume itself expands (as the universe is expanding)? The equation E=mc^2 tells us that matter can be converted into energy, or is just another form of energy. The energy density contained in matter changes by a factor of \frac{1}{R^3} when the given volume expands from a ball of radius 1 to a ball of radius R. The energy density contained in radiation (say in light) changes even more, because an expanding universe creates redshift (loss of energy) as explained above. When a ball expands from radius 1 to R, the energy density in radiation changes by a factor of \frac{1}{R^4}. Let us now try and bring these facts together.

Baryons (traditional matter that humans can perceive) form around 5% of the mass/energy of the universe. If baryons were the only matter in the universe, then our theories of gravity would predict a vastly different universe than what we can see. Galaxies would not form, and we would all be floating subatomic particles in space. To come up with a concept of matter that makes gravitational clumping into planets, galaxies, etc. possible, scientists came up with dark matter. However, this dark matter behaves like ordinary matter under expansion of the universe, in that its matter/energy density decreases (by a factor of \frac{1}{R^3}) on expansion. Hence, the energy density from matter and dark matter would become more and more sparse with time. This does not explain the energy density of the observable universe, as measured by cosmologists. Scientists then came up with the concept of, wait for it, dark energy. The energy density of dark energy does not decrease with the expansion of the universe. Almost sounds like a cop-out! But the presence of both dark matter and dark energy has been confirmed by multiple scientific experiments since the time of their conception. Dark energy is believed to form about 69% of the total energy in the universe.
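
The different dilution rates are easy to tabulate. Here is a tiny illustrative snippet (my own addition, in arbitrary units) showing why dark energy must eventually dominate as R grows:

```python
# How the three kinds of energy density dilute as the universe expands
# from scale factor 1 to scale factor R (arbitrary units, illustrative).
def matter_density(rho0, R):       # ordinary and dark matter: ~ 1/R^3
    return rho0 / R**3

def radiation_density(rho0, R):    # radiation: an extra redshift factor, ~ 1/R^4
    return rho0 / R**4

def dark_energy_density(rho0, R):  # dark energy: does not dilute at all
    return rho0

for R in (1, 10, 100):
    print(R, matter_density(1.0, R), radiation_density(1.0, R), dark_energy_density(1.0, R))
```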

Where does dark energy show up mathematically? Let us think back to the equation \dot{R}^2=\frac{8\pi G}{3}\rho R^2-kc^2. Let us now add a constant term to the right, to get \dot{R}^2=\frac{8\pi G}{3}\rho R^2-kc^2+\Lambda R^2. Note that \rho=O(\frac{1}{R^3}). Hence, the equation looks like \dot{R}^2=\frac{\text{constant}}{R}-kc^2+\Lambda R^2. As the size of the universe (or R) grows, the \Lambda R^2 term comes to dominate, regardless of how small the value of \Lambda is (it is in fact predicted to be quite small). Hence, \dot{R}^2 looks more and more like a quadratic in R, which means that \dot{R} looks like the line \sqrt{\Lambda} R. A velocity that grows linearly with R means a non-zero acceleration. Hence, the larger the size of the universe, the faster the galaxies recede from each other! As \Lambda is supposed to represent dark energy, it is this dark energy that causes galaxies to accelerate away from each other.
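
To see this late-time behaviour concretely, here is a small numerical sketch of the equation above (my own illustration, with made-up constants in arbitrary units):

```python
from math import sqrt

# \dot{R}^2 = C/R - k c^2 + Lambda R^2, in arbitrary units with
# made-up constants, purely for illustration.
C, kc2, Lam = 1.0, 0.0, 0.01   # matter term, (nearly) flat curvature, small Lambda

def R_dot(R):
    return sqrt(C / R - kc2 + Lam * R * R)

# Crude forward-Euler integration of dR/dt = R_dot(R).
R, dt = 1.0, 0.01
for _ in range(5000):          # evolve to t = 50 in these units
    R += R_dot(R) * dt

# At late times the Lambda R^2 term dominates, so R_dot ~ sqrt(Lambda) * R:
# the expansion rate itself grows with R, i.e. the expansion accelerates.
print(R, R_dot(R) / (sqrt(Lam) * R))   # the second number is ~1
```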

Now what was Peebles' contribution to all of this? It turns out that when Penzias and Wilson observed the CMB radiation in 1964, the theoretical basis for such a radiation (at around 10 K) had already been laid by Peebles, and in fact Penzias and Wilson could understand the import of their discovery only after talking to Peebles.

Another contribution of his was the following: scientists used to think that both light elements (like hydrogen and helium) and heavier metals (like iron) were produced right at the big bang. However, Peebles clarified that only the light elements could have been produced during the earliest stages of the universe, and that too only once the temperature had dropped enough for deuterium (a hydrogen isotope) to be converted to helium. If the matter density at that moment was high, then large amounts of helium would have been produced; otherwise, smaller amounts.

Anisotropies in the CMB- Energy/temperature of CMB radiation is affected by two factors: (1) if the radiation is climbing out of a deep potential well (say getting away from an object with high gravitational attraction), then the radiation loses lots of energy in the process of climbing out, hence causing a lowering of temperature. (2) During decoupling (separation) of the radiation from charged matter (400,000 years after the big bang), the potential energy between the charged particles and photons is converted to energy of the photons, raising their temperature.

Remember that CMB radiation tells us about the state of the early universe. In the early universe, fluctuations in density would cause acoustic waves to travel through the hot plasma (acoustic waves here are pressure waves, i.e. sound waves, in the plasma). These acoustic waves would inevitably leave an imprint on the CMB radiation (although they themselves are not CMB radiation). These waves can have different wavelengths, and there can be a different power associated with each wavelength. The power spectrum of these acoustic waves tells us a lot about the early universe, and also helps us detect dark energy!

[Figure: angular power spectrum of CMB temperature fluctuations, showing the acoustic peaks]

The first peak is formed when baryonic (normal) matter and dark matter fall towards local centres of mass under the influence of gravity. Note that even such a collapse can produce acoustic waves, much like a collapsing building can send outwardly radiating cracks through the structure. After this collapse, radiation, with its energy increased because of the collapse, forces the matter out again. This produces the second peak. However, the radiation cannot force the dark matter out, as dark matter does not interact with radiation (which is also why we cannot see it or perceive it in other ways). This dark matter exerts a gravitational force on the baryonic matter, and causes the latter to collapse again, causing the third acoustic peak. Because it is essentially the same baryonic matter that comes out and then collapses again, the height of the third peak is roughly the same as the height of the second peak. The relative heights of the peaks tell us that baryonic matter makes up only about 5% of the energy content of the universe, and dark matter about 26%. The remaining 69% is dark energy.

What was Peebles' contribution to all of this? He insisted on including the cosmological constant \Lambda, which brought dark energy to the fore, and helped explain the heights of the acoustic peaks. He also accurately calculated the anisotropy of the CMB radiation to be of the order of 5\times 10^{-6}, which was experimentally confirmed. He also predicted that anisotropies are visible in the CMB radiation only at large scales, and that at small scales these anisotropies are damped by diffusion. This too was experimentally confirmed.

Peebles is perhaps the rock around which our understanding of the composition of the universe revolves. A laureate amongst laureates.

Michel Mayor and Didier Queloz

Planets revolve around stars, right? Almost. In any "solar system" (a system of a star and its planets), both the star and the planets revolve around a common centre of mass. Analyzing this stellar motion, however small, is the most promising way of detecting whether a star has accompanying planets, because the gravitational pull of the planets perturbs the motion of the star in observable ways. The planets themselves cannot be observed directly because of their extremely small size and large distance from us. This is the kind of analysis that Mayor and Queloz did to detect a planet around a distant Sun-like star, kickstarting this whole field.

But how would one observe the motion of stars? Would we see them moving across the sky, and then make deductions? No. If one is an observer in (or near) the plane of the star's orbit about the common centre of mass, the star is sometimes coming towards us and at other times going away from us. Hence, the Doppler effect can help us study the motion of the star.

The way that Doppler spectroscopy worked before was that scientists would compare the spectra of stars with the spectra of gases like hydrogen fluoride (HF), and then make deductions about the motions of the stars. This was a fairly restrictive technique, as only bright stars could be analyzed this way. Michel Mayor instead used a new fibre-linked echelle spectrograph, ELODIE, with which all kinds of stars, of low and high brightness, could be analyzed. Clearly, this opened up many more stars with potential planets to scientists.

Soon, using this spectrograph, Mayor and Queloz observed the star 51 Pegasi to wobble with a period of just about 4 days, which let them observe many periods of this star and hence study its motion in great detail. They soon deduced that it had a Jupiter-like planet, 51 Pegasi b, at an astonishingly small distance of 0.05 AU. Earlier, scientists had thought that a Jupiter-like planet would have to be at a large distance from its star. However, this discovery turned that prediction on its head. It was later hypothesized that such planets probably formed at a large distance, but migrated closer to their stars due to gravitational interactions.
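
To get a feel for the size of this wobble, here is a small estimate using the standard radial-velocity semi-amplitude formula (my own addition; the planet and star parameters are approximate published values for 51 Pegasi b, and the result is only meant to show the order of magnitude):

```python
from math import pi

G = 6.674e-11            # gravitational constant, SI units
M_sun = 1.989e30         # kg
M_jup = 1.898e27         # kg
day = 86400.0            # s

# Approximate published values for 51 Pegasi b (assumptions for illustration).
P = 4.23 * day           # orbital period
m_planet = 0.47 * M_jup  # minimum (m sin i) planet mass
M_star = 1.05 * M_sun    # mass of the host star

# Radial-velocity semi-amplitude of the star for a circular, edge-on orbit:
# K = (2 pi G / P)^(1/3) * m_planet / (M_star + m_planet)^(2/3)
K = (2 * pi * G / P) ** (1 / 3) * m_planet / (M_star + m_planet) ** (2 / 3)

print(K)  # ~55-60 m/s: a wobble well within reach of a good spectrograph
```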

Mayor and Queloz started this revolution with a slightly improved spectrograph and a willingness to suspend prior beliefs (about how far a Jupiter-like planet should be from its star), and that revolution has now yielded around 4,000 exoplanets in some 3,000 planetary systems. Moreover, the main method of detecting planets has moved from studying Doppler shifts to studying the reduction in brightness of stars when planets pass in front of them.

Hopefully, we shall soon discover life on an exoplanet, and end our isolation in the universe.

Thanks for reading!

John Tate’s works

I’ve been sick for a couple of days. So I decided to take the evening off and read random articles on the internet. I chanced upon Milne’s 9 page summary of Tate’s collected works.

I was indeed surprised by how comprehensible it was. John Tate passed away recently. Hence, it is only appropriate that one tries to fathom his contributions to Mathematics.

Some things that struck me were Tate’s conjecture, which is pretty similar to the Hodge conjecture, and the Isogeny theorem, which is what a grad school friend of mine works on (I think she is working on a slightly generalized version of it though, in a different context).

Of all the fields of Mathematics that I have been exposed to, number theory has always seemed the farthest from comprehension. Although one would imagine that a subject purportedly dealing with natural numbers would be comprehensible, the modern treatment of the field often seems to be written in a different language: what with its Hecke L-functions and number fields and unramified extensions and the like. This article, I feel, attempts to bridge that divide. I am truly grateful for the expository gift of Milne.