For my readings this month, I will try and read a survey on the work of this year’s Nobel prize winners in economics. I will mostly follow this survey by the Nobel Prize committee.

Edit: Turns out that I read their basic arguments in this slatestarcodex post. The author of this post mainly wants to refute some of Banerjee and Duflo’s arguments, and at the time of reading I found them to be convincing.

Its argument is the following: there are approximately 3002 EVMs in each constituency in India. After the electronic votes are polled, 5 EVMs are selected **at random**, and the total number of votes polled in each is compared with the paper ballots. Only if there is **100% agreement** between the EVMs and the paper ballots are the votes polled in those EVMs considered legal.

Let us suppose that a party hacks only 1% of the EVMs, and does so in only 50 constituencies. These numbers are low, and seem like a conservative assumption. 1% of 3002 is around 30, so there would be 2972 unhacked EVMs in each of those 50 constituencies. If 5% of the EVMs are selected **randomly** for checking, that is about 150 machines per constituency. Hence, the probability of selecting only unhacked EVMs in one constituency is $\binom{2972}{150}/\binom{3002}{150} \approx 0.21$, and the probability of selecting only unhacked EVMs in every one of those 50 constituencies is $0.21^{50} \approx 10^{-34}$.
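As a sanity check, here is a short computation of these probabilities, assuming a 5% sample (about 150 of 3002 machines) is checked in each of 50 hacked constituencies:

```python
from math import comb

# 3002 EVMs per constituency, ~30 (1%) hacked, ~150 (5%) checked at random
total, hacked, checked = 3002, 30, 150
unhacked = total - hacked

# Probability that a random sample of 150 machines contains no hacked one
p_one = comb(unhacked, checked) / comb(total, checked)

# Probability that all 50 hacked constituencies escape detection
p_all = p_one ** 50
print(p_one, p_all)
```

This gives roughly 0.21 for a single constituency, and an overall escape probability of the order of $10^{-34}$.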

This would suggest that it is almost impossible for EVM hacking to go undetected.

However, there are a couple of assumptions made in this argument that are simply untrue:

- If a political party has enough influence to hack EVMs, **can it not also influence the EVMs selected for testing?!**
- Discrepancies between EVM counts and paper ballots are actually common. See this news article for instance. Hence, the statement that a discrepancy in even a single machine in any constituency renders the whole election void is simply untrue.

I don’t know if EVM hacking is a reality in India. However, it is most definitely a possibility (at least mathematically).

The Nobel Prize for Chemistry this year was awarded to John Goodenough, M. Stanley Whittingham, and Akira Yoshino for the development of a safe and efficient lithium-ion battery. I shall be following this article for the exposition.

Batteries have a simple enough principle: one element (element A) gives away electrons, and another (element B) collects them. This giving and collecting should happen naturally (without any external input). Electrons then travel from element A to element B, forming an electric current in the process. The giving away of electrons happens at the negative end of the battery, or the **anode**, and the collection of electrons happens at the positive end of the battery, or the **cathode**.

One potential problem to avoid is the following: say that we need to light a bulb that lies on the path between the anode and the cathode. Then we need to ensure that the electrons pass only through the bulb. If the anode and cathode come into direct physical contact, we get a short circuit. Short circuiting is in fact a major problem in the manufacturing of batteries, and **Akira Yoshino** solved this problem in lithium batteries, amongst others, in his Nobel prize winning research.

The Voltaic cell, the first battery ever produced, was made up of alternating layers of tin/zinc and copper plates. **These plates were exposed to air.**

But wait. All of these are metals, and we know that metals have a propensity to lose electrons. What will make one of them gain electrons? As one might know from a previous Chemistry class, it depends on the relative reduction (or oxidation) potentials of the elements. As zinc/tin have a greater propensity to lose electrons than copper, they will do so. Copper, on exposure to air, forms CuO; in this compound, copper is in the $+2$ oxidation state. On receiving excess electrons at the cathode, the copper ions gain those electrons to again form Cu. This completes the circuit, and we have a current. In fact, Napoleon was so impressed by the very first demonstration of the Voltaic cell that he made Volta a count! This battery created a voltage of 1.1 V.
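The 1.1 V figure can be checked against the standard reduction potentials of the zinc-copper pair (the values below are the standard textbook ones):

```python
# Standard reduction potentials (volts), from standard electrochemistry tables
E_Cu = +0.34   # Cu^2+ + 2e- -> Cu  (cathode: reduction)
E_Zn = -0.76   # Zn^2+ + 2e- -> Zn  (anode: runs in reverse, oxidation)

# Cell voltage = cathode potential - anode potential
E_cell = E_Cu - E_Zn
print(E_cell)  # 1.10 V, matching the voltage quoted above
```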

We now come to the ubiquitous lead acid battery, in which both electrodes contain lead, and the electrolyte contains sulfuric acid ($H_2SO_4$). As both electrodes contain lead, we don’t have an immediate idea of which side should see the loss of electrons and which side the gain! It turns out that one side carries oxidized lead, $PbO_2$ (so lead is in the $+4$ state). The non-oxidized side sees Pb lose two electrons to form $Pb^{2+}$ (and then $PbSO_4$), whilst the other side sees $PbO_2$ gain two electrons to also form $PbSO_4$. This battery creates a voltage of 2 V.
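For the record, the two half-reactions during discharge can be written out explicitly (these are the standard textbook equations):

```latex
\begin{aligned}
\text{Negative plate: } & \mathrm{Pb + SO_4^{2-} \longrightarrow PbSO_4 + 2e^-} \\
\text{Positive plate: } & \mathrm{PbO_2 + 4H^+ + SO_4^{2-} + 2e^- \longrightarrow PbSO_4 + 2H_2O}
\end{aligned}
```

Both plates end up coated in lead sulfate, which is why a deeply discharged lead acid battery degrades.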

In the quest to make batteries that are lighter and produce higher voltages (the Voltaic battery was **huge**), scientists inevitably stumbled upon lithium. It has a density of only about $0.53\ g/cm^3$ (it is the lightest metal), which makes it ideal for batteries in watches, phones, etc. However, it is extremely reactive with water and air (as opposed to the Voltaic cell, which in fact worked only when exposed to air). This turned out to be a major problem that would take decades to solve effectively.

Two important developments happened in the 50s and 60s. One was that propylene carbonate was discovered to be an effective solvent for alkali metals (like lithium). Also, Kummer started studying ion transfer in solids. Note that atoms in solids are relatively rigidly fixed, and not free to move around very much. However, he noticed that sodium ions could move as easily within certain solids as within salt melts (salt melts have a less rigid structure than solids, and hence allow easier transport of ions). The phenomenon of ion transfer would become important in the development of lithium batteries.

Now here’s the important difference for lithium batteries: we don’t want lithium to lose electrons at the negative electrode only for the lithium ions to form compounds with the electrolyte. Lithium, being extremely reactive, would make such reactions difficult to control. We want these lithium ions to float over to the other (positive) side, and just settle in between the atoms of that electrode. This process is called **intercalation**. Hence, we want a cathode which allows lithium ions to settle in, move about easily (easy ion transfer), gain back electrons if so desired, etc. Metal chalcogenides (compounds of a metal with a group-16 element such as sulfur or oxygen) were considered to be amongst such options.

One of the first such metal chalcogenides to be considered was titanium disulfide, $TiS_2$. A voltage of 2.5 V could be recorded in such lithium batteries, and Exxon started manufacturing them. **However**, remember the problem of ensuring that the cathode and anode are not in physical contact? Dendrites of lithium started forming at the negative side (lithium ions return to the negative side on charging and are deposited there as metal), which would eventually grow to touch the positive side, short circuiting the battery. This was a huge setback for lithium battery development.

Scientists eventually hit upon this idea: what if the positive lithium ions didn’t have to travel all the way from a lithium metal plate at the negative electrode? What if they could settle inside the negative electrode itself? They then started searching for materials that would make this possible. **Akira Yoshino** solved this problem by considering heat-treated petroleum coke. This could form the anode, and allow lithium ions to settle in through intercalation at the anode (negative end) itself.

**John Goodenough**, on the other hand, found a material for the cathode that would increase the voltage of the cell from 2.5 V to 4-5 V. Instead of $TiS_2$, he considered another metal chalcogenide: lithium cobalt oxide, $LiCoO_2$. Oxygen atoms are smaller than sulfur atoms (as in $TiS_2$), and would allow lithium ions to move about more easily. This would allow an easier gain of electrons by these ions, and hence a higher voltage. Moreover, lithium ions are especially mobile in close-packed arrays, and $LiCoO_2$ has exactly that structure. If the cathode and anode are both only meant to “house” the lithium ions, where would the lithium ions come from (if not from an anode plate of lithium)? The electrolyte, which contained a lithium salt dissolved in propylene carbonate, along with lithium metal.

Hence, **Yoshino** and **Goodenough**, together, produced a much more powerful and stable lithium battery, and their research truly changed the world. The computer or mobile phone you might be reading this on is evidence enough.

The Physics prize for this year was awarded to James Peebles, Michel Mayor and Didier Queloz. I will be referring to this article by the Nobel Prize committee.

**James Peebles**

**Punchline**: Peebles is the man behind the mathematical foundations of dark matter and dark energy!

We shall now begin with some background.

Cosmic Microwave Background (CMB) radiation: At the moment of the Big Bang (approximately 13.8 billion years ago), the universe was insanely hot, as one might expect. Electrons and nuclei were too excited (literally!) to combine to form elements. Charged particles would interact with photons (light), and hence light would not be able to travel long distances without being interfered with. 400,000 years of this madness, and then things cooled down (to around 3000 K). Charged particles no longer interacted with photons, allowing light to travel intergalactic distances, and telling us earthlings of galaxies far away in space (and possibly time). Electrons and nuclei could now combine to form elements. Note that the energy of light travelling across an expanding universe decreases because of redshift (this has caused the temperature of this radiation to drop from 3000 K to 2.7 K). The word “redshift” refers to a shift of frequency/energy from a higher (violet) region to a lower (red) region. In this case, the frequency has shifted to even lower than red: the microwave region. This, friends, is the cosmic microwave background radiation, the radiation that has been travelling since 400,000 years after the big bang. And **it is not the same in all directions!!**
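Since the temperature of blackbody radiation falls inversely with the expansion ($T \propto 1/R$), the two quoted temperatures give a quick estimate of how much the universe has expanded since the CMB was released:

```python
# T scales as 1/R for radiation in an expanding universe
T_decoupling = 3000.0  # K, temperature when the CMB was released (from the text)
T_today = 2.7          # K, temperature of the CMB today (from the text)

expansion_factor = T_decoupling / T_today
print(expansion_factor)  # ~1100: distances have stretched about 1100-fold
```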

Let me try and explain the last line of the previous paragraph. Suppose you had an instrument with which you could measure the intensity, frequency, etc. of the cosmic microwave background radiation (CMB radiation is easily detectable on Earth, and its first detection also won a Nobel prize). If you turn the instrument around in all directions, you will find a slight change in intensity, frequency, etc. This property, of not being the same in all directions, is called **anisotropy**.

We shall now derive some basic equations that are relevant to an expanding universe. We know that the universe is expanding in every direction. But what is the mechanism of this expansion? Expanding relative to what? These are some common questions that often trip the budding scientist. Let us, for purposes of illustration, imagine that the whole universe is an expanding balloon of radius $R$ – not just the rubber boundary, but the air inside too. Consider a mass $m$ on the boundary of the balloon. Then the energy of this mass is $E = \frac{1}{2}m\dot{R}^2 - \frac{GMm}{R}$. Clearly, the first term is the kinetic energy and the second term the potential energy. Here $M = \frac{4}{3}\pi R^3 \rho$ is the mass enclosed by the balloon, with $\rho$ the density.

A little rearrangement of this gives $\left(\frac{\dot{R}}{R}\right)^2 = \frac{8\pi G\rho}{3} - \frac{kc^2}{R^2}$. Here $k = -\frac{2E}{mc^2}$, and may be interpreted as curvature. A fundamental question in cosmology has been: does the universe have positive curvature (is shaped like a ball), or negative curvature (is shaped like a horse’s saddle at each point)? Or is it flat (zero curvature)? It turns out that it is very nearly flat. However, arriving upon this answer was not easy, and took decades of cutting edge scientific work. Peebles was instrumental in arriving upon this answer, which lay upon understanding that the universe is at critical energy density (and not more or less, which would be characterized by positive and negative curvature respectively).

One of the fundamental properties of the universe that is studied in cosmology is energy density: how much energy does the universe pack in a given volume (a given ball), and how does this density change when that volume itself expands (as the universe is expanding)? The equation $E = mc^2$ tells us that matter can be converted into energy, or is just another form of energy. The energy density contained in matter changes by a factor of $\left(\frac{R_1}{R_2}\right)^3$ when the given volume expands from a ball of radius $R_1$ to a ball of radius $R_2$. The energy density contained in radiation (say in light) changes even more, because an expanding universe creates redshift (loss of energy) as explained above. When a ball expands from radius $R_1$ to $R_2$, the energy density in radiation changes by a factor of $\left(\frac{R_1}{R_2}\right)^4$. Let us now try and bring these facts together.

Baryons (traditional matter that humans can perceive) form around 5% of the mass/energy of the universe. If baryons were the only matter in the universe, then our theories of gravity would predict a vastly different universe than what we can see. Galaxies would not form, and we would all be floating subatomic particles in space. To come up with a concept of matter that makes gravitational clumping into planets and galaxies possible, scientists came up with **dark matter**. However, this dark matter behaves like ordinary matter under expansion of the universe, in that its energy density decreases (by a factor of $(R_1/R_2)^3$) on expansion. Hence, the energy density from matter and dark matter would become more and more sparse with time. This does not explain the energy density of the observable universe, as measured by cosmologists. Scientists then came up with the concept of, wait for it, **dark energy**. The energy density of dark energy doesn’t decrease with the expansion of the universe. Almost sounds like a cop out! But the presence of both dark matter and dark energy has been confirmed by multiple scientific experiments since the time of their conception. Dark energy forms about 69% of the total energy in the universe.

Where does **dark energy** show up mathematically? Let us think back to the equation $\left(\frac{\dot{R}}{R}\right)^2 = \frac{8\pi G\rho}{3} - \frac{kc^2}{R^2}$. Let us now add a constant $\Lambda$ term to the right, to get $\left(\frac{\dot{R}}{R}\right)^2 = \frac{8\pi G\rho}{3} - \frac{kc^2}{R^2} + \frac{\Lambda c^2}{3}$. As the size of the universe (or $R$) grows, the $\Lambda$ term comes to dominate, regardless of how small the value of $\Lambda$ is (it is scientifically predicted to be quite small, actually). Hence, the equation looks more and more like $\dot{R}^2 \approx \frac{\Lambda c^2}{3}R^2$, which is quadratic in $R$, and which means that $\dot{R}$ grows like a linear function of $R$. A velocity that grows linearly with $R$ implies a non-zero acceleration. Hence, the larger the size of the universe, the faster the galaxies recede from each other! As $\Lambda$ is supposed to denote dark energy, it is this dark energy that causes galaxies to accelerate away from each other.
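To see the $\Lambda$ term taking over, one can numerically integrate a toy version of the expansion equation (flat universe; the constants below are made up for illustration, in units where $8\pi G/3 = c = 1$):

```python
import math

# Toy flat-universe expansion equation: (R'/R)^2 = rho_m0/R^3 + Lam/3,
# in illustrative units with 8*pi*G/3 = c = 1. Matter dilutes as 1/R^3;
# the Lambda (dark energy) term stays constant.
rho_m0 = 1.0   # made-up matter density at R = 1
Lam = 0.7      # made-up cosmological constant

def dRdt(R):
    return R * math.sqrt(rho_m0 / R**3 + Lam / 3)

R, dt = 1.0, 1e-3
for _ in range(20_000):          # crude Euler integration
    R += dRdt(R) * dt

# At late times R'/R approaches the constant sqrt(Lam/3):
# R grows exponentially, i.e. R' grows linearly with R.
hubble_late = dRdt(R) / R
print(hubble_late, math.sqrt(Lam / 3))
```

However small `Lam` is, the matter term dies off as $1/R^3$ and the expansion rate $\dot{R}/R$ settles to the constant $\sqrt{\Lambda/3}$.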

Now what was Peebles’ contribution to all of this? It turns out that when Penzias and Wilson observed CMB radiation in 1964, the theoretical basis for such a radiation (at around 10 K) had already been laid by Peebles, and in fact Penzias and Wilson could understand the import of their discovery only after talking to Peebles.

Another contribution of his was the following: scientists used to think that both light elements (like hydrogen and helium) and heavier elements (like iron) were produced right at the big bang. However, Peebles clarified that only light elements could have been produced during the earlier stages of the universe, and that too only once the temperature had dropped enough for deuterium (a hydrogen isotope) to be converted to helium. If the matter density at this moment was high, then large amounts of helium would have been produced; otherwise, lower amounts.

Anisotropies in the CMB: The energy/temperature of CMB radiation is affected by two factors: (1) if the radiation is climbing out of a deep potential well (say, getting away from an object with high gravitational attraction), then it loses a lot of energy in the process of climbing out, lowering its temperature. (2) During decoupling (separation) of the radiation from charged matter (400,000 years after the big bang), the potential energy between the charged particles and photons is converted into energy of the photons, raising their temperature.

Remember that CMB radiation tells us about the state of the early universe. In the early universe, fluctuations in density would cause acoustic waves (pressure waves, like sound) to travel through the hot plasma. These acoustic waves would inevitably leave an imprint on the CMB radiation (although they themselves are not CMB radiation). These waves can have different frequencies, and there can be a different power associated with each frequency. The power spectrum of these acoustic waves tells us a lot about the early universe, and also helps us detect dark energy!

The first peak is formed when baryonic (normal) matter and dark matter fall towards a centre of mass under the influence of gravity. Note that even such a collapse can produce acoustic waves, much like a collapsing building sends outwardly radiating cracks through the structure. After this collapse, radiation, with its energy increased because of the collapse, forces matter out again. This produces the second peak. However, the radiation cannot force dark matter out, as dark matter does not interact with radiation (which is why we cannot see it, or perceive it in other ways). This dark matter exerts a gravitational force on the baryonic matter, and causes the latter to collapse again, producing the third acoustic peak. Because it is the same baryonic matter that comes out and then collapses again, the height of the third peak is comparable to the height of the second peak. The relative heights of the peaks tell us that baryonic matter makes up only about 5% of the universe’s energy content, and dark matter about 26%. The rest, 69%, is dark energy.

What was Peebles’ contribution to all of this? He insisted on including the cosmological constant $\Lambda$, which brought dark energy to the fore, and helped explain the heights of the acoustic peaks. He also accurately calculated the anisotropy of CMB radiation to be of the order $\Delta T/T \sim 10^{-5}$, which was experimentally confirmed. He also predicted that anisotropies are visible in CMB radiation only at large scales, and that at small scales these anisotropies are smoothed out due to diffusion. This too was experimentally confirmed.

Peebles is perhaps the rock around which our understanding of the composition of the universe revolves. A laureate amongst laureates.

**Michel Mayor and Didier Queloz**

Planets revolve around stars, right? Almost. In any “solar system” (a system of a star and its planets), both the star and the planets revolve around a common centre of mass. Analyzing this stellar motion, however small, is the most promising way of detecting whether a star has accompanying planets, because the gravitational pull of the planets perturbs the motion of the star in observable ways. The planets themselves cannot be observed directly because of their extremely small size and distance. This is the kind of analysis that Mayor and Queloz did to detect planets around distant stars, kickstarting this whole field.

But how would one observe the motion of stars? Would we see them moving across the sky, and then make deductions? No. If one is an observer in (or near) the plane of the star’s rotation about the common centre of mass, the star is sometimes coming towards us, and at other times going away. Hence, the Doppler effect can help us study the motion of the star: its light is blueshifted as it approaches, and redshifted as it recedes.
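To get a feel for the sizes involved, here is the wavelength shift produced by a stellar wobble of 50 m/s (a made-up but representative radial-velocity amplitude) on a 500 nm spectral line:

```python
c = 299_792_458.0   # speed of light, m/s
v = 50.0            # assumed radial-velocity amplitude of the star, m/s
lam = 500e-9        # a visible-light spectral line, m

# Non-relativistic Doppler shift: delta_lambda / lambda = v / c
dlam = lam * v / c
print(dlam)  # ~8.3e-14 m: less than a ten-thousandth of a nanometre
```

Detecting shifts this small is exactly why the precision of the spectrograph matters so much.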

The way Doppler spectroscopy worked before was that scientists would compare the spectra of stars with the spectra of gases like hydrogen fluoride (HF), and then make deductions about the motions of the stars. This was a fairly restrictive technique, as only **bright** stars could be analyzed this way. Michel Mayor instead used the new fiber cable-linked **echelle spectrograph** called the **ELODIE spectrograph**, with which all kinds of stars, of low and high brightness, could be analyzed. Clearly, this opened up many more stars with potential planets to scientists.

Soon, using this spectrograph, Mayor and Queloz observed the star 51 Pegasi to wobble with a period of just 4 days, which let them observe many periods of this motion, and hence study it in great detail. They deduced that the star has a Jupiter-like planet, 51 Pegasi b, at an astonishingly small distance of 0.05 AU. Earlier, scientists had thought that a Jupiter-like planet would have to be at a large distance from its star. This discovery turned that prediction on its head. It was later hypothesized that such planets probably form at a large distance, but migrate closer to their stars due to gravitational interactions and other effects.
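The 4-day figure is consistent with Kepler's third law, assuming 51 Pegasi has roughly one solar mass (a simplifying assumption; its actual mass is close to this):

```python
import math

# Kepler's third law for a star of one solar mass: T[years]^2 = a[AU]^3
a = 0.05                      # orbital distance from the text, AU
T_years = math.sqrt(a ** 3)
T_days = T_years * 365.25
print(T_days)  # ~4 days, matching the observed period
```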

Mayor and Queloz started this revolution with a slightly improved spectrograph and a willingness to suspend prior beliefs (about how far a Jupiter-like planet should be from its star), and that revolution has now yielded over 4,000 exoplanets and 3,000 planetary systems. Moreover, the main method of detecting planets has moved from studying Doppler shifts to studying the slight dimming of stars when planets pass in front of them.

Hopefully, we shall soon discover life on an exoplanet, and end our isolation in the universe.

Thanks for reading!


John Tate passed away recently. Hence, it is only appropriate that one tries to fathom his contributions to Mathematics. I was indeed surprised by how comprehensible the survey of his work was.

Some things that struck me were the Tate conjecture, which is pretty similar to the Hodge conjecture, and the Isogeny theorem, which is what a grad school friend of mine works on (I think she is working on a slightly generalized version of it, in a different context).

Of all the fields of Mathematics that I have been exposed to, number theory has always seemed the farthest from comprehension. Although one would imagine that a subject purportedly dealing with natural numbers would be comprehensible, the modern treatment of the field often seems to be written in a different language: what with its Hecke L-functions and number fields and unramified extensions and the like. This article, I feel, attempts to bridge that divide. I am truly grateful for the expository gift of Milne.

I haven’t read anything directly related to social causes this past month. But I did read the books “12 Rules for Life” by Jordan Peterson and “Elon Musk” by Ashlee Vance. Both tangentially talk about the need to approach social issues head on.

I also watched “Family Man” on Amazon Prime. It was refreshing to see the state of Indian Muslims shown in such a blindingly honest manner in the Indian mainstream media. The TV series deals with delicate issues in an amazingly nuanced way, and I would recommend it to everyone.

I plan on spending more time reading the EA newsletters, and perhaps also sections of “Gates Notes”.

Of the books that I completed last month, the two most relevant are both by Jared Diamond: “Guns, Germs and Steel”, and “Upheaval: Turning Points for Nations in Crisis”. I would like to quote a passage in full, as it is extremely relevant to what has been happening in Kashmir:

*Despite those Dutch military successes, the US government wanted to appear to support the Third World anti-colonial movement, and it was able to force the Dutch to cede Dutch New Guinea. As a face-saving gesture, the Dutch ceded it not directly to Indonesia but instead to the United Nations, which seven months later transferred administrative control (but not ownership) to Indonesia, subject to a future plebiscite. The Indonesian government then initiated a program of massive transmigration from other Indonesian provinces, in part to ensure a majority of Indonesian New-Guineans in Indonesian New Guinea. Seven years later, a hand-picked assembly of New Guinean leaders voted under pressure for incorporation of Dutch New Guinea into Indonesia. New Guineans who had been on the verge of independence from the Netherlands launched a guerrilla campaign for independence from Indonesia that is continuing today, over half a century later. *

This very closely parallels what has happened in our northernmost (former) state. It was instructive to learn that such approaches have been implemented in the past, and did not yield desired results.

**Monty Hall Problem**

The Monty Hall Problem is a famous problem in Mathematics. Although I have known about the problem for a long time, I could never fully understand it. I recently read about it in the book “The Curious Incident of the Dog in the Night-Time”, and thought I finally had some understanding of it. I will try to write down my thoughts on it.

There are three doors; we shall call them $A$, $B$, and $C$. There is a car behind one of those doors, and nothing behind the other doors. You are asked to choose a door. Let us suppose you choose $A$. The host will now open one of the remaining doors to show that the car is not behind it. Let us suppose that he opens $B$. Should you now stick to your previous choice of door $A$, or should you change your choice to $C$?

The best way to understand this problem is to generalize it, perhaps by increasing the number of “doors”. Let us suppose that there are $n$ cups (instead of doors), labeled $1$ to $n$, where $n$ is large. There is a ball in one of those cups, and we have to choose the cup that we think contains the ball. Clearly, the probability of the ball being in cup $1$ is $\frac{1}{n}$, and the probability of the ball not being in cup $1$ is $\frac{n-1}{n}$. As we can see, the probability of the ball **not** being in cup $1$ is substantially higher; in other words, we can be almost certain that the ball is not in cup $1$. Let us now suppose that there is a host, who asks you to choose a cup which you think contains the ball. Let us say you choose cup $1$. Now out of the remaining $n-1$ cups, he opens $n-2$ cups which do not contain the ball. So there are only two cups remaining: cup $1$, and one other, which we shall call cup $k$. Should you switch to cup $k$?

Remember that we can be almost sure the ball was never in cup $1$ (the probability of it being in cup $1$ was only $\frac{1}{n}$). Hence, it almost certainly had to have been in some other cup. Now all cups except cup $1$ and cup $k$ have been opened and found empty. Because the probability of the ball being outside cup $1$ is almost $1$, and cup $k$ is the only other cup left unopened, the ball is almost certainly in cup $k$. Hence you should switch to cup $k$!!

The same thing happens in the Monty Hall problem with three doors. The probability of the car being behind $A$ is $\frac{1}{3}$, and the probability of the car not being behind $A$ (and hence being behind $B$ or $C$) is $\frac{2}{3}$. Now that the host has opened $B$ to show that there is nothing behind it, its share of this probability gets transferred to $C$. Hence, $C$ has a probability of $\frac{2}{3}$ of having the car behind it, and you should switch to it!
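The argument above is easy to check with a quick Monte Carlo simulation of the three-door game:

```python
import random

def play(switch, trials=100_000, seed=0):
    """Play the Monty Hall game repeatedly; return the win frequency."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)       # door hiding the car
        pick = rng.randrange(3)      # contestant's first choice
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

p_stay = play(switch=False)
p_switch = play(switch=True)
print(p_stay, p_switch)  # close to 1/3 and 2/3 respectively
```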

I shall soon be updating this blog post with other mathematical gems from the book.


Today is India’s Independence Day, and hence an appropriate occasion to talk about this report. It is a 560 page report on torture in Kashmir, out of which I read the first 100 pages. More than anything, it helped broaden my viewpoint on the Kashmir conflict. As we all know, India has removed Kashmir’s special status, and made it a union territory. Most people in India are in support of this, and think it will lead to development and peace. While Kashmir boils in furore, the Indian government denies any protests or tension there. Only time will tell what this will lead to.

As you might have noticed, I have donated only 5%, instead of the usual 10%, of my earnings in the month of July. The reason is that I donated the other 5% towards the undergraduate fees of a girl in Tamil Nadu, India. Her father passed away recently, and she was unable to afford her college fees anymore. What makes me happier is that she wants to pursue Mathematics.


**Relation of the Allen-Cahn equation with minimal surfaces**– The first talk of the day was given by Marco Guaraco. The theme of the talk was finding a function $u$ that satisfies a particular PDE, and then making it converge to a minimal surface in a nice way. Remember that yesterday we determined that a function satisfying the energy minimization condition need not itself describe a minimal surface. Hence, it is not obvious that $u$ itself would give a minimal surface. However, we can make it converge to one.

The speaker wrote a nice set of notes on his talk, which contain pretty much all that he talked about and more. Hence, I am not going to write notes for this talk, although there seem to be a couple of misprints in them that I could have elaborated on.

**Harmonic maps to metric spaces**– The second talk of the day was given by Christine Breiner.

Let $f:(M,g)\to(N,h)$ be a map between Riemannian manifolds, and define $E(f) = \frac{1}{2}\int_M |df|^2\, dv_g$ (this could perhaps be thought of as a form of energy of the map). The critical points of $E$ are harmonic maps (which means that as we vary $f$, the maps that are stationary points of $E$ are harmonic maps). This is clearly a variational problem. Some examples are geodesics, harmonic forms, and totally geodesic maps.

There is a theorem by Ahlfors-Bers ’60 and Morrey ’38 which states that if $g$ is a bounded, measurable Riemannian metric on $S^2$, then there exists an almost conformal homeomorphism $h:(S^2, g_0)\to(S^2, g)$. Here I suppose $g_0$ is the metric induced on the sphere from Euclidean space. A question one can then ask is: if $(X,d)$ is a geodesic space that is homeomorphic to $S^2$, is there a quasi-symmetric or quasi-conformal homeomorphism $h: S^2 \to X$? Note that we have weakened conformal to quasi-conformal. We have put in a homeomorphism, but taken away the boundedness of the metric.

There are some partial results in this direction. If $X$ is a compact, locally CAT(1) space homeomorphic to $S^2$ admitting a map of finite energy, then there exists an almost conformal harmonic homeomorphism $S^2 \to X$. Note that we don’t have boundedness of the metric here; $X$ being a CAT(1) space suffices. But what is a CAT(1) space? It is a complete geodesic space in which, for geodesic triangles of perimeter less than $2\pi$, the comparison triangles on the unit sphere are “fatter”. One way to think of this is that CAT(1) spaces are “less curved” than $S^2$ under its usual metric.

The speaker then went on to talk about other things that I could not fully understand, including a definition of energy for maps into metric spaces. The point that I did understand is the following: a map is harmonic if it is locally energy-minimizing. Moreover, CAT(1) spaces are hugely useful in this area, as they crop up wherever we do not have a bounded metric.

**K-stability**– The third talk of the day was given by Sean Paul from the University of Wisconsin-Madison. Let $(X^n, \omega)$ be a compact Kähler manifold, where $\omega$ is a Kähler form. Clearly, such a form can only be defined on an even (real) dimensional manifold. $X^n$ just denotes that $X$ is $n$-dimensional, and not an $n$-fold product of $X$.

Let us define $\bar{S} = \frac{1}{V}\int_X S(\omega)\,\omega^n$, where $S(\omega)$ is the scalar curvature and $V = \int_X \omega^n$. The reason why we have raised $\omega$ to the $n$th power is that we want to create a volume form, as that is the only way that we can integrate over the whole manifold. $V$ is the volume of $X$. Hence, $\bar{S}$ is a normalized integral (an average) of the scalar curvature.

Let us now define $\mathcal{H} = \{\varphi \in C^\infty(X) : \omega + i\partial\bar{\partial}\varphi > 0\}$. One also refers to this as the set of Kähler metrics on $X$. Let us write $\omega_\varphi = \omega + i\partial\bar{\partial}\varphi$. An important open question is: does there exist a function $\varphi \in \mathcal{H}$ such that $S(\omega_\varphi) = \bar{S}$? We are integrating on the right, and hence we’re sort of taking an average (the division by the volume of $X$ is but a trivial normalization, and let us assume we do that here). On the left, we are just finding the scalar curvature attached to a particular function. So does there exist a function whose scalar curvature equals the average curvature of the manifold $X$?

Let us now perhaps place some extra conditions on $X$ to make it more amenable: let us assume it is Fano, which means it has positive curvature. Define the Mabuchi energy functional $\nu(\varphi)$ on $\mathcal{H}$. We want to prove that it is bounded below as we vary the function $\varphi$: although it may be negative, it cannot go to $-\infty$. Note that the volume $V$ is always positive. This is just an aside, and perhaps only increases the chances of $\nu$ being unbounded below.

It is a theorem of Tian’s from ’97 that $\nu$ is proper (in particular, bounded below) on $\mathcal{H}$ iff there exist constants $\epsilon, C > 0$ such that $\nu(\varphi) \geq \epsilon J(\varphi) - C$. Here the speaker notes that $J(\varphi)$ is, in other words, a rescaled average norm of the first derivative of $\varphi$.

If $\nu$ is not bounded below, then there exists a sequence $\varphi_j \in \mathcal{H}$ such that $\nu(\varphi_j) \to -\infty$. We have to somehow find a way to contradict this.

Remember that $X$ is a Fano variety, which here comes with an embedding $X \subset \mathbb{P}^N$. Let $G = SL(N+1, \mathbb{C})$. It turns out that $G$ embeds into $\mathcal{H}$. How can a matrix be embedded into a set of functions? You make each element of $G$ act on $\mathbb{P}^N$, and restricting the resulting Fubini–Study potential to $X$ gives a function. As $X$ itself is also a subset of $\mathbb{P}^N$ ($X$ is a variety, and hence a subset of projective space), this makes sense.

It turns out that proving $M$ is bounded below on this subset of Bergman-type potentials implies that it is bounded below on all of $\mathcal{H}$ as well. This is because, for each $\varphi \in \mathcal{H}$, there exist Bergman-type potentials $\varphi_k$ such that $\varphi_k \to \varphi$. I did not understand the notation the speaker used for this approximation.

The speaker then goes on to discuss related results. As far as my understanding of the talk goes, the speaker did not state that the above theorem had been proven, but only talked about possible approaches that one could take to prove it.

**Mean Curvature Flow**– The last talk of the day was given by Lu Wang. Given a hypersurface in $\mathbb{R}^n$, let its mean curvature vector at a point $x$ be denoted $\vec{H}(x)$. Then mean curvature flow moves each point by $\partial_t x = \vec{H}(x)$. The rate of change of volume (area) under this gradient flow is the negative of the integral of $|\vec{H}|^2$ over the manifold, so the flow decreases area as fast as possible. Sounds intuitive enough.
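A sanity check of my own (not from the talk): for a round circle evolving by curve-shortening flow, the radius satisfies $dr/dt = -1/r$, so $r(t) = \sqrt{r_0^2 - 2t}$, and the identity $dL/dt = -\int \kappa^2\, ds$ can be verified directly.

```python
import math

# For a circle of radius r under curve-shortening flow: dr/dt = -1/r,
# so r(t) = sqrt(r0^2 - 2t).  We check dL/dt = -(integral of kappa^2),
# where L = 2*pi*r and kappa = 1/r.

def radius(r0, t):
    return math.sqrt(r0**2 - 2 * t)

def length(r0, t):
    return 2 * math.pi * radius(r0, t)

r0, t, h = 2.0, 0.5, 1e-6

# Numerical dL/dt via a central difference.
dL_dt = (length(r0, t + h) - length(r0, t - h)) / (2 * h)

# Integral of kappa^2 over the circle: (1/r^2) * (2*pi*r) = 2*pi/r.
integral_k2 = 2 * math.pi / radius(r0, t)

print(dL_dt, -integral_k2)  # the two should agree
assert abs(dL_dt + integral_k2) < 1e-6
```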

It turns out that "maximal surfaces are stable solutions". What I think this means (although I cannot be sure) is that under the gradient flow, the manifold ultimately converges to such a maximal surface.

The speaker then goes on to give examples of various kinds of gradient flows: a sphere contracting to a point, a cylinder contracting to its axis, etc. As one can see, contraction further increases curvature, which only accelerates the rate of contraction. Hence, contraction to the final state takes only finite time. One may also think of the example of a two-dimensional dumbbell, which can be thought of as two spheres connected by a long narrow rod (which has some curvature of its own). The two spheres soon separate. In fact, closed surfaces always form singularities in finite time. While one may think that the rod does not contract, and hence the two spheres simply dissociate from it, that is false: to maintain continuity, the rod does in fact contract along with the spheres (continuity is maintained until the formation of the singularity).
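The shrinking-sphere example above can be made quantitative; this toy computation is my own, not from the talk. A round $n$-sphere of radius $r$ in $\mathbb{R}^{n+1}$ has mean curvature $n/r$, so $dr/dt = -n/r$, giving $r(t) = \sqrt{r_0^2 - 2nt}$ and a finite extinction time $T = r_0^2/(2n)$, with the contraction speed blowing up as $t \to T$.

```python
import math

# A round n-sphere of radius r in R^{n+1} has mean curvature n/r, so
# under mean curvature flow dr/dt = -n/r and r(t) = sqrt(r0^2 - 2nt).
# The sphere vanishes at the finite extinction time T = r0^2 / (2n).

def extinction_time(r0, n):
    return r0**2 / (2 * n)

def radius(r0, n, t):
    return math.sqrt(r0**2 - 2 * n * t)

r0, n = 1.0, 2           # a 2-sphere in R^3
T = extinction_time(r0, n)
print(T)                 # 0.25

# The contraction speed n/r blows up as t -> T: the flow accelerates.
speeds = [n / radius(r0, n, f * T) for f in (0.0, 0.9, 0.99)]
assert speeds[0] < speeds[1] < speeds[2]
```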

We shall now discuss the Avoidance Principle, which states that if we have two hypersurfaces $\Sigma_1$ and $\Sigma_2$ (one may imagine them as two shells in a space of one higher dimension), one contained within the other (one may think of two concentric circles), and $\Sigma_1(0) \cap \Sigma_2(0) = \emptyset$, then $\Sigma_1(t) \cap \Sigma_2(t) = \emptyset$ for all later times $t$. This is because the inner hypersurface has higher curvature than the outer one, and hence its gradient flow is faster. Although the pointwise mean curvature may not always be larger for the inner hypersurface, the rate of change of volume depends on the total integral of the mean curvature (hence, in some sense, on the average mean curvature), and the average mean curvature of the inner hypersurface is definitely higher.
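The concentric-circles picture can be checked explicitly (my own toy computation, not from the talk): under curve-shortening flow each radius obeys $r(t) = \sqrt{r(0)^2 - 2t}$, so the difference of squared radii is constant in time and the circles can never touch; in fact the gap widens.

```python
import math

# Two concentric circles under curve-shortening flow: each radius
# obeys r(t) = sqrt(r(0)^2 - 2t), so r_out^2 - r_in^2 is constant in
# time and the circles stay disjoint (avoidance principle).

def radius(r0, t):
    return math.sqrt(r0**2 - 2 * t)

r_in0, r_out0 = 1.0, 2.0
T_in = r_in0**2 / 2          # the inner circle's extinction time

gaps = []
for f in (0.0, 0.5, 0.9, 0.99):
    t = f * T_in
    gaps.append(radius(r_out0, t) - radius(r_in0, t))

# The gap never closes; it actually widens as the inner circle shrinks,
# since gap = (r_out^2 - r_in^2) / (r_out + r_in) and the denominator
# decreases with time.
assert all(g > 0 for g in gaps)
assert gaps == sorted(gaps)
```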

Now what about surfaces with singularities? How do they exhibit gradient flow? Imagine a cone (flat sides, and a vertex where the curvature is concentrated). One can show that such a cone also exhibits a well-understood gradient flow, in which the vertex smoothens out and the flat sides become more rounded. The speaker goes on to explain this using the concept of expanders.

Consider a certain weighted area integral over the evolving surface, which may be thought of as measuring the rounded cone. Taken naively this does not make sense, as the integral blows up at infinity. However, we can correct it by subtracting a matching divergent quantity (the same integral taken over the cone itself, if I understood correctly). The difference of the two is then a finite, well-defined expression.

For large $t$, the flow $\Sigma_t$ looks like $\sqrt{t}\,\Sigma_1$ for a fixed expander $\Sigma_1$. Why do we need to divide by $\sqrt{t}$? I suppose that if we were to consider the gradient flow of a cone, it would keep expanding into something larger and larger. To somehow control the size of $\Sigma_t$, we divide by $\sqrt{t}$. Note that in this case the gradient flow does not speed up with time, as the curvature keeps decreasing. Hence, the accelerating phenomenon above is only valid for closed curves and surfaces.
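A short derivation of my own (not from the talk, notation assumed) of why $\sqrt{t}$ is the natural rescaling: plug the self-similar ansatz $x(t) = \sqrt{t}\,y$, with $y$ on a fixed surface, into the flow equation $\partial_t x = \vec{H}(x)$ and use the fact that mean curvature scales inversely with length.

```latex
% Self-similar ansatz: x(t) = \sqrt{t}\, y, with y on a fixed surface.
% Mean curvature scales as \vec H(\lambda y) = \lambda^{-1} \vec H(y).
\partial_t x = \frac{y}{2\sqrt{t}}, \qquad
\vec H(x) = \vec H(\sqrt{t}\, y) = \frac{1}{\sqrt{t}}\,\vec H(y).
% Equating normal components, the factors of \sqrt{t} cancel, leaving
% the t-independent expander equation:
\vec H(y) = \frac{y^{\perp}}{2}.
```

The cancellation of the $\sqrt{t}$ factors is exactly why a cone can flow self-similarly for all time: the shape $\Sigma_1$ solving this equation never changes, only its scale does.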

The speaker then also talks about the fact that for generic cones, there will be a sequence of expanders $\Sigma_1, \Sigma_2, \dots$ which are alternately stable and unstable. But isn't gradient flow supposed to stop in finite time? No: for a cone, as we saw above, the gradient flow continues for infinite time, although at a slowing rate. The expanders $\Sigma_i$ just denote the various phases that the cone passes through. What does the stability (or lack of it) of the $\Sigma_i$ mean? This I am not sure of.

I will try to record the talks that take place tomorrow too.
