cozilikethinking


Effective Altruism

The reason I'm writing this non-mathematical post here is that I want to help spread the word about this fantastic organization, which I heard about on Vipul Naik's blog.

I decided to start donating through Effective Altruism in December of last year. It is an organization that researches the most effective ways to donate money, and then makes suggestions about where one might want to donate. Typically, it asks its members to pledge 10% of their lifetime income to such causes.

I was initially introduced to Effective Altruism by some of my math heroes, like Vipul Naik. Among the factors that led to my joining were that I now have a disposable income through a monthly stipend, not all of which I spend, and that I've become increasingly involved in fundraising at my university through various organizations, an involvement I find meaningful and fulfilling.

Below is a screenshot of my donation for this month. I shall soon update this blog with details of the specific organizations linked with EA that are doing amazing work, especially those fighting malaria and other preventable diseases in Africa and other parts of the world.

[Screenshot of the donation, dated 2019-02-07]

Math Olympiads #1

I’ve been obsessed with Math Olympiads and other kinds of competitive math for a long time now. Although I should be doing more serious mathematics that can actually get me a job, I shall ignore good sense and start my series of posts on Math Olympiad problems!

The first problem I want to write about is a relatively easy one from the IMO (International Math Olympiad) 1974. I think it was a long-listed problem and didn’t actually make it to the test. It asks us to prove that

\displaystyle 343\mid 2^{147}-1

(that 343 divides 2^{147}-1)

The important thing in this problem is to figure out that 343=7^3 and 147=3\cdot 7^2. Hence, we’re being asked to prove that 7^3\mid 2^{3\cdot 7^2}-1.

We have the following theorem (Euler’s theorem) for reference: if a and b are co-prime, then a^{\phi(b)}\equiv 1\pmod b, where \phi(b) is the number of natural numbers up to b that are co-prime with b. Fermat’s Little Theorem is a special case of this.

Now on to the calculations and number crunching. We have fleshed out the main idea that we’ll use to attack the problem. We can see that \phi(7^3)=7^3-7^3/7=6\cdot 7^2. Hence, as 2 is co-prime with 7^3, we get 2^{6\cdot 7^2}-1\equiv 0\pmod {7^3}. We can factorize this as (2^{3\cdot 7^2}+1)(2^{3\cdot 7^2}-1)\equiv 0\pmod {7^3}. Now we have to prove that 7^3 does not have any factors in common with 2^{3\cdot 7^2}+1; since 7 is prime, it suffices to check that 7 does not divide it. As 2^3\equiv 1\pmod 7, we have 2^{3\cdot 7^2}\equiv 1\pmod 7 too. Hence, 2^{3\cdot 7^2}+1\equiv 2\pmod 7. This shows that 2^{3\cdot 7^2}+1 has no factors in common with 7^3, and leads us to conclude that 7^3\mid(2^{3\cdot 7^2}-1), or 343\mid 2^{147}-1. Hence proved.
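As a quick sanity check (not part of the original solution), a few lines of Python brute-force \phi(343) and verify the final congruence:

```python
from math import gcd

# Brute-force Euler's phi at 343 and check the target congruence.
phi = sum(1 for k in range(1, 343) if gcd(k, 343) == 1)
print(phi)               # 294, i.e. 6 * 7**2, matching phi(7^3) = 7^3 - 7^2
print(pow(2, 147, 343))  # 1, i.e. 343 divides 2^147 - 1
```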

I realize that the formatting is pretty spotty. I shall try and re-edit this in the near future to make it more readable.


The Möbius strip

Almost every math student (and many others besides) has heard of the Möbius strip at least once in their career. I have too. Embarrassingly, I always had my doubts about it. And I can bet that a lot of people have the same problem.

A Möbius strip is almost always taught in the following “intuitive” way: take a long, thin strip of paper, and connect the opposite ends along the length in such a way that one end is oriented in the opposite direction compared to the other end; in other words, give the strip a half-twist before joining the ends. You get something that looks like the picture below:

[Image: a paper Möbius strip]

And when you take an object, say a pen cap, around the Möbius strip, you’re somehow supposed to believe that the pen cap returns to the same spot as before, but with the opposite orientation. But you (or at least I) would think: “No!! The pen cap has not returned to the original starting point! It is on the other side of the piece of paper! Points on opposite sides of the paper can’t be the same point!”

As I grew older, I decided to exorcise my demons and think about these seemingly fundamental concepts that I had accepted but never really understood. I have arrived at the conclusion that the paper model is completely wrong, at least for conveying the essence of the Möbius strip. A much, much more convincing model is the following:

[Image: a Möbius strip]

Imagine a coin traveling from top to bottom. The coin is colored as follows: its left side is red, and its right side is blue. After disappearing at the bottom, it re-emerges at the top, but now its left side is blue and its right side is red. This picture is much, much clearer to me than paper strips forming loops.
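For those who want the formal version: the coin picture is exactly the standard quotient construction of the Möbius strip,

\displaystyle M=\big([0,1]\times(-1,1)\big)\big/\big((0,t)\sim(1,-t)\big)

where the first coordinate is the direction the coin travels in. A coin leaving the bottom edge at horizontal position t re-emerges at the top edge at position -t, and that sign flip is precisely what swaps red and blue.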

Thus ends my spiel for the day!

Term paper on Conformal Geometry

I had to write a term paper on Conformal Geometry for my Differentiable Manifolds class. Although I slacked off on it at first, I’ve spent the whole day today trying to pretty it up. I couldn’t make some changes, like changing the font of “Abstract” or inserting a picture on the cover page, mainly because I didn’t understand some aspects of the layout I had downloaded from the internet, but I’m still reasonably proud of how the paper looks (the content could certainly have been better). I’m uploading it for reference.

Here’s the paper: Introduction to Conformal Geometry

Major Life Update

Hello all! I’ve decided to do my PhD in Conformal Geometry, with applications to String Theory. I’ve been doing study projects in both Algebraic Geometry and Conformal Geometry, but realized I like both Physics and Math, and want to keep thinking about them for the rest of my life.

This is also my way of saying I’m probably going to start blogging a lot more now. Thank you for all your support and guidance 🙂

Topology Qualifying Exams

We were given a list of questions by the professor to prepare for our Topology qualifying exams at Penn State. I decided to write up the questions and answers in LaTeX. This document will hopefully be useful to future students preparing for the exam, and of general interest to others.

The latest draft is: Topology_Quals_Preparation-2

Aligning Machine Intelligence: MIRI

I am applying for a summer internship (or attendance at a workshop, or something like that) with the Machine Intelligence Research Institute (MIRI) at Berkeley, California. I have an interview tomorrow over Skype. In order to prep for it, I went on the MIRI website and read their mission statement and a paper outlining the various issues that researchers at MIRI try to address.

The paper in question is “Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda”. A link to the paper is here.

The paper says at the outset that MIRI does not concern itself with making more intelligent or more powerful artificial machines, as that is already an active field of research with multiple institutes doing wonderful work in that direction. Instead, it deals with the question of how to make machines behave in ways aligned with human interests and expectations. Some of the issues discussed in the paper are:

  1. Real-world models– Machines can be compared using a method similar to scientific induction: expose the machines to a complex environment, and have them develop models of that environment. If a machine’s model predicts future events in the complex environment with a better success rate, then its model is better. This seems to be an effective way to compare machines. However, when machines are tasked with modeling the external (real) world, they are themselves part of that environment, which makes their observations and conclusions questionable. For instance, suppose the machines are not waterproof. Then rainfall in the external environment will be treated as a catastrophic event, and a machine will spend disproportionate time and resources studying ways of avoiding rainfall, which does not align with human interests.
    Moreover, as machines are rewarded based on maximizing a reward function, they may outsmart their human judges by finding ways of maximizing the function without creating the best models of the environment. This is similar to students gaming the system: as long as they can predict what types of questions will be asked, they can learn just the important sections of the textbook for the exam, without reading the full book or gaining a cohesive understanding of the material.
  2. Decision Theory– Given a situation, what decision must a machine take? At the outset, it sounds easy: make a list of all possible actions, see which action maximizes the utility function, and then select that action (a minimal sketch of this rule appears after this list). However, it is not clear how a machine could exhaustively check all possible outcomes of all possible actions and then select the one that is “best”. Due to the varying depth of such analyses, it is also possible that in two environments that are identical in every respect, the machine chooses different courses of action. Making a reliable machine that takes the same decision every time, after thoroughly analyzing the consequences of each possible action, is a difficult problem.
  3. Logical Uncertainty– Even when humans understand a complex system arbitrarily well, it is often difficult to predict events in that system. This is not because of a lack of information, but because of a lack of deductive reasoning power. For instance, if you were shown all 52 cards of a deck in order, and then the cards were shuffled in ways that you understand, you’d still have trouble predicting which card is on top after a million fast shuffles. This is because the human mind is mostly incapable of long, fast calculations. In such circumstances, we assign probabilities to various outcomes, and then select the outcome with the highest probability.
    A similar situation applies to smart machines: they will face situations in which they will not be able to predict events accurately. Hence, they will need to assign probabilities to outcomes too. The assignment has to be done in a way that maximizes the rate of successful prediction. Teaching a machine how to do that is an active area of research.
  4. Vingean reflection– This has to do with creating smart machines that can themselves create even smarter machines without human intervention. Let us assume that we can create machines that weigh all possible courses of action and select the one that serves human interests best. Such a machine could follow the same procedure to create a smarter machine. However, because the machine it creates will be smarter than itself, the parent machine will not be able to predict all courses of action that the child machine might take (if it could, it would be as smart as the child machine, contradicting the hypothesis). Hence, an aligned machine may create a machine that is not aligned with human interests.
  5. Error-tolerant agent designs– Currently, if a machine is malfunctioning, we can just open it up and correct its code, hardware, etc. However, an intelligent machine, even if it is programmed to accept corrections when its human operators believe repairs are needed, may find ways of not complying if it has an incentive to escape such meddling. In other words, although the machine has to follow the code instructing it to listen to its human programmer, it may cleverly find another part of its code or set of instructions that allows it to escape. Programming an intelligent machine to reliably listen to humans is a surprisingly difficult problem.
  6. Value Specification– The way humans are “programmed” to procreate is that the act of sex itself is pleasurable; hence, we have a natural inclination to procreate. Although humans are aware of this, they don’t try to change the pleasurable nature of sex. In a similar way, intelligent machines can be programmed to follow human instructions by being rewarded for doing so. A sufficiently intelligent machine could figure out this reward system and decide to change it. If the machine is no longer rewarded for following human instructions, it may soon go out of human control. Hence, programming machines not to change their reward system is an area of research.
    Moreover, machines must learn about human values and interests inductively, as programming human interests directly into a computer is a fuzzy prospect at best. For instance, an exhaustive list of everything acceptable in human society is unlikely to be produced anytime soon, and hence cannot be fed into a machine. However, a machine may learn about society by observing it, and then base its actions on what is acceptable to humans. This is analogous to the fact that although not every cat picture in the world can be fed into a machine, it can inductively learn what a cat looks like by trawling the internet for cat pictures, and then identify an arbitrary cat picture based on that learning.
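Regarding point 2 above, here is a minimal Python sketch (mine, not MIRI’s, with made-up actions, probabilities, and utilities) of the naive “enumerate all actions and pick the argmax of expected utility” rule, just to make the decision-theoretic setup concrete:

```python
# Naive decision rule: enumerate actions, score each by expected utility,
# pick the best. Actions, outcomes, and utilities are invented for
# illustration only.
actions = {
    "fetch_data": [(0.9, 10), (0.1, -5)],  # (probability, utility) pairs
    "wait":       [(1.0, 0)],
    "guess":      [(0.5, 8), (0.5, -8)],
}

def expected_utility(outcomes):
    """Probability-weighted average of the utilities of an action's outcomes."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # fetch_data, with expected utility 8.5
```

The hard problems MIRI raises begin exactly where this sketch ends: real agents cannot enumerate all actions, compute exact outcome probabilities, or evaluate the utility function reliably.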

I had a great time understanding and writing about the Artificial Intelligence concerns of MIRI, and hope to understand more in the near future.

Just a small update: I came to the PSU Hackathon to learn the rudiments of coding. While randomly scrolling through Wikipedia articles, I came across the AKS primality test, created by three computer scientists (Agrawal, Kayal, and Saxena) at the Indian Institute of Technology, Kanpur. The Wikipedia page does a great job of explaining the algorithm. It is the first unconditional, deterministic polynomial-time algorithm for determining the primality of a number. And it is so simple!

The algorithm depends on the following fundamental fact: if n is a prime number, and a is a number co-prime to n (say n-1), then for any integer x, we have (x+a)^n\equiv x^n+a \pmod {n}. Moreover, read as an identity of polynomials in x, this congruence is a necessary and sufficient condition for primality! This is the simplest and most awesome thing I’ve seen in the recent past.

Note: Do you see why {n\choose k}\equiv 0 \pmod {n} for all 0<k<n if and only if n is prime? Hint: to prove that this fails for non-prime n, consider {n\choose j}, where j is the smallest prime factor of n.
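To see the lemma in action, here is a small Python experiment. Expanding (x+a)^n, the coefficient of x^k is {n\choose k}a^{n-k}, so the congruence holds as a polynomial identity exactly when every middle coefficient vanishes mod n and a^n\equiv a\pmod n. (This is a naive, exponential-time check of the congruence itself, not the full polynomial-time AKS algorithm.)

```python
from math import comb, gcd

def aks_congruence_holds(n, a=1):
    """Check (x + a)^n == x^n + a (mod n) as a polynomial identity in x."""
    assert n >= 2 and gcd(a, n) == 1
    if pow(a, n, n) != a % n:  # constant term: a^n must reduce to a mod n
        return False
    # middle coefficients: C(n, k) * a^(n-k) must vanish mod n for 0 < k < n
    return all(comb(n, k) * pow(a, n - k, n) % n == 0 for k in range(1, n))

for n in range(2, 20):
    print(n, aks_congruence_holds(n))  # True exactly at 2, 3, 5, 7, 11, 13, 17, 19
```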


I’ve been meaning to update this blog for the longest time. In order to finally get around to it, I decided to post about what I’ve been reading, rather than an expository article on an area of math, which would understandably take more time and planning to execute.

Today I was reading the following article by Jacob Lurie: Spectral Homotopy Theory. It seems amazingly readable, despite being on a relatively advanced topic. I shall update this post to include a (hopefully respectable) summary of the article.

Decisions decisions

Hi! I’ve decided to join Penn State for a PhD in Mathematics this fall. I am excited about this next leg of my life. Thanks, all.

-Ayush