HIV rebound

The paper that I’m writing about today is “The size of the expressed HIV reservoir predicts timing of viral rebound after treatment interruption” by Li et al. I will quote passages from the paper, and then try to explain what all of those fantastically long words mean.

Objectives:

Therapies to achieve sustained antiretroviral therapy-free HIV remission will require validation in analytic treatment interruption (ATI) trials. Identifying biomarkers that predict time to viral rebound could accelerate the development of such therapeutics.

This is one of a whole host of papers that deal with identifying biomarkers that can aid in the permanent treatment of HIV-positive patients. What does permanent treatment mean? When HIV-positive patients are put on an active treatment regimen, the treatment is often spectacularly successful… until the treatment stops. Then, patients see a violent relapse. However, there are some patients (we’ll call them super-patients) who don’t see a relapse at all. Researchers are now trying to figure out what it is about these patients that helps them not relapse when treatment is stopped, and whether these conditions can be re-created in all patients. Simple.

Methods:

Cell-associated DNA (CA-DNA) and CA-RNA were quantified in pre-ATI peripheral blood mononuclear cell samples, and residual plasma viremia was measured using the single-copy assay.

What is single-copy assay? Here is a direct quote from this paper:

This assay uses larger plasma sample volumes (7 ml), improved nucleic acid isolation and purification techniques, and RT-PCR to accurately quantify HIV-1 in plasma samples over a broad dynamic range (1–10^6 copies/ml). The limit of detection down to 1 copy of HIV-1 RNA makes SCA 20–50 times more sensitive than currently approved commercial assays.

Essentially it is a new-and-improved method of measuring the amount of HIV RNA in your blood plasma.

What are the results of this experiment?

Results:

Participants who initiated antiretroviral therapy (ART) during acute/early HIV infection and those on a non-nucleoside reverse transcriptase inhibitor-containing regimen had significantly delayed viral rebound. Participants who initiated ART during acute/early infection had lower levels of pre-ATI CA-RNA (acute/early vs. chronic-treated: median <92 vs. 156 HIV-1 RNA copies/10^6 CD4+ cells, P < 0.01). Higher pre-ATI CA-RNA levels were significantly associated with shorter time to viral rebound (<4 vs. 5–8 vs. >8 weeks: median 182 vs. 107 vs. <92 HIV-1 RNA copies/10^6 CD4+ cells, Kruskal–Wallis P < 0.01). The proportion of participants with detectable plasma residual viremia prior to ATI was significantly higher among those with shorter time to viral rebound.

So people who start HIV treatment early have a more successful treatment overall, and it takes longer for the disease to rebound even when the treatment is stopped. This largely aligns with common sense and with disease rebounds seen in other illnesses like cancer. What is more surprising is that patients on a non-nucleoside reverse transcriptase inhibitor-containing regimen also see the same kind of success. Let us unpack the words in this phrase. A nucleoside is a nucleotide, the basic building block of DNA and RNA, minus the phosphate group. Reverse transcriptase is the enzyme that HIV uses to construct complementary DNA sequences from its RNA genome (reverse transcription, because regular transcription constructs RNA from DNA). A reverse transcriptase inhibitor is a drug that blocks this enzyme, preventing the virus from copying itself into the host’s DNA. The nucleoside variety (NRTIs) are faulty nucleoside look-alikes that get incorporated into the growing DNA chain and terminate it; the non-nucleoside variety (NNRTIs) bind to the enzyme at a different site and jam it directly. Why NNRTI-containing regimens in particular should delay rebound more than other regimens, I’m not sure.
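As a toy illustration of the reverse transcription step, and nothing more, here is how an RNA template maps to a complementary DNA strand. The sequence is made up, and this ignores strand orientation and all the actual chemistry:

```python
# Toy illustration of reverse transcription: building a complementary
# DNA strand from an RNA template. RNA bases pair with DNA bases as
# A-T, U-A, G-C, C-G.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the complementary DNA strand for an RNA template."""
    return "".join(RNA_TO_DNA[base] for base in rna)

print(reverse_transcribe("AUGGCA"))  # -> TACCGT
```

An NRTI, loosely speaking, poisons this copying loop by sneaking in a fake base; an NNRTI stops the enzyme doing the copying at all.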

Moreover, higher levels of cell-associated HIV RNA lead to a shorter rebound time after treatment is stopped (ATI). This also makes sense. Treatment should only be stopped when RNA levels have decreased considerably. This is something I also came across in the book “The Emperor of All Maladies” by Siddhartha Mukherjee. Cancer treatment, whether it be chemotherapy or a strict drug regimen, is often stopped when the patient supposedly feels cured for a duration of time. However, the cancer often rebounds very quickly. This tells us that treatments, whether they be for cancer or HIV, should be carried on for much longer than they are today, and the patient feeling “fine” is not a good marker for when the treatment should be stopped.
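The Kruskal–Wallis statistic quoted in the results is a rank-based test for whether several groups plausibly come from the same distribution. A minimal sketch of such a comparison with SciPy, using invented CA-RNA numbers rather than the paper’s actual data:

```python
# Sketch of a Kruskal-Wallis test across three rebound-time groups.
# The CA-RNA values (copies per million CD4+ cells) are invented
# to mimic the pattern reported: shorter rebound, higher CA-RNA.
from scipy.stats import kruskal

rebound_lt_4wk = [210, 175, 190, 160, 182]
rebound_5_8wk = [120, 95, 110, 107, 130]
rebound_gt_8wk = [60, 88, 92, 70, 85]

stat, p_value = kruskal(rebound_lt_4wk, rebound_5_8wk, rebound_gt_8wk)
print(f"H = {stat:.2f}, p = {p_value:.4f}")
```

A small p-value here means the three groups’ CA-RNA levels are unlikely to be draws from one common distribution, which is exactly the shape of the claim in the paper.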

Conclusion:

Higher levels of HIV expression while on antiretroviral therapy (ART) are associated with shorter time to rebound after treatment interruption. Quantification of the active HIV reservoir may provide a biomarker of efficacy for therapies that aim to achieve ART-free remission.

This is a repetition of the above. Stop treatment only when HIV RNA levels are low. This will increase the time it takes for the disease to rebound. Essentially, disease treatment aligns with common sense. Who knew.

It sure doesn’t feel like predictive processing

Reddit user @Daniel_HMBD kindly rewrote some parts of my previous essay to make it clearer. I am now posting this corrected version here.

Broad claim: The brain (conscious or unconscious) “explains away” a large part of our surroundings: the exact motion of a tree or a blade of grass as it sways gently in the wind, the exact motion of a human as they walk, etc. If we could force our brain to make predictions about these things as well, we’d develop our scientific acumen and our understanding of the world.

How can I understand the motion of a blade of grass? The most common answer is “observe its motion really closely”. I’ve spent considerable amounts of time staring at blades of grass, trying to process their motion. Here’s the best that I could come up with: the blades are demonstrating a simple pendulum-like motion, in which the wind pulls the blade in one direction and its roots and frame pull it in the opposite direction. Observe that I didn’t end up observing the tiny details of the motion. I was only trying to fit what I saw with what I had learned in my Physics course. This is exactly what our brain does: it doesn’t really try to understand the world around us. It only tries to explain the world around us based on what we know or have learned. It does the least amount of work possible in order to form a coherent picture of the world. Let me try and explain this point further in a series of examples.
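The pendulum picture above can be written down as a tiny simulation: a damped oscillator in which the blade’s stiffness supplies the restoring force against the wind’s push. This is a sketch under invented constants, not a model of real grass:

```python
# Sketch of the "blade of grass as a pendulum" picture: a damped
# oscillator where stiffness pulls the blade back toward vertical.
# All constants are invented for illustration.

def simulate(theta0=0.3, stiffness=4.0, damping=0.5, dt=0.01, steps=1000):
    """Integrate theta'' = -stiffness*theta - damping*theta'
    with semi-implicit Euler steps; returns the deflection history."""
    theta, omega = theta0, 0.0
    history = []
    for _ in range(steps):
        alpha = -stiffness * theta - damping * omega  # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        history.append(theta)
    return history

angles = simulate()
# Natural period is roughly 2*pi/sqrt(stiffness), about 3.1 s here;
# a stiffer blade swings with a shorter period.
print(f"final deflection: {angles[-1]:.4f} rad")
```

This is precisely the “fit what I saw to my Physics course” move the paragraph describes: the model predicts a period and a decay, and a real blade of grass will disagree with both in instructive ways.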

When ancient humans saw thunder and lightning in the sky, they “explained away” the phenomena by saying that the Gods were probably angry with us, and that is why they were expressing their anger in the heavens. If there was a good harvest one year, they would think that the Gods were pleased with the animal sacrifices they’d made. If there was drought despite their generous sacrifices, they would think that the Gods were displeased with something that the people were doing (probably the witches, or the jealous enemies of our beloved king). Essentially, they would observe phenomena, and then somehow try to tie them to divine will. All of these deductions were after the fact, and were only attempts at “explaining away” natural phenomena.

When pre-Renaissance humans observed their seemingly flat lands and a circular sun rising and setting every day, they explained these observations away by saying that the earth was (obviously) flat, and that the sun was revolving around the earth. They then observed other stars and planets moving across the skies, and explained this by saying that the planets and stars were also orbiting us in perfectly circular orbits. When the orbits were found to be erratic, they built even more complicated models of celestial motion on top of existing models in order to accommodate all that they could see in the night skies. They had one assumption that couldn’t be questioned: that the earth was still and not moving. Everything else had to be “explained away”.

When we deal with people who have a great reputation for being helpful and kind, we are unusually accommodating of them. If they’re often late, or sometimes dismissive of us, we take it all in our stride and try to maintain good ties with them. We explain away their imperfect behavior with “they were probably doing something important” and “they probably mean well”. However, when we deal with people who we don’t think very much of, we are quick to judge them. Even when they’re being very nice and courteous to us, we mostly only end up thinking “why are they trying so hard to be nice” and resent them even more. We explain away their behavior with “they probably have an ulterior motive”.

Essentially, our brain sticks to what it knows or understands, and tries to interpret everything else in a way that is consistent with these assumptions. Moreover, it is not too concerned with precise and detailed explanations. When it sees thunder in the skies, it thinks “electricity, clouds, lightning rods”, etc. It doesn’t seek to understand why this bolt of lightning took exactly that shape. It is mostly happy with “lightning bolts roughly look and sound like this, all of this roughly fits in with what I learned in school about electricity and lightning, and all is going as expected”. The brain does not seek precision. It is mostly happy with rough fits to prior knowledge.

Note that the brain doesn’t really form predictions that often. It didn’t predict the lightning bolt when it happened. It started explaining away the lightning bolt after it was observed. What our brain essentially does is that it first observes things around us, and then interprets them in a way that is consistent with prior knowledge. When you observe a tree, your eyes and retina observe each fine detail of it. However, when this image is re-presented in the brain, your “the tree probably looks like this” and “the leaves roughly look like this” neurons fire, and you perceive a slightly distorted, incomplete picture of the tree as compared to what your eyes first perceived.

In other words, your brain is constantly deceiving you, giving you a dumbed-down version of reality. What can you do if you want to perceive reality more clearly?

Now we enter the historical speculation part of this essay. Leonardo da Vinci was famously curious about the world around him. He made detailed drawings of birds and dragonflies in flight, of the play between light and shadow in real life, of futuristic planes and helicopters, etc. Although his curiosity was laudable, what was even more impressive was the accuracy of his drawings. Isaac Newton, another curious scientist who made famously accurate observations of the world around him, was unmarried throughout his life and probably schizophrenic. John Nash and Michelangelo are other famous examples.

I want to argue that most neurotypicals observe external phenomena, and only after such observations try to explain these phenomena away. However, great minds generate predictions for everything around them, including swaying blades of grass. When their observations contradict these predictions, they are forced to modify their predictions and hence understanding of the world. Essentially, they are scientists in the true sense of the word. What evidence do I have for these claims? Very weak: n=1. Most of what I do is observe events, concur that this is roughly how they should be, and then move on. Because I can explain away almost anything, I don’t feel a need to modify my beliefs or assumptions. However, when I consciously try to generate predictions about the world around me, I am forced to modify my assumptions and beliefs in short order. I am forced to learn.

Why is it important to first generate predictions, and then compare them with observations? Let us take an example. When I sit on my verandah, I often observe people walking past me. I see them in motion, and after observing them think that that is roughly how I’d expect arms and legs to swing in order to make walking possible. I don’t learn anything new or perceive any finer details of human motion. I just reaffirm my prior belief of “arms and legs must roughly swing like pendulums to make walking possible” with my observations. However, I recently decided to make predictions about how the body would move while walking. When I compared these predictions with what I could observe, I realized that my predictions were way off. Legs are much straighter when we walk, the hips hardly see any vertical motion, and both of these observations were common to everyone that I could see. Hence, it is only when we make prior predictions that we can learn the finer minutiae of the world around us, details that we often ignore when we try to “explain away” observations.

I was on vacation recently, and had a lot of time to myself. I tried to generate predictions about the world around me, and then see how they correlated with reality. Some things that I learned: on hitting a rock, water waves coalesce at the back of the rock. Leaves are generally v-shaped, and not flat (this probably has something to do with maximizing sunlight collection under varying weather conditions). People barely move their hips in the vertical direction while walking. It is much more common to see variations in color amongst trees than height (height has to do with availability of food and sunlight, while color may be a result of random mutations). A surprisingly large number of road signs are about truck lanes (something that car drivers are less likely to notice, of course). Also, blades of grass have a much smaller time period than I assumed. Although I don’t remember the other things I learned, I think that I did notice a lot of things that I had never cared to notice before.

Can I use this in Mathematics (for context, I am a graduate student in Mathematics)? In other words, can I try to make predictions about mathematical facts and proofs, and hopefully align my predictions with mathematical reality? I do want to give this a serious shot, and will hopefully write a blog post on this in the future. But what does “giving it a serious shot” entail? I could read a theorem, think of a proof outline, and then see whether this is the route that the argument goes. I could also generate predictions about properties of mathematical objects, and see if those predicted properties are actually true. We’ll see if this leads anywhere.

So forming predictions, which really is a lot like the scientific method, is naturally a feature of people of certain neural descriptions, who went on to become our foremost scientists. It is yet to be seen whether people without these neural descriptions can use these skills anyway to enhance their own understanding of the world, and hopefully make a couple of interesting scientific observations as well.

An article for my school magazine

I wrote a write-up for the school magazine of The Heritage School, Kolkata. I attended the school from classes 4 to 10, and it was fun to relive some of the moments I spent at that school. I am reproducing the article below.

I was a part of the first batch of students to join The Heritage School. I missed my family terribly at my Dehradun boarding school, and my parents decided to keep me back in Kolkata and admit me to the next best thing: a “day-boarding school”. I still have vivid memories of terrible mosquito infestations in the dining hall, snakes and other curious reptiles, the thin capillary of a road that connected Ruby Hospital to the school campus, and the myriad activities and clubs that were supposed to make us well-rounded individuals. I have behaved (and sometimes misbehaved) my way through four principals, seven class teachers, and 75 million “Namaste Ma’am”s and “Sir”s. As I look back to a school that I left thirteen years ago, I am blank and overflowing with memories at the same time.

The first thing that comes to mind when I think about the school is teachers. The Heritage School has the best teachers of all schools anywhere. Period. This point can perhaps only be appreciated when one leaves the school and steps out into a generally hostile world where you’re just another fly on the wall. I’ve been taught by teachers at fancy boarding schools in India, highly experienced IB teachers in Singapore, and famous researchers at reputable colleges in India and the United States. And I can say without reservation that nowhere else did I feel that the teachers really cared about my personal growth, and that we were all one big family. For this feeling of inclusiveness and belonging, I will always be grateful. All of this despite the fact that I was uninhibitedly stupid in almost everything I did.

I also remember the library being my favorite place in the school. One of my most painful memories is falling sick on library day in class 5 and not being able to go to school. The impossibly expensive and glossy books, the air conditioning, the comfortable seats: all of this made it the best place on campus. From reading Harry Potter secretly during class to the librarian secretly allowing me to borrow the Wheel of Time series from the restricted Teachers’ section, I am grateful to the school for encouraging a reading habit in all of us, one that has surely contributed to who we are today.

When I left school in class X, it was one of the highest points in my life. I was the ICSE topper, had a coveted scholarship to complete classes 11 and 12 in Singapore, and anything seemed possible. Since then I have lived through many more academic highs and lows, chosen careers and then completely changed paths, learned more about my limitations, learned that connections with people matter much more than professional success, and have undergone the slow process of “humanization” that any person of my age can relate to. I even have some gray hair to show for it!

I am a completely different person now, of course. I have lived abroad for about one-third of my life, do research in an esoteric branch of Mathematics, and have forgotten most of what I learned in textbooks at school. However, my experiences at the school made me what I am today. Being a mathematician, I am obliged to prove this rigorously. So here goes: If I’d not gone to The Heritage School, I would never have found some of my best friends who I am still in touch with today, never have taken part in a bazillion extra-curricular activities that have now given me life-long hobbies, and never have escaped from the friendless hellhole that adolescence was for me in my apartment complex. Hence, it was only because of The Heritage School that I could have some of the best and most formative memories of my life. QED

Of dead Russian authors and dead-er French kings

Note: I’m in a gradual process of anonymizing this blog. This is just so that I can write more freely, and include observations from my life that cannot be tied to my boring real world grad student existence. We’ll see how that goes.

There’s a theme from Anna Karenina by Tolstoy that has stayed with me for years. Anna is cheating on her husband Alexei with a young army man. Alexei is a reputable senior statesman who has maintained his family’s irreproachable position in society through hard work and intelligence, and is generally respected by the higher echelons of Russian bureaucracy. Hence, his self respect and position in society take a major hit when his wife is found to openly be having an affair with someone else. Seeing as we’re talking about society in 19th century Russia, Alexei is expected to “discipline” his wife and forcibly put the affair to an end, or perhaps divorce her and leave her to fend for herself without money in an unforgiving Russian society.

Instead of all of this, Alexei has a religious awakening, and he suddenly begins to sense the love in all of humanity (perhaps seeing himself as Jesus Christ incarnate). He refuses to discipline his wife or divorce her, and tells her that she can continue living in their house with their children, while having an affair with the young army man at the same time. He protects her dignity and her standard of living, while also going out of his way to ensure that she has a romantic partner of her choosing. This is perhaps as close to God as one can get. This, as one might expect, leads her to hate and loathe him even more, so much so that she cannot even bear to look at him or be in the same house as him.

I was shocked when I read this for the first time. It seemed unfair and bizarre and very real, all at the same time. I couldn’t quite put it all together. Why would she not be grateful to such an accommodating husband? It has taken me a couple of years to understand that Anna did not need a semi-god like figure to “forgive” her for her mistakes. She just needed someone who would empathize, and not necessarily position himself above her as a superhuman, even if he was only offering kindness and not punishment.

Why am I talking about all of this? Because I face situations like these in my daily life too. If I am nice to a friend, and they don’t reciprocate the way that they “should”, I sometimes remind them that I was nice to them, and they’re not being fair to me in this social transaction. Nine times out of ten, it leads relations to sour between us. Instead of empathy, I offer them terms of an implicit social contract that they’re violating. I’ve almost always been this way, and often thought that this was a fair and honorable way to conduct human relationships. Of course I was wrong each time.

However, my life is fairly insignificant in the grand scheme of things. Hence, there is a more important reason why I am writing this post. I have been listening to Mike Duncan’s Revolutions podcast, and am currently at the French Revolution. A short summary would be that a bunch of French intellectuals thought that the only way to make society better would be to kill the royals, and then subsequently guillotine their own leaders. They’d read a lot of books, heard some sophisticated rhetoric, and concluded that they were smarter and better informed than everyone else. Hence, they should put their knowledge to good use, and kill everyone. Of course Colonialism, Communism, Fascism, and almost every other overarching genocidal movement in the last five hundred years has been the result of a bunch of educated elites reading a ton of books, and deciding that this made them smarter than everyone else. They would write thick manuscripts and manifestos on what an “ideal society” should look like, and then decide that anyone who stood in the way of their irreproachable vision was the enemy and deserved to be killed.

Of course each and every one of these educated, intelligent men was wrong. They single-handedly led to the avoidable deaths of millions. Adopting the neuroscientist Iain McGilchrist’s terminology, observing patterns and constructing theories, all of these are the domain of the left hemisphere of the brain. Empathy and connectedness – these are the domain of the right hemisphere. The French intellectuals were predominantly using their left hemispheres in devising their grand plans and writing flowery manifestos on what the future could look like, but rejecting their right hemispheres and consequently empathy for their fellow citizen. The French king Louis XVI was not an evil tyrant who would not listen to reason. He was an uncharacteristically pliant ruler who essentially followed almost every whim of his citizens. And he was still beheaded on the streets of Paris.

Whenever we think we know what’s best for other people and the world in general, we are almost always wrong. All our grand plans are probably flawed, and will need to be re-worked. Hence, if our plans can only be realized by killing or hurting other people, that’s as good a sign as any that we’ve made a major mistake and we need to go back to the drawing board. The only grand plans that have ever worked, say Capitalism, Democracy or public infrastructure, are ones that gave people even more freedom, whether it be political freedom or freedom of movement.

The best that we can do in this world, apart from giving the people in our lives even more freedom, is empathize with them. That doesn’t necessarily mean that we should be a Christ-like specter of unconditional love and forgiveness. It just means that we step into their shoes and see the world from their perspective, rather than look down on them from above and pass judgement on them or forgive them out of divine grace. This is (of course) a repeat of what Tolstoy said about farmers in Anna Karenina: that we should seek to understand and empathize with them rather than seek to “uplift” them, treating them as animals unfit to fend for themselves.

I will make a greater effort to not write sappy blogposts in the future, doling out generic “love everyone” advice. However, I feel strongly enough about this to put it in writing, if only to laugh at it years later.

The case for falling in line

Picking up bits and pieces from various writers that I admire and producing a relatively inferior narrative.

A lot of Instagram is basically a bunch of people encouraging each other to be “fierce”, not care what others think of them, keep on doing what they love, keep on being who they are, etc. This is good advice for a lot of people. I have friends who are paranoid about what others might think of them, and bend over backwards to accommodate others, often at the cost of their own happiness. This advice is probably meant for them. They would truly be happier and more fulfilled in their lives if they stopped caring about what others are thinking, and did what they wanted.

This advice, unfortunately, does not reach them. People who frequently consume content and post on social media websites are often not the very accommodating types that I describe above, but those who are extroverted and think that they have important things to say to others. These qualities (traits?) sometimes correlate with narcissism, false self-image, etc. And it is these already-extroverted people, a subset of whom are already convinced of their relative superiority over others, that such advice to “be fierce” and “don’t care what others think” reaches. I know. Because I have been one of them (some would argue that I still am, and they’re probably right). Well here goes my spiel, which is a bastardized version of Scott Alexander’s “Should You Reverse Any Advice You Hear” and Freddie deBoer’s unfortunately titled “Women Do Not Need Lunatic Overconfidence” (my take on this article has nothing to do with women).

If you frequently get such advice on the internet, chances are that you don’t need this advice. You are already “fierce”, and have a search history full of things like “how to not care what people think”. Complex machine learning algorithms have picked up these search patterns, and keep displaying similar content. The internet is not meant to change you. It is designed to keep you in the hole that you’ve dug for yourself.

In my personal history, I have displayed a lot of personality traits that didn’t help in making friends or getting along with people. Ever. For some reason, I decided to try and change myself. This of course was not my first reaction, and I stuck to “be fierce” and “don’t care what others think” in the beginning. I was probably slated to stick to these notions for life, as I see a lot of people around me doing. But a lot of truly inspirational people, for some weird reason, agreed to hang out with me pretty often, and I noticed that they were objectively far better people than me. So I decided to change myself.

Some changes that I’ve tried to make are that I try to speak less and let others take centre stage, not pass judgement too quickly, not express my opinion on something unless I am explicitly asked for one, not try to impose my way of doing things, etc. All of these are different manifestations of the same phenomenon: I learned to shut up. This is bad advice for a lot of people. Some people are very reserved and self-conscious. They perhaps need to be encouraged to speak out more and assert themselves much more. However, it was good advice for me. I am happy that I have tried to make this change.

So what does real, helpful advice look like? Most movies that we watch and books that we read ask us to be who we are, not change ourselves, etc. And when we try to do these things, some of us (like me) come away unhappy and dissatisfied. Hence, perhaps the only useful advice that there can be is “figure out where you want to be in life, and try different things until you get there”. This is so general that it is almost useless. However, it is still better advice than the more specific “never change” and “you are already the best”.

So kids, don’t take advice from the internet. The internet is not your friend. Wait…

Last year in retrospect

I turn (even) older today. Hence, this seems as good an occasion as any to put the last year in retrospect and think about things I could have done better.

Blogging

I decided last summer to start blogging about research papers outside of my field. I would often email these pieces to the authors of the papers I would write about. Regardless of the merits of my posts, I came away with a very polite and encouraging picture of researchers.

Response to blogpost on quantum computing
Response to my CRISPR blogpost
Response to blogpost on Neuromorphic Computing
Response to blogpost on Chlorophyll

What could I have done differently? I could have done a deeper dive into these subject areas, perhaps reading multiple papers to bring out the true essence of the field. I could perhaps also have been more regular about blogging. Regardless, I unilaterally call this exercise a success, as I had a lot of fun doing it and learned a lot.

Effective Altruism

It has now been about three years that I’ve been donating 10% of my income to charity. This has been a difficult transition for me. I was never particularly inclined towards charity before (in school or college), and generally thought that money donated to someone was a net negative. However, after a host of bizarre incidents (like reading Gandhi’s autobiography, some personal circumstances that pushed me to re-evaluate my life, etc), I decided to push myself to try and have a net positive impact on the world.

GiveWell estimates that Effective Altruism charities save 1 life in a developing country for every $2300 donated. By that estimate, I might have saved around 3.8 lives in the last three years. Let’s round down to 3. So three more people are alive in the world today because of the money that I donated. As I type this, I feel a staggering impulse to just gawk in disbelief. For someone who has generally struggled with positive self-image, this is surely the most important thing I have ever, ever done. Whatever I do, I will always have this. Let this inconsequential grad student have this moment of joy.
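The arithmetic behind that number is a one-liner; here is the sketch, with a placeholder donation total (my actual figures aren’t in this post):

```python
# Back-of-the-envelope: lives saved = total donated / cost per life.
# The $2,300-per-life figure is the GiveWell estimate quoted above;
# the donation total below is a placeholder, not my real number.
COST_PER_LIFE_USD = 2300

def lives_saved(total_donated_usd: float) -> float:
    return total_donated_usd / COST_PER_LIFE_USD

print(round(lives_saved(8740), 1))  # -> 3.8
```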

Of course the other people involved with Effective Altruism are much more awesome than I am, and I have learned a lot by talking to them. I am also being hosted by CEELAR in the UK to work on Artificial General Intelligence. Although I won’t be able to avail of this opportunity right now because of visa issues, I hope to do so in the near future.

How to learn

I’ve always wanted to understand how one should learn. As any researcher can surely testify, the dream perhaps is to one day be able to take any research paper or textbook and be able to understand exactly what is happening in one go. This dream is often unfulfilled as researchers take years to understand their specific subfield, and often cannot understand research from other unrelated areas. This gets in the way of cross-disciplinary research in academia and industry.

I tried to get better at it last year by trying to read papers from various fields. A quick feedback loop ensured that I kept correcting my approach and trying to get better. I started out by reading papers and understanding them at an intuitive level. This proved to be effective, but there were many topics that were still beyond my grasp. I then changed my approach to trying to draw diagrams of various concepts. Although helpful in non-mathematical fields, this didn’t help me too much in mathematics as I wasn’t able to remember theorems and calculations. I then migrated to trying to type out each line in textbooks and writing detailed analyses. This was again much more helpful than my previous approaches, and often led to new insights. However, I kept forgetting old facts and theorems. I have recently moved to studying concepts by comparing them to previously known concepts and ideas. This was in part inspired by Roam Research, which is an app that claims that the best learning happens when we’re able to place concepts in context. Although I don’t know if this is the best method to learn, it is surely the best method that I’ve tried yet. This approach, moreover, is how the right hemisphere of the brain processes information anyway. Hence, it is in many ways how humans really learn about their environment.

Self-improvement

I’ve often had various social anxieties, and have found it difficult to make friends. I used to blame it on others, but have on deep introspection found that most of the blame rests solely on me. Consequently, I have tried to improve myself so that I can contribute more positively to relationships.

One aspect that I have tried to improve upon is empathy. I find it difficult to empathize with people, and this probably has complicated neurological roots. In Iain McGilchrist’s framework, my left brain hemisphere is dominant, which contributes to a false self-image, general apathy, etc. I have tried to correct for this by taking oxytocin supplements. Although I’ve been lazy about studying the actual effects of the supplement, I feel that there has been an overall positive effect.

I’ve also tried to contact friends and family more often, tried to be more helpful, and been more assertive with respect to people who are not nice to me. Although working on my social life is a life-long project, I have only recently realized how important it is to my overall happiness, and I do wish to keep chipping away at it.

I’ve also found out a lot about myself by reading research papers from the social sciences, and I’ve blogged about them here and here. I’ve also had very fruitful correspondence with Dr. Laran, the author of one of those papers. Moreover, I recently had the opportunity to listen to the bulk of Eliezer Yudkowsky’s sequences, which have been truly life changing for me. I plan to keep this exercise going in the near future.

Final thoughts

Being at home the whole of last year has been a tremendous learning experience for me. I got the time to read a whole host of things and learn a lot. I talked to fantastic people, and also deepened bonds with friends. If you’re still reading this post and have recommendations on what else I should read/write about, please do feel free to comment or write to me. Thanks for reading!

Yet another stab at image recognition

Like every other idiot with an internet connection, I am fascinated by machine learning and neural nets. My favorite aspect of AI is image recognition, and I’ve written about it in the past. I am going to try and talk about it in reference to a book I’ve recently been reading.

The book that I’ve been reading is “The Master and His Emissary” by Iain McGilchrist. It is hands down the most amazing work I’ve come across in the recent past, and I plan to write a more detailed review on completing it. However, there is one idea that I want to flesh out below.

The main thesis of the book is that the left and right hemispheres of the brain are largely independent entities, and often process the world in conflicting ways. The left hemisphere recognizes objects by “breaking them up into parts and then assembling the whole”, while the right hemisphere “observes the object as a whole”. In McGilchrist’s telling, the left hemisphere is poor at recognizing objects and faces, and mainly handles routine tasks. The right hemisphere, on the other hand, is what we mainly depend on for recognizing things and people in all their three-dimensional glory.

Anyone with even a cursory understanding of how neural networks (something something convolutional neural nets) recognize objects knows that these algorithms mainly resemble the left side of the brain. Image inputs are broken up into small pieces, and the algorithm then works on identifying the object under consideration. Maybe this is partly why machine image recognition still falls well short of human performance? How can one program a “right brain” into neural nets?
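To make the “left brain” analogy concrete, here is a minimal sketch (my own toy code, not any particular library’s API) of what a convolutional layer actually does: it never sees the whole image at once, only small local patches, each scored against a learned filter.

```python
def conv2d(image, kernel):
    """Slide a small kernel over the image and score each local patch."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            # The network only ever "sees" this kh x kw patch at a time.
            score = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(score)
        out.append(row)
    return out

# A 4x4 "image" with a bright vertical edge in the middle,
# and a 2x2 edge-detecting kernel.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(conv2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The output lights up only where the local patch matches the filter (the vertical edge), which is exactly the “break into parts, then assemble” strategy the book attributes to the left hemisphere.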

I don’t know the answer to this. However, it now seems clear to me that a lot of our approach to science and programming in general is based on a Reductionist philosophy: if we can break things up into smaller and smaller units, we can then join those fundamental units back together and figure out how the whole edifice works. This approach has been spectacularly successful in the past. However, I feel that it has mostly been misleading for certain problems (like image recognition). What could a possible roadmap for a solution look like?

The left and right hemispheres of the brain perform image recognition like this: the right brain processes the object in its entirety, and notices how it varies in relation to all other objects that it has seen before. For instance, when the right brain looks at you, it notices in what ways you’re different from the persons around you, and also from the other inanimate things in the background. The left brain then breaks up those images into smaller parts to notice similarities and differences, forms categories for “similar” things, and places all of the observed entities in those categories. For instance, it places all the people in the “humans” category, the trees in the background in the “trees” category, and so on. Hence, the right brain notices fine and subtle features of objects all in one go, and the left brain clubs objects together in a crazy Reductionist daze.

How would a neural network do “right brain” things? I’m tempted to say that there may be a lot of parallel computing involved. However, I don’t think that I understand this process well enough because it inevitably leads to the opinion that we should just have a bazillion parameters that we should try to fit onto every image that we see. This is clearly wrong. However, it does seem to me that if we’re somehow able to model “right brain” algorithms into neural nets, image recognition may improve substantially. More on this later (when I understand more about what is going on exactly).

Find out what you don’t know, and protect it from half-baked explanations

I’ve thought about these ideas for a long time, and they’ve only been strengthened by reading biographies and articles like Ambidexterity and Cognitive Closure. In this article, I’ll try to untangle this mess of ideas, and also try and provide a refutation of The Case Against Education by Bryan Caplan.

Why are the greats that great?

What is the major difference between scientific revolutionaries like, say, Newton, Einstein and da Vinci, and the researchers who populate various universities around the world, trying to write publishable papers (and also aspiring researchers like me, at the bottom of the food chain)? Well, an easy answer would be “Einstein probably had 25,000 more IQ points than you, and that’s why he did all of those wonderful things that you can’t”. Fine. We can happily accept this argument of “he’s just much, much smarter” and come to peace with our relative mediocrity. However, this goes against my general experience. I have met a lot of very, very high IQ people. People who won multiple gold medals at the International Math Olympiad with perfect scores, people who aced multiple Olympiads and also topped their cohort in the Cambridge maths tripos, etc. You get the drift. Why aren’t these people discovering new scientific theories and revolutionizing human understanding? Can there really be no Newton amongst them? Was Newton that much smarter than all of them?

I first came across the following idea in Malcolm Gladwell’s Outliers: high IQ people are really good at finding answers when they know that there’s an answer to be found. But they’re not markedly better at asking questions or finding gaps in their understanding. Let’s take an example. Imagine that you were born in a time before Newton’s laws were discovered. You’re asked the following question: “Imagine that you have a smooth surface with no friction. All real-world surfaces have some friction, hence you have to consider surfaces with lower and lower friction, and take some sort of limit. If an object is slid on it, will it ever stop unless an external force comes and stops it?” You can re-discover Newton’s First Law in one afternoon without any prior knowledge of it, and feel very smug. Now imagine that you’re instead asked: “Clearly all objects that move come to a stop. If you kick a ball on the field, it will stop after traveling some length. What is stopping the object?” The most intuitive answer, which corresponds both to experience and to ancient Greek beliefs, is that every object has a propensity to come to its “natural state”, which is a state of rest. Hence, it is the nature of objects itself that is making them come to a stop. In some sense, discovering Newton’s law was not the hard part. It was knowing that there was a law to be discovered at all that made discovering it so difficult. You had to suspend belief in your own experience, and consider a hypothetical smooth surface with no friction. In other words, you had to be led in the right direction with the right questions. Someone had to ask: “What if my assumption about objects naturally coming to a halt is really an assumption about the friction exerted by surfaces, and what if I can weaken this assumption?”

The same could be said about Einstein. Discovering Relativity was not as difficult as knowing that there was something to be discovered. Of course, Einstein was lucky in the sense that the Michelson-Morley experiment had only recently shown that there was “something funny going on with light”, and he just needed to assess the implications of that in order to come up with Special Relativity. Leonardo da Vinci, of course, was famously curious, and it was having these questions in the first place that led him to so many scientific and artistic discoveries (including, apparently, anticipations of Newton’s laws before Newton). This brings us to the fact that although a high IQ may be useful in finding answers to questions, it doesn’t help one discover new and important questions to ask. In other words, although it helps us fill in gaps in our knowledge, it doesn’t help us discover those gaps. And discovering those gaps is most of the battle. But what helps in discovering those gaps?

Curiosity, ambidexterity or schizophrenia?

An easy answer is curiosity. You have to be curious about the world around you in order to ask the important questions. However, that is not the complete picture. For instance, I am sometimes curious and ask myself how exactly the trees outside my window evolved to be so tall. An answer that instantly comes to mind is that trees need to catch sunlight, and taller trees caught more sunlight. Hence, as trees that caught more sunlight probably had a greater chance of survival and procreation, trees evolved to be tall. I am satisfied with this explanation, and move on. However, if I force myself to think more deeply, I notice that the trees are conical in shape. Hence, although growing taller did get them more sunlight, it didn’t necessarily prevent shorter trees from also getting a lot of sunlight. So why did trees evolve to become so tall? Clearly a lot of resources must have been expended on becoming taller. A possible answer is that tall conical trees grew on mountains that were covered in shadow for large parts of the day, and only tall trees could catch sunlight for most of the day. This again is too simplistic an explanation, and there are still more questions to ask. What if tall conical trees originated in sunny mountain valleys, but failed to originate in shadowed plains? Then my hypothesis would be wrong, and I would have to look for alternate explanations.

In general, if I ever ask questions at all, I stop after the first answer. My mind thinks of an explanation, and accepts it without trying to poke holes into it. However, when I write things down, I can reflect upon my explanation much more easily and maybe see some holes. However, this process ends within a couple of iterations, and I move on even though I might not be completely satisfied with my answer. Who are these freaks who keep on questioning their assumptions and hypotheses until they arrive upon earth-shattering facts, and why can’t I be like them?

Scott Alexander, in his article on predictive processing, argues that people like me, who are satisfied with half-baked approximations of the facts, have very strong priors. We assume certain things about the world and stick to them, shielding them from attack until we absolutely have to discard them. We don’t deal well with uncertainty, and prefer convincing untruths over difficult-to-find but accurate truths. Of course, there may be an energy-theoretic argument for this: poking holes in your own arguments takes energy, and being satisfied with your own half-truths helps conserve useful mental energy. I can also think of an evolutionary argument for why most humans have this feature. On the other hand, people with schizophrenia or ambidexterity have very weak priors, which means that they don’t shield their assumptions about the world from attack (as much), and are open to external inputs changing those assumptions. They are much more tolerant of uncertainty, and won’t accept anything less than the absolute truth (that which can explain all known observations). In other words, people like me never try to uncover gaps in our knowledge, and rush to fill them with half-truths when they are inevitably exposed. Revolutionary scientists and schizophrenics, on the other hand, uncover the gaps in their understanding with ease, and then hold out on filling these gaps until they find an explanation that is completely convincing.
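The strong-versus-weak prior distinction can be made concrete with a standard Bayesian toy model (my own illustration, not Scott Alexander’s). Two observers hold the same belief, that a coin is heavily biased towards heads, but with different confidence, and then both watch the same contradicting evidence: ten tails in a row.

```python
def posterior_mean(prior_heads, prior_tails, heads_seen, tails_seen):
    """Mean of the Beta posterior after a conjugate Beta-Bernoulli update."""
    a = prior_heads + heads_seen
    b = prior_tails + tails_seen
    return a / (a + b)

# Strong prior: as confident as if one had already seen 90 heads and 10 tails.
strong = posterior_mean(90, 10, 0, 10)
# Weak prior: the same 9:1 belief, held with a tenth of the confidence.
weak = posterior_mean(9, 1, 0, 10)

print(round(strong, 3))  # barely moves from 0.9
print(round(weak, 3))    # collapses towards 0.5
```

The strong-prior observer’s belief barely budges under evidence that nearly halves the weak-prior observer’s estimate, which is exactly the “shielding assumptions from attack” behaviour described above.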

Is the only way to scientific greatness self-induced schizophrenia or ambidexterity? I really hope not. Perhaps if we can try really hard to question our beliefs and hypotheses, to actively seek data that contradicts our half-baked explanations, there is still some hope. Of course writing things out would help. Knowing that certain gaps exist in our knowledge is the first, and most important step. We should spend a considerable amount of effort in exposing these gaps, and not being satisfied with untruths.

The case against The Case Against Education

This leads me to my criticism of Caplan’s “The Case Against Education”. Caplan argues that because students soon forget everything that they learn in school, and because skills in one field are rarely transferable to other fields (I think he also makes the implicit argument that IQ is the most important determinant of professional success), we should stop investing this much money in schools and colleges, and should instead focus on developing marketable skills in children. This goes against my experience of going to school and college.

I have taken a lot of courses whose contents I have mostly forgotten. These include courses that are completely irrelevant to my field of interest, like history, geography and environmental science, and also courses that are aligned with my field of interest, like mathematical physics and geometry. Although I’ve forgotten most of the material from these courses, they did succeed in creating place-holders or gaps of knowledge in my memory. For instance, although I might forget a theorem that I can use in a certain situation, I do know that there does exist such a theorem. I can now look it up and find out more details. Similarly, although I might have forgotten most of the contents of the Constitution, I do know that it does contain something about secularism and freedom of speech. I can now look up credible sources to find out more. Hence, although my formal education has failed in getting me to remember all that I’ve been taught, it has succeeded in something almost as important: creating place-holders for knowledge in my brain, that I can easily fill by a simple internet search.

Would I have been able to learn all of this material (and consequently create place-holders for knowledge) if I had been self-taught, or perhaps taught at home by my parents? Only if I had been extraordinarily brilliant or curious, or had parents who were ready to devote considerable time and effort to educating me in a plethora of fields. What is more likely is that I would have had little or no training in most things. I do believe that there are certain aspects of schooling that are harmful to students, and I have suffered a great deal because of the nature of my schooling. However, there are certain beneficial aspects of it that have been overlooked in much of the discourse.

Is it really as simple as we’re making it out to be?

Now let us look at some failings of my basic argument: it is much easier to make conjectures in mathematics (Fermat’s Last Theorem, the Twin Prime Conjecture, etc.) than to prove them. Hence, exposing “gaps in our understanding” is not half the battle in this case. The same could be said of finding a unified theory of Physics: we know that a gap exists in our understanding of the universe. It is filling this gap that is proving to be difficult. Hence, my basic hypothesis would have to be re-phrased to address these cases as well.

Thanks for reading!

Dostoevsky is a two-trick pony

Like most other people who enjoy self-abuse through reading thousand-page novels, I’ve had the experience of reading Dostoevsky and marveling at his ability to capture “reality”. I’ve read “Crime and Punishment” in the past, and am now reading “The Brothers Karamazov”. Needless to say, there have been many parts that have completely floored me. I used to think that Dostoevsky was perhaps a “god amongst men”, having the ability to capture emotions and human behaviour in a way that is far beyond our abilities. However, in light of my previous two articles on goal conflict and the dynamic model of human personality, I feel that Dostoevsky uses only two tricks again and again to create unbelievably real scenarios, and that we too may perhaps be able to use those tricks to enhance our writing.

The trick that Dostoevsky uses most often is the following: his characters behave in one particular way, and then behave in a completely opposite way the next moment. And for some reason, this only makes them more believable. For instance, the poor Captain in The Brothers Karamazov is elated when he is offered money by Alyosha. He dreams aloud about how he will use that money to pay for medical treatment for his family, and take his son on a long-promised vacation. However, it is at that moment that he chooses to throw the money on the ground, stamp on it in disgust, and let Alyosha know exactly what he thinks of his charity. What’s more surprising is that Alyosha later says that now that the Captain has rejected his money once, if he is offered the same money again the next day, he will happily accept it. And we know that Alyosha is correct.

This can be seen through the lens of the passive goal guidance system dealing with conflicting goals- his goal of providing for his family vs his goal of preserving/signaling his honor. When the Captain fantasizes in detail about how this money will solve all his problems, his passive goal guidance system mistakes imagination for reality, and assumes that all his financial problems are already solved. This causes disengagement with this goal, and engagement with the conflicting goal of saving his honor and not accepting charity from the brother of his enemy. Moreover, when he makes a big show of how his honor matters to him much more than any of his financial problems, his goal of proving to the world that his honor cannot be bought is also fulfilled, and now the conflicting goal of providing for his family again re-surfaces. Hence, Alyosha correctly predicts that if the Captain is offered that money again, he will readily take it.

This jumping between conflicting goals is something we also see in most other characters in the book. For example, Grushenka is torn between taking revenge on the man who betrayed her, and running into his arms when he comes calling again. However, it is only after fantasizing in great detail about how she will take her revenge, by insulting him and turning down his offer to go with him, that she decides to run into his arms. This may be interpreted as her being torn between two conflicting goals- her goal of taking revenge for earlier wrongdoing, vs her goal of being with a man she still loves. After she imagines taking revenge on him in great detail, her passive goal guidance system assumes that this goal has been fulfilled, and disengages from it. This causes her other goal to surface- that of running into his arms, and this is exactly what she does.

Another trick that Dostoevsky uses is that his characters readily abandon the good and solid things in their life, and value only those things that carry an element of risk or uncertainty. For instance, Mitya has a beautiful, rich and virtuous fiancée, Katerina Ivanovna, who is ready to forgive all of his infidelity and be with him. However, he abandons her and chooses to pursue Grushenka, who is an undependable and promiscuous escort to a rich landlord in town, and who also has a relationship with his own father. This brings us back to the paper on the dynamic theory of personality, which told us that our desire for something/someone increases with the uncertainty involved in obtaining the object/person (it is maximum when our chances of obtaining that object/person are 50-50). Hence, Katerina Ivanovna, despite all her qualities, was too much of a sure thing for Mitya to desire. He was self-destructively pulled towards a woman who was much more ambivalent towards him, and with whom his future was much more uncertain.
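The “maximum at 50-50” claim parallels a standard fact from information theory (my own gloss, not the paper’s model): the uncertainty, measured as Shannon entropy, of a yes/no outcome peaks exactly when both outcomes are equally likely.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a binary outcome with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a sure thing carries no uncertainty
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Uncertainty about "will I get the person/object?" across probabilities.
probs = [i / 100 for i in range(1, 100)]
peak = max(probs, key=entropy)
print(peak)  # → 0.5: uncertainty is highest when the chances are 50-50
```

So if desire tracks uncertainty, as the dynamic theory suggests, a guaranteed Katerina Ivanovna (p near 1) generates almost none of it, while an ambivalent Grushenka (p near 0.5) generates the most.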

To Dostoevsky’s credit, the concepts of goal conflict and the dynamic theory of personality do seem manifestly true of human nature. Hence, it is through the use of these two concepts that he is able to create hyperreal characters. Tolstoy uses the concept of goal conflict as well. For instance, when Natasha is finally pursued by her childhood love in War and Peace, she doesn’t reciprocate, but falls for someone entirely different. However, I find Tolstoy’s treatment to be much more subtle than Dostoevsky’s. Although Tolstoy and Dostoevsky both occupy a position in world literature that has hardly been challenged since their time, I find Tolstoy to be much more of a literary genius than Dostoevsky. I can perhaps explain this more in a future post.