Neuromorphic Computing

I searched far and wide for an easy-to-understand review paper on neuromorphic computing, and finally got lucky with Neuromorphic Computing: From Materials to Systems Architecture. This is a report from a roundtable organized in 2015 to discuss the future of neuromorphic computing. The organizing committee comprised Ivan Schuller from UC San Diego and Rick Stevens from the University of Chicago and Argonne National Laboratory.

Why Neuromorphic Computing

Currently, 5-15% of the world’s energy goes into powering computing devices. Surely we’d love to have devices that consume much less power. We are, in fact, acquainted with a computing device that consumes far less power- the human brain. Can we make computing devices that mimic the human brain in terms of power consumption, given that we are ready to give up some of the accuracy that modern computational devices possess? It turns out that we can… if we can construct working neuromorphic devices.

Other capabilities that a brain-like device might possess are speech and image recognition. Clearly, human brains are capable of recognizing people and understanding what they are saying. Conventional devices, despite impressive capabilities in other areas, are still thought to be pretty bad at this, even while consuming much more power. Hence, it might do us good to manufacture brain-like devices.

But wait. Devices have been created that are “pretty” good at speech and image recognition. Systems now exist with 5 billion transistors on a single die, and feature sizes approaching 10 nm! Moreover, parallel computing lets us put multiple such devices to work at the same time. Surely things are looking good in terms of capabilities (if we can forget that such devices consume millions of times more power than the brain). However, this dream cannot last forever. There will come a time when we cannot pack any more transistors onto a single die: because of energy dissipation, we will not be able to pack transistors more densely (with so much energy dissipated in so small an area, the whole die could overheat, catch fire, or be severely damaged). In short, we will have to create a fundamentally different kind of technology, one that does not just involve packing more and more transistors onto a chip. Neuromorphic devices promise to be this different kind of technology.

It is important to note that neuromorphic devices may not be as accurate as conventional computing devices. Hence, they need only be deployed where low-power “functional” approximations suffice, as opposed to high-power precise values. For instance, to find out whether Jupiter or the Sun is closer to us, we only need the approximate distances in millions of miles, not down to the inch. Neuromorphic computing may be useful to us in these situations.

Another important feature is that neuromorphic devices should be able to learn from past experience, much like humans do. If you take two people and put them in the same situation, say lock them up in a room, they may react differently. One person may panic, possibly because of an earlier claustrophobic experience, while the other may stay relatively calm, knowing from a similar past experience that someone will most likely rescue them. We want neuromorphic devices to also have different reactions to the same situation. This means that we want neuromorphic devices to have self-learning capabilities when exposed to external stimuli, and we want this self-learning to change them and their reactions to future stimuli. This is not what happens in conventional devices, where all devices of the same kind react in the very same way to external stimuli.

Von Neumann vs Neuromorphic Architecture

The von Neumann architecture, named after the famed polymath John von Neumann, keeps the processor and memory units separate. Whenever a computation needs to be performed, the processor fetches the required information from the memory unit, performs the computation, and then stores the answer back in the memory unit. Although processing power and storage capacity have both increased over the years, the rate of transfer of information between memory and the processor has stayed relatively stagnant. This transfer rate is the bottleneck preventing faster computation.
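
To make this concrete, here is a toy Python sketch (entirely my own construction, not from the report) in which a single addition costs three trips across the memory-processor bus:

```python
# Toy model of the von Neumann cycle: every operation pays round trips
# to a separate memory unit. The names and structure are illustrative
# assumptions; only the fetch -> compute -> store pattern matters.

MEMORY = {"a": 3, "b": 4, "result": None}

def fetch(address):
    # Every operand must cross the memory-processor bus.
    return MEMORY[address]

def store(address, value):
    # The answer crosses the same bus on the way back.
    MEMORY[address] = value

def add(addr_x, addr_y, addr_out):
    x = fetch(addr_x)        # bus transfer 1
    y = fetch(addr_y)        # bus transfer 2
    store(addr_out, x + y)   # bus transfer 3

add("a", "b", "result")
print(MEMORY["result"])  # 7, at the cost of three bus transfers
```

No matter how fast the processor gets or how large the memory grows, every computation is throttled by those transfers.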

Neuromorphic devices, on the other hand, would have their memory and processing units located right next to each other. In fact, in the human brain there is no clear demarcation between storage and processing units; different parts take on either or both roles as the brain learns and adapts. Hence, the bottleneck present in von Neumann architectures can be avoided.

Also, at the device level, the von Neumann architecture is made up of resistors, capacitors, etc. These are elements that perform clearly differentiated functions, and they are not fault tolerant: if the resistors in the device stop working, the whole device stops working. The human brain, and hence the neuromorphic architecture, consists of just one element- the neuron. A neuron consists of dendrites, a soma, an axon and synapses. While these parts do perform some specific functions, most of them multi-task, and can learn to perform the tasks of another neuron or another part of the same neuron. Hence, the neuromorphic architecture is fault tolerant by design: if some part of it stops working, the device can still adapt and keep going.

Comparing the two architectures

Let us now directly compare the two architectures:

The report compares the two architectures side by side. The upshot is that although neuromorphic devices are much less reliable than conventional devices, they consume roughly a million times less power, and neurons can also be packed much more densely on a chip than transistors, because of the smaller amount of energy they dissipate. Note that these figures for neuromorphic systems were not gathered from actual neuromorphic devices (good scalable devices are yet to be built), but from simulations of such devices on conventional transistor chips.

Existing Neuromorphic Systems

Many existing devices can be modified to approximate neuromorphic devices. On one end of the spectrum, we have artificial neural network architectures whose processing algorithms boil down to matrix operations (which run much faster than the equivalent element-by-element algorithms). These also consume much less power than conventional CPU devices, although they are less accurate. On the other end of the spectrum, we have actual digital or analog implementations of the neuron’s parts- dendrites, soma, axon and synapses. Axons are implemented as conventional wires, and synapses can be constructed whose strength changes with the timing of the spikes that cross them (this is called spike-timing-dependent plasticity, or STDP). The analog implementation is much more power efficient than the digital one, and also requires fewer transistors (yes, both still require transistors). However, the brain is still four or five orders of magnitude more power efficient.
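
To make the first point concrete, here is a minimal sketch (using NumPy; the layer sizes and numbers are arbitrary assumptions of mine) of what “boiling down to matrix operations” looks like: an entire layer of an artificial neural network reduces to one dense matrix-vector product.

```python
import numpy as np

# One layer of an artificial neural network is a single matrix-vector
# product followed by a nonlinearity. The sizes are arbitrary; the point
# is that the whole layer is one dense linear-algebra operation, which
# hardware can accelerate far better than instruction-by-instruction code.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # weights: 4 output neurons, 8 inputs
x = rng.normal(size=8)       # one input vector

y = np.maximum(0.0, W @ x)   # ReLU(Wx): the whole layer in one operation
print(y)
```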

Some devices, like IBM’s TrueNorth chip and FACETS at the University of Heidelberg, contain millions of neurons and billions of synapses. Hence, they may be coming close to approximating the complexity and power of the human brain. Why are there more synapses than neurons? A synapse may be thought of as a connection between one neuron and another. Each neuron needs to be connected to very many other neurons to be useful. Hence, the number of synapses needs to be orders of magnitude larger than the number of neurons. Existing neuromorphic chips restrict the number of synapses per neuron to 256; we might need many more synapses per neuron in order to become “more like the brain”.

Also, in current neuromorphic devices, synapses are arranged in a dense crossbar configuration.

Synapses being in this configuration means that many neurons can be connected to one another at the same time, through direct contact. However, the crossbar cannot support an arbitrary fan-in/fan-out ratio. Hence, the number of neurons that can be connected to each other through this crossbar configuration is strictly limited.

Implementation Needs

These properties of the brain need to be replicated in our ideal neuromorphic devices of the future:

  1. Spiking- Neurons work only when they “spike”. The intricacies of how they work are explained in this beautiful video. Essentially, neurons lie at rest until their voltage is disturbed above a threshold value, after which they perform whatever function they’re expected to perform, and then go back to rest. It’s like a bored parent sitting in front of the TV, who doesn’t really care whether the kids are dirtying the house or fighting with each other. They spring into action only if they hear screaming or see blood, discipline the children, and then go back to resting and watching TV. That sure sounds much more relaxing than being a hyperactive parent who constantly runs around after the children, screaming at them and micro-managing all their activities. Conventional devices are like those hyperactive parents: always working, and hence constantly drawing copious amounts of power. Neuromorphic devices would be like the bored parent- working only when asked to, and resting the rest of the time. (A toy simulation of this spiking behavior, together with the Hebbian learning of point 4, is sketched after this list.)
  2. Plasticity- Neuromorphic devices, we hope, will be self-learning. For this, they need a degree of plasticity- the ability to change their properties and parameters as they learn.
  3. Fan in/fan out- Conventional devices have far fewer connections between their various units than neuromorphic devices would require. We still have to figure out whether this high connectivity is essential, or whether we can make do with a smaller number of connections.
  4. Hebbian learning- A much-used phrase in neuroscience is “neurons that fire together, wire together”. What does this mean? Imagine that performing a particular task involves neuron A sending a message to neuron B. At first, neuron B might not be very receptive to neuron A’s message, as the synapse between them might be weak or relatively ineffective. However, the more this synapse is activated, the stronger it becomes. After a few repetitions, it becomes much easier for neuron A to transfer a message to neuron B, and hence much easier, faster, and almost second nature for a person to perform that task. This reaffirms the conventional wisdom that practicing a skill makes you better at it. We need neuromorphic devices to have the same property- the more often one neuron activates another, the stronger their connection should become. In contrast, this does not happen at all in conventional computing: your computer doesn’t run Fortnite faster just because you’ve played it every day for the last 5 years.
  5. Criticality- The brain is thought to be in a critical state. By this, we mean that the brain, although not chaotic and hence relatively stable, is capable of changing itself as it is exposed to the varied experiences that life throws at us. We want neuromorphic devices to be the same way.
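
As a sketch of points 1, 2 and 4 together, here is a toy leaky integrate-and-fire neuron with a Hebbian weight update, written in Python. Every constant (threshold, leak, learning rate) is an arbitrary assumption of mine, not a model from the report.

```python
import numpy as np

THRESHOLD = 1.0      # membrane voltage at which the neuron spikes
LEAK = 0.9           # fraction of the voltage retained each time step
LEARNING_RATE = 0.1  # how much a co-active synapse strengthens per spike

weights = np.array([0.2, 0.5])  # two input synapses
voltage = 0.0                   # the neuron starts at rest

for step in range(30):
    inputs = np.array([1.0, 0.0])  # synapse 0 is driven, synapse 1 is silent
    voltage = LEAK * voltage + weights @ inputs  # integrate inputs, with leak
    if voltage >= THRESHOLD:
        print(f"step {step}: spike!")
        # Hebbian rule: synapses active when the neuron fires get stronger
        # ("fire together, wire together"); silent synapses are unchanged.
        weights += LEARNING_RATE * inputs
        voltage = 0.0  # back to rest until disturbed above threshold again

print("final weights:", weights)  # only the driven synapse has grown
```

Run this, and the spikes arrive more and more frequently as the driven synapse strengthens: the “practice makes the pathway faster” effect of point 4, with the resting behavior of point 1 in between spikes.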

Building Blocks

The four building blocks of a neuromorphic device would be:

  1. Synapse/memristor- The synapse is perhaps the most important of the building blocks. It transmits messages between neurons, and strengthens or weakens with time. The device that can perhaps mimic it most convincingly is the memristor. I learned about this device a couple of weeks back, and it is by far my favorite piece of technology. This video does a fantastic job of explaining what a memristor is and what it does. It is essentially a resistor with memory: its resistance depends on the history of the current that has passed through it. The capability we are concerned with here is that a memristor’s connection should strengthen with use. Compare the two behaviors:

An ordinary resistor responds to repeated applications of the same voltage with the same current each time: apply the voltage, and the current goes up by the same amount, every time. A memristor, on the other hand, passes a larger current each time the same voltage is applied, because its resistance drops with use. In other words, as opposed to an ordinary resistor, the transfer of messages through a memristor gets easier every time.
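
The difference can be captured in a few lines of Python. The update rule and all the constants below are invented for illustration; real memristor models (such as the HP Labs drift model) are considerably more involved.

```python
# An ordinary resistor vs. an idealized memristor under repeated,
# identical voltage pulses. All numbers are made up for illustration.

VOLTAGE = 1.0          # the same pulse is applied every time
R_FIXED = 100.0        # ordinary resistor: resistance never changes
memristance = 100.0    # memristor: resistance drops with use
DROP_PER_PULSE = 0.15  # fraction of resistance lost per pulse (assumed)
R_MIN = 10.0           # lower bound so the memristance stays physical

for pulse in range(5):
    i_fixed = VOLTAGE / R_FIXED
    i_mem = VOLTAGE / memristance
    print(f"pulse {pulse}: resistor {i_fixed:.4f} A, memristor {i_mem:.4f} A")
    # Passing current "strengthens" the memristor: its resistance drops,
    # so the same voltage drives a larger current on the next pulse.
    memristance = max(R_MIN, memristance * (1 - DROP_PER_PULSE))
```

The resistor’s current stays at 0.01 A on every pulse, while the memristor’s current climbs: exactly the strengthening-with-use behavior we want from a synapse.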

  2. Soma/Neuristor- The soma is the body of the neuron, where the threshold voltage is reached, causing the neuron to spike and send a message to other neurons. One possible implementation is a capacitor coupled with a memristor: the capacitor, upon charging to the threshold voltage, would activate the memristor, which in turn would send the message.

  3. Axon/Long wire- The axon was previously thought to be responsible only for conducting signals between the soma and the next neuron. However, it is now known to also take part in processing those signals. It can be implemented using a long wire.

  4. Dendrite/Short wire- Dendrites are the input wires that bring messages from the outside to the soma. They have also been shown to possess pattern-recognition capabilities. They can be implemented using short wires.

Issues with Implementation

There are multiple issues to be resolved if we implement neuromorphic devices as suggested above.

For one, memristors often show chaotic behavior. They are designed to show huge increases in current when a small voltage is applied across them. Hence, when this voltage difference arises by accident in the environment, memristors can pass huge currents when they are not required to do so. They can also inject previously stored energy into the circuit, which leads to undesired voltage amplification.

The way a synapse implementation can work is the following: synapses are required to form stronger connections the more they are activated. So if two electrodes are separated by an insulator that normally blocks large currents, then as those electrodes are activated more and more, we need metallic filaments to slowly form between them, allowing currents, and hence signals, to flow. However, the rate of formation and the positions of these filaments are still random, and a lot more study is required before we can control this “strengthening” of the synapse.

Spin-torque switching happens when a relatively large current of spin-polarized electrons causes the magnetization of a ferromagnetic layer to flip. This article is a beautiful explanation of how this happens. Such devices are used in magnetic RAM (MRAM) and other storage devices. However, the stability of such devices, along with the effect of impurities on them at the nanoscale, still needs to be studied.

Nanoelectronics, on the whole, is a complicated endeavor. Effects of impurities, Oersted fields, and the like, which are negligible at larger scales, become all-important at the nanoscale. If we are to squeeze millions of neurons onto a chip, we have to study nanoelectronics. Hence, progress in this field directly controls how much progress we can make in building powerful neuromorphic devices.

Conclusion

This paper, as noted before, came out of a roundtable that hoped to lay down a roadmap that computer scientists and materials scientists could follow to make scalable neuromorphic devices. Its recommendations, for such a dream to reach fruition, are the following:

Computer scientists and materials scientists should work together. Computer scientists can worry about optimizing algorithms and putting everything together, while materials scientists can worry about the best possible implementation of the building blocks. The successful construction of such a device will follow only from singling out the best possible materials for its various parts, which in turn requires a deep grasp of the quantum properties of materials along with a good understanding of nanotechnology.

Moreover, the device should exhibit three-dimensional reconstruction, external and internal healing, distributed power delivery, fault tolerance, and so on. It is important to understand that the construction of such a device will not happen overnight; scientists should therefore recognize and implement useful intermediate steps that will ultimately result in a neuromorphic revolution.

References

  1. Neuromorphic Computing: From Materials to Systems Architecture
  2. Memristors
