Hello, passionately curious people! This is SAI TARUN PELLURU.

Well, before starting our journey, I request you to hit that follow button if you would like to join me, and please let me know about any mistakes in this blog, whether in the tech specs or the grammar. So let's start our journey into the brain 😀.

From the early '50s to now, the von Neumann architecture has been the foundation of practically every computer. I believe von Neumann was one of the most influential people in the world. Computational speed has changed drastically since then, and along with it, human needs have grown. When we buy a smartphone we check not only the speed and storage but also the price; that is where we never compromise. A computer maker has to weigh thousands of parameters, including price, while designing a smartphone or a desktop. Meeting the needs of today's generation while forecasting future needs has been the greatest challenge for many companies.

YOU: Okay, so what is Neuromorphic computing?

ME: Wait, I need to tell you more first. Let me finish.

The reason computational speed increases every year is the TRANSISTOR embedded in INTEGRATED CIRCUITS. One of the biggest breakthroughs in electronics and computing was the integrated circuit, where transistors are fabricated together instead of being physically wired one by one. Integrated circuits are used in almost every electronic device (smartphones, computers, etc.) and are what processors for smart devices are built from. If you are carrying a smartphone, you are carrying around 4 billion transistors in your hand. Transistors today are about 14 nanometres across, and scientists are trying to bring that down to one.

YOU: Hmm, so will we use a better transistor in Neuromorphic computing?

ME: Lemme finish.

Gordon Moore, one of the co-founders of INTEL, extrapolated transistor counts and computational-speed statistics and made one of the greatest predictions in history: MOORE'S LAW. Moore's law states that the number of transistors on a chip will double roughly every 24 months, which means computational speed increases while the price per transistor decreases. The law was sailing smoothly until recently. Well, many people now predict that Moore's law is coming to an end.
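Just for intuition, here is the doubling arithmetic behind the law as a tiny Python sketch. The starting count (2,300 transistors, roughly the Intel 4004 of 1971) and the 2-year period are illustrative figures, not values from this blog:

```python
# Toy illustration of Moore's law: transistor count doubling every
# 24 months. Starting figure (~Intel 4004, 1971) is illustrative.
def transistors_after(years, start=2300, period_years=2):
    """Project a transistor count assuming one doubling per period."""
    doublings = years // period_years
    return start * 2 ** doublings

# After ~50 years of doubling every 2 years, 2,300 transistors become
# 2300 * 2**25, i.e. tens of billions -- close to today's large chips.
print(transistors_after(50))  # 77175193600
```

Twenty-five doublings turn a few thousand transistors into tens of billions, which is why the prediction was so remarkable.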


Why? Due to quantum tunneling ("If you don't know why something happens, just blame quantum. 😂 Just kidding"). Let's make it simple: when we switch off the current to a transistor, the electron flow should theoretically stop, but in practice there is a leak. Some electron leakage even after power-off is quite normal in transistors, but when we scale transistors down to very small sizes the leakage increases, which leads to more heat emission and wasted power. That's when the FinFET came into existence.

FinFET.

The FinFET decreases the leakage current even when we scale the transistor down to 10 nanometres. Because transistors are made of silicon, some say graphene is the best replacement and could let us scale transistors below one nanometre. I don't know about that. But silicon is not going to be replaced for a decade.

YOU: I am gonna leave the blog.

ME: Okay, cool, here it is.

The reason I explained MOORE'S LAW is to help you understand why neuromorphic computing is the title of this blog. You will get to know everything by the end. It's going to be lengthy, but trust me, you will see why.

Our brain can work in parallel and is malleable. It has roughly 100 billion neurons, each with 100–1000 synapses (connections), so on the order of 100 trillion connections in total, running on just 20 watts of power in a space of about 2 litres. Its performance is estimated at roughly an exaflop, a billion billion operations per second. To simulate the brain we would need about 1.5 million processors, 1.6 petabytes of memory (16 lakh GB), megawatts of power, and the space of an entire building. But a few projects have attempted it.
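Multiplying out the figures above (rough estimates; published numbers vary) gives a feel for the scale, sketched here in Python:

```python
# Back-of-envelope arithmetic for the brain figures (rough estimates).
neurons = 100e9                  # ~100 billion neurons
syn_low, syn_high = 100, 1000    # synapses per neuron, low and high end
low, high = neurons * syn_low, neurons * syn_high
print(f"connections: {low:.0e} to {high:.0e}")  # 1e+13 to 1e+14

# Simulation side: ~1.6 petabytes of memory expressed in GB.
petabytes = 1.6
print(petabytes * 1e6, "GB")     # 1,600,000 GB, i.e. 16 lakh GB
```

Even the high end of that range is a number no conventional machine stores and updates in real time on 20 watts.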

The petaflop K supercomputer.

The petaflop K supercomputer in Japan simulated one day of brain activity using the NEST neuro-simulation algorithms. It took 4.68 years to simulate that one day of activity. That's about 1700× slower than the human brain.

Japan's post-K exaflop machine performs one day of activity in 310 days. That is 310× slower than the human brain.
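The slowdown factors quoted above are just a ratio of simulated time to wall-clock time, which we can check quickly:

```python
# Sanity-checking the quoted slowdown factors.
k_days = 4.68 * 365        # K computer: ~4.68 years, in days
print(round(k_days))       # ~1708 days to simulate 1 day -> ~1700x slower

post_k_days = 310          # post-K: 310 days per simulated day
print(post_k_days / 1)     # -> 310x slower than real time
```

So even an exascale machine running a conventional architecture is hundreds of times slower than the organ it is imitating.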

Every computer is based on the VON NEUMANN architecture, where memory and processor are connected through data lines and are tightly coupled.

But what if we turn to the brain itself to increase computational capability? Given the sobering performance of those brain simulations, a more biologically representative architecture has to be implemented: THE NEUROMORPHIC ARCHITECTURE. Well, I have a non-WIKIPEDIA answer. Simple: take inspiration from the brain and implement it in hardware. That lets us simulate aspects of the brain accurately and in real time.

von Neumann vs neuromorphic

But the brain is not a perfect machine. It gets bored, fatigued, and frustrated, is not a perfect decision-maker, and is prone to errors. This leads us to another goal: pairing neuromorphic architecture with Artificial Intelligence, taking the best aspects of the brain and pairing them with the von Neumann architecture, which gives us a heterogeneous architecture.

Let's picture this heterogeneous architecture as a brain in which the left side is made of von Neumann architecture and the right side is made of neuromorphic architecture. The left brain performs analytical thinking and language, whereas the right brain handles pattern recognition, learning, and reasoning.

Von Neumann architecture is measured in FLOPS (floating-point operations per second), whereas a neuromorphic computer is measured in SOPS (synaptic operations per second). One computer-science field this connects to is MACHINE LEARNING. In machine learning we study neural networks: we create nodes, assign weights, and feed them large sets of data, just like neurons.
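As a toy illustration of such a node (not the author's code; the weights and numbers are made up), a single neural-network node is just a weighted sum of its inputs passed through an activation function:

```python
import math

# Minimal sketch of one neural-network node: weighted inputs,
# a bias, and a sigmoid activation. All values are illustrative.
def node(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid squashes output into (0, 1)

out = node([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], bias=0.1)
print(round(out, 3))  # 0.544
```

Training a network means adjusting those weights from data, loosely the way synapses strengthen or weaken between neurons.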

Neural network. (1)

So machine learning + neuromorphic architecture + von Neumann architecture, together emulating consciousness in machines, is called cognitive computing.

So how do we design such a system? Let's understand the basic composition of a neuron.


A neuron is composed of a cell body, an axon, and synapses. Let us map cell body -> processor, axon -> data bus, synapses -> memory. Together these form a "neurosynaptic core". These are the nodes of the machine-learning neural nets, made physical.

Neurosynaptic core.

This neurosynaptic core works without a clock. This is the basis of a spiking neural network, where a neurosynaptic core activates only when the incoming signal reaches a certain threshold. It keeps running until the power is off.
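The "accumulate until a threshold, then fire" behaviour can be sketched with a leaky integrate-and-fire model, a standard simplification of a spiking neuron. This is my own minimal sketch; the threshold and leak constants are illustrative:

```python
# Minimal leaky integrate-and-fire sketch of "fire on threshold".
# Constants are illustrative, not from any particular chip.
def lif(inputs, threshold=1.0, leak=0.9):
    """Accumulate inputs; emit a spike (1) when the potential
    crosses the threshold, then reset."""
    potential, spikes = 0.0, []
    for i in inputs:
        potential = potential * leak + i    # leak a little, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif([0.4, 0.4, 0.4, 0.4]))  # [0, 0, 1, 0]
```

Notice that nothing happens on most time steps: the neuron only "computes" when enough input has accumulated, which is where the energy savings come from.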

Parallel operation means multiple neurosynaptic cores can be active and triggering other cores at the same time. This clockless, parallel architecture allows a vast decrease in energy consumption and an increase in performance.

If any neurosynaptic core stops working (refer to the neural net (1) image), the network can adapt and route through other cores. This is referred to as neuroplasticity.
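The routing-around-failure idea can be sketched as a path search over a tiny graph of cores. This is a hypothetical illustration (the core names and graph are invented, and real chips do not literally run BFS), but it captures the adaptivity:

```python
from collections import deque

# Sketch of routing around a failed core: breadth-first search over a
# tiny, invented core graph, skipping any cores marked as failed.
def route(graph, src, dst, failed=frozenset()):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving path

cores = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(route(cores, "A", "D"))               # ['A', 'B', 'D']
print(route(cores, "A", "D", failed={"B"})) # ['A', 'C', 'D']
```

When core B fails, the signal still reaches D through C, which is the spirit of the plasticity described above.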

Many companies are trying to build devices with this neuromorphic architecture, like IBM's TrueNorth. The first version, in 2011, had 256 neurons and 262,144 synapses on one neurosynaptic core. It was redesigned in 2014 with 1 million neurons, 256 million synapses, and 4,096 neurosynaptic cores (each containing 256 neurons and 65,536 synapses), delivering tens of billions of SOPS.

Intel is also trying to achieve this with its LOIHI chip. You can google them.

So, do you think I am going to work on this? Well, I am going to work on the software side: studying more about the brain and its functions and simulating them through computational methods. I want to design algorithms for it.

So we need to work on:

These are the skills to be gained. But don't worry, I'll write only about neuroscience and the computational, neuromorphic-computing, or cognitive-computing approach to it.

Let's achieve cognition together….

Thanks to the internet and some YOUTUBE channels. If you like the blog, follow me, comment with your queries, and applaud to appreciate me; it gives me motivation. We will learn cognitive computing together. Also, comment if you'd like to join my journey or if you want any special topic related to cognitive computing covered. In the next blog, we will try to learn machine learning.



Sai Tarun Pelluru ♥

My name is Pelluru Venkata Satya Sai Tarun. I am a student with a great passion to learn. I live in Visakhapatnam, Andhra Pradesh, India.
