Neuromorphic Computing: The Next Generation of AI
The first generation of artificial intelligence drew conclusions from
classical, rule-based logic applied to narrowly defined problems, and was
useful for monitoring processes and improving efficiency. The second
generation is concerned with sensing and perception, such as analyzing the
content of video frames using deep learning networks.
The next generation of AI will move toward human cognition, with
qualities such as adaptation and interpretation. It will overcome the
limitations of AI solutions whose training depends on deterministic views of
events and lacks context, and it will come closer to emulating ordinary
human activities.
Traditional computing is reaching its limits and becoming too inefficient to
handle the next wave of AI. With Moore's Law (the observation that the number
of transistors on a chip doubles every two years while the cost halves)
almost reaching its end, the search is on for new ways to increase
computational power and take AI to the next level. Traditionally, all
computers are based on the Von Neumann architecture, in which memory and
processor are separate and data constantly moves between them. This differs
from the biological computer, the brain, where memory and logic are tightly
coupled within neurons and signals are transmitted through synapses.
A neuromorphic chip copies this model by implementing neurons in silicon,
with the goal of imparting cognitive abilities to machines. The dense network
on a neuromorphic chip is called a Spiking Neural Network (SNN). An SNN
encodes information in spike trains: the time between spikes, rather than
just their presence, carries the signal.
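To make spike-train encoding concrete, here is a minimal Python sketch of one
common scheme, latency coding, in which a stronger input fires earlier. The
function name, window length, and values are illustrative assumptions, not
drawn from any particular chip or library.

```python
# Minimal sketch of latency (temporal) coding: each input value in [0, 1]
# is mapped to a single spike time, with stronger inputs spiking earlier.
# The 10 ms window and linear mapping are illustrative assumptions.
def latency_encode(values, window_ms=10.0):
    """Return one spike time per input; larger value -> earlier spike."""
    return [(1.0 - v) * window_ms for v in values]

pixels = [0.9, 0.2, 0.6]       # hypothetical normalized intensities
print(latency_encode(pixels))  # ~[1.0, 8.0, 4.0]: brightest fires first
```

Downstream neurons can then read information out of the relative timing of
these spikes rather than their amplitude.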
Each neuron's behavior is governed by a differential equation: its internal
state evolves continuously in analog fashion, and neurons exchange bursts of
electrical signal at varying intensities.
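The canonical textbook model of such dynamics is the leaky integrate-and-fire
(LIF) neuron. The following sketch Euler-integrates its differential
equation; all constants are illustrative assumptions rather than parameters
of any real neuromorphic chip.

```python
# Leaky integrate-and-fire neuron: tau * dV/dt = -(V - V_rest) + I.
# The membrane potential V leaks toward rest, integrates input current,
# and emits a spike (then resets) whenever it crosses the threshold.
def simulate_lif(currents, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    v, spike_times = v_rest, []
    for step, i_in in enumerate(currents):
        v += (dt / tau) * (-(v - v_rest) + i_in)  # Euler step of the ODE
        if v >= v_thresh:                         # threshold crossing
            spike_times.append(step * dt)         # record when it fired
            v = v_rest                            # reset after the spike
    return spike_times

print(simulate_lif([1.2] * 100))  # constant drive yields a regular spike train
```

Between spikes the neuron merely leaks; output is produced only at threshold
crossings, which is what gives spiking hardware its event-driven character.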
Operation is event-driven, with only the neurons currently in action kept
active, unlike today's digital chips, which are clock-driven and operate on
binary values. Due to this uniqueness, an SNN is trained differently from an
Artificial Neural Network (ANN): it uses Spike-Timing-Dependent Plasticity
(STDP) rather than gradient descent.
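A minimal sketch of the pair-based form of STDP appears below; the
exponential windows and learning-rate constants are common textbook choices,
assumed here purely for illustration.

```python
import math

# Pair-based STDP: if the presynaptic spike precedes the postsynaptic one,
# the synapse is strengthened (potentiation); if it follows, it is weakened
# (depression). The effect decays exponentially with the timing gap.
def stdp_delta_w(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
    dt = t_post - t_pre
    if dt > 0:                               # pre before post -> potentiate
        return a_plus * math.exp(-dt / tau_ms)
    return -a_minus * math.exp(dt / tau_ms)  # post first (or tie) -> depress

print(stdp_delta_w(t_pre=10.0, t_post=15.0))  # > 0: strengthen the synapse
print(stdp_delta_w(t_pre=15.0, t_post=10.0))  # < 0: weaken the synapse
```

Unlike gradient descent, this rule is local: each synapse updates from the
timing of its own pre- and postsynaptic spikes, with no global error signal.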
The tight coupling of processor and memory makes neuromorphic chips more
efficient at training and running neural networks. They run AI models faster
than equivalent CPUs and GPUs while consuming less power, which is crucial
because power consumption is a huge challenge for AI. Their small size and
low power consumption also make them well suited to use cases that require
running AI algorithms at the edge rather than in the cloud. Neuromorphic
computing can also yield algorithmic approaches for dealing with uncertain
and ambiguous situations.
Big companies such as IBM, Intel, and Qualcomm have become key players in the space. Intel designed the Loihi chip, which contains 131,000 neurons and 130 million synapses and processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors. Qualcomm created the Zeroth chip, which uses deep learning in a low-power platform suited to cell phones. IBM's TrueNorth chip has over a million neurons and over 268 million synapses; it is 10,000 times more energy-efficient than conventional microprocessors and only uses power when necessary.
Neuromorphic computing has various application segments, such as image processing, signal processing, object detection, and data processing.
In self-driving and smart vehicles, it will help in sensing and responding to the erratic behavior of surrounding vehicles. It will also play a role in satellites for surveillance and aerial imagery. Other applications include healthcare monitoring, smart spaces, and cyber-security.
For enterprises, this technology could mean massive
improvements in a host of areas, from predictive data analytics to automation
and process optimization.
Overall, neuromorphic computing offers a promising solution to the coming performance crisis.