
Analog Neuromorphic Chips promise to bring deep learning to mobile and wearable devices

Most consumers use some form of artificial intelligence (AI) and machine learning every day without even realizing it. From AI-driven applications like Google Maps to autopilot mechanisms on commercial flights to anti-spam filters that depend on machine learning to adjust their rules over time, next-gen technology is everywhere.

 

Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many exciting advances in artificial intelligence in recent years. Deep learning (DL) algorithms allow high-level abstraction from data, which is helpful for automatic feature extraction and for pattern analysis and classification. However, both training and execution of large-scale DNNs require vast computing resources, leading to high power requirements and communication overhead. Chips like IBM's TrueNorth and SpiNNaker, a project developed at the University of Manchester, are digital: they process information using the binary system.
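
To make these "simple information-processing units" concrete, here is a minimal sketch of a tiny deep net's forward pass in Python/NumPy; the layer sizes and random weights are arbitrary illustrations, not any particular chip's network.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A tiny 3-layer net: each "unit" just sums weighted inputs and applies
# a nonlinearity -- the loosely brain-inspired building block of a DNN.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8)) * 0.1   # input -> hidden weights
W2 = rng.standard_normal((8, 4)) * 0.1    # hidden -> hidden
W3 = rng.standard_normal((4, 2)) * 0.1    # hidden -> output

def forward(x):
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    return h2 @ W3          # raw scores; a real net would train W1..W3

x = rng.standard_normal(16)  # a stand-in input vector
print(forward(x))
```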

 

However, the human brain, which has somewhere around 100 billion neurons, processes information in an analog fashion. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses, the junctions between neurons across which neurotransmitters are exchanged. More than 100 trillion synapses mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks at lightning speed.
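
For a rough feel for the analog, spike-based signaling described above, the sketch below simulates a leaky integrate-and-fire neuron, a standard textbook abstraction rather than a faithful model of biology; the time constant, threshold, and input current are arbitrary.

```python
import numpy as np

# Leaky integrate-and-fire: the membrane potential v integrates its input
# continuously (analog) and emits an all-or-nothing spike at threshold.
dt, tau, v_thresh, v_reset = 1e-3, 20e-3, 1.0, 0.0

def simulate(input_current, steps=200):
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt / tau * (input_current - v)   # leaky analog integration
        if v >= v_thresh:                     # spike, then reset
            spikes.append(t * dt)
            v = v_reset
    return spikes

print(simulate(1.5)[:5])   # first few spike times (seconds) for constant drive
```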

 

Inspired by the brain, scientists have been experimenting with analog circuits for implementing deep nets that can see and hear better while consuming a fraction of the power, on the order of 100x less. These neuromorphic chips are analog, built from hardware elements that process information with analog signals: rather than operating on binary values, they process information as continuous values.

 

The general problem with analog circuits has been that they are inherently noisier than digital circuits. Yet our brains are incredibly noisy systems that work just fine, and neural nets are similarly tolerant: their internal states don't have to be precise, because the system adapts to produce the desired output for a given input.
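
A quick numerical sketch of this noise tolerance: perturbing every weight of a small random network by a few percent of Gaussian noise, as an analog circuit might, typically shifts the outputs by only a few percent. The network shape and 3 percent noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.standard_normal((32, 32)), rng.standard_normal((32, 10))
x = rng.standard_normal(32)

def forward(w1, w2):
    return np.tanh(x @ w1) @ w2

clean = forward(W1, W2)
# Inject ~3% multiplicative noise into every weight, circuit-style.
noisy = forward(W1 * (1 + 0.03 * rng.standard_normal(W1.shape)),
                W2 * (1 + 0.03 * rng.standard_normal(W2.shape)))

rel_err = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
print(f"relative output change: {rel_err:.1%}")   # typically a few percent
```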

 

Stanford’s Brains in Silicon project and the University of Michigan IC Lab, with backing from DARPA’s SyNAPSE program and the U.S. Office of Naval Research, are building tools to make it easier to build analog neuromorphic systems. Stealthy startups are also beginning to emerge. Rather than attempt to run deep nets on standard digital circuits, they have designed analog systems that, taking inspiration from our analog brains, can perform similar computations with much less power.

 

In analog chips there is no separation between hardware and software: the hardware configuration performs all the computation and can modify itself. A good example is the HiCANN chip, developed at the University of Heidelberg, which uses wafer-scale above-threshold analog circuits. There are also hybrid neuromorphic chips, like Stanford's Neurogrid, which seek to get the best of both types of computing, typically processing in analog and communicating in digital.
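
Here is a hedged sketch of the hybrid "process in analog, communicate in digital" idea: each layer computes its multiply-accumulate with continuous, noisy values, then a mock analog-to-digital converter quantizes the result before it is "sent" to the next stage. The 8-bit resolution and 2 percent noise figure are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def adc(x, bits=8):
    """Mock analog-to-digital converter: clip and quantize to 2**bits levels."""
    x = np.clip(x, -1.0, 1.0)
    levels = 2 ** bits - 1
    return np.round((x + 1) / 2 * levels) / levels * 2 - 1

def analog_layer(x, W, noise=0.02):
    # Continuous-valued multiply-accumulate with circuit-like noise...
    y = np.tanh(x @ W) * (1 + noise * rng.standard_normal(W.shape[1]))
    return adc(y)          # ...then digitized for communication

W1 = rng.standard_normal((16, 16)) * 0.3
W2 = rng.standard_normal((16, 4)) * 0.3
x = rng.standard_normal(16)
print(analog_layer(analog_layer(x, W1), W2))
```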

 
Right now, neural networks are pretty complex and are mostly run on high-power GPUs. Using analog circuits would allow such functionality to be brought to your cell phone or embedded devices. A wearable gadget won’t have to be “cloud connected” to be smart: it will carry enough “intelligence” to work even where no Wi-Fi or cellular coverage is available. You might also want to process data locally for privacy reasons. Processing on your phone also avoids transmission latency, so certain applications can react much faster.

 

 

MIT Engineers design artificial synapse for “brain-on-a-chip” hardware

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
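
The "gradient of signals, or weights" maps naturally onto a resistive crossbar: by Ohm's law each synapse passes a current I = G·V, and the currents summing on a shared wire implement a dot product with no digital arithmetic. A minimal simulation of that idea follows; the voltages and conductance values are made up for illustration.

```python
import numpy as np

# A crossbar array: input voltages on rows, conductances G at crosspoints.
# The current collected on each column is I_j = sum_i V_i * G_ij -- a dot
# product computed "for free" by Ohm's and Kirchhoff's laws.
V = np.array([0.2, 0.5, 0.1])          # input voltages (volts)
G = np.array([[1.0, 0.3],              # synaptic conductances
              [0.6, 0.9],              # (arbitrary siemens-scale
              [0.2, 0.4]])             # values for illustration)
I = V @ G                              # column currents = weighted sums
print(I)   # an analog multiply-accumulate, no binary arithmetic involved
```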

 

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

 

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting with 95 percent accuracy. The design, published in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

 

Instead of using amorphous materials for an artificial synapse, Jeehwan Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow. Existing switching media, like a Pachinko machine, contain multiple paths that make it difficult to predict where ions will make it through, which Kim says can create unwanted nonuniformity in a synapse’s performance.

 

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

 

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.
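
To get a feel for what a roughly 4 percent device-to-device variation means for computation, the sketch below samples synapse conductances with a 4 percent spread around their programmed values and measures the resulting dot-product error; the array size and target values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
target = rng.uniform(0.1, 1.0, size=(25, 10))    # intended conductances
x = rng.uniform(0.0, 1.0, size=25)               # input voltages

errors = []
for _ in range(1000):
    # Each device deviates from its programmed value by ~4% (1 sigma).
    actual = target * (1 + 0.04 * rng.standard_normal(target.shape))
    errors.append(np.linalg.norm(x @ actual - x @ target)
                  / np.linalg.norm(x @ target))

# Per-column averaging washes much of the variation out; the mean
# dot-product error here lands around 1% in this arbitrary setup.
print(f"mean dot-product error: {np.mean(errors):.1%}")
```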

 

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

 

Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers. “Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says.

 

References and Resources also include:

http://news.mit.edu/2018/engineers-design-artificial-synapse-brain-on-a-chip-hardware-0122

https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/analog-and-neuromorphic-chips-will-rule-robotic-age
