
Next Generation Neuromorphic (brain-like) computing chips bring deep learning from cloud to mobile devices

Last March, Google's computers roundly beat the world-class Go champion Lee Sedol, marking a milestone in artificial intelligence. The winning computer program, created by researchers at Google DeepMind in London, used an artificial neural network that took advantage of what's known as deep learning, a strategy by which neural networks involving many layers of processing are configured in an automated fashion to solve the problem at hand. In addition, the computers Google used to defeat Sedol contained special-purpose hardware: a computer card Google calls its Tensor Processing Unit. Reportedly, it uses an application-specific integrated circuit, or ASIC, to speed up deep-learning calculations.

 

Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many exciting advances in artificial intelligence in recent years. Deep learning (DL) algorithms allow high-level abstraction from data, which is helpful for automatic feature extraction and for pattern analysis and classification.

 

Deep learning is useful for many applications such as object recognition, face detection, speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in big data and deep learning and by the exponential increase in chip processing capabilities, especially GPGPUs. Big Data is the term used to signify the exponential growth of data now taking place; by one widely cited estimate, 90% of the data in the world today was created in the last two years alone.

 

However, both training and execution of large-scale DNNs require vast computing resources, leading to high power requirements and communication overhead. Scott Leishman, a computer scientist at Nervana, notes that another computationally intensive task, bitcoin mining, went from being run on CPUs to GPUs to FPGAs and, finally, to ASICs because of the gains in power efficiency from such customization. "I see the same thing happening for deep learning," he says. Researchers are also developing neuromorphic chips based on silicon photonics and memristors.

 

The Air Force Research Lab (AFRL) reports good results from using a “neuromorphic” chip made by IBM to identify military and civilian vehicles in radar-generated aerial imagery. The unconventional chip got the job done about as accurately as a regular high-powered computer, using less than a 20th of the energy. The AFRL awarded IBM a contract worth $550,000 in 2014 to become the first paying customer of its brain-inspired TrueNorth chip. It processes data using a network of one million elements designed to mimic the neurons of a mammalian brain, connected by 256 million “synapses.”

 

Wu staged a contest between TrueNorth and a high-powered Nvidia computer called the Jetson TX1, each running a different implementation of neural-network-based image-processing software, to try to distinguish 10 classes of military and civilian vehicles represented in a public data set called MSTAR. Examples included Russian T-72 tanks, armored personnel carriers, and bulldozers. Both systems achieved about 95 percent accuracy, but the IBM chip used between a 20th and a 30th as much power.

 

 

The enhanced power efficiency of neuromorphic chips allows advanced machine vision, which usually requires a lot of computing power, to be deployed in places where resources and space are limited. Satellites, high-altitude aircraft, air bases reliant on generators, and small drones could all benefit, says AFRL principal electronics engineer Qing Wu. "Air Force mission domains are air, space, and cyberspace. [All are] very sensitive to power constraints," he says.

 

Scientists from China's Zhejiang province have developed a computer chip that works much like the brain, the media reported. TrueNorth, an advanced brain-like chip developed by IBM, is on the US technology embargo list for China, so Chinese scientists had to start their research from scratch.

 

A neural network is typically organized into layers, and each layer contains a large number of processing nodes. Data come in and are divided up among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The output of the final layer yields the solution to some computational problem. Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.
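
A minimal sketch of the layered organization described above, using NumPy; the layer sizes and the ReLU/softmax choices are illustrative assumptions rather than details from the article:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of (weights, bias) layers,
    applying ReLU between layers and softmax at the output."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)          # each node transforms its inputs and passes them on
    w, b = layers[-1]
    logits = x @ w + b
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()       # the final layer yields the answer (class probabilities)

rng = np.random.default_rng(0)
sizes = [784, 128, 64, 10]           # e.g. image pixels in, 10 output classes
layers = [(rng.normal(0.0, 0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
print(forward(rng.normal(size=784), layers))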

 

One of the major factors accounting for the recent success of deep neural networks is the significant leap in the availability of computational processing power. Researchers have been taking advantage of graphics processing units (GPUs), small chips designed for high performance in processing the huge amount of visual content needed for video games. The foremost proponent of GPUs, which can perform many mathematical operations in parallel, is Nvidia. The company announced a new chip called the Tesla P100 that is designed to put more power behind a technique called deep learning, and it invested $2 billion in research and development (R&D) to design a graphics-processing architecture, as the company stated, "dedicated to accelerating AI and to accelerating deep learning."

 

While large strides have recently been made in the development of high-performance systems for neural networks based on multi-core processors, Bill Jenkins of Intel suggests that significant challenges remain in power, cost, and performance scaling. Field-programmable gate arrays (FPGAs) are a natural choice for implementing neural networks, he believes, because they combine computing, logic, and memory resources in a single device. Reportedly, Microsoft is also using FPGAs, which offer the benefit of being reconfigurable if the computing requirements change. The Nervana Engine, an ASIC deep-learning accelerator, is slated to go into production in early to mid-2017.

 

Over the coming year, deep-learning software will increasingly find its way into applications for smartphones, where it is already used, for example, to detect malware or translate text in images. For those applications, the key will be low-power ASICs. The drone manufacturer DJI is already using something akin to a deep-learning ASIC in its Phantom 4 drone, which uses a special visual-processing chip made by California-based Movidius to recognize obstructions.

Nvidia's new AI brain has eight Pascal GPUs, 7 TB of solid-state memory, and needs 3,200 watts

The Tesla P100 is a Pascal-based GPU that packs 150 billion transistors into a 16-nanometer FinFET chip, resulting in an impressive 5.3 teraflops of performance. It also has large memory bandwidth thanks to its use of High Bandwidth Memory 2 (HBM2); the P100 is the first chip to feature the technology.

 

These chips are built specifically for massive deep-learning networks, with implications for everything from cloud networks to social media to self-driving cars and autonomous robots. Yet a massive supercomputing cluster consisting of 140,000 processing units still performs 83 times slower than a cat's brain, said Wei Lu, a computer engineer at the University of Michigan.

New AI chip from MIT could enable mobile devices to run deep neural networks locally

At the International Solid State Circuits Conference in San Francisco, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.

 

Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens. A mobile GPU, of the type found in a cell phone, might have almost 200 cores, or processing units, making it well suited to simulating a network of distributed processors.

 

Smartphones can already make use of deep learning by tapping into remote servers running the software. “It’s all about, I think, instilling intelligence into devices so that they are able to understand and react to the world—by themselves,” Lane says.

 

“Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

 

The new chip, which the researchers dubbed “Eyeriss,” could also help usher in the “Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination. “With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots,” writes Larry Hardesty of MIT News Office.

 

The key to Eyeriss’s efficiency is to minimize the frequency with which cores need to exchange data with distant memory banks, an operation that consumes a good deal of time and energy. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. Moreover, the chip has a circuit that compresses data before sending it to individual cores.

 

Each core is also able to communicate directly with its immediate neighbors, so that if they need to share data, they don’t have to route it through main memory. This is essential in a convolutional neural network, in which so many nodes are processing the same data.

 

The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.
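
A toy illustration of the data-reuse principle behind Eyeriss: tile the computation so that a core's small local buffer is filled once and reused for many output values before returning to main memory. The buffer size, tiling scheme, and fetch counter below are illustrative assumptions, not the actual Eyeriss dataflow or compression scheme.

```python
import numpy as np

def conv_rows_tiled(image, kernel, local_buffer_rows=8):
    """2-D 'valid' convolution, tiled by rows, counting how many image rows
    are fetched from 'main memory'. Each tile is loaded into a core's local
    buffer once and reused for every output row it can produce."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    fetched_rows = 0
    row = 0
    while row < out_h:
        tile_out = min(local_buffer_rows - kh + 1, out_h - row)
        tile = image[row:row + tile_out + kh - 1]   # one fetch per tile
        fetched_rows += tile.shape[0]
        for r in range(tile_out):                   # reuse the tile locally
            for c in range(out_w):
                out[row + r, c] = np.sum(tile[r:r + kh, c:c + kw] * kernel)
        row += tile_out
    return out, fetched_rows

img = np.arange(64.0).reshape(16, 4)
kernel = np.ones((3, 3)) / 9
result, fetched = conv_rows_tiled(img, kernel)
naive = result.shape[0] * kernel.shape[0]           # re-fetching inputs for every output row
print(f"rows fetched with tiling: {fetched}, without reuse: {naive}")
```

The same idea, applied across many cores with per-core buffers and inter-core links, is what lets the chip avoid round trips to a single large memory bank.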

 

“This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab. “In addition to hardware considerations, the MIT paper also carefully considers how to make the embedded core useful to application developers by supporting industry-standard [network architectures] AlexNet and Caffe.”

 

The MIT researchers' work was funded in part by DARPA.

 

Silicon Photonic Neuromorphic Chips and Neural Networks

Researchers from the universities of Oxford, Münster and Exeter have made a pioneering breakthrough: photonic computer chips that imitate the way the brain's synapses operate. The work combined phase-change materials (PCMs) – commonly found in household items such as re-writable optical discs – with specially designed integrated photonic circuits to deliver a biological-like synaptic response.

Crucially, their photonic synapses can operate at speeds a thousand times faster than those of the human brain.

A PCM's ability to absorb light changes when it is heated, and this can be used to control the amount of light that passes through a waveguide. In previous research, the group had shown that optical pulses could be used to switch between various states of absorption to store information, effectively creating a photonic memory device.
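
A minimal numerical sketch of the photonic memory behaviour described above, treating waveguide transmission as a simple function of how much of the phase-change cell has been switched; the transmission values and pulse effects are illustrative assumptions, not measured device data.

```python
# Illustrative model: optical pulses switch a fraction of the phase-change
# cell between crystalline (more absorbing) and amorphous (less absorbing)
# states; the stored value is read out as the transmitted power.
T_CRYSTALLINE = 0.2   # assumed transmission when fully crystalline
T_AMORPHOUS = 0.9     # assumed transmission when fully amorphous

def transmission(amorphous_fraction):
    """Linear mix of the two states' transmission (a modelling assumption)."""
    return (amorphous_fraction * T_AMORPHOUS
            + (1 - amorphous_fraction) * T_CRYSTALLINE)

state = 0.0                      # start fully crystalline
for pulse in (0.3, 0.3, -0.5):   # two partial 'write' pulses, then a partial 'erase'
    state = min(1.0, max(0.0, state + pulse))
    print(f"amorphous fraction {state:.1f} -> transmission {transmission(state):.2f}")
```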

The team believes that the research could pave the way for a new age of computing, where machines work and think in a similar way to the human brain, while at the same time exploiting the speed and power efficiency of photonic systems.

Professor C David Wright, co-author from the University of Exeter, said: ‘Electronic computers are relatively slow, and the faster we make them the more power they consume. Conventional computers are also pretty “dumb”, with none of the in-built learning and parallel processing capabilities of the human brain. We tackle both of these issues here – by developing not only new brain-like computer architectures, but also by working in the optical domain to leverage the huge speed and power advantages of the upcoming silicon photonics revolution.’

Professor Wolfram Pernice, a co-author of the paper from the University of Münster, added: ‘Since synapses outnumber neurons in the brain by around 10,000 to one, any brain-like computer needs to be able to replicate some form of synaptic mimic. That is what we have done here.’

Silicon Photonic Neural Network Unveiled

Alexander Tait and colleagues at Princeton University in New Jersey have built an integrated silicon photonic neuromorphic chip and shown that it computes at ultrafast speeds. "Photonic neural networks leveraging silicon photonic platforms could access new regimes of ultrafast information processing for radio, control, and scientific computing," say Tait and co.

 

The authors have reported the first experimental demonstration of an integrated photonic neural network, which also makes first use of electro-optic modulators as photonic neurons. The nodes take the form of tiny circular waveguides carved into a silicon substrate in which light can circulate. When released, this light modulates the output of a laser working at threshold, a regime in which small changes in the incoming light have a dramatic impact on the laser's output.

 

A silicon-compatible photonic neural networking architecture called "broadcast-and-weight" has been proposed. In this architecture, each node's output is assigned a unique wavelength carrier that is wavelength-division multiplexed (WDM) and broadcast to other nodes. Incoming WDM signals are weighted by reconfigurable, continuous-valued filters called microring resonator (MRR) weight banks and then summed by total power detection. This electrical weighted sum then modulates the corresponding WDM channel. A nonlinear electro-optic transfer function, such as a laser at threshold or, in this work, a saturated modulator, provides the nonlinearity required for neuron functionality.
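
A numerical sketch of the broadcast-and-weight signal flow: each wavelength channel is scaled by a tunable weight (standing in for an MRR weight bank), the weighted channels are summed as total detected power, and a saturating transfer function supplies the neuron nonlinearity. The values and the tanh stand-in for the saturated modulator are illustrative assumptions.

```python
import numpy as np

def broadcast_and_weight_node(channel_powers, weights, bias=0.0):
    """One photonic 'neuron': weight each wavelength channel, sum by total
    power detection, then apply a saturating (modulator-like) nonlinearity.
    tanh is used here only as a stand-in for the saturated modulator."""
    weighted_sum = np.dot(weights, channel_powers) + bias   # photodetector sum
    return np.tanh(weighted_sum)                            # saturated output

# Four upstream nodes, each broadcasting on its own wavelength.
inputs = np.array([0.8, 0.1, 0.5, 0.9])
weights = np.array([0.6, -0.4, 0.2, 0.7])   # continuous-valued MRR weights
print(broadcast_and_weight_node(inputs, weights))
```

In the actual architecture the weighting happens optically in the microring filters and the sum appears as photocurrent; the NumPy version only mirrors the signal flow.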

 

They go on to demonstrate how this can be done using a network consisting of 49 photonic nodes. They use this photonic neural network to solve the mathematical problem of emulating a certain kind of differential equation and compare it to an ordinary central processing unit.

 

The results show just how fast photonic neural nets can be. "The effective hardware acceleration factor of the photonic neural network is estimated to be 1,960 × in this task," say Tait and co. That's a speedup of three orders of magnitude. "Silicon photonic neural networks could represent first forays into a broader class of silicon photonic systems for scalable information processing," they add.

 

Memristors could implement brain-inspired devices to power artificial systems

Memristors offer an attractive alternative for implementing deep neural networks. They take on the role of traditional transistors in such computers by opposing the flow of current. They can also remember the last voltage they experienced, not unlike the brain's synapses. Those signal junctions build stronger connections among neurons based on the strength and timing of signals, and form a basic part of how the brain's memory and learning processes work. "We show that we can use voltage timing to gradually increase or decrease the electrical conductance in this memristor-based system," said Wei Lu, a computer engineer at the University of Michigan. "In our brains, similar changes in synapse conductance essentially give rise to long term memory."
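
A toy sketch of the timing-dependent conductance change Lu describes: when the presynaptic pulse arrives shortly before the postsynaptic one, conductance is nudged up; when it arrives after, conductance is nudged down, with smaller changes for larger timing gaps. The constants are illustrative assumptions, not measured device parameters.

```python
import math

def update_conductance(g, dt_ms, g_min=1e-6, g_max=1e-3,
                       lr=5e-5, tau_ms=20.0):
    """STDP-like rule: dt_ms = t_post - t_pre.
    Positive dt (pre before post) strengthens, negative dt weakens."""
    dg = lr * math.copysign(math.exp(-abs(dt_ms) / tau_ms), dt_ms)
    return min(g_max, max(g_min, g + dg))

g = 5e-4                                  # starting conductance in siemens
for dt in (+2.0, +10.0, -3.0):            # ms between pre- and post-spikes
    g = update_conductance(g, dt)
    print(f"dt={dt:+.0f} ms -> conductance {g:.6e} S")
```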

 

For more information on memristors, see: http://idstch.com/home5/international-defence-security-and-technology/technology/biosciences/new-memristor-materials-chips-basis-next-gen-computing-deep-neural-networks-analyzing-big-data-video-real-time/

 

Brain-like computer chip developed by Chinese scientists

Jointly developed by scientists from Hangzhou Dianzi University and Zhejiang University, the new chip, named "Darwin", was revealed after more than a year of research, Xinhua news agency reported. "It can perform intelligent computer tasks by simulating a human brain's neural networks, in which neurons connect with one another via synapses," said Ma De from Hangzhou Dianzi University.

 

The black plastic piece is smaller than a dime but is equipped with 2,048 neurons and four million synapses, two of the fundamental units that make up the human brain. With the new chip, a computer can do more while using less electricity. "It can process 'fuzzy information' that conventional computer chips fail to process," said Shen Juncheng, a scientist from Zhejiang University. For example, it can recognize numbers written by different people, distinguish among different images, and move objects on screen by receiving a user's brain signals. The chip is expected to be used in robotics, intelligent hardware systems and brain-computer interfaces, but its development is still in the preliminary stage, according to Ma.

 

IBM working on next-generation chips for mobile devices

IBM is working on next-generation “neuromorphic” (brain-like) computing chips to make mobile devices better at tasks that are easy for brains but tough for computers, such as speech recognition and image interpretation, the prestigious MIT Technology Review reports. At the same time, IBM is pursuing the first commercial applications of its new chips.

 

“We’re working on a next generation of the chip, but what’s most important now is commercial partners,” says IBM senior VP John Kelly. “Companies could incorporate this in all sorts of mobile devices, machinery, automotive, you name it.”

 

According to Kelly, adding neuromorphic chips to smartphones could make them capable of voice recognition and computer vision, without having to tap into cloud computing infrastructure, and using very little power.

 

"Our brain is characterized by extreme power efficiency, fault tolerance, compactness and the ability to develop and to learn. It can make predictions from noisy and unexpected input data," says Karlheinz Meier, a professor of experimental physics at Heidelberg University in Germany. Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli.

 

IBM creates TrueNorth computer chip that is nearly as intelligent as a honey bee

In 2014, scientists at IBM Research unveiled an advanced neuromorphic (brain-like) computer chip, called TrueNorth, consisting of 1 million programmable neurons and 256 million programmable synapses, comparable to the brain of a honey bee, which contains about 960,000 neurons and roughly 10⁹ synapses.

 

“The [TrueNorth] chip consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt – literally a synaptic supercomputer in your palm,” noted Dharmendra Modha, who leads development of IBM’s brain-inspired chips. “A hypothetical computer to run [a human-scale] simulation in real-time would require 12GW, whereas the human brain consumes merely 20W.”
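
As a quick sanity check on the figures in Modha's quote, the stated efficiency and power draw can be multiplied directly (a back-of-the-envelope calculation, not an IBM benchmark):

```python
ops_per_second_per_watt = 46e9   # "46 billion synaptic operations per second, per watt"
chip_power_watts = 70e-3         # "the chip consumes merely 70 milliwatts"

throughput = ops_per_second_per_watt * chip_power_watts
print(f"~{throughput / 1e9:.1f} billion synaptic operations per second at 70 mW")
```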

 

Massimiliano Versace, who directs the Boston University Neuromorphics Lab and worked on another part of the Pentagon contract that funded IBM’s work on TrueNorth, says the results are promising. But he notes that IBM’s chip currently comes with trade-offs.

 

It is much easier to deploy neural networks on conventional computers, thanks to software made available by Nvidia, Google, and others. And IBM’s unusual chip is much more expensive. “Ease of use and price are [the] two main factors rowing against specialized neuromorphic chips,” says Versace.

Non-von Neumann Architecture

This feat was made possible by a scalable and flexible non-von Neumann architecture that stores and processes information in a distributed, massively parallel way. Information flows by way of neural spikes, from axons to neurons, modulated by the programmable synapses between them. According to IBM, its unique architecture could solve "a wide class of problems from vision, audition, and multi-sensory fusion, and has the potential to revolutionize the computer industry by integrating brain-like capability into devices where computation is constrained by power and speed."
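
A minimal leaky integrate-and-fire sketch of the spike-based information flow described above: incoming spikes are weighted by programmable synapses, accumulated on a neuron's membrane potential, and a spike is emitted onward once a threshold is crossed. The leak factor, threshold, and random inputs are illustrative assumptions, not TrueNorth's actual neuron model.

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron.
    spikes_in: 0/1 vector of presynaptic spikes; weights: synaptic strengths."""
    v = leak * v + np.dot(weights, spikes_in)   # integrate weighted spikes
    fired = v >= threshold
    return (0.0 if fired else v), int(fired)    # reset the potential on a spike

rng = np.random.default_rng(1)
weights = rng.uniform(0.0, 0.4, size=8)         # 8 programmable synapses
v = 0.0
for t in range(10):
    spikes_in = rng.integers(0, 2, size=8)      # random presynaptic activity
    v, out_spike = lif_step(v, spikes_in, weights)
    print(f"t={t}: membrane {v:.2f}, spike out: {out_spike}")
```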

 

The classical von Neumann architecture, in which the processing of information and the storage of information are kept separate, now faces a performance bottleneck. Data travel to and from the processor and memory, but the computer cannot process and store at the same time. By the nature of the architecture, it is a linear process, which ultimately leads to the von Neumann "bottleneck."

 

The chip was fabricated on Samsung's standard 28 nm CMOS process and contains 5.4 billion transistors, with 4,096 neurosynaptic cores interconnected via an intra-chip network. It was built under DARPA's SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program, which has funded the project since 2008 with approximately $53m (£31.5m) of investment.

 

The chip should be useful in many applications that run complex neural networks in real time, for example multi-object detection and classification, search-and-rescue operations in which robots identify people, and new computing platforms for mobile, cloud, and distributed sensor applications. IBM researchers suggest that traditional computers work like the left side of our brain, similar to a fast number-crunching calculator. They compare TrueNorth to the right side of our brain, likening the system to "slow, sensory, pattern recognizing machines."

 

Industry is also working on "neuromorphic" technology that incorporates nano-chips modeled on the human brain into wearables. Eventually, these nano-chips may even be implanted into our brains, augmenting human thought and reasoning capabilities.

 

Analog and Digital neuromorphic chips

Chips like IBM's TrueNorth and SpiNNaker, a project developed by the University of Manchester, are digital: they compute information using the binary system. However, some neuromorphic chips are analog; they consist of neuromorphic hardware elements in which information is processed with analog signals, that is, with continuous rather than binary values.

 

In analog chips there is no separation between hardware and software, because the hardware configuration performs all the computation and can modify itself. A good example is the HiCANN chip, developed at the University of Heidelberg, which uses wafer-scale above-threshold analog circuits. There are also hybrid neuromorphic chips, like Stanford's Neurogrid, which seek to make the most of each type of computing, typically processing in analog and communicating in digital.

 
