Within AI is a large subfield called machine learning: the field of study that gives computers the ability to learn without being explicitly programmed. Instead of the laborious and hit-or-miss approach of creating a distinct, custom program to solve each individual problem in a domain, a single machine learning algorithm simply needs to learn, via a process called training, to handle each new problem.
Within the machine learning field, there is an area often referred to as brain-inspired computation: programs or algorithms that take some aspects of their basic form or functionality from the way the brain works. This is in contrast to attempts to create a brain; rather, such programs aim to emulate some aspects of how we understand the brain to operate.
Although scientists are still exploring the details of how the brain works, it is generally believed that the main computational element of the brain is the neuron. There are approximately 86 billion neurons in the average human brain.
The neurons themselves are connected together with a number of elements entering them, called dendrites, and an element leaving them, called an axon. The neuron accepts the signals entering it via the dendrites, performs a computation on those signals, and generates a signal on the axon. These input and output signals are referred to as activations. The axon of one neuron branches out and is connected to the dendrites of many other neurons. The connection between a branch of the axon and a dendrite is called a synapse. There are estimated to be 10^14 to 10^15 synapses in the average human brain.
A key characteristic of the synapse is that it can scale the signal (x_i) crossing it; this scaling factor is referred to as a weight (w_i). The brain is believed to learn through changes to the weights associated with the synapses; thus, different weights result in different responses to input. Note that learning is the adjustment of the weights in response to a learning stimulus, while the organization (what might be thought of as the program) of the brain does not change. This characteristic makes the brain an excellent inspiration for a machine learning-style algorithm.
Neural networks take their inspiration from the notion that a neuron’s computation involves a weighted sum of the input values. These weighted sums correspond to the value scaling performed by the synapses and the combining of those values in the neuron. Furthermore, the neuron does not just output that weighted sum, since the computation associated with a cascade of neurons would then be a simple linear algebra operation. Instead there is a functional operation within the neuron that is performed on the combined inputs. This operation appears to be a nonlinear function that causes a neuron to generate an output only if the inputs cross some threshold. Thus by analogy, neural networks apply a nonlinear function to the weighted sum of the input values.
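The weighted-sum-plus-nonlinearity computation described above can be sketched in a few lines of Python. The specific weights, inputs, and the choice of ReLU as the nonlinearity are illustrative assumptions, not values from any particular network:

```python
def relu(x):
    # Nonlinear thresholding: the neuron outputs only the positive
    # part of the signal, analogous to firing above a threshold.
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # Weighted sum models synaptic scaling plus combination in the neuron.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(z)

# Example: three input activations and three synaptic weights.
out = neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], bias=0.0)
print(out)  # 0.5*1 - 0.25*2 + 0.1*3 ≈ 0.3
```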
The neurons in the input layer receive some values and propagate them to the neurons in the middle layer of the network, which is also frequently called a “hidden layer.” The weighted sums from one or more hidden layers are ultimately propagated to the output layer, which presents the final outputs of the network to the user.
Within the domain of neural networks, there is an area called deep learning, in which the neural networks have more than three layers, i.e., more than one hidden layer. Today, the typical number of network layers used in deep learning ranges from five to more than a thousand.
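Stacking such neurons into layers gives the input/hidden/output structure described above. Here is a minimal sketch, assuming small illustrative layer sizes and random (untrained) weights rather than any real network:

```python
import random

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One dense layer: each output neuron takes a weighted sum of all inputs.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    # Propagate activations from the input layer through each hidden
    # layer to the output layer.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

random.seed(0)
sizes = [4, 8, 8, 2]   # input, two hidden layers, output: "deep" per the text
layers = []
for n_in, n_out in zip(sizes, sizes[1:]):
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    layers.append((weights, biases))

print(forward([0.5, -0.2, 0.1, 0.9], layers))  # two output activations
```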
Deep Neural Networks (DNNs)
Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many exciting advances in artificial intelligence in recent years. DNNs are capable of learning high-level features with more complexity and abstraction than shallower neural networks.
DNNs are employed in a myriad of applications, from self-driving cars to cancer detection to playing complex games. In many of these domains, DNNs are now able to exceed human accuracy. The superior performance of DNNs comes from their ability to extract high-level abstractions from data, which is helpful for automatic feature extraction and for pattern analysis/classification. This is different from earlier approaches that used hand-crafted features or rules designed by experts.
The superior accuracy of DNNs, however, comes at the cost of high computational complexity. While general-purpose compute engines, especially graphics processing units (GPUs), have been the mainstay for much DNN processing, increasingly there is interest in providing more specialized acceleration of the DNN computation.
The major factor accounting for the recent success of deep neural networks is the significant leap in the availability of computational processing power. When Google’s computers roundly beat the world-class Go champion Lee Sedol, it marked a milestone in artificial intelligence. The winning computer program, created by researchers at Google DeepMind in London, used an artificial neural network that took advantage of what’s known as deep learning, a strategy by which neural networks involving many layers of processing are configured in an automated fashion to solve the problem at hand. In addition, the computers Google used to defeat Sedol contained special-purpose hardware—a computer card Google calls its Tensor Processing Unit. Reportedly it uses an application-specific integrated circuit (ASIC) to speed up deep-learning calculations.
Accordingly, designing efficient hardware architectures for deep neural networks is an important step towards enabling the wide deployment of DNNs in AI systems. The industry is exploring next-generation chip architectures such as on-chip memories or neuromorphic chips, to reduce the significant costs of data exchange.
Neuromorphic computing is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system. The term refers to the design of both hardware and software computing elements.
New AI chip from MIT could enable mobile devices to run deep neural networks locally
At the International Solid State Circuits Conference in San Francisco, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.
Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens. A mobile GPU, of the type found in a cell phone, might have almost 200 cores, or processing units, making it well suited to simulating a network of distributed processors.
Smartphones can already make use of deep learning by tapping into remote servers running the software. “It’s all about, I think, instilling intelligence into devices so that they are able to understand and react to the world—by themselves,” Lane says.
“Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”
The new chip, which the researchers dubbed “Eyeriss,” could also help usher in the “Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination. “With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots,” writes Larry Hardesty of MIT News Office.
The key to Eyeriss’s efficiency is to minimize the frequency with which cores need to exchange data with distant memory banks, an operation that consumes a good deal of time and energy. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. Moreover, the chip has a circuit that compresses data before sending it to individual cores.
Each core is also able to communicate directly with its immediate neighbors, so that if they need to share data, they don’t have to route it through main memory. This is essential in a convolutional neural network, in which so many nodes are processing the same data.
The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.
“This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab. “In addition to hardware considerations, the MIT paper also carefully considers how to make the embedded core useful to application developers by supporting industry-standard [network architectures] AlexNet and Caffe.”
The MIT researchers’ work was funded in part by DARPA.
Spiking computing and Spiking neural networks (SNNs)
Within the brain-inspired computing paradigm, there is a subarea called spiking computing. In this subarea, inspiration is taken from the fact that communication on the dendrites and axons takes the form of spike-like pulses, and that the information being conveyed is not based solely on a spike’s amplitude. Instead, it also depends on the time the pulse arrives, and the computation that happens in the neuron is a function not just of a single value but of the pulse width and the timing relationship between different pulses.
Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather transmit information only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value, called the threshold.
The signals take the form of what are called “spikes,” which are brief changes in the voltage across the neuron’s cell membrane. Spikes travel down axons until they reach the junctions with other cells (called synapses), at which point they’re converted to a chemical signal that travels to the nearby dendrite. This chemical signal opens up channels that allow ions to flow into the cell, starting a new spike on the receiving cell.
The receiving cell integrates a variety of information—how many spikes it has seen, whether any neurons are signaling that it should be quiet, how active it was in the past, etc.—and uses that to determine its own activity state. Once a threshold is crossed, it’ll trigger a spike down its own axons and potentially trigger activity in other cells.
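The integrate-and-fire behavior described above can be sketched with a simple leaky integrate-and-fire model. The leak rate, input weight, threshold, and spike train below are illustrative assumptions, not parameters of any real neuron:

```python
def lif(spike_train, weight=0.6, leak=0.9, threshold=1.0):
    # Leaky integrate-and-fire: the membrane potential integrates
    # incoming spikes, decays over time, and emits an output spike
    # only when it crosses the threshold.
    v = 0.0          # membrane potential
    out = []
    for s in spike_train:
        v = leak * v + weight * s    # leak, then integrate the input
        if v >= threshold:
            out.append(1)            # fire a spike...
            v = 0.0                  # ...and reset the potential
        else:
            out.append(0)
    return out

# Closely spaced input spikes push the potential over threshold;
# isolated spikes decay away without producing any output.
print(lif([1, 1, 0, 0, 1, 0, 0, 0, 1, 1]))
# → [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]
```

Note how timing, not just spike count, determines the output: the same number of input spikes spread further apart would never cross the threshold.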
Spiking neural networks can be implemented in software on traditional processors. But it’s also possible to implement them through hardware, as Intel is doing with Loihi. Another example of a project that was inspired by the spiking of the brain is the IBM TrueNorth.
Neuromorphic computing involves understanding how the morphology of individual neurons, circuits, and the overall architecture of the human brain’s neural network gives rise to computation, and applying the same principles to create artificial neural systems, with the objective of making machines work more independently in unconstrained environments.
High energy efficiency, fault tolerance and powerful problem-solving capabilities are all also traits that the brain possesses. For example, the brain uses roughly 20 watts of power on average, which is about half that of a standard laptop. It is also extremely fault-tolerant — information is stored redundantly (in multiple places), and even relatively serious failures of certain brain areas do not prevent general function. It can also solve novel problems and adapt to new environments very quickly.
Neuromorphic computing achieves this brainlike function and efficiency by building artificial neural systems that implement “neurons” (the actual nodes that process information) and “synapses” (the connections between those nodes) to transfer electrical signals using analog circuitry. This enables them to modulate the amount of electricity flowing between those nodes to mimic the varying degrees of strength that naturally occurring brain signals have.
Neuromorphic systems also introduce a new chip architecture that collocates memory and processing together on each individual neuron instead of having separate designated areas for each.
In neuromorphic hardware based on spiking neural networks (SNNs), calculations are performed by many small units that communicate with each other through bursts of activity called spikes and adjust their behavior based on the spikes they receive from others. Conventional neural networks focus on the organizational principles of the nervous system; spiking neural networks are another approach, one that attempts to build up from the behavior of individual neurons.
For neuromorphic computing, the problem is set up by configuring the axons, which determine what neurons signal to what targets, as well as the code that determines when a neuron sends spikes. From there, the rules of the system determine how the spiking behavior evolves, either from the initial state or in response to further input. The solution can then be read out by examining the spiking behavior of different neurons.
A traditional computer chip architecture (known as the von Neumann architecture) typically has a separate memory unit (MU), central processing unit (CPU) and data paths. This means that information needs to be shuttled back and forth repeatedly between these different components as the computer completes a given task. This creates a bottleneck for time and energy efficiency — known as the von Neumann bottleneck.
By collocating memory, a neuromorphic chip can process information in a much more efficient way and enables chips to be simultaneously very powerful and very efficient. Each individual neuron can perform either processing or memory, depending on the task at hand.
As traditional processors struggle to meet the demands of compute-intensive artificial intelligence applications, dedicated AI chips are playing an increasingly important role in research, development, and on the cloud and edge.
Industry is working on “neuromorphic” technology that can incorporate nano-chips into wearables modeled on the human brain. Eventually these nano-chips may be implanted into our brains artificially, augmenting human thought and reasoning capabilities.
The most prominent efforts in this regard are IBM’s TrueNorth chip and Intel’s Loihi processor. IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even though it was clocked at a leisurely kilohertz, and it used less than 0.0001 percent of the power that would be required to emulate a spiking neural network on traditional processors.
The neuromorphic computing company BrainChip has just put the finishing touches on its silicon and is about to introduce its first commercial offering into the wild.
Implementation approaches for neuromorphic computing vary but broadly divide into those trying to use conventional digital circuits (e.g. SpiNNaker) and those trying to actually ‘create’ analog neurons in silicon (e.g. BrainScaleS).
Intel launches its next-generation neuromorphic processor Loihi
The previous-generation Loihi chip contains 128 individual cores connected by a communication network. Each of those cores has a large number of individual “neurons,” or execution units. Each of these neurons can receive input in the form of spikes from any other neuron—a neighbor in the same core, a unit in a different core on the same chip or from another chip entirely. The neuron integrates the spikes it receives over time and, based on the behavior it’s programmed with, uses that to determine when to send spikes of its own to whatever neurons it’s connected with.
All of the spike signaling happens asynchronously. At set time intervals, embedded x86 cores on the same chip force a synchronization. At that point, the neuron updates the weights of its various connections—essentially, how much attention to pay to all the individual neurons that send signals to it.
Put in terms of an actual neuron, part of the execution unit on the chip acts as a dendrite, processing incoming signals from the communication network based in part on the weight derived from past behavior. A mathematical formula is then used to determine when activity has crossed a critical threshold, triggering spikes of the unit’s own when it does. The “axon” of the execution unit then looks up which other execution units it communicates with and sends a spike to each. In the earlier iteration of Loihi, a spike simply carried a single bit of information; a neuron only registered that it had received one.
Unlike a normal processor, there’s no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. This includes the weights it assigns to the inputs from different neurons, a cache of recent activity, and a list of all the other neurons that spikes are sent to.
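The organization described above (per-neuron input weights, a local activity state, and a locally stored list of target units, with no shared external RAM) can be sketched as follows. This is a loose illustration with assumed names, thresholds, and wiring, not Intel's actual Loihi implementation:

```python
class Unit:
    """A neuron-like execution unit that keeps all its state locally."""
    def __init__(self, name, threshold=1.0):
        self.name = name
        self.threshold = threshold
        self.potential = 0.0
        self.weights = {}   # local "dendrite" memory: weight per source unit
        self.targets = []   # local "axon" memory: units to send spikes to
        self.inbox = []     # single-bit spikes received since the last step

    def connect(self, target, weight):
        # The receiver stores the weight; the sender stores the route.
        target.weights[self.name] = weight
        self.targets.append(target)

    def step(self):
        # Integrate queued spikes using locally stored weights.
        for source in self.inbox:
            self.potential += self.weights.get(source, 0.0)
        self.inbox.clear()
        if self.potential >= self.threshold:
            self.potential = 0.0
            for t in self.targets:      # a spike carries only the sender's name
                t.inbox.append(self.name)
            return True
        return False

a, b = Unit("a"), Unit("b", threshold=0.5)
a.connect(b, weight=0.8)
a.weights["ext"] = 0.6
a.inbox = ["ext", "ext"]          # two external input spikes
fired = [a.step(), b.step(), a.step(), b.step()]
print(fired)  # a fires from its inputs, then b fires from a's spike
```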
One of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips come out well ahead. Mike Davies, director of Intel’s Neuromorphic Computing Lab, said Loihi can beat traditional processors by a factor of 2,000 on some specific workloads. “We’re routinely finding 100 times [less energy] for SLAM and other robotic workloads,” he added.
IBM working on next Generation chips for mobile devices
IBM is working on next-generation “neuromorphic” (brain-like) computing chips to make mobile devices better at tasks that are easy for brains but tough for computers, such as speech recognition and image interpretation, the prestigious MIT Technology Review reports. At the same time, IBM is pursuing the first commercial applications of its new chips.
“We’re working on a next generation of the chip, but what’s most important now is commercial partners,” says IBM senior VP John Kelly. “Companies could incorporate this in all sorts of mobile devices, machinery, automotive, you name it.”
According to Kelly, adding neuromorphic chips to smartphones could make them capable of voice recognition and computer vision, without having to tap into cloud computing infrastructure, and using very little power.
“Our brain is characterized by extreme power efficiency, fault tolerance, compactness and the ability to develop and to learn. It can make predictions from noisy and unexpected input data,” says Karlheinz Meier, a professor of experimental physics at Heidelberg University in Germany. Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli.
IMEC’s “Intelligent Drones”
As SNNs operate similarly to biological neural networks, IMEC’s SNN-based chip would consume 100 times less power than traditional implementations while featuring a tenfold reduction in latency, enabling almost instantaneous decision-making.
IMEC’s first aim is to create a low-power, highly intelligent anti-collision radar system for drones that can react much more effectively to approaching objects while using only a minute amount of charge.
BrainChip’s Akida™ is a revolutionary advanced neural networking processor that brings Artificial Intelligence to the Edge
Akida leverages advanced neuromorphic computing as the engine to solve critical problems such as privacy, security, latency, and low power requirements, with key features such as one-shot learning and computing on the device with no dependency on the cloud. These capabilities satisfy next-generation demands by achieving efficient, effective and easy AI functionality.
“Sensors at the edge require real-time computation, and managing both ultra-low power and latency requirements with traditional machine learning is extremely difficult when it comes to empowering smart intelligent edge sensors,” said Telson. “The Akida processor provides OEMs and car makers with a cost-effective and robust ability to perform real-time, in-vehicle preventative care by running noise and vibration analysis. I’m looking forward to sharing how BrainChip’s Akida makes intelligent AI at the edge easy at both my panel discussion and the roundtable with Denso and Toyota.”
Akida is high-performance, small, and ultra-low power, and it enables a wide array of edge capabilities. The Akida neuromorphic system-on-chip (NSoC) and intellectual property can be used in applications including Smart Home, Smart Health, Smart City and Smart Transportation. These applications include but are not limited to home automation and remote controls, industrial IoT, robotics, security cameras, sensors, unmanned aircraft, autonomous vehicles, medical instruments, object detection, sound detection, odor and taste detection, gesture control and cybersecurity.
The company has been developing a hardened SoC based on its FPGA-based spiking neural network (SNN) accelerator, known as Akida (Greek for spike). Instead of just supporting a spiking neural network computing model, the company has now integrated the capability to run convolutional neural networks (CNNs) as well.
According to Roger Levinson, BrainChip’s chief operating officer, the CNN support was incorporated to make the solution a better fit for their initial target market of AI at the edge. Specifically, since convolutional neural networks have proven to be especially adept at picking up features in images, audio, and other types of sensor data using these matrix math correlations, they provide a critical capability for the kinds of AI applications commonly encountered in edge environments. Specific application areas being targeted include embedded vision, embedded audio, automated driving (LiDAR, RADAR), cybersecurity, and industrial IoT.
The trick was to integrate the CNN support in such a way as to take advantage of the natural energy efficiency of the underlying spiking behavior of the neuromorphic design. This was accomplished by using spike converters to create discrete events (spikes) from quantized data. In keeping with the low power and more modest computation demands of edge devices, Akida uses 1-bit, 2-bit, or 4-bit (INT1, INT2, or INT4) precision for each CNN layer, instead of the typical 8-bit (INT8) precision. That saves both energy and memory space, usually at the cost of only a few percentage points of accuracy.
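The low-bit quantization idea can be illustrated with a small sketch: floating-point values are mapped onto a handful of signed integer levels (here 4-bit, i.e. INT4). The symmetric scaling scheme below is a common textbook assumption, not BrainChip's actual method:

```python
def quantize(values, bits=4):
    # Symmetric quantization: map values onto integers in roughly
    # [-2^(bits-1), 2^(bits-1) - 1] using a single scale factor.
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for INT4
    scale = max(abs(v) for v in values) / qmax
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floating-point values from the integers.
    return [x * scale for x in q]

weights = [0.42, -0.17, 0.93, -0.61]
q, scale = quantize(weights, bits=4)
print(q)                      # small integers, cheap to store and multiply
print(dequantize(q, scale))   # approximate reconstruction of the originals
```

Storing a weight in 4 bits instead of 8 halves the memory footprint; the reconstruction error above is the source of the small accuracy loss the text mentions.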
At a higher level, the Akida chip comprises 80 Neuromorphic Processing Units (NPUs), providing 1.2 million virtual neurons and 10 billion virtual synapses. Each NPU comes with 100 KB of SRAM memory and is networked with its fellow NPUs into an on-chip mesh. Each NPU also contains eight Neural Processing Engines (NPEs) that are in charge of the convolutional support, namely matrix math, pooling, and activation. Since all of this is built around a sparse spiking-based model, the energy efficiency of these CNN operations is potentially much better than that of a GPU, CPU, or even a purpose-built AI accelerator. Levinson said the first silicon will be implemented on 28-nanometer technology and will consume between a few hundred microwatts and a few hundred milliwatts, depending on the demands of the application. The rest of the SoC consists of I/O and data interfaces, memory interfaces, and an Arm M-class CPU, which is only used for initial setup. There’s also an interface to create a multi-chip array of Akida devices.
Akida is not, however, suitable for training models requiring high precision. It is built mainly for inference work, which is the primary use case for edge devices. That said, it will also be able to do some types of incremental learning on pre-trained CNNs, a capability that separates it from competing neuromorphic designs. So, for example, an Akida-powered smart doorbell outfitted with a camera would be able to augment a facial recognition model to learn the faces of your family and friends, while the same device next door would be able to learn a different set of faces. That capability can be generalized across all sorts of applications where personalized unsupervised learning could be useful, like keyword spotting, gesture detection, and cybersecurity.
Although the first chip will be available as an 80-NPU SoC on 28 nanometer transistors, the platform can be delivered in various sizes and can be etched on any available process node. In fact, BrainChip will sell its technology either as SoCs or as an IP license, the latter for third-party chipmakers to integrate the neuromorphic technology into their own designs. Akida also comes with a full development environment for programmers, including the TensorFlow and Keras tools for standard CNNs, as well as a Python environment to build native SNNs.
At this point, Levinson said they are more focused on AI edge applications in industrial sensors, like smart city IoT, and similarly specialized applications that aren’t amenable to off-the-shelf solutions. While they believe the technology would be eminently suitable for smartphone AI applications, it’s a much longer-term effort to gain entrance into that particular supply chain. At the other end of the market, Levinson believes Akida also has a place in edge servers, which are increasingly being deployed in manufacturing and telco environments.
Imec and GLOBALFOUNDRIES Announce Breakthrough in AI Chip, Bringing Deep Neural Network Calculations to IoT Edge Devices
Imec, a world-leading research and innovation hub in nanoelectronics and digital technologies, and GLOBALFOUNDRIES® (GF®), the world’s leading specialty foundry, announced in July 2020 a hardware demonstration of a new artificial intelligence chip. Based on imec’s Analog in Memory Computing (AiMC) architecture utilizing GF’s 22FDX® solution, the new chip is optimized to perform deep neural network calculations on in-memory computing hardware in the analog domain. Achieving record-high energy efficiency up to 2,900 TOPS/W, the accelerator is a key enabler for inference-on-the-edge for low-power devices. The privacy, security and latency benefits of this new technology will have an impact on AI applications in a wide range of edge devices, from smart speakers to self-driving vehicles.
Since the early days of the digital computer age, the processor has been separated from the memory. Operations performed using a large amount of data require a similarly large number of data elements to be retrieved from the memory storage. This limitation, known as the von Neumann bottleneck, can overshadow the actual computing time, especially in neural networks – which depend on large vector matrix multiplications. These computations are performed with the precision of a digital computer and require a significant amount of energy. However, neural networks can also achieve accurate results if the vector-matrix multiplications are performed with a lower precision on analog technology.
To address this challenge, imec and its industrial partners in imec’s industrial affiliation machine learning program, including GF, developed a new architecture which eliminates the von Neumann bottleneck by performing analog computation in SRAM cells. The resulting Analog Inference Accelerator (AnIA), built on GF’s 22FDX semiconductor platform, has exceptional energy efficiency. Characterization tests demonstrate power efficiency peaking at 2,900 tera operations per second per watt (TOPS/W). Pattern recognition in tiny sensors and low-power edge devices, which is typically powered by machine learning in data centers, can now be performed locally on this power-efficient accelerator.
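The core idea, performing the network's vector-matrix multiplications at reduced precision in the analog domain, can be illustrated with a small simulation. The Gaussian noise term below is a stand-in assumption for analog non-ideality, not a model of the actual AnIA circuit:

```python
import random

def analog_matvec(matrix, vector, noise=0.05, rng=None):
    # Simulate an analog in-memory vector-matrix multiply: each row's
    # accumulation picks up a small random error, standing in for
    # device variation and analog noise.
    rng = rng or random.Random(0)
    out = []
    for row in matrix:
        exact = sum(w * x for w, x in zip(row, vector))
        out.append(exact + rng.gauss(0.0, noise))
    return out

W = [[0.2, -0.5, 0.3],
     [0.7, 0.1, -0.4]]
x = [1.0, 0.5, -1.0]

exact = [sum(w * v for w, v in zip(row, x)) for row in W]
approx = analog_matvec(W, x)
print(exact)    # the digital, full-precision result
print(approx)   # the noisy "analog" result: close, but not identical
```

As the text notes, neural networks tolerate exactly this kind of small error, which is what makes trading precision for energy efficiency viable.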
“The successful tape-out of AnIA marks an important step forward toward validation of Analog in Memory Computing (AiMC),” said Diederik Verkest, program director for machine learning at imec. “The reference implementation not only shows that analog in-memory calculations are possible in practice, but also that they achieve an energy efficiency ten to a hundred times better than digital accelerators. In imec’s machine learning program, we tune existing and emerging memory devices to optimize them for analog in-memory computation. These promising results encourage us to further develop this technology, with the ambition to evolve towards 10,000 TOPS/W.”
“GlobalFoundries collaborated closely with imec to implement the new AnIA chip using our low-power, high-performance 22FDX platform,” said Hiren Majmudar, vice president of product management for computing and wired infrastructure at GF. “This test chip is a critical step forward in demonstrating to the industry how 22FDX can significantly reduce the power consumption of energy-intensive AI and machine learning applications.”
Looking ahead, GF will include AiMC as a feature able to be implemented on the 22FDX platform for a differentiated solution in the AI market space. GF’s 22FDX employs 22nm FD-SOI technology to deliver outstanding performance at extremely low power, with the ability to operate at 0.5 Volt ultralow power and at 1 pico amp per micron for ultralow standby leakage. 22FDX with the new AiMC feature is in development at GF’s state-of-the-art 300mm production line at Fab 1 in Dresden, Germany.
Global Neuromorphic Chip Market
The Neuromorphic Chip Market was valued at USD 22.5 million in 2020, and it is projected to be worth USD 333.6 million by 2026, registering a CAGR of 47.4% during the period of 2021-2026. Keeping pace with the advancement of disruptive technologies such as artificial intelligence (AI) and machine learning (ML), various embedded system providers are keen to develop brain chips that not only process quickly but also respond like human brains, allowing those systems to think and act in a human way.
A neuromorphic chip is a specific brain-inspired ASIC that implements spiking neural networks (SNNs). Its objective is to approach the brain’s massively parallel processing ability within tens of watts on average. The memory and the processing units are in a single abstraction (in-memory computing). This leads to the advantage of dynamic, self-programmable behavior in complex environments.
Neuromorphic chips can be designed in a digital, analog, or mixed fashion. Analog chips resemble the biological properties of neural networks better than digital ones. In the analog architecture, a few transistors are used to emulate the differential equations of neurons. Therefore, theoretically, they consume less energy than digital neuromorphic chips. They can also extend processing beyond its allocated time slot; thanks to this feature, they can be accelerated to process faster than real time. However, the analog architecture leads to higher noise, which lowers the precision.
Digital chips, on the other hand, are more precise than analog ones. Their digital structure also enhances on-chip programmability; this flexibility allows AI researchers to accurately implement many kinds of algorithms with low energy consumption compared to GPUs. Mixed-signal chips try to combine the advantages of both: the lower energy consumption of analog designs and the precision of digital ones.
Rising demand for artificial intelligence and machine learning is predicted to bolster market growth in the coming years. Additionally, the growing adoption of neuromorphic chips in the automotive, healthcare, and aerospace & defense industries boosts the market size, and increasing applications in voice identification, machine vision, and video monitoring further drive growth.
Neuromorphic architectures address challenges such as the high power consumption, low speed, and other efficiency bottlenecks of the von Neumann architecture. Unlike the abrupt binary highs and lows of traditional von Neumann encoding, neuromorphic chips compute with continuous, analog-like transitions in the form of spiking signals. By integrating storage and processing, neuromorphic architectures also eliminate the bus bottleneck between the CPU and memory.
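The "continuous transition in the form of spiking signals" can be sketched with simple rate coding, a common SNN input encoding in which a continuous intensity is translated into a spike train whose firing rate tracks it (encoding details vary by chip; this is only one common scheme):

```python
import numpy as np

def rate_encode(intensity: float, steps: int, rng) -> np.ndarray:
    """Encode a value in [0, 1] as a binary spike train whose firing rate tracks it."""
    return (rng.random(steps) < intensity).astype(int)

rng = np.random.default_rng(42)
dim_train = rate_encode(0.2, 1000, rng)     # low intensity  -> sparse spikes
bright_train = rate_encode(0.8, 1000, rng)  # high intensity -> dense spikes
print(dim_train.mean(), bright_train.mean())  # empirical rates near 0.2 and 0.8
```

Downstream neurons then see graded information purely through spike timing and density, rather than through wide binary buses.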
The Global Neuromorphic Chip Market is segmented by End-User Industry (Financial Services and Cybersecurity, Automotive, Industrial, Consumer Electronics), and Geography.
Neuromorphic chips are digitally processed analog chips whose networks resemble those of the human brain. These chips contain millions of neurons and synapses that allow them to learn on their own, rather than being limited to the pre-installed code of conventional chips, and they are highly capable of manipulating data received through sensors.
Automotive is the Fastest-Growing Industry to Adopt the Neuromorphic Chip
The automotive industry is one of the fastest-growing industries for neuromorphic chips. All the premium car manufacturers are investing heavily to achieve Level 5 vehicle autonomy, which, in turn, is anticipated to generate huge demand for AI-powered neuromorphic chips.
The autonomous driving market requires constant improvement in AI algorithms to deliver high throughput with low power requirements. Neuromorphic chips are ideal for classification tasks and could be utilized for several scenarios in autonomous driving. Compared with static deep learning solutions, they are also more efficient in noisy environments such as those self-driving vehicles encounter.
According to Intel, an autonomous car may generate an estimated four terabytes of data over roughly an hour and a half of driving, the amount of time an average person spends in their car each day. Efficiently managing all the data generated during these trips is a significant challenge for autonomous vehicles.
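Intel's figure works out to a sustained data rate on the order of 0.7 GB/s, as a quick back-of-the-envelope calculation shows (decimal terabytes assumed):

```python
data_bytes = 4e12          # 4 TB generated per trip (decimal terabytes assumed)
trip_seconds = 1.5 * 3600  # roughly an hour and a half of driving
rate_gb_s = data_bytes / trip_seconds / 1e9
print(f"{rate_gb_s:.2f} GB/s sustained")  # about 0.74 GB/s
```

That is comparable to continuously saturating a fast SSD, which is why on-chip, low-power processing of sensor data is so attractive.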
The computers running the latest self-driving cars are effectively small supercomputers. Companies such as Nvidia aim to achieve Level 5 autonomous driving in 2022, delivering 200 TOPS (trillions of operations per second) using 750 W of power. However, drawing 750 W continuously for processing is poised to have a noticeable impact on the driving range of electric vehicles.
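Those figures imply an efficiency of roughly 0.27 TOPS per watt, and the energy cost can be put in perspective against an EV battery. The 75 kWh pack size below is an illustrative assumption, not a figure from the source:

```python
tops, power_w = 200.0, 750.0
efficiency = tops / power_w             # TOPS per watt of the compute platform
energy_per_hour_kwh = power_w / 1000.0  # 0.75 kWh drawn per hour of driving
battery_kwh = 75.0                      # assumed EV pack size (illustrative)
share = energy_per_hour_kwh / battery_kwh
print(f"{efficiency:.2f} TOPS/W, {share:.1%} of the pack per hour of compute")
```

Neuromorphic designs target the same workloads at orders of magnitude lower power, which is the efficiency argument behind their automotive appeal.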
Among the various automotive applications of neuromorphic chips, ADAS (Advanced Driver Assistance System) applications include image learning and recognition functions. These work like conventional ADAS functions, such as cruise control or intelligent speed assist in passenger cars: the chip can control vehicle speed by recognizing the traffic information marked on roads, such as crosswalks, school zones, and road bumps.
North America is Expected to Hold Major Share over the Forecast Period
- North America is home to some of the major market vendors, such as Intel Corporation and IBM Corporation.
- The market for neuromorphic chips is growing in the region due to factors such as government initiatives and investment activities. For instance, in September 2020, the Department of Energy (DOE) announced USD 2 million in funding for five basic research projects to advance neuromorphic computing. The DOE initiative supports the development of both hardware and software for brain-inspired neuromorphic computing.
- The miniaturization of neuromorphic chips for use in different applications is also contributing to the market’s growth. For instance, in June 2020, MIT engineers designed a brain-on-a-chip smaller than a piece of confetti, made from tens of thousands of artificial brain synapses known as memristors, silicon-based components that mimic the information-transmitting synapses of the human brain. Such chips could be utilized in small, portable AI devices.
- The government of Canada is also focusing on artificial intelligence technology, which will create scope for growth in neuromorphic computing over the coming years. For instance, in June 2020, the governments of Canada and Quebec partnered to advance the responsible development of AI, focusing on themes such as the future of work and innovation, commercialization, data governance, and reliable AI.
- Big investments into research and development activities through partnerships are being witnessed in the region. For instance, in October 2020, Sandia National Laboratories, one of three National Nuclear Security Administration research and development laboratories in the United States, partnered with Intel to explore the value of neuromorphic computing for scaled-up computational problems.
- The penetration of neural-based chipsets into commercial applications is also propelling the growth of the market. For instance, in November 2020, Apple, one of the biggest technology companies, launched its M1 chip, designed specifically for its Mac products. The M1 brings the Apple Neural Engine to the Mac and accelerates machine learning tasks: its 16-core architecture can perform 11 trillion operations per second, enabling up to 15x faster ML performance.
Dominant key players in the Neuromorphic Chip market include: Samsung Electronics Limited; Hewlett Packard Enterprise; Intel Corp.; General Vision Inc.; HRL Laboratories, LLC; Applied Brain Research Inc.; aiCTX AG; Brainchip Holdings Ltd.; Qualcomm; Nepes Corp.; IBM; Innatera; INSTAR Robotics; MemComputing; Koniku; and Ceryx Medical.
References and Resources also include: