
Revolutionizing Real-Time Machine Learning: The Rise of Dedicated AI Chips and Neuromorphic Computing

Introduction:

Artificial Intelligence (AI) has been a transformative force, aiming to endow computers and robots with capabilities that rival, and in some tasks surpass, human performance. Tasks such as learning, reasoning, decision-making, creativity, knowledge extraction, and data prediction have been the focal points of AI development. Within the vast field of AI, machine learning (ML) has emerged as a crucial subfield, striving to enable computers to learn from data without explicit programming.

In the ever-evolving landscape of AI and ML, the quest for faster, more efficient processing has led to groundbreaking innovations. One of the most promising developments is the emergence of dedicated AI chips and adaptive neuromorphic computing. These technologies are poised to redefine real-time machine learning, offering unprecedented speed, efficiency, and capabilities that could reshape industries and applications across the board.

Understanding the Need:

Machine learning, within AI, relies on algorithms such as neural networks, support vector machines, and reinforcement learning. These algorithms learn from data through a training process, allowing computers to make predictions about new data based on previous examples.

For training, these algorithms typically require data sets covering hundreds or even thousands of relevant features. For this reason, painstaking selection, extraction, and curation of feature sets is often required before learning can begin.

To address the challenges of processing vast datasets for machine learning, scientists turned to brain-inspired computation. The main computational element of the brain is the neuron; there are approximately 86 billion neurons in the average human brain. Neurons are connected to one another through a number of input elements called dendrites and a single output element called an axon. A neuron accepts the signals entering it via its dendrites, performs a computation on those signals, and generates a signal on its axon. These input and output signals are referred to as activations. The axon of one neuron branches out and connects to the dendrites of many other neurons; the connection between a branch of an axon and a dendrite is called a synapse. There are estimated to be 10^14 to 10^15 synapses in the average human brain.

Neural networks, also known as artificial neural networks (ANNs), mimic the brain’s structure and are typically organized into layers: an input layer, one or more hidden layers, and an output layer. Each layer contains a large number of processing nodes, or artificial neurons, and each node has an associated weight and threshold. Data enter at the input layer, are divided among its nodes, and propagate to the neurons in the middle layers of the network, which are frequently called “hidden layers.” Each node manipulates the data it receives and passes the result on to the nodes in the next layer, which do the same, and so on. The weighted sums from the hidden layers are ultimately propagated to the output layer, which presents the final outputs of the network to the user; the output of the final layer yields the solution to the computational problem at hand.
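To make the layered structure concrete, the sketch below is a minimal illustration (not any production framework): data propagate through an input layer, one hidden layer, and an output layer as weighted sums followed by a nonlinearity. The layer sizes and random weights are arbitrary placeholders chosen only for the example.

```python
import numpy as np

def forward_pass(x, weights, biases):
    """Propagate an input vector through a small fully connected network.

    Each layer computes a weighted sum of its inputs plus a bias, then
    applies a nonlinearity (here a sigmoid) -- the software counterpart
    of a neuron firing once its threshold is exceeded.
    """
    activation = x
    for W, b in zip(weights, biases):
        z = W @ activation + b                   # weighted sum of incoming activations
        activation = 1.0 / (1.0 + np.exp(-z))    # sigmoid "firing" nonlinearity
    return activation

# Toy network: 4 inputs -> 3 hidden neurons -> 2 outputs (placeholder random weights).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
biases = [np.zeros(3), np.zeros(2)]
print(forward_pass(np.array([0.5, -1.0, 0.25, 0.0]), weights, biases))
```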

Within this domain, deep learning, involving neural networks with more than three layers, has gained prominence. Deep neural networks (DNNs) are capable of learning high-level features with more complexity and abstraction than shallower networks. DNNs also address a key limitation of classical machine learning: they learn not only classifications but also the relevant features themselves. This capability allows deep learning systems to be trained on relatively unprocessed data (e.g., image, video, or audio data) rather than on hand-engineered feature sets.

To do this, deep learning requires massive training sets that may be an order of magnitude larger than those needed for other machine learning algorithms. When such data are available, deep learning systems typically perform significantly better than other methods. Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.

A grand challenge in computing is the creation of machines that can proactively interpret and learn from data in real time, solve unfamiliar problems using what they have learned, and operate with the energy efficiency of the human brain.

Dedicated AI Chips:

Traditional computing architectures, while powerful, often struggle to meet the demands of real-time machine learning applications.  As the complexity and scale of ML models continue to grow, there is a pressing need for specialized hardware that can handle the intensive computations required by these algorithms. Dedicated AI chips and neuromorphic computing aim to address this gap by providing tailor-made solutions for accelerated and adaptive processing.

Dedicated AI chips, also known as AI accelerators or inference engines, are purpose-built processors designed specifically for executing machine learning tasks. Cloud-based AI chips, such as Nvidia’s GPUs and Google’s TPUs, excel in high-performance tasks, while edge-based or mobile AI chips prioritize energy efficiency, response time, cost, and privacy.

Unlike general-purpose CPUs or GPUs, these chips are optimized to perform the matrix multiplication and parallel processing inherent in many ML algorithms. This specialization results in significantly faster inference times, making them ideal for real-time applications such as image recognition, natural language processing, and autonomous vehicles.
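At the heart of that workload is dense matrix multiplication. The naive reference kernel below (a sketch for illustration only) spells out the multiply-accumulate operations that dedicated AI chips implement in massively parallel hardware; general-purpose processors execute them largely sequentially, which is where the speed-up comes from.

```python
import numpy as np

def matmul_naive(A, B):
    """Reference triple-loop matrix multiply: C[i, j] = sum_k A[i, k] * B[k, j].

    Every inner iteration is an independent multiply-accumulate, which is why
    the operation maps so well onto the parallel arithmetic units of an AI chip.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]
    return C

A = np.random.rand(8, 16)
B = np.random.rand(16, 4)
assert np.allclose(matmul_naive(A, B), A @ B)  # matches the vectorized library call
```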

While dedicated AI chips and deep learning systems offer unprecedented capabilities, challenges persist. The need for massive training sets, high computational resources, and energy consumption are notable hurdles. AI chip development is constrained by two bottlenecks. The Von Neumann bottleneck refers to the significant latency and energy overhead incurred when Von-Neumann-based chips shuttle massive amounts of data between memory and the processing units, a growing problem as the data used in AI applications has increased by orders of magnitude. The other bottleneck involves CMOS processes and devices: Moore’s Law is losing its pace, and further dimensional scaling of silicon CMOS is expected to deliver diminishing returns.

Overcoming Bottlenecks with the Neuromorphic Advantage

Neuromorphic computing takes inspiration from the human brain’s architecture, seeking to replicate its networks of neurons and synapses. To overcome the Von Neumann and CMOS constraints described above, researchers are exploring non-von Neumann architectures and innovative memory solutions, including neuromorphic chips based on silicon photonics and memristors. With their unique architecture and co-location of memory and processing, neuromorphic chips offer a promising path to greater efficiency and real-time learning.

The human brain, with its approximately 86 billion neurons and 10^14 to 10^15 synapses, served as a model for neuromorphic computing. These brain-like systems are inherently adaptive and excel at tasks like pattern recognition and sensory processing. By mimicking the brain’s structure, neuromorphic chips offer a unique approach to machine learning that is highly efficient and well-suited for real-time applications. This adaptability enables them to learn from experience, making them particularly potent for tasks that require continuous learning and improvement.

Within the brain-inspired computing paradigm there is a subarea called spiking computing. Here, inspiration is taken from the fact that communication along dendrites and axons takes the form of spike-like pulses, and that the information conveyed depends not just on a spike’s amplitude but also on when the pulse arrives. The computation that happens in the neuron is therefore a function not of a single value but of the pulse widths and the timing relationships between different pulses.

Spiking neural networks can be implemented in software on traditional processors, but they can also be implemented directly in hardware, as Intel is doing with Loihi. Another project inspired by the brain’s spiking behavior is IBM’s TrueNorth. Neuromorphic systems also introduce a new chip architecture that co-locates memory and processing within each individual neuron, instead of keeping them in separate designated areas.
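As a concrete, software-only illustration of the spiking idea (a minimal sketch, not the neuron model actually used by Loihi or TrueNorth), the leaky integrate-and-fire neuron below accumulates incoming current over time and emits a spike whenever its membrane potential crosses a threshold, so information is carried by spike timing rather than by a continuous activation value.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential decays (leaks) each timestep, integrates the
    incoming current, and resets after emitting a spike -- so the output is
    a train of discrete spike times rather than a single numeric activation.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current   # leak, then integrate input
        if potential >= threshold:               # threshold crossing -> spike
            spikes.append(t)
            potential = 0.0                      # reset after firing
    return spikes

# A noisy input drive; the neuron fires only when enough input arrives close together.
rng = np.random.default_rng(1)
drive = rng.uniform(0.0, 0.4, size=100)
print("spike times:", lif_neuron(drive))
```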

Real-Time Machine Learning Applications:

The integration of dedicated AI chips and neuromorphic computing opens the door to a multitude of real-time machine learning applications:

  1. Autonomous Systems: In industries such as robotics and autonomous vehicles, the ability to process vast amounts of data in real time is crucial. Dedicated AI chips facilitate quick decision-making, enhancing the safety and efficiency of autonomous systems.
  2. Healthcare Diagnostics: Neuromorphic computing, with its ability to recognize complex patterns, holds great promise in healthcare diagnostics. Real-time analysis of medical imaging data and patient monitoring can significantly improve diagnostic accuracy and speed.
  3. Edge Computing: The deployment of AI at the edge, closer to the data source, is increasingly important. Dedicated AI chips enable efficient processing on devices with limited resources, making real-time decision-making possible without relying on centralized servers.
  4. Natural Language Processing: Applications such as virtual assistants and language translation benefit from the speed and efficiency of dedicated AI chips. Users experience quicker response times and more accurate language understanding.

AI chips will also enable military applications. The Air Force Research Laboratory (AFRL) reports good results from using a “neuromorphic” chip made by IBM to identify military and civilian vehicles in radar-generated aerial imagery. The unconventional chip got the job done about as accurately as a regular high-powered computer while using less than a twentieth of the energy. AFRL awarded IBM a contract worth $550,000 in 2014 to become the first paying customer of its brain-inspired TrueNorth chip, which processes data using a network of one million elements designed to mimic the neurons of a mammalian brain, connected by 256 million “synapses.”

The enhanced power efficiency of neuromorphic chips allows advanced machine vision, which usually requires a lot of computing power, to be deployed in places where resources and space are limited. Satellites, high-altitude aircraft, air bases reliant on generators, and small drones could all benefit, says AFRL principal electronics engineer Qing Wu. “Air Force mission domains are air, space, and cyberspace. [All are] very sensitive to power constraints,” he says.

 

Evolution of Neuromorphic Chips

In 2014, scientists at IBM Research unveiled an advanced neuromorphic (brain-like) computer chip, called TrueNorth, consisting of 1 million programmable neurons and 256 million programmable synapses, comparable to the brain of a honey bee, which contains roughly 960,000 neurons and ~10^9 synapses. The chip was built on Samsung’s standard 28nm CMOS process, contains 5.4 billion transistors, and comprises 4,096 neurosynaptic cores interconnected via an intra-chip network.

“The [TrueNorth] chip consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt – literally a synaptic supercomputer in your palm,” noted Dharmendra Modha, who leads development of IBM’s brain-inspired chips. “A hypothetical computer to run [a human-scale] simulation in real-time would require 12GW, whereas the human brain consumes merely 20W.”

In July 2019, Intel unveiled its groundbreaking neuromorphic development system, Pohoiki Beach, marking a significant leap in artificial intelligence (AI) technology. This revolutionary system harnesses the power of 64 Loihi research chips to create an 8-million-neuron neuromorphic network, mirroring human brain functionality. By extending AI into realms resembling human cognition, such as interpretation and autonomous adaptation, Intel aims to address the limitations of conventional neural network approaches in AI solutions.

One crucial application highlighted by Intel is in self-driving vehicles, where the Pohoiki Beach chip could play a pivotal role. Unlike current semiconductors used in autonomous cars, which rely on predefined routes and speed control, the new AI chip allows vehicles to recognize and respond dynamically to their surroundings, enhancing adaptability. The goal is to overcome the ‘brittleness’ of existing AI solutions, which lack contextual understanding and struggle with unpredictable scenarios, a critical requirement for the advancement of self-driving technology.

To further enhance self-driving capabilities, Intel emphasizes the need to incorporate human-like experiences into AI systems, such as dealing with aggressive drivers or responding to unexpected events like a ball rolling onto the street. The decision-making processes in such scenarios rely on the chip’s perception and understanding of the environment, requiring an awareness of uncertainty inherent in these tasks.

Intel’s Pohoiki Beach system demonstrates remarkable performance improvements over traditional central processing units (CPUs) for AI workloads, operating up to 1,000 times faster and 10,000 times more efficiently and showcasing its potential in image recognition, autonomous vehicles, and automated robots. Comprising 64 Loihi chips that together act as roughly 8.3 million neurons, Pohoiki Beach sets the stage for further advancements in neuromorphic research, with plans to scale up to 100 million neurons.

Loihi, Intel’s fifth-generation neuromorphic chip, serves as the building block for Pohoiki Beach. With 128 cores and approximately 131,000 computational neurons, Loihi employs an asynchronous spiking neural network to execute adaptive self-modifying parallel computations efficiently. The chip’s unique programmable microcode learning engine facilitates on-chip training, offering superior performance in specialized applications like sparse coding and constraint-satisfaction problems.
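Loihi’s learning engine is programmable microcode, so its exact rules are chip-specific; the sketch below is only a generic spike-timing-dependent plasticity (STDP) style update of the kind commonly used in neuromorphic research (an illustrative assumption, not Loihi’s actual implementation). A synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise, with the effect decaying as the spikes get further apart in time.

```python
import math

def stdp_update(weight, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Generic STDP-style weight update (illustrative only, not Loihi microcode).

    If the presynaptic spike arrives before the postsynaptic spike (t_pre < t_post)
    the synapse is potentiated; if it arrives after, the synapse is depressed.
    The magnitude decays exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:                                  # pre before post -> strengthen
        weight += a_plus * math.exp(-dt / tau)
    elif dt < 0:                                # post before pre -> weaken
        weight -= a_minus * math.exp(dt / tau)
    return max(0.0, min(1.0, weight))           # keep the weight in a bounded range

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=14.0)     # causal pairing -> weight increases
print(w)
```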

Intel’s commitment to advancing neuromorphic computing is evident in the development of Pohoiki Springs, a system that scales up the Loihi chip to over 100 million neurons. By incorporating 768 Loihi chips on Nahuku boards, Pohoiki Springs represents a major milestone in neuromorphic research, operating at a power level of under 500 watts. This scalable architecture opens avenues for accelerating workloads that conventionally run slowly on traditional architectures.

Collaborating with Cornell University, Intel tested Loihi’s capabilities in recognizing smells, leveraging a neural algorithm to mimic the brain’s olfactory circuits. The chip demonstrated efficient learning, surpassing a deep learning solution that required thousands of times more training samples. Such applications highlight the versatility and energy efficiency of Intel’s neuromorphic chip technology.

In parallel, Chinese scientists have made significant strides in neuromorphic computing with the development of the “Darwin” chip. Created by researchers from Hangzhou Dianzi University and Zhejiang University, this chip simulates human brain neural networks, boasting 2,048 neurons and four million synapses. Its ability to process “fuzzy information” gives it a unique edge, enabling tasks like recognizing handwritten numbers and interpreting diverse images. While still in the early stages of development, the Darwin chip holds promise for applications in robotics, intelligent hardware systems, and brain-computer interfaces.

China’s semiconductor industry, as highlighted by the “Thinker” chip, is positioning itself as a formidable contender in the AI hardware landscape. The “Thinker” chip’s adaptability to varying neural network requirements distinguishes it, showcasing China’s rapid response to the evolving AI trends. With a three-year plan to mass-produce neural-network processing chips, China aims to establish itself as a key player in the AI hardware market.

Furthermore, Chinese researchers have unveiled a hybrid chip architecture, Tianjic, signaling a step closer to achieving artificial general intelligence (AGI). By integrating computer science and neuroscience approaches, Tianjic demonstrates the potential for a single computing platform to run diverse machine-learning algorithms and reconfigurable building blocks. This chip’s versatility positions it as a potential catalyst for AGI, with applications across industries such as autonomous driving, robotics, and automation.

China’s commitment to AI development is evident in its State Council’s roadmap, aiming to create a domestic AI industry worth 1 trillion yuan by 2030. As it seeks to close the gap with the U.S., China’s semiconductor industry is embracing the challenge of commercializing chip designs, scaling up, and navigating the transformative landscape of AI-driven computing.

IBM’s Scaled Precision Breakthrough: IBM’s recent breakthrough in AI chip design, featuring scaled precision for both training and inference at 32, 16, and even 1 or 2 bits, marks a significant milestone. The chip’s customized data flow system and the use of a specially designed scratch pad memory tackle traditional bottlenecks, achieving 90% utilization. This innovation allows the chip to perform convolutional neural networks (CNN), multilayer perceptrons (MLP), and long short-term memory (LSTM) tasks efficiently.
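The gains from scaled precision come from representing weights and activations with far fewer bits. The snippet below is a hedged illustration of simple symmetric uniform quantization (not IBM’s actual scheme): it maps 32-bit floating-point weights onto a small integer grid and back, trading a little accuracy for a large reduction in memory footprint and arithmetic cost.

```python
import numpy as np

def quantize_dequantize(weights, num_bits=8):
    """Symmetric uniform quantization of a float32 tensor (illustration only).

    The tensor is scaled onto an integer grid, rounded, then mapped back to
    floats.  Fewer bits mean a coarser grid: a smaller memory footprint and
    cheaper arithmetic, at some accuracy cost.
    """
    max_level = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / max_level
    q = np.clip(np.round(weights / scale), -max_level, max_level)
    return q * scale                               # dequantized approximation

w = np.random.randn(4, 4).astype(np.float32)
for bits in (16, 8, 2):
    err = np.abs(w - quantize_dequantize(w, bits)).max()
    print(f"{bits}-bit max error: {err:.4f}")
```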

Challenges and Future Developments:

While dedicated AI chips and neuromorphic computing offer tremendous potential, challenges remain. Designing and implementing these specialized architectures require expertise, and standardization is an ongoing concern. Additionally, ethical considerations surrounding the use of AI in real-time decision-making must be addressed.

Looking ahead, ongoing research and development will likely lead to even more advanced iterations of dedicated AI chips and neuromorphic computing. As these technologies mature, we can expect to see broader integration into various sectors, unlocking new possibilities for real-time machine learning applications.

IBM’s NorthPole Chip: Revolutionizing AI at the Edge

IBM’s NorthPole chip stands as a revolutionary force in AI, introducing a paradigm shift in the relationship between processing power and memory. With 22 billion transistors and 256 cores, this groundbreaking chip transforms the landscape of AI inference by integrating processing elements with embedded memory on a single chip. Unlike traditional models with distant memory banks, NorthPole establishes an “intimate relationship” between compute and memory, eliminating the need for off-chip data transfers. This not only enhances speed but also propels the chip into an active memory role, opening doors for real-time data manipulation.

The chip’s architecture draws inspiration from the human brain, adopting a neuromorphic design with 256 neural processing units interconnected through a mesh network. Each processing unit is tightly coupled with embedded memory, significantly reducing latency and enabling efficient data manipulation. NorthPole’s active memory isn’t merely storage; it actively engages in computation, pushing the boundaries of in-memory processing.
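A rough software analogy to this co-location of compute and memory (an illustrative sketch only, not NorthPole’s actual dataflow) is blocked matrix multiplication: by working on small tiles that fit in fast local storage and reusing each tile for many multiply-accumulates, the algorithm minimizes trips to distant memory, which is the same principle the chip bakes into its hardware.

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    """Blocked matrix multiply that reuses small tiles of A and B.

    Each tile is loaded once and reused for many multiply-accumulates,
    reducing traffic to "far away" memory -- a software analogy to keeping
    weights resident next to the compute units on chip.
    """
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # The three small blocks below stand in for data held in local memory.
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A, B = np.random.rand(96, 64), np.random.rand(64, 128)
assert np.allclose(matmul_tiled(A, B), A @ B)
```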

In terms of performance, NorthPole outshines existing AI chips, offering up to 20 times faster processing for inference tasks. Its energy efficiency is noteworthy, driven by reduced data movement and the low-power design of the neural processing units. The chip’s scalability is a key feature, allowing for performance enhancement by adding more processing units and memory blocks.

NorthPole’s capabilities extend to handling lower-precision data formats, maintaining accuracy while reducing memory footprint and energy consumption. Tailored for real-time processing, it finds ideal applications in edge scenarios like autonomous vehicles and robotics, where low latency is critical. The chip provides programming flexibility, supporting standard deep learning frameworks and custom languages, empowering developers with greater control.

Despite its transformative potential, NorthPole is still under development, refining its manufacturability and cost-effectiveness. While compatible with existing frameworks, optimizing software for NorthPole’s architecture presents challenges. Looking ahead, NorthPole’s edge-friendly features hold immense potential for industries like healthcare, manufacturing, and transportation. Its active memory architecture opens avenues for novel applications in neuromorphic computing and artificial creativity, marking a significant stride towards a future where intelligent devices seamlessly interact with the world.

Conclusion:

From dedicated AI chips to next-generation neuromorphic solutions, researchers and companies are pushing the boundaries of what’s possible in AI applications. These technologies not only address the computational challenges posed by increasingly complex ML models but also open doors to innovative applications that were once deemed impractical.

As the quest for energy-efficient, high-performance chips continues, the future holds exciting possibilities for AI-driven innovations across industries. As industries embrace these advancements, we are on the cusp of a new era where machines can learn, adapt, and make decisions in real time, ushering in a wave of transformative possibilities.

