
Brain-inspired Hyperdimensional Computing for extremely robust brain-computer interfaces, biosignal processing and robotics

From an engineering perspective, computing is the systematized and mechanized manipulation of patterns. A representation is a pattern in some physical medium, for example, the configuration of ONs and OFFs on a set of switches. The algorithm then tells us how to change these patterns: how to set the switches from one moment to the next based on their previous settings. Computing is the transformation of representations by algorithms that can be described by rules.

 

Modern computer architecture, known as the von Neumann architecture, is now more than 70 years old. It is based on the simple idea that data and the instructions for manipulating the data are entities of the same kind: both can be processed and stored as data in a single uniform memory. The phenomenal success of this architecture has made computers a ubiquitous part of our lives. However, building computers that work at all like brains likely requires a brainlike architecture.

 

The brain’s circuits are massive in terms of numbers of neurons and synapses, suggesting that large circuits are fundamental to the brain’s computing. Computing with 10,000-bit words takes us into the realm of very high-dimensional spaces and vectors; we will call them hyperdimensional when the dimensionality is in the thousands, and we will use hyperspace as shorthand for hyperdimensional space, and similarly hypervector.

 

The way the brain works suggests that, rather than computing with the numbers we are used to, it is more efficient to compute with high-dimensional (HD) vectors of, e.g., 10,000 bits. Computing with HD vectors, referred to as “hypervectors,” offers a general and scalable model of computing as well as a well-defined set of arithmetic operations that can enable fast, one-shot learning (with no need for back-propagation as in neural networks).
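
To make these arithmetic operations concrete, here is a minimal sketch, assuming dense binary hypervectors and NumPy; it illustrates the generic operations (binding, bundling, permutation, similarity) rather than any particular published implementation, and all function names are chosen here for convenience.

```python
# Minimal hypervector arithmetic sketch (illustrative; names chosen for this example).
import numpy as np

D = 10_000                              # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """Random dense binary hypervector: each bit is 0 or 1 with equal probability."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binding (bitwise XOR): associates two hypervectors; the result is dissimilar to both inputs."""
    return a ^ b

def bundle(hvs):
    """Bundling (bitwise majority): superposes hypervectors; the result stays similar to each input."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def permute(a, k=1):
    """Permutation (cyclic shift): encodes order or sequence information."""
    return np.roll(a, k)

def similarity(a, b):
    """1 minus normalized Hamming distance: ~0.5 for unrelated hypervectors, 1.0 for identical ones."""
    return 1.0 - np.count_nonzero(a != b) / D

x, y = random_hv(), random_hv()
print(similarity(x, y))                  # ~0.5: random hypervectors are quasi-orthogonal
print(similarity(bind(x, y) ^ y, x))     # 1.0: XOR binding is its own inverse
```

Because binding is invertible and bundling preserves similarity, a single pass over the data is enough to build and query such representations, which is what makes the fast, one-shot learning mentioned above possible.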

 

HD architectures represent things as high-dimensional vectors that are manipulated by operations producing new high-dimensional vectors, in the style of traditional computing; this is what is called here hyperdimensional computing, on account of the very high dimensionality.

Hyperdimensional Computing Applications

Hyperdimensional computing is memory-centric, with embarrassingly parallel operations, and is extremely robust against most failure mechanisms and noise. Such generality, robustness against data uncertainty, and one-shot learning make HD computing a prime candidate for application domains such as brain-computer interfaces, biosignal processing (e.g., EEG/ECoG/EMG), robotics, voice/video classification, language recognition, text categorization, scene reasoning, analogy-based reasoning, and more.
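
As a hedged illustration of one item from this list, language recognition, the sketch below encodes text as a bundle of letter trigrams built with permutation and binding; the tiny training strings and the encode_text helper are hypothetical stand-ins for the large per-language corpora a real system would use.

```python
# Illustrative language-recognition sketch with letter-trigram hypervectors (toy data).
import numpy as np

D = 10_000
rng = np.random.default_rng(1)
letter_hv = {c: rng.integers(0, 2, size=D, dtype=np.uint8) for c in "abcdefghijklmnopqrstuvwxyz "}

def encode_text(text):
    """Bundle all letter trigrams; rotations (np.roll) encode each letter's position in the trigram."""
    grams = []
    for i in range(len(text) - 2):
        a, b, c = (letter_hv.get(ch, letter_hv[" "]) for ch in text[i:i + 3].lower())
        grams.append(np.roll(a, 2) ^ np.roll(b, 1) ^ c)
    grams = np.array(grams)
    return (grams.sum(axis=0) > len(grams) / 2).astype(np.uint8)

# One "profile" hypervector per language, built from a (toy) sample text.
profiles = {"en": encode_text("the quick brown fox jumps over the lazy dog"),
            "de": encode_text("der schnelle braune fuchs springt ueber den faulen hund")}

query = encode_text("the dog jumps")
print(min(profiles, key=lambda lang: np.count_nonzero(profiles[lang] != query)))  # likely "en"
```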

 

The most important aspect of HD computing for hardware realization is its robustness against noise and variations in the computing platform. The principles of HD computing make it possible to implement resilient controllers and state machines for extremely noisy conditions. Tolerance to faulty components and low signal-to-noise ratio (SNR) conditions is achieved through the brain-inspired properties of hypervectors: (pseudo)randomness, high dimensionality, and fully distributed holographic representations.
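
The small self-contained experiment below, an illustrative sketch rather than code from any cited hardware work, shows where this tolerance comes from: even after a large fraction of a hypervector's bits are flipped, the nearest stored item is still recovered with high probability.

```python
# Noise-tolerance sketch: corrupt a stored hypervector heavily, then recover it by nearest match.
import numpy as np

D = 10_000
rng = np.random.default_rng(2)

# Item memory: a few stored hypervectors the system must recognize.
memory = {name: rng.integers(0, 2, size=D, dtype=np.uint8) for name in ["A", "B", "C"]}

def corrupt(hv, bit_error_rate):
    """Flip a random fraction of bits, simulating faulty components or low-SNR operation."""
    flips = (rng.random(D) < bit_error_rate).astype(np.uint8)
    return hv ^ flips

def nearest(query):
    """Return the stored item with the smallest Hamming distance to the query."""
    return min(memory, key=lambda k: np.count_nonzero(memory[k] != query))

noisy = corrupt(memory["B"], bit_error_rate=0.30)   # 30% of the 10,000 bits flipped
print(nearest(noisy))                               # still "B" with overwhelming probability
```

Because the information is spread holographically over all 10,000 bits, no individual bit is critical, which is exactly the property exploited for fault-tolerant hardware.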

 

Noninvasive brain–computer interfaces and neuroprostheses aim to provide a communication and control channel based on the recognition of the subject’s intentions from spatiotemporal neural activity, typically recorded by EEG electrodes. What makes this particularly challenging is its susceptibility to errors in the recognition of human intentions over time. Researchers are developing an efficient, fast-learning method based on HD computing that replaces traditional signal processing and classification methods by operating directly on raw electrode data in an online fashion.
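
A highly simplified sketch of that encode-and-compare idea follows; the channel and level hypervectors, the quantization into amplitude bins, and the toy training windows are assumptions made here for illustration, and published biosignal pipelines use more elaborate encoders.

```python
# Toy HD classifier for multichannel biosignal windows (illustrative assumptions throughout).
import numpy as np

D, N_CHANNELS, N_LEVELS = 10_000, 16, 8
rng = np.random.default_rng(3)

channel_hv = rng.integers(0, 2, size=(N_CHANNELS, D), dtype=np.uint8)  # one random ID per electrode
level_hv = rng.integers(0, 2, size=(N_LEVELS, D), dtype=np.uint8)      # one random HV per amplitude bin

def encode(window):
    """Map raw samples of shape (N_CHANNELS, T) to a single D-bit hypervector."""
    # Quantize each sample into one of N_LEVELS amplitude bins.
    scaled = (window - window.min()) / (np.ptp(window) + 1e-9) * N_LEVELS
    levels = np.clip(scaled.astype(int), 0, N_LEVELS - 1)
    # Bind each channel ID with its level hypervector, then bundle across channels and time.
    bound = channel_hv[:, None, :] ^ level_hv[levels]                  # shape (channels, T, D)
    votes = bound.reshape(-1, D).sum(axis=0)
    return (votes > bound.shape[0] * bound.shape[1] / 2).astype(np.uint8)

# One-shot "training": a single labeled window per class becomes that class's prototype.
prototypes = {label: encode(win)
              for label, win in [("left", rng.standard_normal((N_CHANNELS, 64))),
                                 ("right", rng.standard_normal((N_CHANNELS, 64)))]}

def classify(window):
    """Nearest-prototype classification by Hamming distance."""
    q = encode(window)
    return min(prototypes, key=lambda c: np.count_nonzero(prototypes[c] != q))

print(classify(rng.standard_normal((N_CHANNELS, 64))))  # label of the nearest prototype
```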

 

Robot learning from demonstration is a paradigm for enabling robots to autonomously perform new tasks. HD computing is a natural fit in this area, since it makes it straightforward to model the relation between a robot’s sensory inputs and actuator outputs by learning from a few demonstrations.
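
The sketch below illustrates that idea under simple assumptions (discrete sensed situations and actions with invented names such as obstacle_left); it is not the method of any specific paper. Each demonstration binds a percept to the demonstrated action, the bindings are bundled into one "program" hypervector, and an action is later recalled by unbinding the current percept.

```python
# Learning-from-demonstration sketch: store sensor-action pairs in one hypervector (toy example).
import numpy as np

D = 10_000
rng = np.random.default_rng(4)

def hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

# Hypothetical discrete percepts and actuator commands.
sensors = {s: hv() for s in ["obstacle_left", "obstacle_right", "clear"]}
actions = {a: hv() for a in ["turn_right", "turn_left", "go_forward"]}

# A few demonstrations pair a sensed situation with the action shown by the teacher.
demos = [("obstacle_left", "turn_right"), ("obstacle_right", "turn_left"), ("clear", "go_forward")]

# "Program" hypervector: majority bundle of the XOR bindings of each sensor-action pair.
pairs = np.array([sensors[s] ^ actions[a] for s, a in demos])
program = (pairs.sum(axis=0) > len(pairs) / 2).astype(np.uint8)

def recall(sensed):
    """Unbind the current percept from the program and return the nearest known action."""
    noisy_action = program ^ sensors[sensed]
    return min(actions, key=lambda a: np.count_nonzero(actions[a] != noisy_action))

print(recall("obstacle_left"))   # expected: "turn_right"
```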

 

Hyperdimensional Computing Theory Could Change The Way AI Works

Integration is the most important challenge facing the robotics field. A robot’s sensors and the actuators that move it are separate systems, linked together by a central learning mechanism that infers a needed action given sensor data, or vice versa.

 

The cumbersome three-part AI system, with each part speaking its own language, is a slow way to get robots to accomplish sensorimotor tasks. The next step in robotics will be to integrate a robot’s perceptions with its motor capabilities. This fusion, known as “active perception,” would provide a more efficient and faster way for the robot to complete tasks.

 

A paper by University of Maryland researchers just published in the journal Science Robotics introduces a new way of combining perception and motor commands using the so-called hyperdimensional computing theory, which could fundamentally alter and improve the basic artificial intelligence (AI) task of sensorimotor representation: how agents like robots translate what they sense into what they do.

 

“Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” was written by computer science Ph.D. students Anton Mitrokhin and Peter Sutor, Jr.; Cornelia Fermüller, an associate research scientist with the University of Maryland Institute for Advanced Computer Studies; and Computer Science Professor Yiannis Aloimonos. Mitrokhin and Sutor are advised by Aloimonos.

 

In the authors’ new computing theory, a robot’s operating system would be based on hyperdimensional binary vectors (HBVs), which exist in a sparse and extremely high-dimensional space. HBVs can represent disparate discrete things, for example a single image, a concept, a sound or an instruction; sequences made up of discrete things; and groupings of discrete things and sequences. They can account for all these types of information in a meaningfully constructed way, binding each modality together in long vectors of 1s and 0s of equal dimension. In this system, action possibilities, sensory input and other information occupy the same space, are in the same language, and are fused, creating a kind of memory for the robot.
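
A conceptual sketch of this fusion is given below; it is not the authors' code, and the role names (image, sound, action) are placeholders chosen here. It shows how role-filler binding lets several modalities share a single vector of the same dimension while remaining individually queryable.

```python
# Role-filler fusion sketch: several modalities bound and bundled into one equal-length record.
import numpy as np

D = 10_000
rng = np.random.default_rng(5)

def hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

# Role hypervectors name the modality; filler hypervectors stand in for encoded content.
roles = {"image": hv(), "sound": hv(), "action": hv()}
fillers = {"image": hv(), "sound": hv(), "action": hv()}

# Fuse all modalities into one record: majority over the role-filler bindings.
bound = np.array([roles[r] ^ fillers[r] for r in roles])
record = (bound.sum(axis=0) > len(bound) / 2).astype(np.uint8)

# Query the record for the action associated with this instant.
recovered = record ^ roles["action"]                            # noisy copy of fillers["action"]
print(np.count_nonzero(recovered != fillers["action"]) / D)     # ~0.25, far below chance (0.5)
```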

 

The Science Robotics paper marks the first time that perception and action have been integrated in this way, within a single hyperdimensional representation.

 

A hyperdimensional framework can turn any sequence of “instants” into new HBVs, and group existing HBVs together, all of the same vector length. This is a natural way to create semantically significant and informed “memories.” The encoding of more and more information in turn leads to “history” vectors and the ability to remember. Signals become vectors, indexing translates to memory, and learning happens through clustering.
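
A minimal sketch of such sequence encoding is shown below; it assumes the same binary hypervector toolbox as the earlier examples and is not the paper's exact scheme. A sequence of encoded "instants" is rotated by recency and bundled into one "history" vector of the same length, which stays recognizably similar to each remembered moment.

```python
# Sequence/"history" encoding sketch: rotate instants by recency, bundle, then probe the memory.
import numpy as np

D = 10_000
rng = np.random.default_rng(6)
instants = [rng.integers(0, 2, size=D, dtype=np.uint8) for _ in range(5)]  # five encoded "instants"

# Older instants are rotated further; bundling keeps everything at the same vector length.
rotated = np.array([np.roll(x, len(instants) - 1 - i) for i, x in enumerate(instants)])
history = (rotated.sum(axis=0) > len(rotated) / 2).astype(np.uint8)

# Probe the history for one remembered moment: similarity well above the 0.5 chance level.
probe = np.roll(instants[2], len(instants) - 1 - 2)
print(1 - np.count_nonzero(history != probe) / D)   # roughly 0.69 for five bundled instants
```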

 

The robot’s memories of what it has sensed and done in the past could lead it to anticipate future perceptions and influence its future actions. This active perception would enable the robot to become more autonomous and better able to complete tasks.

 

“An active perceiver knows why it wishes to sense, then chooses what to perceive, and determines how, when and where to achieve the perception,” says Aloimonos. “It selects and fixates on scenes, moments in time, and episodes. Then it aligns its mechanisms, sensors, and other components to act on what it wants to see, and selects viewpoints from which to best capture what it intends.”

 

“Our hyperdimensional framework can address each of these goals.”

 

Applications of the Maryland research could extend far beyond robotics. The ultimate goal is to be able to do AI itself in a fundamentally different way: from concepts to signals to language. Hyperdimensional computing could provide a faster and more efficient alternative model to the iterative neural net and deep learning AI methods currently used in computing applications such as data mining, visual recognition and translating images to text.

 

“Neural network-based AI methods are big and slow, because they are not able to remember,” says Mitrokhin. “Our hyperdimensional theory method can create memories, which will require a lot less computation, and should make such tasks much faster and more efficient.”

 

References and Resources also include:

https://scienceblog.com/507888/helping-robots-remember-hyperdimensional-computing-theory-could-change-the-way-ai-works/

http://www.rctn.org/vs265/kanerva09-hyperdimensional.pdf

http://iis-projects.ee.ethz.ch/index.php/Hyperdimensional_Computing
