Recent progress in central and peripheral neural interface technology has resulted in impressive capability demonstrations. These include the use of neural signals to control the reanimation of paralyzed muscles or to control high-dimensional prosthetic limbs, external robots, and even flight simulators. In many of these examples, sensory feedback from the external application is also delivered to the brain via neural stimulation. Successful systems to date have utilized artificial intelligence methods such as neural networks, evolutionary algorithms, and state-space machine learning algorithms. While these methods have shown promise in the laboratory, a number of challenges remain.
In January 2019, DARPA launched the Intelligent Neural Interfaces (INI) program, which seeks to establish “Third-Wave” artificial intelligence methods to improve and expand the application space of next-generation neurotechnology. DARPA officials see the program as part of a “third wave” of artificial intelligence research. The first wave focused on rule-based systems capable of narrowly defined tasks. The second wave, beginning in the 1990s, created statistical pattern recognizers trained on large amounts of data, capable of impressive feats of language processing, navigation, and problem solving. However, these systems do not adapt to changing conditions, offer limited performance guarantees, and are unable to explain their results to users. The third wave, in contrast, will focus on contextual adaptation, enabling machines to function reliably despite massive volumes of changing or even incomplete information.
Teams will address two major challenges specific to central and/or peripheral neural interfaces. These challenges include: (1) decision making for sustainment and maintenance of neural interfaces to promote robustness and reliability, and (2) modeling and maximizing the information content of biological neural circuits to increase the bandwidth and computational abilities of the neural interface.
DARPA’s ultimate aim is to determine whether AI and neural technology can enable troops to control, feel, and interact with remote machines using their brains.
Artificially intelligent neural interfaces: DARPA funds Emory/GT/Northwestern research
Paralyzed people moving their limbs or operating prosthetic devices by having machines decipher the electrical impulses in their nervous systems: it’s an appealing vision, and one that is getting closer. Right now, when a computer “reads” someone’s brain, the interface between brain and machine does not stay the same, so the computer needs to be recalibrated one or more times a day. It’s like learning to use a tool whose weight and shape keep changing.
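To make the drift problem concrete, here is a toy Python illustration (synthetic data, not from the study) of why a decoder fit on one session’s recordings degrades when the recording itself changes:

```python
# Toy illustration: a decoder fit on "day 0" data loses accuracy on
# "day 1" data after simulated recording drift. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 2000

# Hypothetical day-0 data: neural features linearly related to 2-D cursor velocity.
W_true = rng.normal(size=(n_channels, 2))
X_day0 = rng.normal(size=(n_samples, n_channels))
y_day0 = X_day0 @ W_true

# Fit a simple least-squares decoder on day-0 data.
W_dec, *_ = np.linalg.lstsq(X_day0, y_day0, rcond=None)

# Day 1: the same behavior, but the recording has drifted -- modeled here
# as per-channel gain changes plus a small per-channel offset.
gains = 1.0 + 0.3 * rng.normal(size=n_channels)
X_day1 = rng.normal(size=(n_samples, n_channels))
y_day1 = X_day1 @ W_true
X_day1_observed = X_day1 * gains + 0.1 * rng.normal(size=n_channels)

def r2(y, y_hat):
    """Coefficient of determination for multi-output predictions."""
    ss_res = ((y - y_hat) ** 2).sum()
    ss_tot = ((y - y.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

print("day 0 fit R^2:", r2(y_day0, X_day0 @ W_dec))             # near 1.0
print("day 1 drift R^2:", r2(y_day1, X_day1_observed @ W_dec))  # noticeably lower
```

Even the modest, purely simulated gain changes here are enough to degrade the day-0 decoder, which is why today’s systems need frequent recalibration.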
To address this challenge, biomedical engineers at Emory and Georgia Tech, working with colleagues at Northwestern University, were awarded a $1 million grant from DARPA (Defense Advanced Research Projects Agency). The two-phase grant begins with $400,000 for six months, and can advance to a total of $1 million over 18 months.
Chethan Pandarinath, PhD, and Lee Miller, PhD, are combining artificial intelligence-based approaches developed in their laboratories that decode the complex signals the nervous system uses to control movement. The scientists plan to develop algorithms that recalibrate periodically and automatically, so that the nervous system’s “intent” can be decoded smoothly and without interruption. Pandarinath and Miller have an established collaboration and are part of a National Science Foundation-funded project building new approaches to handle data from the nervous system at unprecedented scale.
Pandarinath and colleagues previously developed an approach that uses artificial neural networks to decipher the complex patterns of activity in biological networks that make our everyday movements possible. Prior approaches focused on the activity of individual neurons in the brain, attempting to relate each neuron’s activity to movement variables such as arm speed, movement distance, or angle.
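As a rough sketch of that earlier, single-neuron view (all numbers below are synthetic and purely illustrative), one can simulate classic “cosine tuning” of individual neurons to reach direction and then decode direction by regressing movement variables directly on per-neuron firing rates:

```python
# Minimal sketch of the single-neuron approach: each simulated neuron is
# cosine-tuned to reach direction, and a linear regression relates firing
# rates to the movement variable being decoded.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 20, 500

# Each neuron gets a preferred direction, modulation depth, and baseline rate.
pref_dirs = rng.uniform(0, 2 * np.pi, n_neurons)
depths = rng.uniform(5, 15, n_neurons)
baseline = rng.uniform(10, 30, n_neurons)

# Simulated reach directions and noisy cosine-tuned firing rates.
theta = rng.uniform(0, 2 * np.pi, n_trials)
rates = baseline + depths * np.cos(theta[:, None] - pref_dirs[None, :])
rates += rng.normal(scale=2.0, size=rates.shape)

# Decode (cos(theta), sin(theta)) from rates with least squares,
# i.e., relate each unit's activity directly to a movement variable.
targets = np.column_stack([np.cos(theta), np.sin(theta)])
X = np.column_stack([rates, np.ones(n_trials)])  # add intercept column
W, *_ = np.linalg.lstsq(X, targets, rcond=None)
pred = X @ W
theta_hat = np.arctan2(pred[:, 1], pred[:, 0])

# Angular error, wrapped to [-pi, pi] before taking magnitude.
err = np.rad2deg(np.abs(np.angle(np.exp(1j * (theta - theta_hat)))))
print("median angular error (deg):", np.median(err))
```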
Instead, Pandarinath says, patterns that are spread out across the entire network are far more important, and uncovering these distributed patterns is the key to breakthroughs in neural interface technologies. These distributed patterns, or “manifolds,” are highly stable, lasting for months or years. Manifolds could therefore provide a stable foundation for prosthetics and other neural interface-controlled devices that restore movement to paralyzed people across months and years without any manual recalibration.
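To illustrate the manifold idea, the sketch below uses PCA as a deliberately simple stand-in for the nonlinear methods the labs actually employ: synthetic activity on 96 channels is generated from only four shared latent patterns, and a handful of principal components recovers most of the variance:

```python
# Sketch of the "manifold" idea: population activity that looks
# high-dimensional (one dimension per channel) is well described by a few
# shared latent patterns. PCA is a simple stand-in; the data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_timepoints, n_latents = 96, 3000, 4

# Smooth low-dimensional latents, mapped linearly to channels plus noise.
t = np.linspace(0, 30, n_timepoints)
latents = np.column_stack(
    [np.sin((k + 1) * t + rng.uniform(0, np.pi)) for k in range(n_latents)]
)
mixing = rng.normal(size=(n_latents, n_channels))
activity = latents @ mixing + 0.5 * rng.normal(size=(n_timepoints, n_channels))

# PCA via SVD on mean-centered activity.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = (s ** 2) / (s ** 2).sum()
print("variance captured by top 4 PCs:", var_explained[:4].sum())  # most of it
```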
Pandarinath’s manifold decoding approach will be combined with a separate neural network approach called ANMA (Adversarial Neural Manifold Alignment), developed by Miller’s team, which adjusts the manifolds to compensate for changes in the incoming data. Together, Pandarinath and Miller call their combined technology NoMAD, for Nonlinear Manifold Alignment Decoding.
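The sketch below illustrates the general adversarial-alignment idea in PyTorch on synthetic data; it is an assumption-laden toy, not the teams’ ANMA or NoMAD implementation. A small “aligner” network learns to transform drifted day-k activity until a discriminator can no longer tell it apart from day-0 activity, after which a decoder fit on day 0 could keep operating without new labeled data:

```python
# Toy adversarial alignment (illustrative only, not the ANMA/NoMAD code):
# an aligner maps drifted day-k activity back toward the day-0 distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_channels = 16

# Synthetic day-0 activity, and a drifted day-k version
# (a fixed random linear distortion plus an offset).
day0 = torch.randn(4096, n_channels)
distortion = torch.eye(n_channels) + 0.3 * torch.randn(n_channels, n_channels)
dayk = torch.randn(4096, n_channels) @ distortion + 0.5

aligner = nn.Linear(n_channels, n_channels)  # maps day-k back toward day-0
disc = nn.Sequential(nn.Linear(n_channels, 32), nn.ReLU(), nn.Linear(32, 1))
opt_a = torch.optim.Adam(aligner.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = day0[torch.randint(0, 4096, (256,))]
    fake = aligner(dayk[torch.randint(0, 4096, (256,))])

    # Discriminator: day-0 samples are "real" (1), aligned day-k are "fake" (0).
    opt_d.zero_grad()
    d_loss = bce(disc(real), torch.ones(256, 1)) + \
             bce(disc(fake.detach()), torch.zeros(256, 1))
    d_loss.backward()
    opt_d.step()

    # Aligner: try to make the discriminator call aligned day-k data "real".
    opt_a.zero_grad()
    a_loss = bce(disc(fake), torch.ones(256, 1))
    a_loss.backward()
    opt_a.step()

# After training, aligned day-k activity should be statistically close to day-0.
with torch.no_grad():
    print("day-k mean before:", dayk.mean().item(),
          "after alignment:", aligner(dayk).mean().item())
```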
Their experiments will be based on data already collected in non-human primates, in which monkeys control an on-screen cursor via wrist movements or perform various natural behaviors. To test the resilience of their technology, the scientists plan to introduce instabilities into these experiments, simulating the effects of a shifted electrode or changed physiological conditions.
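One plausible way to simulate such instabilities on previously recorded data is sketched below; the specific perturbations (dropped channels, swapped channel identities, per-channel gain changes) are illustrative assumptions, not the teams’ actual protocol:

```python
# Illustrative instability simulator for (time x channels) activity arrays.
import numpy as np

rng = np.random.default_rng(3)

def perturb(activity, drop_frac=0.1, swap_pairs=3, gain_sd=0.2):
    """Return a copy of activity with simulated recording instabilities."""
    out = activity.copy()
    n_ch = out.shape[1]

    # 1. Some channels go silent (an electrode loses its unit).
    dropped = rng.choice(n_ch, size=int(drop_frac * n_ch), replace=False)
    out[:, dropped] = 0.0

    # 2. Some channels trade identities (an array shift picks up different units).
    for _ in range(swap_pairs):
        i, j = rng.choice(n_ch, size=2, replace=False)
        out[:, [i, j]] = out[:, [j, i]]

    # 3. Per-channel gain changes (impedance or electrode-distance changes).
    out *= 1.0 + gain_sd * rng.normal(size=n_ch)
    return out

activity = rng.normal(size=(1000, 96))
perturbed = perturb(activity)
print("channels zeroed:", int((perturbed.std(axis=0) == 0).sum()))
```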
The scientists say that NoMAD will be applicable to a wide variety of neural interfaces, since manifolds integrate neural patterns in the motor, sensory and cognitive realms. Thus, beyond prosthetic devices and movement control, NoMAD could eventually refine and improve electrical stimulation therapies for Parkinson’s disease, epilepsy, speech disorders, depression, and other psychiatric conditions.