Artificial intelligence (AI) is a branch of computer science dealing with the simulation of intelligent behavior in computers: systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. The first wave of AI was rule-based, while the “second wave” was based on statistical learning. Machine learning (ML) methods have demonstrated outstanding recent progress, and as a result AI systems can now be found in myriad applications, including autonomous vehicles, industrial applications, search engines, computer gaming, health record automation, and big data analysis.
The development of artificial intelligence has in part been shaped by the field of neuroscience. By understanding the human brain, scientists have attempted to build new intelligent machines capable of performing complex tasks in a humanlike way. Indeed, future research into artificial intelligence will continue to benefit from the study of the human brain.
Advances in automation and artificial intelligence, especially in the area of intelligent (machine) agents, have enabled the formation of novel teams with human and machine members. In this context, there is much to be gained by combining AI and human intelligence (HI). Harnessing big data, computing power, and storage capacity, and addressing the societal issues that emerge from algorithm applications, demand deploying HI in tandem with AI. Such a team is still supervised by the human, with the machine as a subordinate associate or assistant sharing responsibility, authority, and autonomy over many tasks.
Kernel, a startup created by Braintree co-founder Bryan Johnson, is also trying to enhance human cognition. With more than $100 million of Johnson’s own money — the entrepreneur sold Braintree to PayPal for around $800 million in 2013 — Kernel and its growing team of neuroscientists and software engineers are working toward reversing the effects of neurodegenerative diseases and, eventually, making our brains faster and smarter and more wired.
“We know if we put a chip in the brain and release electrical signals, that we can ameliorate symptoms of Parkinson’s,” Johnson told The Verge in an interview late last year. (Johnson also confirmed Musk’s involvement with Neuralink.) “This has been done for spinal cord pain, obesity, anorexia… what hasn’t been done is the reading and writing of neural code.” Johnson says Kernel’s goal is to “work with the brain the same way we work with other complex biological systems like biology and genetics.”
“The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly,” Licklider wrote in his seminal 1960 work Man-Computer Symbiosis, “and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”
The members of Forbes Technology Council selected human and artificial neural networks as one of the next big things in tech. “The next thing would be a new way to communicate with these new technologies, including AI. It could be some new kind of wearable. It might be some mechanism to fit a receiver inside a body, similar to how pets are microchipped. Basically, something to seamlessly connect humans with the immense power of AI.”
Brain-Computer Interface (BCI)
Human-machine teaming is also being enhanced through the ongoing integration of the brain with external systems via the brain-computer interface (BCI). A BCI allows people to use their thoughts to control not only themselves but also the world around them. Every action our body performs begins with a thought, and with every thought comes an electrical signal. These electrical signals can be received by the BCI, which consists of an electroencephalograph (EEG) or an implanted electrode; the signals can then be translated and sent to the performing hardware to produce the desired action.
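The capture-translate-act pipeline described above can be sketched in a few lines. This is a minimal illustration, not a real BCI: the synthetic signal, the alpha-band threshold, and the two-command mapping are all assumptions chosen for simplicity.

```python
import numpy as np

def bandpower(signal, fs, band):
    """Estimate signal power in a frequency band via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def decode_command(eeg, fs=256):
    """Translate an EEG epoch into a command: strong relative alpha-band
    (8-12 Hz) power maps to one command, weak alpha to another."""
    alpha = bandpower(eeg, fs, (8, 12))
    total = bandpower(eeg, fs, (1, 40))
    return "relax" if alpha / total > 0.5 else "act"

# Synthetic one-second epoch dominated by a 10 Hz alpha rhythm
np.random.seed(0)
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(fs)
print(decode_command(eeg, fs))  # strong alpha -> "relax"
```

A real system would add filtering, artifact rejection, and a trained classifier, but the structure is the same: acquire, extract features, translate, act.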
Brain-computer interfaces are being applied in neuroprosthetics, through which paralyzed persons are able to control robotic arms; in neurogaming, where one can control a keyboard, mouse, and the like with one’s thoughts and play games; in neuroanalysis (psychology); and in defense, to control robotic soldiers or fly planes with thoughts.
Musk also finds the “bandwidth” enhancement of brain-computer interfaces the most intriguing aspect, because it applies to future human brain-machine user experiences. Musk explained that machines communicate at “a trillion bits per second” while humans, who mainly communicate by typing on a smartphone, are limited to just 10 bits per second. “Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” he said.
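The 10-bits-per-second figure quoted above is roughly consistent with a back-of-envelope estimate of typing throughput. The typing speed and per-character information content below are illustrative assumptions, not measurements.

```python
# Rough estimate of human typing bandwidth, for comparison with the
# ~10 bits/s figure quoted above. All numbers are assumptions.
words_per_minute = 40   # plausible smartphone typing speed
chars_per_word = 5      # conventional word-length estimate
bits_per_char = 3       # rough information content per character of text

chars_per_second = words_per_minute * chars_per_word / 60
bits_per_second = chars_per_second * bits_per_char
print(f"{bits_per_second:.0f} bits/s")  # -> 10 bits/s
```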
However, Musk wants a much tighter interface between machines and humans through high-bandwidth BCIs. The billionaire entrepreneur now wants to merge machine intelligence, or AI, with human brains to help people keep up with machines. SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface venture called Neuralink, according to The Wall Street Journal. The company is centered on creating devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence. These enhancements could improve memory or allow for more direct interfacing with computing devices.
He contextualized his comments by noting how human drivers are increasingly at risk of being replaced by autonomous cars. Self-driving vehicles are great for safety, he said. “But there are many people whose jobs are to drive. In fact I think it might be the single largest employer of people … Driving in various forms. So we need to figure out new roles for what do those people do, but it will be very disruptive and very quick.”
But there’s a huge problem: current brain-machine interfaces only use electrical signals to mimic neural computation. The brain, in contrast, has two tricks up its sleeve: electricity and chemicals; it is electrochemical. Within a neuron, electricity travels up its incoming branches, through the bulbous body, then down the output branches. When electrical signals reach the neuron’s outgoing “piers,” dotted along the output branch, however, they hit a snag. A small gap exists between neurons, so to get to the other side, the electrical signals generally need to be converted into little bubble ships, packed with chemicals, and set sail to the other neuronal shore.
These explorations led to novel neuromorphic chips, or artificial neurons that “fire” like biological ones. Additional work found that it’s possible to link these chips up into powerful circuits that run deep learning with ease, with bioengineered communication nodes called artificial synapses.
Neuromorphic Chips
Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many exciting advances in artificial intelligence in recent years. Deep learning (DL) algorithms allow high-level abstraction from the data, which is helpful for automatic feature extraction and for pattern analysis and classification.
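A toy example of the layered abstraction described above uses the classic XOR problem: no single linear unit can compute XOR, but a network whose hidden layer first extracts intermediate features can. Here the weights are hand-wired for clarity (the hidden units detect OR and AND); real DNNs learn such features from data, so treat these numbers as illustrative assumptions.

```python
import numpy as np

def step(z):
    """Hard-threshold activation."""
    return (z > 0).astype(float)

def forward(x):
    """Two-layer network: hidden units extract intermediate features
    (OR, AND); the output layer combines them into XOR."""
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])       # unit 0 fires on OR, unit 1 on AND
    W2 = np.array([[1.0], [-1.0]])    # XOR = OR and not AND
    b2 = np.array([-0.5])
    h = step(x @ W1 + b1)             # layer 1: extracted features
    return step(h @ W2 + b2)          # layer 2: combination

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(forward(X).ravel())  # -> [0. 1. 1. 0.]
```

Stacking more such layers is what lets deep networks build progressively higher-level abstractions of raw input data.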
Industry is working on “neuromorphic” technology that can incorporate nano-chips modeled on the human brain into wearables. Eventually these nano-chips may be implanted into our brains, artificially augmenting human thought and reasoning capabilities. As a potential computing hardware replacement, these systems have proven to be incredibly promising. Yet scientists soon wondered: given their similarity to biological brains, can we use them as “replacement parts” for brains that suffer from traumatic injuries, aging, or degeneration? Can we hook up neuromorphic components to the brain to restore its capabilities?
Scientists Used Dopamine to Seamlessly Merge Artificial and Biological Neurons
In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain.
Now, in a paper published June 2020 in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. The study shows that it’s possible to get an artificial neuron to communicate directly with a biological one using not just electricity, but dopamine, a chemical the brain naturally uses to change how neural circuits behave, best known for signaling reward. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).
“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.” While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.
The new study started with two neurons: the upstream, an immortalized biological cell that releases dopamine; and the downstream, an artificial neuron that the team previously introduced in 2017, made of a mix of biocompatible and electrical-conducting materials. The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution — which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.
The biological cell sits close to the first electrode. When activated, it dumps out boats of dopamine, which drift to the electrode and chemically react with it—mimicking the process of dopamine docking onto a biological neuron. This, in turn, generates a current that’s passed on to the second electrode through the conductive solution channel. When this current reaches the second electrode, it changes the electrode’s conductance—that is, how well it can pass on electrical information. This second step is analogous to docked dopamine “ships” changing how likely it is that a biological neuron will fire in the future.
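The mechanism described above, where each dopamine arrival permanently shifts the second electrode's conductance, can be caricatured in a few lines. This is a toy model chosen for illustration, not the device physics reported in the paper: each release event irreversibly nudges conductance toward a saturating maximum, so repeated events leave a persistent, diminishing-returns trace, loosely analogous to long-term potentiation.

```python
def release_event(conductance, g_max=1.0, rate=0.2):
    """One irreversible update: move conductance a fixed fraction of
    the remaining distance toward its saturating maximum g_max."""
    return conductance + rate * (g_max - conductance)

g = 0.1                       # initial conductance (arbitrary units)
history = [g]
for _ in range(5):            # five dopamine "boats" arrive
    g = release_event(g)
    history.append(g)

print([round(x, 3) for x in history])
# -> [0.1, 0.28, 0.424, 0.539, 0.631, 0.705]
```

The key qualitative features match the text: every event changes the state, the change persists, and the device never exceeds its physical maximum.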
How neurons learn
After confirming that biological cells can survive happily on top of the artificial one, the team performed a few tests to see if the hybrid circuit could “learn.” They used electrical methods to first activate the biological dopamine neuron, and watched the artificial one.
“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.” This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.
To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material — but they saw a permanent change in the state of their device upon the first reaction. “We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”
This biohybrid design is in such early stages that the main focus of the current research was simply to make it work. “It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.” Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.
This research was funded by the National Science Foundation, the Semiconductor Research Corporation, a Stanford Graduate Fellowship, the Knut and Alice Wallenberg Foundation for Postdoctoral Research at Stanford and the European Union’s Horizon 2020 Research and Innovation Programme.
Because these chemicals, known as “neurotransmitters,” are how biological neurons functionally link up in the brain, the study is a dramatic demonstration that it’s possible to connect artificial components with biological brain cells into a functional circuit. The team isn’t the first to pursue hybrid neural circuits. Previously, a different team hooked up two silicon-based artificial neurons with a biological one into a circuit using electrical protocols alone. Although a powerful demonstration of hybrid computing, the study relied on only one-half of the brain’s computational ability: electrical computing.
The new study now tackles the other half: chemical computing. It adds a layer of compatibility that lays the groundwork not just for brain-inspired computers, but also for brain-machine interfaces and, perhaps, a sort of “cyborg” future. After all, if your brain can’t tell the difference between an artificial neuron and your own, could you? And even if you did, would you care? Of course, that scenario is far in the future, if ever. For now, the team, led by Salleo, collectively breathed a sigh of relief that the hybrid circuit worked.
Unfortunately for cyborg enthusiasts, the work is still in its infancy. For one, the artificial neurons are still rather bulky compared to biological ones, which means they can’t capture and translate information from a single “boat” of dopamine. It’s also unclear if, and how, a hybrid synapse can work inside a living brain. Given the billions of synapses firing away in our heads, it will be a challenge to find and replace those that need replacement, and to make the artificial ones control our memories and behaviors as naturally as the originals do.
Chinese helmet aimed at boosting brain power
Chinese scientists are developing a helmet to enhance brain function by monitoring and regulating brain waves in combination with artificial intelligence technology. Wei Pengfei, of the Shenzhen Institute of Advanced Technology (SIAT) of the Chinese Academy of Sciences, said his team is developing a brain function enhancement system with the goal of improving the brain’s ability to perform complex tasks and regulate abnormal emotions.
The helmet could be applied in the training of special personnel to accelerate gains in memory and skills and to alleviate anxiety caused by tension. The technology is also expected to help treat children with attention deficit hyperactivity disorder (ADHD) and people suffering from depression, Alzheimer’s disease, aphasia and Parkinson’s disease, said Wei.
Surgically-implanted deep-brain stimulation technology first emerged in the 1960s. At the beginning of this century, scientists developed electroencephalogram feedback technology and brain-computer interface technology. In recent years, non-invasive stimulation and regulation technology has been able to intervene in and regulate brain activities more quickly, becoming a new focus in the brain research and neuroscience field.
The helmet is based on non-invasive brain stimulation and regulation technology, said Wei. It uses flexible electrode sensors to identify brain waves when the brain is performing different tasks. Electrodes then release weak current pulses that can reach specific areas of the brain, altering brain waves, and regulating the active state of its neurons. “Since brain tissue is very complex, we need to build a computer model first, and then determine the target area and parameters for stimulation,” said Wei.
An artificial intelligence algorithm reads brain activity in real time and calculates stimulation parameters to achieve precise and personalized regulation. The research team, based at the Institute of Brain Cognition and Brain Disease of SIAT, has a research platform for rodents, nonhuman primates and humans. “Through animal experiments, we have analyzed specific brain areas related to attention cognition, emotional regulation, anxiety, drug addiction, stress and epilepsy. We hope we can intervene in these areas effectively,” Wei said.
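The closed loop described above (read activity, compute stimulation parameters, stimulate, repeat) can be sketched as a simple feedback controller. The proportional-control rule, the parameters, and the modeled response to stimulation are all illustrative assumptions, not the SIAT team's actual algorithm.

```python
def stimulation_amplitude(measured, target, gain=0.5, max_amp=2.0):
    """Proportional controller: stimulate harder the further the
    measured band power is below target; clamp to a safe maximum
    and never stimulate when activity is already above target."""
    error = target - measured
    amp = gain * max(error, 0.0)
    return min(amp, max_amp)

# Simulate a few control steps: stimulation pulls activity toward target
activity, target = 1.0, 4.0
for step in range(6):
    amp = stimulation_amplitude(activity, target)
    activity += 0.8 * amp        # assumed linear response to stimulation
    print(f"step {step}: amplitude={amp:.2f}, activity={activity:.2f}")
```

Under these assumptions the stimulation amplitude shrinks as measured activity approaches the target, the kind of personalized, real-time adjustment the article describes.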
The team has also developed tests for cognitive ability. For example, trial participants wore the helmet for about 15 minutes, and then were required to quickly memorize a string of numbers, English letters or words. The tests found that participants’ average recall accuracy improved within two hours. But the data are still insufficient, said Wei. Large-scale double-blind experiments among people of different ages and groups are needed to accumulate convincing data.
“We have only tested the short-term memory of those wearing the helmet, and we’re planning to test their week-long memory,” Wei said. So far, researchers have developed the prototype of the first-generation helmet, which can implement feedback control on the brain waves of the cerebral cortex. The team is developing the second generation of the helmet, aiming to achieve deep-brain non-invasive stimulation. They also intend to cooperate with hospitals in clinical tests on patients with autism, schizophrenia and children with ADHD.
The research was recently selected as one of 30 winning projects at a contest of innovative future technologies in Shenzhen, south China’s Guangdong Province. The contest encouraged young Chinese scientists to conceive groundbreaking technologies and trigger innovation.
References and resources also include:
https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs
http://www.xinhuanet.com/english/2018-07/24/c_137344755.htm
https://www.sciencedaily.com/releases/2020/06/200615115808.htm