A brain-computer interface (BCI) establishes a connection between the human brain and a computing device so that brain signals can be used to control the device or perform a specific activity. These brain signals are translated into actions for the device. The interface thus provides a direct communication pathway between the brain and the target device.
The technology has advanced from mechanical devices and touch-based systems, and the world is now moving toward using neural signals as the input. Even though BCIs are not yet widely deployed, they have a promising future, especially for physically impaired people who can no longer drive their muscles with brain signals; for them, a BCI may be the only way to function.
BCI technology can make controlling devices as simple as thinking. Rather than making a decision and then physically performing a task, a user can operate a device directly through thought, technically, through brain waves. A BCI can re-establish a communication path from the brain to artificial limbs and assist those affected by brain-related diseases. In healthy people, the brain transmits signals through the central nervous system to the muscles, allowing them to move the body. In people suffering from stroke or neuromuscular illness, however, the transmission of signals between the brain and the body's muscles is disrupted: the body becomes paralyzed or loses the capability to control muscle movement, as in cerebral palsy. It is often observed that a patient cannot move a muscle even though the brain still generates the corresponding neural signal; the command is transmitted from the CNS but never received by the target muscles. A BCI can be designed to capture those commands and use them to control devices or computer programs.
The nervous system resembles an electrical system that passes nerve impulses as messages. A human brain contains about 86 billion neurons, each linked to many other neurons. These neurons (brain cells) communicate by transmitting and receiving very small electrical signals, merely in the range of microvolts. Sensing and recording these signals therefore requires precise and advanced sensors. Every time we think or move a muscle, these neurons are at work, and a BCI recognizes these patterns of electrical activity in the brain.
A BCI system includes a device with electrodes that act as sensors and measure brain signals, an amplifier to boost the weak neural signals, and a computer that decodes the signals into control signals for operating devices. Most often, the BCI device is a portable, wearable headset. The BCI device has two functions: first, it records the data received at its electrodes, and second, it interprets or decodes the neural signals.
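The acquisition side of such a system can be sketched in a few lines. This is a toy illustration, not a real device API: the gain and quantization step are invented numbers standing in for the amplifier and analog-to-digital converter described above.

```python
# Minimal sketch of the acquisition stage of a BCI headset: raw electrode
# readings in microvolts are amplified and then digitized before decoding.
# The gain and quantization step below are illustrative assumptions.

def amplify(samples_uv, gain=1_000_000):
    """Scale microvolt-level electrode readings into a usable voltage range."""
    return [s * gain for s in samples_uv]

def digitize(samples, step=0.5):
    """Quantize amplified samples, as an analog-to-digital converter would."""
    return [round(s / step) * step for s in samples]

raw_uv = [12.3e-6, -8.1e-6, 25.0e-6]   # microvolt readings from electrodes
amplified = amplify(raw_uv)            # bring the signal to a usable scale
print(digitize(amplified))             # -> [12.5, -8.0, 25.0]
```

The key point the sketch illustrates is why the amplifier is indispensable: at microvolt scale, the raw signal would vanish entirely in the quantization step.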
The human brain is an amazing three-pound organ that controls all our body functions. It processes all our thoughts and is the neurobiological basis of human intelligence, creativity, emotion, and memory. Our brain is divided into several parts, and each part has a primary function.
Each part of the body is controlled by a particular part of the brain, as shown in the figure. Using BCI techniques, it can be observed which part of the brain is active and transmitting signals, and from this activity the BCI system can predict the intended muscle movement. BCI systems can be advanced further, and many new applications can be developed, using the fact that a variety of other brain activities can also be recognized. For instance, while one performs a numeric calculation, the frontal lobe is activated, and when one comprehends language, Wernicke's area is activated.
BCI Signal Processing for Decoding Brain Signals
The central problem is choosing the optimal method to analyze these complex, time-varying neural responses and map them to the desired output response. Because these electrical signals are merely in the range of microvolts, they are passed through several processing stages to remove noise and recover the useful components. Algorithms and classification techniques are then applied to the resulting data.
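The noise-removal stage can be sketched as follows. This is a deliberately simplified stand-in: real BCI pipelines use proper notch filters (to reject 50/60 Hz mains hum) and band-pass filters, while here mean subtraction and a moving average play those roles.

```python
# Simplified sketch of EEG noise removal: DC-offset removal plus a crude
# low-pass (moving-average) filter. Real systems use notch and band-pass
# filters; these functions are illustrative stand-ins.

def remove_dc_drift(signal):
    """Subtract the mean to remove the slow DC offset from electrode contact."""
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def moving_average(signal, window=5):
    """Crude low-pass filter: replace each sample by a local average."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

noisy = [10.0, 12.0, 9.0, 11.0, 30.0, 10.0, 11.0]  # one spike artifact
cleaned = moving_average(remove_dc_drift(noisy))
```

Averaging over neighbors suppresses the isolated spike while preserving the slower components that carry the neural information of interest.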
Brain waves are oscillating voltages with amplitudes ranging from microvolts to a few millivolts. There are five widely known brain wave bands, each covering a different frequency range and reflecting a different state of the brain: delta (roughly 0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (above 30 Hz).
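Mapping a measured frequency to its conventional band is a one-liner lookup. Note that band boundaries vary slightly across the literature; the cutoffs below are one common convention, chosen here as an assumption.

```python
# Map a dominant frequency to its conventional brain-wave band.
# Band boundaries differ slightly between sources; these are one
# common convention, used here as an illustrative assumption.

BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 13.0),
    ("beta", 13.0, 30.0),
    ("gamma", 30.0, 100.0),
]

def band_of(freq_hz):
    """Return the name of the band containing freq_hz, or None."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return None

print(band_of(10.0))  # -> alpha (a 10 Hz rhythm falls in the alpha band)
```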
The steps of the brain computer interface system include the following:
- Brain activity measurement/recording methods of the BCI: Electrodes are either set directly on the scalp or implanted in the brain, which requires a surgical procedure. Noninvasive techniques do not require any surgery and are thus safe from causing infection; though their signal quality is lower, they remain the most popular means of brain signal acquisition. Signals recorded directly from the brain yield better results, but the surgery carries the risk of damaging brain tissue, a risk that often outweighs the quality gained through the surgical method. Noninvasive techniques include electroencephalography (EEG), in which electrical activity is recorded from the scalp, and magnetoencephalography (MEG), in which the magnetic fields produced by the brain's electrical activity are recorded.
- Preprocessing techniques: In BCI, preprocessing conditions the acquired brain signals: the recordings are cleaned and noise is removed so that the relevant information encoded in the signal is recovered without losing important content.
- Feature extraction: Feature extraction plays a vital role in brain-computer interface applications. Raw EEG signals are nonstationary and corrupted by noise and by artifacts present in the recording environment, yet meaningful information can still be extracted from them. The data's dimensionality is also reduced so it can be processed more efficiently before machine learning models are applied. This step is essential for increasing the classification accuracy of the BCI system.
- Machine learning implementation/classification: Deep learning is a classification tool used in a variety of everyday applications, from speech recognition and computer vision to natural language processing. In the context of the BCI, the input features, which are the different brain frequency bands, are classified according to what activity the user is performing at the moment.
- Translation to control signal: The classifier's output is converted into commands for the target device or application.
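The steps above can be sketched end to end. This is a toy illustration, not a real BCI stack: the "EEG" is a synthetic sine wave, the feature is a crude zero-crossing estimate of the dominant frequency, and the classifier is a simple threshold; the mode names are invented for the example.

```python
import math

# Toy end-to-end pipeline for the steps above:
# acquire -> preprocess -> extract features -> classify -> translate.
# Every component is a simplified, assumed stand-in for the real thing.

FS = 256  # sampling rate in Hz (assumed)

def acquire(freq_hz, seconds=1.0):
    """Simulated EEG: a pure sine at the given frequency."""
    n = int(FS * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / FS) for i in range(n)]

def preprocess(signal):
    """Remove the DC offset (stand-in for real filtering)."""
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def extract_feature(signal):
    """Estimate the dominant frequency from zero crossings (1 s window)."""
    crossings = sum(1 for a, b in zip(signal, signal[1:])
                    if (a < 0) != (b < 0))
    return crossings / 2.0  # two crossings per cycle

def classify(freq):
    """Threshold classifier: alpha rhythm vs beta rhythm."""
    return "alpha" if freq < 13 else "beta"

def translate(label):
    """Map the decoded brain state to a device command (assumed mapping)."""
    return {"alpha": "RELAX_MODE", "beta": "FOCUS_MODE"}[label]

signal = preprocess(acquire(10))  # 10 Hz alpha-like rhythm
print(translate(classify(extract_feature(signal))))  # -> RELAX_MODE
```

Each function corresponds to one bullet in the list above, which is the useful mental model even when every stage is replaced by far more sophisticated machinery.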
Currently, numerous groups are contributing to the evolution of BCIs, developing applications tailored to each category of user. BCI is an application-oriented approach and depends heavily on user training; the EEG features determine the BCI system's speed, accuracy, bit rate, and usefulness. Each day, scientists and engineers improve the algorithms, the BCI sensor devices, and the techniques for acquiring high-quality data, raising the accuracy of these systems.
Steady-state visual evoked potentials (SSVEP) are signals generated when we look at something flickering, typically at frequencies between 1 and 100 Hz. In this experiment, the flickering lights are blinking LEDs; these blinking lights are the "stimuli". Consider a brain-computer interface whose goal is to decode the user's choice between two options, "left" or "right". There are two stimuli, one for selecting the "left" option and another for the "right". The two stimuli flicker at different frequencies: 11 Hz represents "turn left", while "turn right" is at 15 Hz. Users pick an option by focusing on the corresponding stimulus, for example, focusing on the "left" stimulus to select the "left" option. While the user focuses on a stimulus, that stimulus's frequency can be picked up over the occipital lobe. We can determine which light the user is focusing on by extracting the stimulus frequency from the EEG signals. This is how a BCI system interprets SSVEP brain signals into instructions for external devices.
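The frequency extraction described above can be sketched with the Goertzel algorithm, which measures signal power at a single target frequency. The "EEG" below is synthetic, and the sampling rate and noise level are illustrative assumptions; a real decoder would work on actual occipital-channel recordings.

```python
import math, random

# Sketch of SSVEP decoding for the left/right setup described above:
# measure power at each stimulus frequency (11 Hz vs 15 Hz) via the
# Goertzel algorithm and pick the stronger. All parameters are assumed.

FS = 256  # sampling rate in Hz (assumed)

def goertzel_power(signal, freq_hz, fs=FS):
    """Signal power at freq_hz, computed with the Goertzel algorithm."""
    n = len(signal)
    k = round(n * freq_hz / fs)          # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode_ssvep(signal):
    """Return 'left' (11 Hz stimulus) or 'right' (15 Hz stimulus)."""
    p_left = goertzel_power(signal, 11.0)
    p_right = goertzel_power(signal, 15.0)
    return "left" if p_left > p_right else "right"

# Synthetic occipital EEG: an 11 Hz response buried in random noise.
rng = random.Random(0)
eeg = [math.sin(2 * math.pi * 11 * i / FS) + rng.gauss(0, 0.5)
       for i in range(FS)]
print(decode_ssvep(eeg))  # -> left
```

Because the stimulus response is periodic, its power concentrates at one frequency bin, so even a noisy one-second window is usually enough to separate the two choices.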
The researchers behind the Berlin brain-computer interface employed sensorimotor rhythms, i.e., imagining moving the left or right hand, together with machine learning-based detection of user-specific brain states. Testing their trained model with 128-channel EEG, they achieved an information transfer rate above 35 bits per minute (bpm) and an overall spelling speed of 4.5 letters per minute, including the time spent correcting mistakes. They used feedback control with untrained users to properly train the machine learning algorithms, thereby reducing the user training time required by the voluntary-control approach.
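Information transfer rates like the one quoted above are commonly computed with Wolpaw's formula, which combines the number of possible targets, the classification accuracy, and the selection rate. The parameter values in the example below are illustrative assumptions, not the Berlin group's actual figures.

```python
import math

# Wolpaw information transfer rate: bits conveyed per selection for
# N targets at accuracy P, scaled by selections per minute.
# Example parameters are illustrative, not the Berlin BCI's real numbers.

def bits_per_selection(n_targets, accuracy):
    """Bits conveyed by one selection (Wolpaw's formula)."""
    p = accuracy
    bits = math.log2(n_targets)
    if 0.0 < p < 1.0:                    # guard log2(0) at the extremes
        bits += p * math.log2(p)
        bits += (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits

def itr_bits_per_minute(n_targets, accuracy, selections_per_minute):
    return bits_per_selection(n_targets, accuracy) * selections_per_minute

# e.g. a binary (left/right) BCI at 90% accuracy, 40 selections/minute:
print(round(itr_bits_per_minute(2, 0.90, 40), 1))  # -> 21.2
```

The formula makes the trade-off explicit: raising accuracy and raising selection speed both increase the bit rate, which is why feedback training that improves either one directly improves the quoted bits-per-minute figure.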
Army-Funded Algorithm Decodes Brain Signals
Brain signals contain dynamic neural patterns that reflect a combination of activities happening simultaneously. For example, the brain can drive typing a message on a keyboard while also registering that the person is thirsty. A long-standing challenge has been isolating the patterns in brain signals that relate to one specific behavior, such as finger movements. A new machine learning algorithm, developed with Army funding, can isolate those behavior-specific patterns and then decode them, potentially providing Soldiers with behavior-based feedback, according to a Nov. 2020 press release.
As part of a Multidisciplinary University Research Initiative grant awarded by ARO and led by Maryam Shanechi, assistant professor at the University of Southern California Viterbi School of Engineering, researchers developed a new machine learning algorithm to address the brain modeling and decoding challenge. The research is published in Nature Neuroscience. "Our algorithm can, for the first time, dissociate the dynamic patterns in brain signals that relate to specific behaviors and is much better at decoding these behaviors," Shanechi said in a statement.
The researchers tested the algorithm on standard brain datasets during the performance of various arm and eye movements. They showed that their algorithm discovered neural patterns in brain signals that directed these movements but were missed with standard algorithms. They also showed that the decoding of these movements from brain signals – predicting what the movement kinematics are by just looking at brain signals that generate the movement – was much better with their algorithm. “The algorithm has significant implications for basic science discoveries,” Krim said. “The algorithm can discover shared dynamic patterns between any signals beyond brain signals, which is widely applicable for the military and many other medical and commercial applications.”
Doing so is the first step in developing brain-machine interfaces that help restore lost function for people with neurological and mental disorders, which requires the translation of brain signals into a specific behavior, called decoding. "The impact of this work is of great importance to Army and DOD in general, as it pursues a framework for decoding behaviors from brain signals that generate them," said Dr. Hamid Krim, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory. "As an example future application, the algorithms could provide Soldiers with needed feedback to take corrective action as a result of fatigue or stress."
At any given time, people perform a myriad of tasks. All of the brain and behavioral signals associated with these tasks mix together to form a complicated web. Until now, this web has been difficult to untangle and translate. Shanechi said the reason for the new algorithm’s success was its ability to consider both brain signals and behavioral signals such as movement kinematics together, and then find the dynamic patterns that were common to these signals. This decoding also depends on our ability to isolate neural patterns related to the specific behavior. These neural patterns can be masked by patterns related to other activities and can be missed by standard algorithms. In the future, the new algorithm could also enhance future brain-machine interfaces by decoding behaviors better. For example, the algorithm could help allow paralyzed patients to directly control prosthetics by thinking about the movement.
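Shanechi's actual method models the shared dynamics between neural and behavioral signals; the sketch below is a drastically simplified stand-in that merely scores each neural channel by its correlation with a behavioral signal (movement kinematics) to separate "behaviorally relevant" from "behaviorally irrelevant" activity. All data, names, and the threshold are invented for illustration.

```python
import math, random

# Vastly simplified illustration of dissociating behaviorally relevant
# neural activity: score each channel by its correlation with the
# behavioral signal. The real algorithm models shared dynamics; this
# per-channel correlation screen is only a teaching stand-in.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def relevant_channels(neural, behavior, threshold=0.5):
    """Indices of channels whose activity tracks the behavioral signal."""
    return [i for i, ch in enumerate(neural)
            if abs(pearson(ch, behavior)) >= threshold]

rng = random.Random(1)
kinematics = [math.sin(i / 5.0) for i in range(100)]      # arm trajectory
neural = [
    [k + rng.gauss(0, 0.2) for k in kinematics],          # tracks movement
    [rng.gauss(0, 1.0) for _ in range(100)],              # unrelated activity
]
print(relevant_channels(neural, kinematics))  # -> [0]
```

Even this toy version shows the principle the press release describes: considering the brain and behavioral signals together is what lets the relevant patterns be pulled out of the "complicated web".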
Dr. Hamid Krim, a program manager at the Army Research Office, part of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory, told Nextgov that Shanechi and her team used the algorithm to separate what they call behaviorally relevant brain signals from behaviorally irrelevant brain signals. "This presents a potential way of reliably measuring, for instance, the mental overload of an individual, of a soldier," Krim said. If the algorithm detects behavior indicating a soldier is stressed or overloaded, a machine could alert that soldier before they are even able to recognize their own fatigue, Krim said. Improving self-awareness is central to the Army's interest in this research, he added.
If the machine-learning algorithm can successfully determine which specific behaviors, like walking and breathing, belong to which specific brain signals, then it has the potential to help the military maintain a more ready force. Eventually, Krim said, this research may contribute to the development of technology that can not only interpret signals from the brain but also send signals back to help individuals take automatic corrective action for certain behaviors. Imagination is the only limit when it comes to the potential of this technology, Krim said. Another futuristic application could enable soldiers to communicate with each other without ever opening their mouths. "If you're in the theater, and you can't talk, you can't even whisper, but you can still communicate," Krim said. "If you can talk to your machine, and the machine talks to the other machine, and the machine talks to the other soldier, you have basically a full link without ever uttering a word."