The future of controlling cars, UAVs, weaponry and robot armies at the speed of thought has arrived

A new era has arrived in which people use only their thoughts to control not just themselves, but the world around them. Every action our body performs begins with a thought, and with every thought comes an electrical signal. These electrical signals can be picked up by a Brain-Computer Interface (BCI), whether an electroencephalograph (EEG) or an implanted electrode, translated into commands, and then sent to the performing hardware to produce the desired action.

Brain-computer interfaces are being applied in neuroprosthetics, through which paralyzed people can control robotic arms; in neurogaming, where players control a keyboard, mouse and other inputs with their thoughts; in neuroanalysis (psychology); and now in military and defense, to control robotic soldiers or fly planes by thought.

Russian scientists have developed what they describe as the world’s first electric car controlled by the brain, with an innovative ‘telekinesis’ steering system that allows the exchange of information between the brain and the vehicle’s control systems. The ‘neuromobile’, as its creators call it, will allow people with limited mobility to cover long distances without help.

The military is interested in developing mind-controlled weaponry and remotely piloted aircraft, which could make operations and reactions far faster. The troops of tomorrow may be able to pull the trigger using only their minds. Mind-controlled weapons could allow soldiers to understand and act at the speed of cyber warfare, missiles and other fast-moving threats. In one DARPA experiment, a quadriplegic volunteer first controlled an artificial limb and then flew a flight simulator. Future systems might monitor the user’s nervous system and compensate for stress, fatigue, or injury.

Eventually, brain-computer interfaces could let people control augmented reality and virtual reality experiences with their mind instead of a screen or controller. Facebook’s CEO and CTO teased details of this “direct brain interface” technology at the F8 conference.

https://www.youtube.com/watch?v=iy1HdFqTr6A

Brain-Computer Interface (BCI) technology

A Brain-Computer Interface (BCI) is a setup permitting the control of external devices by decoding brain activity. The user’s intentions, reflected in signals of the central nervous system (CNS), are translated by the BCI system, which returns the desired output: communication via computer or control of an external device.

Brain-Computer Interface (BCI) technology relies on the real-time detection of specific neural patterns in order to circumvent the brain’s normal output channels of peripheral nerves and muscles, and thus implement direct mind control of external devices.

Electroencephalography (EEG) has been used extensively for decoding brain activity because it is non-invasive, cheap, and portable, and has temporal resolution high enough to allow real-time operation. Because of EEG’s poor spatial specificity, however, EEG-based BCIs can require extensive training and multiple trials to decode brain activity, which slows the operation of the BCI.
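To make this concrete, the minimal sketch below shows an offline EEG decoding pipeline of the kind such BCIs typically use, built with the open-source MNE-Python and scikit-learn libraries; the epochs file name is a hypothetical placeholder, and a real system would add artifact rejection and online calibration.

```python
import mne
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Load pre-cut EEG trials labelled e.g. "left hand" vs. "right hand" imagery
# (the file name is a placeholder for a real calibration recording).
epochs = mne.read_epochs("calibration-epo.fif")
X = epochs.get_data()            # shape: (trials, channels, samples)
y = epochs.events[:, -1]         # one class label per trial

# Common Spatial Patterns learns channel combinations that separate the
# classes, compensating somewhat for EEG's poor spatial specificity.
clf = make_pipeline(CSP(n_components=4), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```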

On the other hand, BCIs based on functional magnetic resonance imaging (fMRI) are more accurate, owing to fMRI’s superior spatial resolution and its sensitivity to functionally localized neuronal processes. However, its relatively low temporal resolution, high cost, and lack of portability make fMRI unlikely to be used for routine BCI operation.

Researchers from Auburn University, UCLA and elsewhere have proposed a new approach for transferring the capabilities of fMRI to EEG: simultaneous EEG/fMRI sessions are used to find a mapping from EEG to fMRI, after which the BCI is run from EEG data alone but driven by fMRI-like features obtained from the previously identified mapping.
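The paper does not prescribe a particular estimator here, but the idea can be sketched with a simple regression; the snippet below uses synthetic placeholder arrays and ridge regression to stand in for the learned EEG-to-fMRI mapping.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
eeg_feats = rng.standard_normal((200, 64))   # 200 joint trials x 64 EEG features
fmri_rois = rng.standard_normal((200, 10))   # matching fMRI ROI activations

# Step 1: learn the EEG -> fMRI mapping from the simultaneous session.
mapping = Ridge(alpha=1.0).fit(eeg_feats, fmri_rois)

# Step 2: in a later EEG-only run, predict "fMRI-like" features per trial
# and feed them to the BCI decoder in place of real fMRI data.
new_eeg = rng.standard_normal((1, 64))
fmri_like = mapping.predict(new_eeg)
print(fmri_like.shape)                       # (1, 10)
```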

Consumer-grade EEG sensors and BCI systems are now available on the market, free and open-source BCI software is available, and EEG signal-processing algorithms have improved.

Japanese, Russian and Chinese researchers develop brain-to-vehicle technology

Nissan has developed ‘brain-to-vehicle’ (B2V) technology that it says will redefine “how people interact with their cars” and challenge conventional thinking about a future of autonomous vehicles. “When most people think about autonomous driving, they have a very impersonal vision of the future, where humans relinquish control to the machines,” Nissan executive vice president Daniele Schillaci said. “Yet B2V technology does the opposite, by using signals from their own brain to make the drive even more exciting and enjoyable.”

Nissan said it has been researching how to “predict a driver’s actions and detect discomfort” through the use of “brain decoding technology”. “By catching signs that the driver’s brain is about to initiate a movement – such as turning the steering wheel or pushing the accelerator pedal – driver assist technologies can begin the action more quickly,” Nissan said. “This can improve reaction times and enhance manual driving.” “By anticipating intended movement, the systems can take actions – such as turning the steering wheel or slowing the car – 0.2 to 0.5 seconds faster than the driver, while remaining largely imperceptible,” Nissan said.
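Nissan has not published its B2V algorithms; purely as an illustration of the principle, the hedged sketch below flags a movement-related negativity in streaming EEG, the kind of cue an assist system could use to pre-arm steering or braking a fraction of a second before the muscles move. The sampling rate, window and threshold are all assumptions.

```python
import numpy as np

FS = 500                     # EEG sampling rate in Hz (assumed)
WINDOW = int(0.3 * FS)       # 300 ms sliding window
THRESHOLD = -4.0             # microvolt threshold, purely illustrative

def movement_intent(eeg_uv: np.ndarray) -> bool:
    """Crude stand-in for readiness-potential detection: flag intent when
    the mean of the most recent window dips below a negative threshold."""
    return float(eeg_uv[-WINDOW:].mean()) < THRESHOLD

# Demo on synthetic data: a slow negative drift before "movement onset".
signal = np.concatenate([np.zeros(1000), np.linspace(0.0, -8.0, 250)])
print(movement_intent(signal))   # True: assist systems could pre-arm here
```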

B2V technology could also be used to allow drivers to alter the environment in which they are driving, Nissan said. “The technology can use augmented reality to adjust what the driver sees and create a more relaxing environment,” it said.

The full-scale mind-controlled electric car was designed by specialists from the N.I. Lobachevsky State University of Nizhny Novgorod, Russia. The creators of the prototype said they had to develop a system for recording brain signals of different modalities, along with special algorithms allowing the car to decipher commands. The most innovative part of the vehicle is its control system, which uses special algorithms capable of reading different signal modes and transferring them to the machine’s control system.

Specifically, the algorithms will classify “a set of data on the physical state of the person” received by the control system in order to “discern the mental instructions of the pilot”, as explained at the university, reports RG. In other words, the system will determine which maneuver the driver was thinking of in a given traffic situation. The driver’s mental commands will then be transmitted to the actuators of the car’s control system.

The developers believe the unique car will go into mass production within three years and will be used primarily by handicapped citizens. The creators also stressed that the Russian neuromobile will prove not only advanced in terms of technology, but also economical and affordable: “A significant advantage is the low cost, 550,000 to 990,000 rubles [$9,000 to $16,000], while the purchase of foreign cars with similar characteristics will cost about 50% more.”

Researchers at Nankai University in Tianjin, China, are working alongside Chinese automaker Great Wall Motor to design a car that can be controlled by the mind alone. During the test, a subject wearing a 16-sensor headset was able to command the car to accelerate, brake, and open and shut its doors. “There are two starting points of this project. The first one is to provide a driving method without using hands or feet for the disabled who are unable to move freely; and secondly, to provide healthy people with a new and more intellectualized driving mode,” researcher Zhang Zhao told Reuters.

According to the researchers, the ultimate plan could be to integrate this technology with driverless cars, making it more of a complementary service than an alternative to physical driving. Professor Duan Feng, who led the project, told Reuters: “In the end, cars, whether driverless or not, and machines are serving for people. Under such circumstances, people’s intentions must be recognized. In our project, it makes the cars better serve human beings.”

A team of researchers at the Free University of Berlin has also explored brain interfaces for steering vehicles. The German team, led by artificial intelligence lab professor Dr. Raúl Rojas, used a headset with electroencephalography (EEG) sensors designed by bioinformatics company Emotiv. The system was able to interpret the driver’s intentions, such as the desire to turn left or right, accelerate or brake, and convert them into computer commands.


Facebook’s typing-by-brain project

Facebook announced that it is working on a “typing by brain” project. Regina Dugan, the head of Facebook’s R&D division Building 8, explained to conference attendees that the goal is to eventually allow people to type at 100 words per minute, five times faster than typing on a phone, using just their minds.

The team plans to use optical imaging to scan the brain’s speech center a hundred times per second, detect the words a person is silently speaking in their head, and translate them into text. The brain-computer interface would decode signals from the brain’s speech center at the remarkable rate of 100 words per minute.

“This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share by sending them to the speech center of your brain. Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world – speech – can only transmit about the same amount of data as a 1980s modem. We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today,” said Mark Zuckerberg.

Facebook has a research agreement with the Johns Hopkins Applied Physics Laboratory (APL) on developing a silent speech interface that will allow users to type 100 words per minute — five times faster than typing on a smartphone — using only their thoughts. Earlier this year, in collaboration with Johns Hopkins Medicine, APL demonstrated the ability to decode semantic information — information about the meanings of words — from neural signals measured using electrodes placed on the surface of the brain in patients undergoing treatment for epilepsy.

Researchers recorded electrocorticography (ECoG) signals while patients named objects from 12 different semantic categories, such as animals, foods and vehicles. “By learning the relationship between the semantic attributes associated with objects and the neural activity recorded when patients named these objects, we found that new objects could be decoded with very high accuracies,” said Michael Wolmetz, a cognitive neuroscientist at the Johns Hopkins Applied Physics Laboratory and one of the paper’s authors. “Using these methods, we observed how different semantic dimensions — whether an object is manmade or natural, how large it typically is, whether it’s edible, for example — were organized in each person’s brain.”
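The paper’s exact model is not reproduced here, but the attribute-based decoding idea can be sketched as a regression from neural features to a semantic attribute vector, followed by a nearest-neighbor match among candidate objects; all data and attribute vectors below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
ecog = rng.standard_normal((300, 128))               # trials x ECoG features
attrs = rng.integers(0, 2, (300, 12)).astype(float)  # 12 semantic attributes
                                                     # (manmade?, large?, edible?, ...)

# Learn the mapping from neural activity to semantic attributes.
model = Ridge(alpha=10.0).fit(ecog, attrs)

# Decode a new trial: predict its attributes, then pick the candidate
# object whose (placeholder) attribute vector is closest.
candidates = {"dog": attrs[0], "apple": attrs[1], "truck": attrs[2]}
pred = model.predict(rng.standard_normal((1, 128)))[0]
best = min(candidates, key=lambda name: np.linalg.norm(candidates[name] - pred))
print("decoded object:", best)
```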

DARPA’s Mind-Controlled Prosthetic Arm

DARPA’s prosthetic arm is designed to take the place of a real arm, letting the wearer control it with their thoughts, fired by their own brain cells. The robotic arm is connected by wires to the wearer’s motor cortex—the part of the brain that controls muscle movement—and sensory cortex, which registers tactile sensations when you touch things. In essence, its developers claim, it allows the user to feel things with their robotic hand.

In their research, the HAPTIX team is implanting electrodes in a patient’s muscles between the elbow and shoulder, as well as in individual nerve fascicles that correspond to wrist and finger control. According to the release, the researchers are also looking to develop minimally invasive procedures to implant electrodes in the spinal cord. The HAPTIX researchers seek to acquire and decode neural signals that could provide intuitive prosthetic control and restore sensory feedback using these neural interface systems.

“We want to re-establish communication between the motor parts of the nervous system and the prosthetic hand through the use of implantable electronics,” Weber said in a press release.

The HAPTIX program is in its second phase, which is scheduled to continue through 2018. The third phase is scheduled for 2019, when transradial amputees will be allowed to take home a HAPTIX-controlled system for extended trials outside the laboratory, the press release noted.

Researchers at the University of Pittsburgh were able to increase the maneuverability of a mind-controlled robotic arm from seven dimensions (7D) to ten dimensions (10D). The extra dimensions come from four hand movements—finger abduction, a scoop, thumb extension and a pinch—enabling the user to pick up, grasp and move a range of objects much more precisely than with the previous 7D control. This in turn helps paralyzed people control a robotic arm with a range of complex hand movements.

“The ultimate goal for HAPTIX is to create a device that is safe, effective, and reliable enough for use in everyday activities,” explains Doug Weber, the DARPA HAPTIX program manager.


California Institute of Technology’s BCI allows a Paralyzed Man to Drink Beer on His Own

A paralyzed man named Erik Sorto has been able to drink beer, shake hands and even play rock-paper-scissors, thanks to a robotic arm controlled solely by his mind.

For this experiment, California Institute of Technology neuroscientist Richard Andersen implanted the BCI’s electrodes in a different area of the brain: the posterior parietal cortex, located on top of the brain near the back. The parietal cortex is a center of higher-level cognition that processes the planning of movements rather than the details of how movements are executed.

An implant in this area allows the goal of an action to be conveyed directly to the robotic limb, producing more natural, fluid motions while reducing the number of neural signals needed to control its movement.

The implants differ from those used in BrainGate, which placed electrodes in the motor cortex, the part of the brain that directs voluntary physical activity. Because the motor cortex directly controls many different muscles, patients had to painstakingly focus, for any one gesture, on which muscles to activate for each specific component of that gesture. With those implants the patients could still control a robotic limb, but the movement was delayed and jerky.

Entertainment and gaming

Entertainment and gaming applications have opened the market for nonmedical brain-computer interfaces. Various games have been demonstrated, such as one in which helicopters are flown to any point in a 2D or 3D virtual world. BrainArena allows players to play a collaborative or competitive football game by means of two BCIs, scoring goals by imagining left- or right-hand movements.

The Emotiv EPOC allows users to control a laptop’s keyboard and mouse, as well as move characters in games. MUSE is intended to let you control your iPhone or Android device with your mind. With ThynkWare, anyone can use their thoughts to control their smartphones, tablets, home, office, TV, robots, and even clothing.

Mind-controlled telepresence robot

A relatively new field of research is telepresence, which allows a human operator to have an at-a-distance presence in a remote environment via a brain-actuated robot.

A telepresence robot developed at the École Polytechnique Fédérale de Lausanne (EPFL) that can be controlled by thought may give people with severe motor disabilities a greater level of independence. Successfully put through its paces by 19 people scattered around Central Europe – nine of whom are quadriplegic and all of whom were hooked up to a brain-machine interface – the robot handled obstacle detection and avoidance on its own while the person controlling it gave general navigation instructions.

Military Applications

A report, “Neuroscience, Conflict and Security”, formed part of a series examining the impact of neuroscience on society, and dealt specifically with the potential application of advances in neuroscience to the armed forces and security personnel.

A key advance in neuroscience has been improvements in real-time neuro-imaging, which can indicate in great detail which parts of the brain “light up” when undertaking certain activities. One of its applications could be to screen potential recruits for a specific role, for example to see if they are temperamentally suited to be a commander, pilot or diver.

“At the moment it’s very much a case of taking people on and subjecting them to high-stress exercises and choosing the ones who make it,” says Flower. “If they could be subjected to imaging during assessment you could identify who has good risk-taking behaviour, strategy and planning ability, or 3D analytical skills.”

Brain scanning could also speed up and improve target recognition, or help identify changes in surveillance satellite images, by detecting subconscious recognition of objects rather than requiring an operator to consciously process and actively react.

“It has been discovered that when you show the brain different images, it spots the differences between them even though they may not reach conscious awareness,” says Flower. “Wearing a helmet like a hairnet can pick up a spike in brain activity which you can correlate to differences identified between two images, even if they were flashed up too quickly to process consciously.”

That potentially has the ability not only to speed up the process of target selection but also to improve accuracy. It could also reduce problems associated with fatigue, which is a big issue for people whose job involves scanning images for long periods, especially in the dark, such as surveillance UAV operators.

The obvious application for the military is mind-controlled weaponry and remotely-piloted aircraft, which could make operation and reactions far faster. “If you couple that with your subconscious mind being much faster at dealing with information you can see a situation sometime in the future where you’re not thinking about flying the aircraft, but your subconscious is doing it without interfering in any way,” says Flower. “You would probably have a much better appreciation of an incoming threat and fire off a couple of missiles without having to consciously think.”

Like automated weaponry and battlefield robotics, however, these new techniques could require an overhaul of ethical guidelines, especially with regards to civilian casualties. Currently the last person who gave the order to fire is responsible, but if it came from the operator’s subconscious, the line becomes blurred.

The US DOD is now planning to teach AI to monitor its user’s level of stress, exhaustion, distraction, and so on; this helps the machine adapt itself to better serve the human, instead of the other way around. Teaching AI to instantly detect its user’s intention to give a command, instead of requiring a relatively laborious push of a button, helps the human keep control, instead of having to let the AI off the leash because no human can keep up with it.
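No DOD implementation details are public; the toy sketch below only illustrates the adaptation loop described above, raising the machine’s autonomy as the estimated operator load climbs. The features, weights and thresholds are all assumptions.

```python
def autonomy_level(stress: float, fatigue: float) -> str:
    """Map estimated operator state (each in [0, 1]) to an autonomy setting.
    Weights and thresholds are illustrative assumptions, not DOD values."""
    load = 0.6 * stress + 0.4 * fatigue
    if load > 0.8:
        return "high"     # machine takes over routine sub-tasks
    if load > 0.5:
        return "medium"   # machine suggests, human confirms
    return "low"          # human stays in direct control

# An overloaded operator pushes the system toward more automation.
print(autonomy_level(stress=0.9, fatigue=0.7))   # -> "high"
```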

“Making our robots wait for human permission would slow them down so much that enemy AI without such constraints would beat us,” said Frank Kendall, Defense Secretary Ash Carter’s undersecretary for acquisition and technology. Vice-Chairman of the Joint Chiefs, Gen. Paul Selva, calls this the “Terminator Conundrum.” Neuroscience suggests a way out of this dilemma: instead of slowing the AIs down, make the humans’ orders come faster.

“Can we develop precise neurotechnologies that can go to those circuits in the brain or the peripheral nervous system in real time?” said Justin Sanchez, director of DARPA’s Biological Technologies Office. “Do we have computational systems that allow us to understand what the changes in those signals [mean]? And can we give meaningful feedback, either to the person or to the machine, to help them do their job better?”


Flying manned aircraft and weaponized UAVs by mind

The University of Florida recently held an event organizers claimed was the “world’s first brain drone race,” featuring unmanned aerial vehicles powered by the brain activity of contestants. The race was billed as a “competition of one’s cognitive ability and mental endurance requiring competitors to out-focus an opponent in a drone drag race fueled by electrical signals emitted from the brain.”

Pilots don electroencephalogram headsets that are calibrated to each wearer’s brain. For example, neuron activity is recorded when the wearer is told to think about pushing something forward. This activity is then bound to the forward stick on the drone’s controller, so similar neuron activity in the future will move the drone forward. Organizers of the event describe BCI as “the utilization of a brain imaging device for the purpose of controlling machines with the human brain and to understand the human’s emotional condition or state.”
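As a rough sketch of that calibrate-then-bind step (not the race organizers’ actual software), the snippet below trains a per-pilot classifier on labelled calibration features and maps the “push forward” class to a forward command; the feature data are synthetic and the command sender is a stand-in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 32))      # calibration trials x EEG features
labels = (feats[:, 0] > 0).astype(int)      # 1 = "push forward" imagery (toy labels)

clf = LogisticRegression(max_iter=1000).fit(feats, labels)  # per-pilot calibration

def on_brain_sample(sample: np.ndarray, send_cmd) -> None:
    """Bind the decoded 'push forward' class to the drone's forward stick."""
    if clf.predict(sample.reshape(1, -1))[0] == 1:
        send_cmd("forward")

sample = np.zeros(32)
sample[0] = 3.0                             # strongly "forward"-like features
on_brain_sample(sample, print)              # prints: forward
```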

The University of Minnesota carried out a successful demonstration of a thought-controlled mini-helicopter that could be piloted through obstacles with around 90% accuracy.

A team of engineers at Technische Universität München in Germany developed an algorithm that can convert brain waves into flight commands. EEG cables send electrical signals to a computer, whose mind-control algorithm extracts the pilot’s plane-control thoughts; the computer then converts the electrical signals into actions that are carried out wirelessly.

In the future, soldiers may be able to control both manned aircraft and weaponized UAVs in all phases of flight.

A researcher from Arizona State University has found a way to control multiple drones using nothing but the power of thought. The controller wears a skull cap containing hundreds of electrodes wired to a computer. The wearer then thinks specific commands, the computer translates them into instructions, and the robots obey.


Russian Scientists Develop Mind-Controlled Quadcopter

Zelenograd-based company Neurobotics has designed a mind-controlled quadcopter that can not only fly in four directions (forward, backward, right and left) but also reach a specific target point, the report said.

“Commands, or ‘conditions’ as we call them, are generated by the sensors on the head of an operator. The person thinks about certain actions at right moments which the system then recognizes and identifies,” Neurobotics director Vladimir Konyshev explained, as cited by the report.

The new technology has great potential for the future. It would not only be of great help to the Russian Armed Forces on the battlefield; its interface could also be used to help people with limited mobility, Konyshev added.

However, a basic limitation of current interfaces is that they decode only simple commands such as ‘left’ and ‘right’. Future devices would need to capture and decode the brain signals responsible for small, precise movements in order to accomplish complex tasks such as landing an aircraft. The accuracy of such systems also needs to be enhanced.

Brain-Controlled Military Robots

Brain-Robot Interaction (BRI) refers to the ability to control a robot system via brain signals and is expected to play an important role in the application of robotic devices in many fields.

China developing a mind-controlled robot army

In a recent demonstration at the People’s Liberation Army Information Engineering University in Zhengzhou, students showed they could control the movement of small robots using only their minds. They were also able to turn the robots’ heads and have them pick up objects. The Chinese army of the future could see robot soldiers controlled by military commanders’ minds.

The technology faces three major engineering challenges that need to be addressed before soldiers can seamlessly control remote military robots on the battlefield. First, the efficiency and accuracy of non-invasive BCIs, which are slow and somewhat unreliable at present, need to be enhanced; second, they tend to make high cognitive demands on the user; and third, especially for tele-operation via the internet, variable communication delays are a significant problem.

Until then, most of the systems currently being developed adopt a ‘shared-control’ approach, equipping the robot agent with a degree of intelligence that allows it to work semi-autonomously, as in the sketch below.
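This is a minimal sketch of one common shared-control scheme, not any specific military system: the BCI supplies a coarse velocity command while the robot blends in its own obstacle-avoidance behaviour, weighting the latter more heavily as obstacles get closer. All names and gains are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Command:
    linear: float    # forward speed
    angular: float   # turn rate

def shared_control(bci_cmd: Command, avoid_cmd: Command,
                   obstacle_proximity: float) -> Command:
    """Blend the user's BCI command with the robot's avoidance command.
    obstacle_proximity in [0, 1]: 0 = clear path, 1 = imminent collision."""
    w = min(max(obstacle_proximity, 0.0), 1.0)
    return Command(
        linear=(1 - w) * bci_cmd.linear + w * avoid_cmd.linear,
        angular=(1 - w) * bci_cmd.angular + w * avoid_cmd.angular,
    )

# User thinks "forward" but a wall looms: the blend slows down and steers away.
print(shared_control(Command(0.5, 0.0), Command(0.1, 0.8), obstacle_proximity=0.7))
```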


BNCI Horizon 2020 project

The European Commission has released a roadmap on BCI, the “BNCI Horizon 2020” project, with the objective of providing a global perspective on the BCI field now and in the future. Many of the applications are centered on the needs of people with serious injuries and disabilities. BCIs will give people more awareness of their own biological and mental state, and will also promote restoration of lost function.

Hence, interfacing directly with the brain will be at the forefront of both societal and medical evolution. Relevant application areas include social interaction and recreation, occupational safety, quality of life, independent living in old age, and (occupational) rehabilitation.

By 2025, according to the BCI roadmap, a broad range of brain-controlled applications is expected to be standard in medical treatment and therapy, as well as in monitoring personal health.

BCI technology has been advancing at a rapid pace, to the point that it is now possible to externally control computers, smartphones, and even vehicles with thought.

