DARPA’s Safe Genes program aims to prevent global bioerror and biothreats

CRISPR allows scientists to remove a single (defective) gene from a genome and replace it with another, with the potential to prevent genetic diseases. CRISPR “has transformed labs around the world,” says Jing-Ruey Joanna Yeh, a chemical biologist at Massachusetts General Hospital’s Cardiovascular Research Center, in Charlestown, who contributed to the development of the technology. “Because this system is so simple and efficient, any lab can do it.” Editing with CRISPR is like placing a cursor between two letters in a word processing document and hitting “delete” or clicking “paste.” And the tool can cost less than US $50 to assemble.
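The “cursor” analogy can be made concrete with a toy sketch. This is purely illustrative: real CRISPR editing targets sites via guide RNAs and relies on the cell’s repair machinery, and the sequences here are invented.

```python
# Toy illustration of gene editing as find, "delete," and "paste" on a
# string of DNA bases. Pedagogical only -- real CRISPR editing uses a
# guide RNA to locate the target and cellular repair to insert the edit.

def edit_sequence(genome: str, target: str, replacement: str) -> str:
    """Find `target` in `genome`, delete it, and paste in `replacement`."""
    site = genome.find(target)
    if site == -1:
        raise ValueError("target site not found")
    return genome[:site] + replacement + genome[site + len(target):]

genome = "ATGGTACCTTAGCGA"            # invented sequence
edited = edit_sequence(genome, "CCTT", "CGTT")  # swap a "defective" motif
print(edited)  # ATGGTACGTTAGCGA
```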


Recently, China announced it had genetically engineered hyper-muscular “super-dogs.” The dogs, bred in a lab, have twice the muscle mass of their natural counterparts and are considerably stronger and faster. An army of super-humans has been a staple of science fiction and superhero comics for decades, but the super-dog technology brings it closer to reality. The beagle puppy, one of 27, was genetically engineered by ‘deleting’ a gene called myostatin, giving it double the muscle mass of a normal beagle.


The advanced gene-editing technology has been touted as a breakthrough that could herald the dawn of ‘superbreeds’ that are stronger, faster, and better at running and hunting. The Chinese official line is that the dogs could potentially be deployed to frontline service to assist police officers. Dr. Lai Liangxue, a researcher at the Guangzhou Institutes of Biomedicine and Health, said: “This is a breakthrough, marking China as only the second country in the world to independently master dog-somatic clone technology, after South Korea.”


The US DOD is also applying gene-editing technology to military applications. At the second biennial Department of Defense Lab Day on May 18, 2017, one AFRL exhibit highlighted research into how geneticists and medical researchers edit parts of the genome, removing, adding, or altering sections of the DNA sequence, in order to eliminate a virus or disease caused by harmful chemical, biological, or environmental agents a warfighter may come into contact with.


Yet without careful precautions, a gene drive released into the wild could spread or change in unexpected ways. A lethal gene engineered into a pest species, say, might accidentally jump (or, as biologists put it, “horizontally transfer”) into another species that is a crucial part of an ecosystem.

Kevin Esvelt, head of the Sculpting Evolution lab at MIT Media Lab, which is applying for Safe Genes funding in collaboration with eight other research groups, predicts that eventually, perhaps around 15 years from now, an accident will allow a drive with potential to spread globally to escape laboratory controls. “It’s not going to be bioterror,” he says, “it’s going to be ‘bioerror.’”


This summer, the Daily Star warned that the terrorist group ISIS is using gene drives to make “supercharged killer mosquitoes.” Experts regard that as unlikely. But the idea that gene drives pose a biosecurity threat is anything but far-fetched. Because the technology to create a gene drive is widely accessible and inexpensive, biologist Kevin Esvelt, then of the Wyss Institute for Biologically Inspired Engineering at Harvard University, warned the scientific panel at an earlier meeting, “We have never dealt with anything like this before,” as reported by science writer Sharon Begley.


The possibilities for “weaponizing” gene drives range from suppressing pollinators, which could destroy an entire country’s agriculture system, to giving innocuous insects the ability to carry diseases such as dengue, said MIT political scientist Kenneth Oye, who briefed the bioweapons office. Gene drive is particularly worrisome because “it’s not just one or two labs that are capable of doing the work,” Oye said — and the “capable” could include do-it-yourself “garage biologists.”


The U.S. Defense Advanced Research Projects Agency (DARPA) has awarded a combined $65 million over four years to seven research teams toward projects designed to make gene editing technologies safer, more targeted and potentially even reversible. DARPA’s Safe Genes program aims to deliver novel biological capabilities to facilitate the safe and expedient pursuit of advanced genome editing applications, while also providing the tools and methodologies to mitigate the risk of unintentional consequences or intentional misuse of these technologies.



Setting a Safe Course for Gene Editing Research: DARPA

Gene editing technologies have captured increasing attention from healthcare professionals, policymakers, and community leaders in recent years for their potential to selectively disable cancerous cells in the body, control populations of disease-spreading mosquitos, and defend native flora and fauna against invasive species, among other uses. The potential national security applications and implications of these technologies are equally profound, including protection of troops against infectious disease, mitigation of threats posed by irresponsible or nefarious use of biological technologies, and enhanced development of new resources derived from synthetic biology, such as novel chemicals, materials, and coatings with useful, unique properties, says DARPA.


Achieving such ambitious goals, however, will require more complete knowledge about how gene editors, and derivative technologies including gene drives, function at various physical and temporal scales under different environmental conditions, across multiple generations of an organism. In parallel, demonstrating the ability to precisely control gene edits, turning them on and off under certain conditions or even reversing their effects entirely, will be paramount to translation of these tools to practical applications. By establishing empirical foundations and removing lingering unknowns through laboratory-based demonstrations, the Safe Genes teams will work to substantially minimize the risks inherent in such powerful tools.


A new DARPA program could help unlock the potential of advanced gene editing technologies by developing a set of tools to address potential risks of this rapidly advancing field. The Safe Genes program envisions addressing key safety gaps by using those tools to restrict or reverse the propagation of engineered genetic constructs.


“Gene editing holds incredible promise to advance the biological sciences, but right now responsible actors are constrained by the number of unknowns and a lack of controls,” said Renee Wegrzyn, DARPA program manager. “DARPA wants to develop controls for gene editing and derivative technologies to support responsible research and defend against irresponsible actors who might intentionally or accidentally release modified organisms.”


Safe Genes was inspired in part by recent advances in the field of “gene drives,” which can alter the genetic character of a population of organisms by ensuring that certain edited genetic traits are passed down to almost every individual in subsequent generations. Scientists have studied self-perpetuating gene drives for decades, but the 2012 development of the genetic tool CRISPR-Cas9, which facilitates extremely precise genetic edits, radically increased the potential value of, and in some quarters the demand for, experimental gene drives.
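Why a drive spreads so much faster than an ordinary edited gene can be seen in a minimal allele-frequency sketch, under idealized assumptions (random mating, no fitness cost, infinite population); the conversion efficiency is an invented parameter, not a measured value:

```python
# Deterministic model of gene-drive spread vs. ordinary Mendelian
# inheritance. `c` is the probability that a heterozygote's wild-type
# allele is converted to the drive allele (c=0: Mendelian, c=1: perfect
# drive), so heterozygotes transmit the drive with probability (1+c)/2.

def next_frequency(p: float, c: float) -> float:
    """Drive-allele frequency in the next generation's gametes."""
    q = 1.0 - p
    # Homozygotes (p^2) always transmit; heterozygotes (2pq) with (1+c)/2.
    return p * p + 2 * p * q * (1 + c) / 2

def spread(p0: float, c: float, generations: int) -> list:
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_frequency(freqs[-1], c))
    return freqs

mendelian = spread(0.01, c=0.0, generations=10)   # stays near 1%
drive = spread(0.01, c=0.95, generations=10)      # sweeps toward fixation
print(f"after 10 generations: Mendelian {mendelian[-1]:.3f}, "
      f"drive {drive[-1]:.3f}")
```

Starting from a 1% release, the Mendelian allele stays at 1% while the driven allele approaches fixation within roughly ten generations, which is the property that makes both the promise and the risk of gene drives so stark.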


Traditional biosafety and biosecurity measures including physical biocontainment, research moratoria, self-governance, and regulation are not designed for technologies that are, in fact, explicitly intended for environmental release and are widely available to users who operate outside of conventional institutions. The goal of Safe Genes is to build in biosafety for new biotechnologies at their inception, provide a range of options to respond to synthetic genetic threats, and create an understanding of what is possible, probable, and vulnerable with regard to emergent gene editing technologies. “DARPA is pursuing a suite of versatile tools that can be applied independently or in combination to support bio-innovation or combat bio-threats,” Wegrzyn said.


From a national security perspective, Safe Genes addresses the inherent risks that arise from the rapid democratization of gene editing tools. The steep drop in the costs of genomic sequencing and gene editing toolkits, along with the increasing accessibility of this technology, translates into greater opportunity to experiment with genetic modifications. This convergence of low cost and high availability means that applications for gene editing—both positive and negative—could arise from people or states operating outside of the traditional scientific community.


DARPA Awards $65M to Improve Gene-Editing Safety, Accuracy

The U.S. Defense Advanced Research Projects Agency (DARPA) has awarded a combined $65 million over four years to seven research teams toward projects designed to improve the safety and accuracy of gene editing.


The funding is being awarded under DARPA’s Safe Genes program, designed to gain fundamental understanding of how gene-editing technologies function; devise means to safely, responsibly, and predictably harness them for beneficial ends; and address potential health and security concerns related to their accidental or intentional misuse.


Efforts funded under the Safe Genes program fall into two broad categories: gene drive and genetic remediation technologies, and in vivo therapeutic applications of gene editors in mammals. Much of the research will look at ways to inhibit gene drive systems. The obvious concern with gene drive techniques is that it is impossible to know the full ramifications of releasing a genetic modification into the environment until it actually happens.


DARPA said the seven teams chosen for the funding will be pursuing one or more of three technical objectives:

  • Develop genetic constructs—biomolecular “instructions”—that provide spatial, temporal, and reversible control of genome editors in living systems;
  • Devise new drug-based countermeasures that provide prophylactic and treatment options to limit genome editing in organisms and protect genome integrity in populations of organisms; and
  • Create a capability to eliminate unwanted engineered genes from systems and restore them to genetic baseline states.


  1. A team led by Dr. Amit Choudhary (Broad Institute/Brigham and Women’s Hospital-Renal Division/Harvard Medical School) is developing means to switch on and off genome editing in bacteria, mammals, and insects, including control of gene drives in a mosquito vector for malaria, Anopheles stephensi. The team seeks to build a general platform for the rapid and cost-effective identification of chemicals that will block contemporary and next-generation genome editors. Such chemicals could propel the development of therapeutic applications of genome editors by limiting off-target effects or protect against future biological threats. The team will also construct synthetic genome editors for precision genome engineering.


  2. A Harvard Medical School team led by Dr. George Church seeks to develop systems to safeguard genomes by detecting, preventing, and ultimately reversing mutations that may arise from exposure to radiation. This work will involve creation of novel computational and molecular tools to enable the development of precise editors that can distinguish between highly similar genetic sequences. The team also plans to screen the effectiveness of natural and synthetic drugs to inhibit gene editing activity.


  3. A Massachusetts General Hospital (MGH) team led by Dr. Keith Joung aims to develop novel, highly sensitive methods to control and measure on-target genome editing activity—and limit and measure off-target activity—and apply these methods to regulate the activity of mosquito gene drive systems over multiple generations. State-of-the-art technologies for measuring on- and off-target activity require specialized expertise; the MGH team hopes to enable orders of magnitude higher sensitivity than what is available with existing methods and make this process routine and scalable. The team will also develop novel strategies to achieve control over genome editors, including drug-regulated versions of these molecules. The team will take advantage of contained facilities that simulate natural environments to study how drive systems perform in mosquitos under conditions approximating the real world.


  4. A Massachusetts Institute of Technology (MIT) team led by Dr. Kevin Esvelt has been selected to pursue modular “daisy drive” platforms with the potential to safely, efficiently, and reversibly edit local sub-populations of organisms within a geographic region of interest. Daisy drive systems are self-exhausting because they sequentially lose genetic elements until the drive system stops spreading. In one proposed variant, natural selection is anticipated to favor the edited or original version depending on which is in the majority, keeping genetic alterations confined to a specified region and potentially allowing targeted populations of organisms to be restored to wild-type genetics. MIT plans to conduct the majority of its work in nematodes, a simple type of worm that reproduces rapidly, enabling high-throughput testing of different drive configurations and predictive models over multiple generations. The team then aims to adapt this system in the laboratory for up to three key mosquito species relevant to human and animal health, gradually improving performance in mosquitos through an iterative cycle of model, test, and refine.


  5. A North Carolina State University (NCSU) team led by Dr. John Godwin aims to develop and test a mammalian gene drive system in rodents. The team’s genetic technique targets population-specific genetic variants found only in particular invasive communities of animals. If successful, the work will expand the tools available to manage invasive species that threaten biodiversity and human food security, and that serve as potential reservoirs of infectious diseases affecting native animal and human populations. The team also plans to develop mathematical models of how drives would function in mice, and then perform testing in contained, simulated natural environments to gauge the robustness, spatial limitation, and reversibility of the drives.


  6. A University of California, Berkeley team led by Dr. Jennifer Doudna will investigate the development of novel, safe gene editing tools for use as antiviral agents in animal models, targeting the Zika and Ebola viruses. The team will also aim to identify anti-CRISPR proteins capable of inhibiting unwanted genome-editing activity, while developing novel strategies for delivery of genome editors and inhibitors.


  7. A University of California, Riverside team led by Dr. Omar Akbari seeks to develop robust and reversible gene drive systems for control of Aedes aegypti mosquito populations, to be tested in contained, simulated natural environments. Preliminary testing will be conducted in high-throughput, rapidly reproducing populations of yeast as a model system. As part of this effort, the team will establish new temporal and environmental, context-dependent molecular strategies programmed to limit gene editor activity, create multiple capabilities to eliminate unwanted gene drives from populations through passive or active reversal, and establish mathematical models to inform design of gene drive systems and establish criteria for remediation strategies. In support of these goals, the team will sample the diversity of wild populations of Ae. aegypti.
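The self-exhausting behavior of the daisy-drive approach described in the MIT entry above can be caricatured in a few lines; the decay rate and stopping threshold below are invented for illustration and are not taken from Esvelt’s published designs:

```python
# Caricature of a "daisy drive" chain C -> B -> A: each element drives
# the one below it, but nothing drives the topmost element, so it is
# diluted out by ordinary inheritance (modeled here as a fixed
# per-generation decay). Once an element is gone, the element below it
# becomes the unprotected top of the chain. All rates are invented.

def daisy_generations(chain_len: int, decay: float = 0.5,
                      threshold: float = 0.01) -> int:
    """Generations until the payload stops spreading, in this toy model."""
    gens = 0
    for _ in range(chain_len):
        freq = 1.0  # frequency of the current topmost element
        while freq > threshold:
            freq *= decay   # topmost element dilutes every generation
            gens += 1
        # chain shortens: the next element down is now the top
    return gens

print(daisy_generations(1), daisy_generations(3))
```

The point of the sketch is qualitative: a longer daisy chain keeps the payload spreading for proportionally more generations, and the drive then halts on its own, which is exactly the locality property DARPA is funding.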


“Part of our challenge and commitment under Safe Genes is to make sense of the ethical implications of gene-editing technologies, understanding people’s concerns, and directing our research to proactively address them so that stakeholders are equipped with data to inform future choices,” Renee Wegrzyn, Ph.D., manager of the Safe Genes program, said in a statement.


“As with all powerful capabilities, society can and should weigh the risks and merits of responsibly using such tools. We believe that further research and development can inform that conversation by helping people to understand and shape what is possible, probable, and vulnerable with these technologies.”



DARPA developing high-bandwidth neural interfaces for treating sensory disorders and enabling brain warfare systems

DARPA announced NESD in January 2016 with the goal of developing an implantable system able to provide precision communication between the brain and the digital world. Such an interface would convert the electrochemical signaling used by neurons in the brain into the ones and zeros that constitute the language of information technology, and do so at far greater scale than is currently possible. The work has the potential to significantly advance scientists’ understanding of the neural underpinnings of vision, hearing, and speech and could eventually lead to new treatments for people living with sensory deficits.

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

“The NESD program looks ahead to a future in which advanced neural devices offer improved fidelity, resolution, and precision sensory interface for therapeutic applications,” Alvelda said. “By increasing the capacity of advanced neural interfaces to engage more than one million neurons in parallel, NESD aims to enable rich two-way communication with the brain at a scale that will help deepen our understanding of that organ’s underlying biology, complexity, and function.”

Although the goal of communicating with one million neurons sounds lofty, Alvelda noted, “A million neurons represents a minuscule percentage of the 86 billion neurons in the human brain. Its deeper complexities are going to remain a mystery for some time to come. But if we’re successful in delivering rich sensory signals directly to the brain, NESD will lay a broad foundation for new neurological therapies.”

The research would enable highly efficient brain-computer interfaces with applications in neuroprosthetics, through which paralyzed persons can control robotic arms; neurogaming, where one can control a keyboard, mouse, and games using thoughts; neuroanalysis in psychology; and, in defense, controlling robotic soldiers or flying planes with thoughts.

Such research could also lead to brain-control devices. Researchers at the University of Zurich have identified the brain mechanism that governs decisions between honesty and self-interest; using non-invasive brain stimulation, they could even increase honest behavior. Governments may also be interested in mind control to spread propaganda while disrupting dissent, and militaries in mind control of soldiers. A whistleblower recently made claims about a secret DARPA military mind-control project at a major university.


DARPA has awarded contracts to five research organizations and one company that will support the Neural Engineering System Design (NESD) program: Brown University; Columbia University; Fondation Voir et Entendre (The Seeing and Hearing Foundation); John B. Pierce Laboratory; Paradromics, Inc.; and the University of California, Berkeley.

These organizations have formed teams to develop the fundamental research and component technologies required to pursue the NESD vision of a high-resolution neural interface and integrate them to create and demonstrate working systems able to support potential future therapies for sensory restoration. Four of the teams will focus on vision and two will focus on aspects of hearing and speech.


DARPA’s “Neural Engineering System Design” program

The NESD program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program’s first year will focus on making fundamental breakthroughs in hardware, software, and neuroscience, and testing those advances in animals and cultured cells. Phase II of the program calls for ongoing basic studies, along with progress in miniaturization and integration, with attention to possible pathways to regulatory approval for human safety testing of newly developed devices. As part of that effort, researchers will cooperate with the U.S. Food and Drug Administration (FDA) to begin exploration of issues such as long-term safety, privacy, information security, compatibility with other devices, and the numerous other aspects regulators consider as they evaluate potential applications of new technologies.

The NESD call for proposals laid out a series of specific technical goals, including development of an implantable package that accounts for power, communications, and biocompatibility concerns. Part of the fundamental research challenge will be developing a deep understanding of how the brain processes hearing, speech, and vision simultaneously with individual neuron-level precision and at a scale sufficient to represent detailed imagery and sound. The selected teams will apply insights into those biological processes to the development of strategies for interpreting neuronal activity quickly and with minimal power and computational resources.

“Significant technical challenges lie ahead, but the teams we assembled have formulated feasible plans to deliver coordinated breakthroughs across a range of disciplines and integrate those efforts into end-to-end systems,” Alvelda said.

Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.

Successful NESD proposals must culminate in the delivery of complete, functional, implantable neural interface systems and the functional demonstration thereof. The final system must read at least one million independent channels of single-neuron information and stimulate at least one hundred thousand channels of independent neural action potentials in real-time. The system must also perform continuous, simultaneous full-duplex interaction with at least one thousand neurons. While DARPA desires a single 1 cm3 device that satisfies all of these capabilities (read, write, and full-duplex), proposers may propose a design wherein each capability is embodied in separate 1 cm3 devices. Proposed implementations must not require tethers or percutaneous connectors for powering or facilitating communication between the implanted and external portions of the system.
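A back-of-envelope calculation suggests why these requirements are demanding. The sampling rate, bit depth, and spike rate below are illustrative assumptions, not NESD specifications:

```python
# Rough data-rate estimate for reading one million independent
# single-neuron channels, as the NESD goals require. All electrical
# parameters here are assumptions chosen for illustration.

read_channels = 1_000_000
sample_rate_hz = 30_000   # a common rate for resolving spike waveforms
bits_per_sample = 10      # modest ADC resolution

raw_bits_per_s = read_channels * sample_rate_hz * bits_per_sample
print(f"raw read bandwidth: {raw_bits_per_s / 1e9:.0f} Gbit/s")

# Spike detection on the implant would shrink this dramatically by
# sending only (channel id, timestamp) events rather than raw waveforms.
spikes_per_neuron_s = 5   # assumed average firing rate
bits_per_event = 32       # ~20 bits of channel id plus timing
event_bits_per_s = read_channels * spikes_per_neuron_s * bits_per_event
print(f"event-coded bandwidth: {event_bits_per_s / 1e6:.0f} Mbit/s")
```

Streaming raw waveforms from a million channels would demand hundreds of gigabits per second through a one-cubic-centimeter wireless device, which is why on-implant processing and compact neural coding figure so prominently in the program's research goals.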

DARPA anticipates investing up to $60 million in the NESD program over four years. NESD is part of a broader portfolio of programs within DARPA that support President Obama’s BRAIN Initiative.


Details of DARPA’s NESD awards

The teams’ approaches include a mix of fundamental research and applied science and engineering. The teams will either pursue development and integration of complete NESD systems, or advance particular aspects of the research, engineering, and mathematics required to achieve the NESD vision, providing new tools, capabilities, and understanding. Summaries of the teams’ proposed research appear below:

A Brown University team led by Dr. Arto Nurmikko will seek to decode neural processing of speech, focusing on the tone and vocalization aspects of auditory perception. The team’s proposed interface would be composed of networks of up to 100,000 untethered, submillimeter-sized “neurograin” sensors implanted onto or into the cerebral cortex. A separate RF unit worn or implanted as a flexible electronic patch would passively power the neurograins and serve as the hub for relaying data to and from an external command center that transcodes and processes neural and digital signals.

“What we’re developing is essentially a micro-scale wireless network in the brain enabling us to communicate directly with neurons on a scale that hasn’t previously been possible,” Arto Nurmikko, a professor of engineering at Brown, said in a statement. “The understanding of the brain we can get from such a system will hopefully lead to new therapeutic strategies involving neural stimulation of the brain, which we can implement with this new neurotechnology.”

A Columbia University team led by Dr. Ken Shepard will study vision and aims to develop a non-penetrating bioelectric interface to the visual cortex that could eventually enable computers to see what we see, or potentially allow human brains to tap directly into video feeds. The team envisions layering over the cortex a single, flexible complementary metal-oxide semiconductor (CMOS) integrated circuit containing an integrated electrode array. A relay station transceiver worn on the head would wirelessly power and communicate with the implanted device.

A Fondation Voir et Entendre team led by Drs. Jose-Alain Sahel and Serge Picaud will study vision. The team aims to apply techniques from the field of optogenetics to enable communication between neurons in the visual cortex and a camera-based, high-definition artificial retina worn over the eyes, facilitated by a system of implanted electronics and micro-LED optical technology.

A John B. Pierce Laboratory team led by Dr. Vincent Pieribone will also study vision. The team will pursue an interface system in which modified neurons capable of bioluminescence and responsive to optogenetic stimulation communicate with an all-optical prosthesis for the visual cortex.

A Paradromics, Inc., team led by Dr. Matthew Angle aims to create a high-data-rate cortical interface using large arrays of penetrating microwire electrodes for high-resolution recording and stimulation of neurons. As part of the NESD program, the team will seek to build an implantable device to support speech restoration. Paradromics’ microwire array technology exploits the reliability of traditional wire electrodes, but by bonding these wires to specialized CMOS electronics the team seeks to overcome the scalability and bandwidth limitations of previous approaches using wire electrodes.

A University of California, Berkeley, team led by Dr. Ehud Isacoff aims to develop a novel “light field” holographic microscope that can detect and modulate the activity of up to a million neurons in the cerebral cortex. The team will attempt to create quantitative encoding models to predict the responses of neurons to external visual and tactile stimuli, and then apply those predictions to structure photo-stimulation patterns that elicit sensory percepts in the visual or somatosensory cortices, where the device could replace lost vision or serve as a brain-machine interface for control of an artificial limb.

DARPA structured the NESD program to facilitate commercial transition of successful technologies. Key to ensuring a smooth path to practical applications, teams will have access to design assistance, rapid prototyping, and fabrication services provided by industry partners whose participation as facilitators was organized by DARPA and who will operate as sub-contractors to the teams.



Psychological warfare: an essential element from Russia’s Gerasimov doctrine to China’s Three Warfares to DARPA’s mind control

Psychological warfare consists of attempts to make your enemy lose confidence, give up hope, or feel afraid, so that you can win. Psychological warfare involves the planned use of propaganda and other psychological operations to influence the opinions, emotions, motives, reasoning, attitudes, and behavior of opposition groups. Psychological operations target foreign governments, organizations, groups and individuals. It is used to induce confessions or reinforce attitudes and behaviors favorable to the originator’s objectives, and is sometimes combined with black operations or false flag tactics.



According to U.S. military analysts, attacking the enemy’s mind is an important element of the People’s Republic of China’s military strategy. This type of warfare is rooted in the Chinese Stratagems outlined by Sun Tzu in The Art of War and Thirty-Six Stratagems.


It is also used to destroy the morale of enemies through tactics that aim to depress troops’ psychological states. Civilians of foreign territories can also be targeted by technology and media so as to cause an effect in the government of their country. Psychological warfare (PSYWAR), or the basic aspects of modern psychological operations (PSYOP), have been known by many other names or terms, including MISO, Psy Ops, Political Warfare, “Hearts and Minds”, and propaganda.


In 2016, Russia was accused of using thousands of covert human agents and robot computer programs to spread disinformation referencing the stolen campaign emails of Hillary Clinton, amplifying their effect. Recently, social media has become an important medium for conducting psychological warfare, for actors ranging from terrorists to nation states. Russian influence operations on social media have reportedly altered the course of events in the U.S. by manipulating public opinion.


Facebook – which testified in front of Congress alongside Google and Twitter – admitted in October that Russia-backed content reached as many as 126 million Americans on the social network during the 2016 presidential election. In October, Twitter released to the US Congress a list of 2,752 accounts it believes were created by Russian actors in an attempt to sway the election. Also in October, British MP Damian Collins asked Facebook to investigate its own records for evidence that Russia-linked accounts were used to interfere in the EU referendum, and later asked Twitter to do the same.


Facebook has now launched a new tool to allow users to see if they’ve liked or followed Russian propaganda accounts. The social network says its tool will allow users to see whether they interacted with a Facebook page or Instagram account created by the Internet Research Agency (IRA), a state-backed organisation based in St Petersburg that carries out online misinformation operations.

Psychological warfare

The US DOD categorizes PSYWAR as a type of information operation (IO), previously referred to as command and control warfare (C2W). IO consists of five core capabilities that are used in concert with any related capabilities to influence, disrupt, corrupt, or take over an enemy’s decision-making process: psychological operations (PsyOp), military deception (MILDEC), operations security (OPSEC), electronic warfare (EW), and computer network operations (CNO). IO is basically a way of interfering with the various systems that a person uses to make decisions.


DOD defines PSYOP as planned operations to convey selected information to targeted foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals. For example, during Operation Iraqi Freedom (OIF), broadcast messages were sent from Air Force EC-130E aircraft and from Navy ships operating in the Persian Gulf, along with a barrage of e-mails, faxes, and cell phone calls to numerous Iraqi leaders encouraging them to abandon support for Saddam Hussein.


At the same time, the civilian Al Jazeera news network, based in Qatar, beams its messages to well over 35 million viewers in the Middle East, and is considered by many to be a “market competitor” for U.S. PSYOP. Terrorist groups can also use the Internet to quickly place their own messages before an international audience.


Some observers have stated that the U.S. will continue to lose ground in the global media wars until it develops a coordinated strategic communications strategy to counter competitive civilian news media, such as Al Jazeera. Partly in response to this observation, DOD now emphasizes that PSYOP must be improved and focused against potential adversary decision-making, sometimes well in advance of times of conflict. Products created for PSYOP must be based on in-depth knowledge of the audience’s decision-making processes. Using this knowledge, PSYOP products must then be produced rapidly and disseminated directly to targeted audiences throughout the area of operations.


Neocortical warfare is RAND’s version of PsyOp that controls the behavior of the enemy without physically harming them. RAND describes the neocortical system as consciousness, perception, and will. Neocortical warfare regulates the enemy’s neocortical system by interfering with their continuous cycle of observation, orientation, decision, and action. It presents the enemy with perceptions, sensory, and cognitive data designed to result in a narrow set of conclusions, and ultimately actions.


The success of psychological warfare owes much to peculiarities of our minds. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued: the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Studies have also found that once formed, impressions are remarkably perseverant: Stanford researchers found that even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs.”


China’s Three Warfares strategy

In 2003, the Central Military Commission (CMC) approved the guiding conceptual umbrella for information operations for the People’s Liberation Army (PLA) – the “Three Warfares” (san zhong zhanfa). The concept is based on three mutually reinforcing strategies: (1) the coordinated use of strategic psychological operations; (2) overt and covert media manipulation; and (3) legal warfare designed to manipulate strategies, defense policies, and perceptions of target audiences abroad.


At the operational level, the “Three Warfares” became the responsibility of the PLA General Political Department’s Liaison Department (GPD/LD), which conducts diverse political, financial, military, and intelligence operations.


Traditionally, the primary target for China’s information and political warfare campaigns has been Taiwan, with the GPD-LD activities and operations attempting to exploit political, cultural, and social frictions inside Taiwan, undermining trust between varying political-military authorities, delegitimizing Taiwan’s international position, and gradually subverting Taiwan’s public perceptions to “reunite” Taiwan on Beijing’s terms. In the process, the GPD-LD has directed, managed, or guided a number of political, military, academic, media, and intelligence assets that have either overtly or covertly served as agents of influence.


In 2016, this concept was at work after the UNCLOS tribunal ruled against China in a comprehensive verdict dismissing China’s claims in the South China Sea. Despite the fact that the Philippines achieved a major international victory against the depredations of a more powerful and aggressive neighbour, China, through its application of the Three Warfares, was able to successfully co-opt Rodrigo Duterte, the Philippines’ president, to its side.


A more recent instance was the comprehensive psychological warfare campaign unleashed by China during the India-China Doklam crisis. Beijing tried to exploit political divisions and sow dissension in India by calling Sushma Swaraj a “liar”, reaching out to Modi’s opponents, including Rahul Gandhi, and attacking his “Hindu nationalism.” The aim was to get Indians themselves to pressure the Indian government to withdraw, largely by casting doubt on India’s own assertions.


The daily threats to teach India a lesson took many forms: intimidating India with aggressive warnings of escalation, carrying out military exercises, issuing dire predictions of heavy losses in war, reminding India of its earlier defeat of 1962 and of its alleged weakness, and warning that China could rescind its decision on Sikkim, “free” Sikkim from Indian oppression, or interfere in Jammu and Kashmir. All were intended to “undermine India’s ability to conduct combat operations through psychological operations aimed at deterring, shocking and demoralizing enemy military personnel.”

Russia’s Gerasimov doctrine

US media has popularized the “Gerasimov Doctrine”, based on a 2013 essay in which the Chief of the General Staff of the Armed Forces of Russia, Valery Gerasimov, described different types of modern warfare that could be loosely termed “hybrid war.”


In roughly 2,000 words, Mr. Gerasimov outlines a new theory of modern warfare, which turns hackers, media, social networks, and businessmen into weapons of war — and keys to victory. “The role of nonmilitary means of achieving political and strategic goals has grown,” Mr. Gerasimov writes, “and, in many cases, they have exceeded the power of force of weapons in their effectiveness. … All this is supplemented by military means of a concealed character.”


According to experts, the Russian military practices a repertoire of lethal tricks known as maskirovka: masking, or operations of deceit and disguise. The idea behind maskirovka is to keep the enemy guessing, never admitting your true intentions, always denying your activities and using all means, political and military, to maintain an edge of surprise for your soldiers. The doctrine, military analysts say, is in this sense “multilevel.” It draws no distinction between disguising a soldier as a bush or a tree with green and patterned clothing, a lie of a sort, and high-level political disinformation and cunning evasions.


However, RT calls it a hoax, writing: “The FT attempts to back up its argument with mentions of Crimea, allegations of US election hacking and information war, using these as examples of a sudden Russian discovery of non-linear methods. Yet the author is not self-aware enough to realize that the US has been using composite techniques like sanctions and revolutions, whether color or otherwise, to achieve strategic goals for decades. Economic penalties or the removal of legitimate governments are clearly forms of ‘hybrid war’ which pre-date Gerasimov, Makarov, and Putin himself.”

Use of Social media for psychological warfare

Social media has enabled the use of disinformation on a wide scale. Analysts have found evidence of doctored or misleading photographs spread by social media in the Syrian Civil War and 2014 Russian military intervention in Ukraine, possibly with state involvement.


The 15-member UN Security Council expressed grave concern at the increase in foreign fighters joining the Islamic State in Iraq and the Levant/Sham (ISIL/ISIS or Da’esh), Al-Qaida and other groups, whose numbers now exceed 25,000. Ban Ki-moon, Secretary-General of the United Nations, said that the 70 per cent increase in foreign terrorist combatants between the middle of 2014 and March 2015 meant more fighters on the front lines in Syria and Iraq, as well as in Afghanistan, Yemen and Libya.


One reason for the large increase in foreign fighters is the groups’ successful use of social media to recruit, radicalise and raise funds. Terrorist groups increasingly use platforms like YouTube, Facebook and Twitter to further their goals and spread their message because of the convenience, affordability and broad reach of social media.


DARPA Is Using Mind Control Techniques to Manipulate Social Media

DARPA launched its SMISC program in 2011 to examine ways social networks could be used for propaganda under Military Information Support Operations (MISO), formerly known as psychological operations.


“With the spread of blogs, social networking sites and media-sharing technology, and the rapid propagation of ideas enabled by these advances, the conditions under which the nation’s military forces conduct operations are changing nearly as fast as the speed of thought. DARPA has an interest in addressing this new dynamic and understanding how social network communication affects events on the ground as part of its mission of preventing strategic surprise.”


The general goal of the Social Media in Strategic Communication (SMISC) program is to develop a new science of social networks built on an emerging technology base. Through the program, DARPA seeks to develop tools to help identify misinformation or deception campaigns and counter them with truthful information, reducing adversaries’ ability to manipulate events.


To accomplish this, SMISC will focus research on linguistic cues, patterns of information flow and detection of sentiment or opinion in information generated and spread through social media. Researchers will also attempt to track ideas and concepts to analyze patterns and cultural narratives. If successful, they should be able to model emergent communities and analyze narratives and their participants, as well as characterize generation of automated content, such as by bots, in social media and crowdsourcing.
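The kinds of linguistic cues such research examines can be illustrated with a toy example. The sketch below is plain Python with feature names and thresholds invented for illustration (the actual SMISC feature sets are not public); it flags an account whose messages are highly repetitive, one crude signature of automated content:

```python
from collections import Counter

def linguistic_cues(messages):
    """Compute two simple linguistic-cue features: lexical diversity
    (unique words / total words) and exact-duplicate message rate.
    Automated accounts often repeat near-identical text."""
    words = [w.lower() for m in messages for w in m.split()]
    diversity = len(set(words)) / len(words) if words else 0.0
    counts = Counter(messages)
    dup_rate = 1 - len(counts) / len(messages) if messages else 0.0
    return {"diversity": diversity, "duplication": dup_rate}

def looks_automated(messages, max_dup=0.3, min_diversity=0.4):
    """Flag an account as likely automated if its messages are
    heavily duplicated or lexically monotonous (thresholds are
    illustrative, not from any published system)."""
    cues = linguistic_cues(messages)
    return cues["duplication"] > max_dup or cues["diversity"] < min_diversity
```

A real system would combine many more signals (posting-time regularity, information-flow patterns, sentiment), but the shape of the computation, features extracted from message text feeding a threshold or classifier, is the same.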


SMISC researchers will create a closed and controlled environment where large amounts of data are collected, with experiments performed in support of development and testing. One example of such an environment might be a closed social media network of 2,000 to 5,000 people who have agreed to conduct social media-based activities in this network and agree to participate in required data collection and experiments. This network might be formed within a single organization, or span several. Another example might be a role-player game where use of social media is central to that game and where players have again agreed to participate in data collection and experiments.


Some of the research projects funded by the SMISC program included studies that analyzed the Twitter followings of Lady Gaga and Justin Bieber among others; investigations into the spread of Internet memes; a study by the Georgia Tech Research Institute into automatically identifying deceptive content in social media with linguistic cues; and “Modeling User Attitude toward Controversial Topics in Online Social Media”—an IBM Research study that tapped into Twitter feeds to track responses to topics like “fracking” for natural gas.

Defense Advanced Research Projects Agency psychological warfare tool: “Sonic Projector”

“The Air Force has experimented with microwaves that create sounds in people’s heads (which they’ve called a possible psychological warfare tool), and American Technologies can ‘beam’ sounds to specific targets with their patented HyperSound,” wrote Sharon Weinberger, adding, “yes, I’ve heard/seen them demonstrate the speakers, and they are shockingly effective.”


DARPA had earlier launched its “Sonic Projector” program. The goal of the Sonic Projector program is to provide Special Forces with a method of surreptitious audio communication at distances over 1 km. Sonic Projector technology is based on the non-linear interaction of sound in air, translating an ultrasonic signal into audible sound. The Sonic Projector will be designed to be a man-deployable system, using high power acoustic transducer technology and signal processing algorithms which result in no, or unintelligible, sound everywhere but at the intended target. The Sonic Projector system could be used to conceal communications for special operations forces and hostage rescue missions, and to disrupt enemy activities.
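The physics being exploited here, self-demodulation of an ultrasonic beam in air, can be sketched numerically. The snippet below uses Berktay’s far-field approximation, a standard textbook model for parametric acoustic arrays rather than anything DARPA has published, with all frequencies and the modulation depth chosen purely for illustration:

```python
import numpy as np

fs = 400_000                     # simulation sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal
f_audio = 1_000                  # audible tone to be "projected", Hz
f_carrier = 40_000               # inaudible ultrasonic carrier, Hz
m = 0.8                          # modulation depth (illustrative)

# Amplitude-modulate the ultrasonic carrier with the audio envelope
envelope = 1 + m * np.sin(2 * np.pi * f_audio * t)
ultrasound = envelope * np.sin(2 * np.pi * f_carrier * t)

# Berktay's far-field model: the air's nonlinearity demodulates the beam,
# and the audible pressure is proportional to the second time derivative
# of the squared modulation envelope.
demodulated = np.gradient(np.gradient(envelope**2, t), t)

# The demodulated spectrum peaks at the audio frequency (plus a weaker
# second harmonic); the listener never receives the carrier itself.
spectrum = np.abs(np.fft.rfft(demodulated))
freqs = np.fft.rfftfreq(len(demodulated), 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

Because the demodulation happens in the air along the tight ultrasonic beam, the audible sound appears only where the beam is aimed, which is what makes the technique attractive for targeted communication.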


Changing characteristics of psychological warfare from past to present

Psychological warfare is as ancient as warfare itself. Genghis Khan, leader of the Mongolian Empire in the 13th century AD, believed that defeating the will of the enemy before having to attack and reaching a consented settlement was preferable to actually fighting. The Mongol generals demanded submission to the Khan, and threatened the initially captured villages with complete destruction if they refused to surrender. If they had to fight to take the settlement, the Mongol generals fulfilled their threats and massacred the survivors. Tales of the encroaching horde spread to the next villages and created an aura of insecurity that undermined the possibility of future resistance.


The Khan also employed tactics that made his numbers seem greater than they actually were. During night operations he ordered each soldier to light three torches at dusk to give the illusion of an overwhelming army and deceive and intimidate enemy scouts. He also sometimes had objects tied to the tails of his horses, so that riding on open and dry fields raised a cloud of dust that gave the enemy the impression of great numbers. His soldiers used arrows specially notched to whistle as they flew through the air, creating a terrifying noise. Another tactic favoured by the Mongols was catapulting severed human heads over city walls to frighten the inhabitants and spread disease in the besieged city’s closed confines.


Militaries employ many methods of psychological warfare, such as demoralization by distributing pamphlets that encourage desertion or supply instructions on how to surrender, and “shock and awe” strategies such as that used by the United States in the Iraq War to break the Iraqi Army’s will to fight.


Other methods include projecting repetitive and annoying sounds and music for long periods at high volume towards groups under siege, as during Operation Nifty Package, and propaganda radio stations, such as Lord Haw-Haw’s “Germany calling” broadcasts in World War II. The CIA has extensively used propaganda broadcasts against the Cuban government through TV Marti, based in Miami, Florida; however, the Cuban government has been successful at jamming its signal.


Still other methods include renaming captured cities and places, such as the renaming of Saigon to Ho Chi Minh City after the Vietnamese victory in the Vietnam War; false-flag events; the use of loudspeaker systems to communicate with enemy soldiers; terrorism; and the threat of chemical weapons.


More recently, it has been used by totalitarian regimes such as Fascist Italy, Nazi Germany, and militaristic Japan. It was used during WWII by both the US and Germany. It was used by US forces in Panama and Cuba, where pirated TV broadcasts were transmitted, as well as Guatemala, Iran, the first Gulf War, Vietnam, and other places.


“One of the most famous examples was Colin Powell’s speech in the UN in 2003, where he presented false information about the so-called weapons of mass destruction in Iraq, which led to the disastrous war on Iraq. Norway’s war on Libya, which the whole Parliament supported, and which destroyed that country, was, as is well known, built on lies that Moammar al Gadafi was about to kill his own people,” writes Pål Steigan.




On-skin health monitoring electronics are the next revolution in medicine, from diagnosing diseases to monitoring soldiers’ health and stress levels in combat

Printed and flexible electronics have started to revolutionize the medical field, beginning with medical test strips bearing diagnostic electrodes. Engineers at the University of California San Diego have developed a flexible wearable sensor that can accurately measure a person’s blood alcohol level from sweat and transmit the data wirelessly to a laptop, smartphone or other mobile device.


Wearable biosensors are being developed that measure EEG, ECG, and EMG (electroencephalogram, electrocardiogram, and electromyography tests, which monitor brain, heart, and muscle activity). The next generation of wearable sensors employs lightweight, highly elastic materials attached directly onto the skin for more sensitive, precise measurements.

At Seoul National University in Korea, researchers have created a highly flexible electronic patch capable of doing basic ECG monitoring while amplifying and storing the data locally within novel nanocrystal floating gates. The patch is made of a flexible and stretchable silicon membrane, on top of which gold nanoparticles are placed to form the conductive components. This eliminates conductive films, which have their own limitations, while increasing the memory capacity of the device.

A soft, flexible skin patch that monitors biomarkers in sweat can determine whether the wearer is dehydrated, measure the person’s blood sugar level and even detect disease. The invention is part of an emerging field of wearable diagnostics. Human sweat contains many of the same biomarkers as blood; however, analyzing sweat using a skin patch doesn’t hurt like a needle stick, and the results can be obtained more quickly.

“Cosmetics companies are interested in sweat using these devices in their research labs to evaluate their antiperspirants and deodorants and so on,” Rogers said. “So sweat loss and sweat chemistry is interesting in that domain, as well. And then we have contracts with the military that are interested sort of in continuous monitoring of health status of war fighters.”

Skin Patch Uses Sweat to Monitor Health

The skin patch, described in the journal Science Translational Medicine, is made of flexible material, and is about the size and thickness of a U.S. quarter. The so-called microfluidic device sticks to the forearm or back like an adhesive bandage, collecting and analyzing sweat.

The first-of-its-kind patch is aimed primarily at athletes, but the flexible electronics device will in all likelihood find a place in medicine and even the cosmetics industry.

“We’ve been interested in the development of skin-like technologies that can mount directly on the body, to capture important information that relates to physiological health,” said John Rogers, a materials scientist and bioengineer at Northwestern University in Illinois, and one of a number of developers of the skin patch. “And what we’ve demonstrated here is a technology that allows for the precise collection, capture and chemical analysis of biomarkers in sweat and perspiration.”

The sweat is routed through microscopic tubules to four different reservoirs that measure pH and chloride (important indicators of hydration levels), lactate (which reveals exercise tolerance), and glucose. The patch can also track the perspiration rate.

The skin patch could potentially be used to diagnose the lung disease cystic fibrosis by analyzing the chloride content in sweat. Wireless electronics transmit the color-coded results to a smartphone app, which analyzes them.
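The colorimetric readout lends itself to a simple sketch of how such an app might turn a photographed reservoir color into a concentration. The function below linearly interpolates between calibration points; the chloride calibration values are hypothetical, not taken from the published device:

```python
def concentration_from_color(intensity, calibration):
    """Convert a measured color intensity (e.g. extracted from a
    smartphone photo of a reservoir) into an analyte concentration
    by linear interpolation between known calibration points.
    Clamps to the calibrated range at either end."""
    pts = sorted(calibration)  # (intensity, concentration) pairs
    if intensity <= pts[0][0]:
        return pts[0][1]
    if intensity >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= intensity <= x1:
            return y0 + (y1 - y0) * (intensity - x0) / (x1 - x0)

# Hypothetical chloride calibration: color intensity -> concentration (mM)
chloride_cal = [(0.10, 10.0), (0.35, 25.0), (0.60, 50.0), (0.85, 100.0)]
```

For example, a measured intensity of 0.475 falls halfway between the 25 mM and 50 mM calibration points and so maps to 37.5 mM. Real devices would calibrate each analyte and correct for lighting, but the mapping step looks much like this.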

Bioengineers create sweat-based sensor to monitor glucose

Researchers at The University of Texas at Dallas have developed a wearable device that can monitor an individual’s glucose level via perspiration on the skin. In a study recently published online in the journal Sensors and Actuators B: Chemical, Dr. Shalini Prasad, professor of bioengineering in the Erik Jonsson School of Engineering and Computer Science, and her co-authors demonstrated the capabilities of a biosensor they designed to reliably detect and quantify glucose in human sweat.

“Fitness trackers that monitor heart rate and step count are very popular, but wearable, non-invasive biosensors would be extremely beneficial for managing diseases,” said Prasad, the Cecil H. and Ida Green Professor in Systems Biology Science. But for diabetics and those at risk for diabetes, self-monitoring of blood glucose, or blood sugar, is an important part of managing their conditions.

Typical home-use blood glucose monitors require a user to obtain a small blood sample, usually through the prick of a finger and often several times a day. However, the UT Dallas textile-based sensor detects glucose in the small amount of ambient sweat on a person’s skin. The team has previously demonstrated that their technology can detect cortisol in perspiration.

“In our sensor mechanism, we use the same chemistry and enzymatic reaction that are incorporated into blood glucose testing strips,” Prasad said. “But in our design, we had to account for the low volume of ambient sweat that would be present in areas such as under a watch or wrist device, or under a patch that lies next to the skin.”

For now, the skin patch is intended for use by sweaty athletes to measure biomarkers of performance, and Rogers sees the patch being sold with sports drinks; but, he said, a number of industries have expressed an interest in the sweat-based technology.


Nanomesh technology results in inflammation-free, on-skin health monitoring electronics

Minimal invasiveness is highly desirable when applying wearable electronics directly onto human skin. However, manufacturing such on-skin electronics on planar substrates results in limited gas permeability. The lack of breathability is deemed unsafe for long-term use: dermatological tests show the fine, stretchable materials prevent sweating and block airflow around the skin, causing irritation and inflammation, which ultimately could lead to lasting physiological and psychological effects.

According to a new study in Nature Nanotechnology, a new approach to this technology using a nanomesh structure could have positive implications for long-term health monitoring.

The new sensors are inflammation-free, are very gas permeable, and they’re thin and lightweight, without the use of any pesky substrates that can contribute to skin discomfort. That means they can be directly laminated onto human skin for longer periods of time.

The sensors’ mesh structure is made of biocompatible polyvinyl alcohol, which enables that gas permeability without blocking sweat glands, and it’s stretchable without causing any additional discomfort, even if it’s affixed for a considerable amount of time.

A one-week skin patch test revealed that the risk of inflammation caused by on-skin sensors can be significantly suppressed by using the nanomesh sensors. Furthermore, a wireless system that can detect touch, temperature and pressure is successfully demonstrated using a nanomesh with excellent mechanical durability. In addition, electromyogram recordings were successfully taken with minimal discomfort to the user.

They’re also versatile. The mesh conductors can attach to irregular skin surfaces — say, the tip of a person’s finger — and maintain their functionality even when a person’s natural body movements fold and elongate the skin. Nanofibres with a diameter of 300 to 500 nm were prepared by electrospinning a PVA solution and were intertwined to form a mesh-like sheet. When the nanomesh conductors were placed on the skin and sprayed with water, the PVA nanofibers easily dissolved and the nanomesh conductor attached to the skin.

According to the study, the approach has opened up a new possibility for the integration of electronic devices with skin for continuous, long-term health monitoring. “We learned that devices that can be worn for a week or longer for continuous monitoring were needed for practical use in medical and sports applications,” says Professor Takao Someya at the University of Tokyo’s Graduate School of Engineering whose research group had previously developed an on-skin patch that measured oxygen in blood.

Furthermore, the scientists proved the device’s mechanical durability through repeated bending and stretching, exceeding 10,000 times, of a conductor attached on the forefinger; they also established its reliability as an electrode for electromyogram recordings when its readings of the electrical activity of muscles were comparable to those obtained through conventional gel electrodes.

“It will become possible to monitor patients’ vital signs without causing any stress or discomfort,” says Someya about the future implications of the team’s research. In addition to nursing care and medical applications, the new device promises to enable continuous, precise monitoring of athletes’ physiological signals and bodily motion without impeding their training or performance.


Military requirements

Many militaries, including those of the US, China and others, have expressed the desire to cut their manpower amid stagnant or shrinking military budgets. On the other hand, rising threat levels and the employment of militaries in diverse and complex missions have led to a manifold increase in the number of missions. Technological advances, such as night vision devices, have increased the duration of missions; militaries now operate around the clock during times of conflict. Some missions can take weeks in difficult terrain like deserts and mountains, which requires soldiers to maintain an incredibly high level of physical fitness.

Krueger (1991) reported that the efficiency of combatants in sustained operations can be significantly compromised by inadequate sleep. Vigilance and attention suffer, reaction time is impaired, mood declines, and some personnel begin to experience perceptual disturbances. Naitoh and Kelly (1992) warned that poor sleep management in extended operations quickly leads to motivational decrements, impaired attention, short-term memory loss, carelessness, reduced physical endurance, degraded verbal communication skills, and impaired judgment. Angus and Heslegrave (1985) noted that cognitive abilities suffer 30 percent reductions after only 1 night without sleep, and 60 percent reductions after a second night.

Around the world, armies are recognizing the importance of maximizing the effectiveness of soldiers physically, perceptually, and cognitively. Militaries are therefore studying the effects of frustration, mental workload, stress, fear and fatigue on both cognitive and physical performance.

In November, the Office of Naval Research awarded a $150,000 grant to Titus and the tech firm Sentience Science to develop tools that could monitor an individual’s stress levels in combat and automatically generate alerts when they reach dangerous levels.










Threat of DNA engineering of humans and genetic extinction technologies by US and China

Within only a few years, research labs worldwide have adopted a new technology referred to as “CRISPR,” which facilitates making specific changes in the DNA of humans, other animals, and plants. Compared to previous techniques for modifying DNA, this new approach is much faster and easier. CRISPR allows removing a single (defective) gene from a genome and replacing it with another one, to prevent genetic diseases.

The US and China are leaders in applications of CRISPR technology.

The US military agency DARPA is investing $100m in genetic extinction technologies that could wipe out malarial mosquitoes, invasive rodents or other species, emails released under freedom of information rules show. Cutting-edge gene editing tools such as CRISPR-Cas9 work by using a synthetic ribonucleic acid (RNA) to cut into DNA strands and then insert, alter or remove targeted traits. These might, for example, distort the sex ratio of mosquitoes to effectively wipe out malarial populations.

In 2016, a Chinese group became the first to inject a person with cells containing genes edited using the revolutionary CRISPR–Cas9 technique. Earlier, scientists at China’s Kunming Biomedical International and its affiliated Yunnan Key Laboratory of Primate Biomedical Research used CRISPR to create a pair of macaque monkeys with precise genetic mutations. Chinese scientists say they were among the first to use CRISPR to make wheat resistant to a common fungal disease, dogs more muscular and pigs with leaner meat.

The introduction of CRISPR, which is simpler and more efficient than other techniques, will probably accelerate the race to get gene-edited cells into the clinic across the world, says Carl June, who specializes in immunotherapy at the University of Pennsylvania in Philadelphia and led one of the earlier studies. “I think this is going to trigger ‘Sputnik 2.0’, a biomedical duel on progress between China and the United States, which is important since competition usually improves the end product,” he says.

China is leading in applying CRISPR

On 28 October, a team led by oncologist Lu You at Sichuan University in Chengdu delivered the modified cells into a patient with aggressive lung cancer as part of a clinical trial at the West China Hospital, also in Chengdu. Lu’s team extracted  immune cells called T cells from the blood of the enrolled patients, and then disabled a gene in them using CRISPR–Cas9, which combines a DNA-cutting enzyme with a molecular guide that can be programmed to tell the enzyme precisely where to cut. The disabled gene codes for the protein PD-1, which normally puts the brakes on a cell’s immune response: cancers take advantage of that function to proliferate. Lu’s team then cultured the edited cells, increasing their number, and injected them back into the patient, who has metastatic non-small-cell lung cancer. The hope is that, without PD-1, the edited cells will attack and defeat the cancer.

Normally, a parent organism with a given trait passes that genetic code to offspring about half the time. Recent advances combining the gene-editing tool CRISPR–Cas9 are now making it easier for scientists to modify a genome such that nearly all offspring inherit the desired trait.
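The difference between Mendelian inheritance and a gene drive can be sketched with a minimal deterministic model. In the simplification below (random mating, no fitness cost), heterozygotes transmit the drive allele with probability (1+c)/2 instead of the Mendelian 1/2, which yields the recursion p' = p(1 + (1-p)c) for the allele frequency p; all numbers are illustrative:

```python
def drive_frequency(p0, conversion, generations):
    """Track the frequency of a drive allele across generations under
    random mating with no fitness cost (a deliberate simplification).
    conversion = 0 gives normal Mendelian inheritance; conversion = 1
    models a perfect drive where every heterozygote transmits the allele.
    """
    p = p0
    history = [p]
    for _ in range(generations):
        # Heterozygotes (freq 2p(1-p)) pass the drive allele on with
        # probability (1 + c)/2, so p' = p^2 + 2p(1-p)*(1+c)/2
        #                              = p * (1 + (1 - p) * conversion)
        p = p * (1 + (1 - p) * conversion)
        history.append(p)
    return history

mendelian = drive_frequency(0.01, 0.0, 15)  # normal inheritance: stays at 1%
drive = drive_frequency(0.01, 1.0, 15)      # perfect drive: sweeps toward 100%
```

Under normal inheritance the rare allele stays rare, while the perfect drive sweeps from 1 percent to essentially fixation within about a dozen generations, which is why sex-ratio-distorting drives could, in principle, crash a mosquito population.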

Recently, China announced it was genetically engineering hyper-muscular SUPER-DOGS. The dogs, which are test tube bred in a lab, have twice the muscle mass of their natural counterparts and are considerably stronger and faster. An army of super-humans has been a staple of science fiction and superhero comics for decades – but the super-dog technology brings it closer to reality. The beagle puppy, one of 27, was genetically engineered by ‘deleting’ a gene called myostatin, giving it double the muscle mass of a normal beagle.

The advance genetic editing technology has been touted as a breakthrough which could herald the dawn of ‘superbreeds’, which could be stronger, faster, better at running and hunting. The Chinese official line is that the dogs could potentially be deployed to frontline service to assist police officers. Dr Lai Liangxue, researcher at Guangzhou institute of biological medicine and health, said: “This is a breakthrough, marking China as only the second country in the world to independently master dog-somatic clone technology, after South Korea.”

China has had a reputation for moving fast — sometimes too fast — with CRISPR, says Tetsuya Ishii, a bioethicist at Hokkaido University in Sapporo, Japan. Ishii notes that if the clinical trial begins as planned, it would be the latest in a series of firsts for China in the field of CRISPR gene editing, including the first CRISPR-edited human embryos, and the first CRISPR-edited monkeys. “When it comes to gene editing, China goes first,” says Ishii.


Threat of military applications of gene-editing technology

Scientists have learned how to harness CRISPR technology in the lab to make precise changes in the genes of organisms as diverse as fruit flies, fish, mice, plants and even human cells. Researchers are using CRISPR to knock out genes in animal models to study their function, give crops new agronomic traits, synthesize microbes that produce drugs, create gene therapies to treat disease, and genetically correct heritable diseases in human embryos.

CRISPR “has transformed labs around the world,” says Jing-Ruey Joanna Yeh, a chemical biologist at Massachusetts General Hospital’s Cardiovascular Research Center, in Charlestown, who contributed to the development of the technology. “Because this system is so simple and efficient, any lab can do it.” Editing with CRISPR is like placing a cursor between two letters in a word processing document and hitting “delete” or clicking “paste.” And the tool can cost less than US $50 to assemble.
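
The word-processor analogy maps almost directly onto code. The sketch below treats a genome as a text string and a guide RNA as a search string; it is a toy illustration of "find, cut, delete or paste" only, ignoring real-world details such as PAM sites and repair pathways, and the sequences and function name are invented for the example.

```python
# Toy model of CRISPR editing as text editing (illustration only):
# the guide RNA is the "search string", Cas9 places the "cursor",
# and repair either deletes (knockout) or pastes a new sequence (knock-in).

def crispr_edit(genome: str, guide: str, insert: str = "") -> str:
    """Cut `genome` where it matches `guide`; delete the match and
    optionally paste `insert` at the cut site."""
    site = genome.find(guide)
    if site == -1:
        raise ValueError("guide sequence not found; no cut is made")
    return genome[:site] + insert + genome[site + len(guide):]

genome = "ATGGCTTACGGATCCGTA"
print(crispr_edit(genome, "TACGGA"))            # knockout -> ATGGCTTCCGTA
print(crispr_edit(genome, "TACGGA", "GGGCCC"))  # knock-in of a replacement segment
```

The "delete" and "paste" operations of the analogy are literally a string slice here, which is why the technique is often described as programmable: changing the guide changes where the cut lands.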

However, CRISPR is also known to cause edits at the wrong place in the genome (so-called off-target effects), which could potentially cause harmful side effects. There is also the threat of potential harm to the environment.

The steep fall in the cost of gene-editing toolkits has created a greater opportunity for hostile or rogue actors to experiment with the technology. This has fuelled the growth of biohackers worldwide – a term for biologists who work outside of traditional labs – some of whom sell genetic-engineering kits. Terrorists could employ them to make bioweapons or lethal viruses.

CRISPR is also being used in military applications. During the second biennial Department of Defense Lab Day on May 18, 2017, one AFRL exhibit, called Military Applications of Gene Editing Technology, highlighted research into how geneticists and medical researchers edit parts of the genome – removing, adding or altering sections of the DNA sequence – in order to remove a virus or disease caused by harmful chemical, biological or environmental agents a warfighter may come into contact with.

Jim Thomas, a co-director of the ETC Group, which obtained the emails, said the US military influence they revealed would strengthen the case for a ban. “The dual use nature of altering and eradicating entire populations is as much a threat to peace and food security as it is a threat to ecosystems,” he said. “Militarisation of gene drive funding may even contravene the Enmod convention against hostile uses of environmental modification technologies.”


Human Gene Manipulation using CRISPR

CRISPR has been used to edit animal embryos and adult stem cells, but until the Chinese trial no one had reported using the technique to edit the genome of human embryos, owing to ethical concerns.

In the UK, CRISPR is considered controversial, with concerns that it could result in ‘designer babies’ if exploited. The rise of genetic screening of human embryos allows scientists to create organisms by design rather than leave it up to chance. This has also been made simpler by genetic sequencing technology, which has expanded humanity’s genetic toolbox dramatically.

In the US, an advisory panel of the National Institutes of Health (NIH) has approved a planned trial that would also use CRISPR–Cas9-modified cells for the treatment of cancer. The US researchers have said they could start their clinical trial by the end of the year.

The Chinese team also reported that, of the 71 embryos used in the CRISPR experiment, the technique worked properly in just a fraction, and only a small percentage of those relayed the new gene properly when they divided. They also found that the procedure sometimes spliced the wrong gene segment, inserting new genes in the wrong places – which in normal embryos could lead to a new disease.

The most dramatic possibility raised by the primate work, of course, would be using CRISPR to change the genetic makeup of human embryos during in vitro fertilization. Pentagon scientists are researching gene manipulation to build the soldiers of tomorrow, who would be able to run at Olympic speeds and would not need food or sleep. It may also become possible to trigger the cells of injured soldiers’ bodies to rebuild lost limbs.

Using CRISPR to cure disease “is probably ethical,” said Eric Hendrickson, a professor at the University of Minnesota Medical School, whose research uses CRISPR techniques for DNA repair. “To use that technology to make your child run faster or jump higher is uniformly frowned upon. The technology to do that, however, will soon be in place.”


New Brain Computer Interfaces being developed for treating neurological disorders and controlling military robots with thought

The brain-computer interface (BCI) allows people to use their thoughts to control not only themselves, but the world around them. Every action our body performs begins with a thought, and with every thought comes an electrical signal. These electrical signals can be picked up by a brain-computer interface – an electroencephalograph (EEG) or an implanted electrode – translated, and then sent to the performing hardware to produce the desired action.
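
That acquire-translate-act pipeline can be sketched in a few lines. The following is a minimal illustration, assuming a 128 Hz sampling rate and an invented power threshold; real EEG decoding involves filtering, artifact rejection and trained classifiers rather than a single frequency bin.

```python
import math

FS = 128  # sampling rate in Hz (assumed for this sketch)

def band_power(signal, freq_hz):
    """Power of `signal` at one frequency: a single DFT bin, computed directly."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / FS) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / FS) for i, s in enumerate(signal))
    return (re * re + im * im) / len(signal)

def decode(signal, threshold=5.0):
    """Translate the electrical signal into a command for the performing hardware."""
    return "MOVE" if band_power(signal, 10) > threshold else "REST"

# Synthetic one-second recordings: a strong 10 Hz "alpha" rhythm vs. a weak slow wave.
alpha = [math.sin(2 * math.pi * 10 * i / FS) for i in range(FS)]
noise = [0.05 * math.sin(2 * math.pi * 3 * i / FS) for i in range(FS)]
print(decode(alpha), decode(noise))  # MOVE REST
```

The point of the sketch is the shape of the system, not the numbers: signal in, feature extracted, feature mapped to an action for the hardware.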


Brain-computer interfaces are being applied in neuroprosthetics, through which paralysed persons are able to control robotic arms; in neurogaming, where one can control a keyboard, mouse and the like with one’s thoughts to play games; in neuroanalysis (psychology); and in defense, to control robotic soldiers or fly planes with thoughts.


Current implantable devices are not well matched with body tissues in terms of their mechanical, chemical, and physical properties. The tissues that may be excited or interrogated by implants (e.g., brain, spinal cord, or cardiac muscle) are mechanically compliant, curvilinear, and perform their functions by modulating the flow of ions, write Wei-Chen Huang, Haosheng Wu and Christopher J. Bettinger of Carnegie Mellon University (CMU). Conversely, most implantable silicon-based devices are mechanically rigid, and use electrons or holes as their primary information currency.


“These elements of mismatch reduce the overall performance of current implantable technology in three ways. First, the difference in mechanical properties (i.e., the elasticity) can cause local tissue damage that compromises the fidelity of measurements. Second, changing between ionic and electronic transduction decreases the information density and stimulation specificity. Finally, the materials that are typically used in microelectronic implants are susceptible to rapid protein adsorption, which initiates a cascade of local inflammation and scarring. The biological response to the presence of foreign material (such as an implant) can also compromise bidirectional communication.”


Instead of requiring invasive brain surgery, DARPA has developed a small brain modem that enters the bloodstream via a catheter and then transmits data. The US military recently successfully implanted and tested its first ‘brain modem’ on an animal subject. Neurologists injected tiny sensors into livestock’s veins and then recorded the electrical impulses that control the animals’ movements for six months.


The tiny, implanted chip, developed by the Defense Advanced Research Projects Agency (Darpa), uses a tiny sensor that travels through blood vessels, lodges in the brain and records neural activity. The sensor, called a ‘stentrode’, a combination of the words ‘stent’ and ‘electrode’, is the first step in the military’s desire to allow soldiers to control machinery with their minds. The stentrode is the size of a paperclip, flexible and injectable.


BCIs are also vulnerable to hacking: they could be used to control soldiers’ brains and their actions, and they could also be used by criminals to manipulate thoughts or even cause death. According to some analysts, the human brain is going to become the sixth warfighting domain.

Brain Computer Interfaces

There are three fundamental techniques for interfacing with the brain: non-invasive, such as electroencephalography (EEG); invasive, through direct connections; and electrocorticography (ECoG), also known as intracranial EEG – a sort of halfway house involving electrodes placed on the brain’s exposed surface, rather than hardwired into the brain itself.


Invasive BCIs are technologies that provide high resolution but require neurosurgery. They also require regulatory approvals, so manufacturers are less willing to fund the clinical trials associated with the approval process.


Non-invasive BCIs have gained popularity in recent times and are expected to grow at a fast pace in the near future, because they cause the least discomfort and carry a negligible chance of infection from electrode use. Progress in non-invasive electroencephalography (EEG)-based brain-computer interface (BCI) research, development and innovation has accelerated in recent years. New brain-signal signatures for inferring user intent and more complex control strategies have been the focus of many recent developments. Major advances in recording technology, signal-processing techniques and clinical applications, tested with patient cohorts as well as in non-clinical applications, have been reported, writes Damien Coyle.


Non-invasive BCIs have found multiple uses in areas of medicine such as motor restoration, wheelchair assistance, and the treatment of neurological disorders. However, non-invasive BCIs suffer from poor efficiency and accuracy; they are slow and somewhat uncertain at present, and they also tend to make high cognitive demands on the user.


UC Berkeley engineers have built the first dust-sized, wireless sensors that can be implanted in the body without surgery, bringing closer the day when a Fitbit-like device could monitor internal nerves, muscles or organs in real time. Because these batteryless sensors could also be used to stimulate nerves and muscles, the technology also opens the door to “electroceuticals” to treat disorders such as epilepsy, or to stimulate the immune system or tamp down inflammation.


“The original goal of the neural dust project was to imagine the next generation of brain-machine interfaces, and to make it a viable clinical technology,” said neuroscience graduate student Ryan Neely. “If a paraplegic wants to control a computer or a robotic arm, you would just implant this electrode in the brain and it would last essentially a lifetime.”


Russian Scientists Create Mind-Reading ‘Neuro-Balalaika’

Researchers from the Immanuel Kant Baltic Federal University’s Institute of Living Systems in Kaliningrad have completed development of a new neural interface device called the Balalaika, capable of simultaneously recording a variety of electrophysiological signals.

The device, designed and built from scratch in Russia, is able simultaneously to conduct electroencephalographic monitoring (recording electrical activity of the brain), electromyographic monitoring (measuring electrical activity of muscle fibers), electrooculographic monitoring (measurement of bioelectric potential during eye movement), photoplethysmographic monitoring (measuring blood flow), and measurement of skin temperature.

Using the Balalaika, users can play computer games hands-free, operate a wheelchair or even an exoskeleton. If a disabled person does not feel well enough to go to a clinic for testing and monitoring, he or she can do so from home, sending the results remotely to their doctor.

Researchers are now busy at work on an ‘avatar’, a program capable of matching human and robot activity via remote control. “Figuratively speaking, when a person raises his hand, a robot standing in the distance also raises his hand,” institute director Maxim Patrushev explained.

The multi-measurement features of the Balalaika’s instrumentation have allowed researchers to confirm that the simultaneous use of electroencephalographic, electrooculographic and photoplethysmographic signals significantly improves accuracy in the interpretation of planned physical activity on the basis of brain signals. It is assumed that the use of multiple signals brings the probability of error-free remote control of external devices using brain power close to 100%, making it a major technical breakthrough for robotics and for the development of technology to assist people with motor-system diseases.
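
A back-of-the-envelope model shows why fusing several imperfect but independent signal channels pushes accuracy toward 100%. The channel accuracies below are illustrative assumptions, not figures from the Balalaika work, and the fusion rule is a simple majority vote rather than whatever the institute actually uses.

```python
from itertools import product

def fused_accuracy(accuracies):
    """Probability that a majority vote over independent channels is correct."""
    total = 0.0
    for outcome in product([True, False], repeat=len(accuracies)):
        # Probability of this particular pattern of correct/incorrect channels.
        p = 1.0
        for ok, acc in zip(outcome, accuracies):
            p *= acc if ok else (1 - acc)
        if sum(outcome) * 2 > len(outcome):  # strict majority of channels correct
            total += p
    return total

single = 0.80
print(f"one channel   : {single:.3f}")
print(f"three channels: {fused_accuracy([0.80, 0.85, 0.75]):.3f}")
print(f"five channels : {fused_accuracy([0.80] * 5):.3f}")
```

With three 80%-accurate independent channels, majority voting already reaches 89.6%, and adding channels pushes the figure higher still, which is the intuition behind combining EEG, EOG and photoplethysmographic signals.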


BCI in form of Artificial Skin

John Rogers at the University of Illinois at Urbana-Champaign and his team have built a brain-computer interface in the form of flexible electronic skin that conforms to the body. The interface, comprising just a small patch of gold electrodes, sticks to the skin through van der Waals forces like a digital tattoo. The patch, applied behind the ear, falls off when the build-up of dead skin beneath it loosens its grip.


Their solution does away with the cumbersome electrodes, annoying gels and wires of conventional EEGs described by Rogers as a “rat’s nest of wires attached to devices that interface to the skin with tape and gels and bulky metallic objects”. The team is now working on wireless transmission of data and power, allowing it to work even if the wearer is moving.


Invasive BCI

Invasive BCIs have greater application in neuroprosthetics than non-invasive BCIs, since in order to understand or regulate the neural connectivity of specific brain areas it becomes necessary to introduce neural implants (electrodes). One of the critical technologies is the material used to make the electrodes of brain-computer interfaces.


Lund University’s breakthrough in electrode implants

“There are several elements that must go hand in hand for us to be able to record neuronal signals from the brain with decisive results. First, the electrode must be bio-friendly, that is, we have to be confident that it does not cause any significant damage to the brain tissue. Second, the electrode must be flexible in relation to the brain tissue. Remember that the brain floats in fluid inside the skull and moves around when we, for instance, breathe or turn our heads.”


Lund researchers Professor Jens Schouenborg and Dr Lina Pettersson have developed tailored electrodes, which they call 3-D electrodes. They are unique in that they are extremely soft and flexible in all three dimensions, in a way that enables stable recordings from the neurons over a long time.


In order to implant such electrodes, the researchers have developed a technique for encapsulating the electrodes in a hard but dissolvable gelatine material that is also very gentle on the brain. The electrodes are made of 4 µm gold leads, each individually insulated with parylene. The array of electrodes consists of eight flexible channels, designed to follow the movement of the brain. Both the electrode and the implantation technology, which have been tested on rats, are patented by NRC researchers in Europe and the US, among other places.


“This technology retains the electrodes in their original form inside the brain and can monitor what happens inside virtually undisturbed and normally functioning brain tissue”, says Johan Agorelius, a doctoral student in the project.


Until now, flexible electrodes developed elsewhere have not been able to maintain their shape when implanted, which is why they have been fixed to a solid chip that limits their flexibility, among other things. Other types of electrodes in use are much stiffer. The result in both cases is that they rub against and irritate the brain tissue, and the nerve cells around the electrodes die.


“The signals then become misleading or completely non-existent. Our new technology enables us to implant as flexible electrodes as we want, and retain the exact shape of the electrode within the brain”, says Johan Agorelius.


“This creates entirely new conditions for our understanding of what happens inside the brain and for the development of more effective treatments for diseases such as Parkinson’s disease and chronic pain conditions than can be achieved using today’s techniques”, concludes Jens Schouenborg.

Electronic dura mater for long-term multimodal neural interfaces

A team of researchers at a Swiss technology institute, including Pavel Musienko, has developed new ultra-flexible electrodes modeled on the dura mater, the protective membrane of the brain and spinal cord, that can both stimulate and record from neurons.


Most current electrode implants – even thin, plastic interfaces – present high elastic moduli in the gigapascal range, and are thus rigid compared to neural tissues. “The mechanical mismatch between soft neural tissues and stiff neural implants hinders the long-term performance of implantable neuroprostheses. Here, we designed and fabricated soft neural implants with the shape and elasticity of dura mater, the protective membrane of the brain and spinal cord.”


“The implant, which we called electronic dura mater or e-dura, integrates a transparent silicone substrate (120 µm in thickness), stretchable gold interconnects (35 nm in thickness), soft electrodes coated with a platinum-silicone composite (300 µm in diameter), and a compliant fluidic microchannel (100 µm by 50 µm in cross section).” The interconnects and electrodes transmit electrical excitation and transfer electrophysiological signals. The microfluidic channel, termed a chemotrode, delivers drugs locally.


They next tested the long-term biointegration of the soft implants against stiff, plastic implants over six weeks of implantation. Both types of implant were inserted into the subdural space of the lumbosacral segments in healthy rats. They found that rats with the stiff implant began to have trouble walking within just a few weeks, and later examination showed both inflammation and deformation of their spinal cords. The rats with the e-dura implant displayed no such motor problems or physiological degradation. The electrodes also proved effective at accurately recording from and stimulating neurons in the brain and spinal cord.


Carbon Micro thread electrodes

In 2014, scientists at the University of Michigan came up with a microthread electrode delicate enough not to damage nerve tissue yet resilient enough to last decades. This seven-micrometer carbon-fiber thread is 100 times thinner than common metal electrodes, and its tip is coated with a polymer so that it can pick up signals from even a single neuron.


The electrodes may lead to the development of long-lasting brain-machine interfaces through which paralysed persons could control robotic limbs or a computer mouse. However, there are still many challenges to overcome, such as finding ways to insert such fine electrodes.


Carbon nanotube fibers make superior brain electrodes

Carbon nanotube fibers invented at Rice University may provide the best way to communicate directly with the brain. “They’re like extension cords,” said Mehdi Razavi, the director of electrophysiology clinical research at the Texas Heart Institute and the project’s lead investigator. “They allow us to pick up charge from one side of the scar and deliver it to the other side. Essentially, we’re short-circuiting the short circuit.”


The fibers have proven superior to metal electrodes for deep brain stimulation and to read signals from a neuronal network. Because they provide a two-way connection, they show promise for treating patients with neurological disorders while monitoring the real-time response of neural circuits in areas that control movement, mood and bodily functions.


“The brain is basically the consistency of pudding and doesn’t interact well with stiff metal electrodes,” Caleb Kemere, a Rice assistant professor said. “The dream is to have electrodes with the same consistency, and that’s why we’re really excited about these flexible carbon nanotube fibers and their long-term biocompatibility.”


The fibers were created by the Rice lab of chemist and chemical engineer Matteo Pasquali. “We developed these fibers as high-strength, high-conductivity materials,” Pasquali said. “Yet, once we had them in our hand, we realized that they had an unexpected property: They are really soft, much like a thread of silk. Their unique combination of strength, conductivity and softness makes them ideal for interfacing with the electrical function of the human body.” The working end of the fiber is the exposed tip, which is about the width of a neuron. The rest is encased in a three-micron layer of a flexible, biocompatible polymer with excellent insulating properties.


The challenge is in placing the tips. “That’s really just a matter of having a brain atlas, and during the experiment adjusting the electrodes very delicately and putting them into the right place,” said Kemere, whose lab studies ways to connect signal-processing systems and the brain’s memory and cognitive centers.


Kemere foresees a closed-loop system that can read neuronal signals and adapt stimulation therapy in real time. He anticipates building a device with many electrodes that can be addressed individually to gain fine control over stimulation and monitoring from a small, implantable device. The Welch Foundation, the National Science Foundation and the Air Force Office of Scientific Research supported the research.


University of Melbourne scientists develop BCI that can be implanted in the brain without surgery

Australian scientists funded by the US Defense Advanced Research Projects Agency (Darpa) have developed a tiny, matchstick-sized Brain Computer interface called a stentrode. This stentrode is flexible enough to be able to pass through the blood vessels and get implanted into the motor cortex, the brain’s control centre – bypassing the need for complex invasive brain surgery.


The device would capture and decode brain signals and then wirelessly transmit the appropriate commands through the skin, enabling patients to control an exoskeleton attached to their limbs simply by thinking about it.


The stentrode could also benefit people with Parkinson’s disease, motor neurone disease, obsessive compulsive disorder and depression and could even predict and manage seizures in epileptic patients. The work is the result of close collaboration between the University of Melbourne, the Royal Melbourne Hospital and the Florey Institute of Neuroscience and Mental Health.


In late 2017, a select group of paralysed patients from the Royal Melbourne and Austin Hospitals in Australia will be chosen for the trial, where they will be implanted with the stentrode. If the trial succeeds, the technology could become commercially available in as little as six years.




Living Structural Materials to reduce emissions in cities and build self-growing & self-repairing military bases

The cities of today are built with concrete and steel, and between them these two materials are responsible for as much as a tenth of worldwide carbon emissions. Before they ever reach a construction site, both steel and concrete must be processed at very high temperatures – which takes a lot of energy. And yet, our cities are completely dependent on these two unsustainable materials.


Researchers have now turned to biology for the design of the next generation of smart materials for structures, to support an ever-expanding population while keeping carbon emissions under control. Bioengineer Dr Michelle Oyen of Cambridge’s Department of Engineering works in the field of biomimetics and, with funding support from the US Army Corps of Engineers, is constructing small samples of artificial bone and eggshell, which could be used as medical implants, or even be scaled up and used as low-carbon building materials.


A team of interdisciplinary researchers at UCLA has been working on developing a new building material made by capturing carbon from power-plant smokestacks and fabricating it using 3D printers. “This technology could change the economic incentives associated with these power plants in their operations and turn the smokestack flue gas into a resource countries can use, to build up their cities, extend their road systems,” DeShazo said. “It takes what was a problem and turns it into a benefit in products and services that are going to be very much needed and valued in places like India and China.”


DOD and the military services own and operate hundreds of thousands of buildings and other structures across more than 5,000 locations in support of their various defense-related missions. Those installations are located throughout the United States and the world and are subject to a wide range of geographic and climatic conditions. The structural materials that are currently used to construct homes, buildings, and infrastructure are expensive to produce and transport, wear out due to age and damage, and have limited ability to respond to changes in their immediate surroundings. As a result, the energy and financial costs of building and infrastructure construction and repair, to both the DoD and the nation, are enormous.


Now, DARPA is looking to new methods inspired by living biological materials – such as bone, skin, bark, and coral – which have attributes that provide advantages over the non-living materials people build with: they can be grown where needed, self-repair when damaged, and respond to changes in their surroundings. The inclusion of living materials in human-built environments could offer significant benefits; however, today scientists and engineers are unable to easily control the size and shape of living materials in ways that would make them useful for construction.


DARPA is launching the Engineered Living Materials (ELM) program with a goal of creating a new class of materials that combines the structural properties of traditional building materials with attributes of living systems.


UCLA researchers turn carbon dioxide into sustainable concrete

The production of cement, which when mixed with water forms the binding agent in concrete, is also one of the biggest contributors to greenhouse gas emissions. In fact, about 5 percent of the planet’s greenhouse gas emissions comes from concrete.


An even larger source of carbon dioxide emissions is flue gas emitted from smokestacks at power plants around the world. Carbon emissions from those plants are the largest source of harmful global greenhouse gas in the world.


A team of interdisciplinary researchers at UCLA has been working on a unique solution that may help eliminate these sources of greenhouse gases. “What this technology does is take something that we have viewed as a nuisance — carbon dioxide that’s emitted from smokestacks — and to use it to create a new kind of building material that will replace cement,” said J.R. DeShazo, professor of public policy at the UCLA Luskin School of Public Affairs and director of the UCLA Luskin Center for Innovation.


Thus far, the new construction material has been produced only at a lab scale, using 3-D printers to shape it into tiny cones. “We have proof of concept that we can do this,” DeShazo said. “But we need to begin the process of increasing the volume of material and then think about how to pilot it commercially. It’s one thing to prove these technologies in the laboratory. It’s another to take them out into the field and see how they work under real-world conditions.”


“We can demonstrate a process where we take lime and combine it with carbon dioxide to produce a cement-like material,” Sant said. “The big challenge we foresee with this is we’re not just trying to develop a building material. We’re trying to develop a process solution, an integrated technology which goes right from CO2 to a finished product.”


“3-D printing has been done for some time in the biomedical world,” Sant said, “but when you do it in a biomedical setting, you’re interested in resolution. You’re interested in precision. In construction, all of these things are important but not at the same scale. There is a scale challenge, because rather than print something that’s 5 centimeters long, we want to be able to print a beam that’s 5 meters long. The size scalability is a really important part.”


DeShazo has provided the public policy and economic guidance for this research. The scientific contributions have been led by Gaurav Sant, associate professor and Henry Samueli Fellow in Civil and Environmental Engineering; Richard Kaner, distinguished professor in chemistry and biochemistry, and materials science and engineering; Laurent Pilon, professor in mechanical and aerospace engineering and bioengineering; and Matthieu Bauchy, assistant professor in civil and environmental engineering.


The researchers are excited about the possibility of reducing greenhouse gas in the U.S., especially in regions where coal-fired power plants are abundant. “But even more so is the promise to reduce the emissions in China and India,” DeShazo said. “China is currently the largest greenhouse gas producer in the world, and India will soon be number two, surpassing us.”


DARPA’s Engineered Living Materials (ELM) program

Living materials represent a new opportunity to leverage engineered biology to solve existing problems associated with the construction and maintenance of built environments, and to create new capabilities to craft smart infrastructure that dynamically responds to its surroundings.


“The vision of the ELM program is to grow materials on demand where they are needed,” said ELM program manager Justin Gallivan. “Imagine that instead of shipping finished materials, we can ship precursors and rapidly grow them on site using local resources. And, since the materials will be alive, they will be able to respond to changes in their environment and heal themselves in response to damage.”


Successful completion of ELM program objectives will require innovations in the ability to functionally unite living components with inert structural materials, to program structural features into living systems, and to extend the scale of synthetic biology building blocks from the molecular to the cellular. The deliverables from this program will comprise a suite of technologies that enable the production of living structural materials tailored to design specifications, such as those provided by architects and builders.


A major inspiration for the ELM program is the recent development of biologically-sourced structural materials that are grown to specified size and shape from inexpensive feedstocks. For example, mycelia can be grown on agricultural byproducts to produce materials that are drop-in replacements for polystyrene. Similarly, bacteria can be used to bind sand to produce drop-in replacements for bricks. That factory-scale production of grown materials can be economically competitive with materials as common as polystyrene and brick demonstrates the feasibility of using biological approaches to reduce the energy and waste associated with the manufacture of structural materials.


However, these products are rendered inert during the manufacturing process, so they exhibit few of their components’ original biological advantages. Scientists are making progress with three-dimensional printing of living tissues and organs, using scaffolding materials that sustain the long-term viability of the living cells. These cells are derived from existing natural tissues, however, and are not engineered to perform synthetic functions. And current cell-printing methods are too expensive to produce building materials at necessary scales.


ELM looks to merge the best features of these existing technologies and build on them to create hybrid materials composed of non-living scaffolds that give structure to and support the long-term viability of engineered living cells. DARPA intends to develop platform technologies that are scalable and generalizable to facilitate a quick transition from laboratory to commercial applications.


The long-term objective of the ELM program is to develop an ability to engineer structural properties directly into the genomes of biological systems so that neither scaffolds nor external development cues are needed for an organism to realize the desired shape and properties. Achieving this goal will require significant breakthroughs in scientists’ understanding of developmental pathways and how those pathways direct the three-dimensional development of multicellular systems.


Examples DARPA suggests are roofs that control airflow in a structure by breathing; chimneys that heal after smoke damage; and driveways, roads, or runways that literally eat oil spills. Work on ELM will be fundamental research carried out in controlled laboratory settings. DARPA does not anticipate environmental release during the program.


Ecovative gets $9.1M DARPA contract for living materials

Ecovative Design has been awarded a contract valued at up to $9.1 million from the Defense Advanced Research Projects Agency (DARPA) to develop next generation building materials: living materials that are more versatile, more efficient, and more cost effective in rapidly creating structures, by literally growing those structures in places where they are needed. Ecovative, the pioneer and world leader in the design and manufacturing of mycelium-based biomaterials, will work in collaboration with leading researchers in synthetic biology, biochemistry, and systems biology from Columbia University, New York University (NYU), and the Massachusetts Institute of Technology (MIT). This four-year project aims to create these living systems, and to demonstrate that the materials can be manufactured at scale.


Columbia University Helena Rubenstein Professor, Departments of Chemistry and Systems Biology, Dr. Virginia Cornish said, “We are excited to bring our expertise in chemical and synthetic biology to this collaboration with Ecovative Design. Our team seeks to functionalize the surface of Ecovative Design’s fungal material with an engineered yeast skin that can sense (and respond to) the environment, and enables the material to heal after damage. In this way, we are developing an adaptable living material that challenges the paradigm of traditional building materials.”


MIT Synthetic Biology Center (SBC) co-director, and Professor of Biological Engineering, Christopher A. Voigt, Ph.D. said: “Living cells can function as atomic architects in the construction of functional nanomaterials with a precision impossible using chemistry and materials science. The marriage of synthetic biology and grown materials provides a means to harness this precision engineering at the bulk scale.”


“We have used biology to grow materials with exceptional properties that are simply unattainable through conventional chemistry. We have clearly demonstrated these products can be manufactured at scale. This project will demonstrate what is possible when we start taking advantage of biomaterials’ most powerful property: life itself,” says Bayer. “These same techniques can be used to develop new housing solutions in conventional architecture, as well as support rapidly deployable relief structures in areas struck by natural disasters.”


“During the last ten years we created a new materials science by demonstrating the versatility of mycelium, in combination with agricultural waste, to create sustainable products with a range of properties and functions,” Ecovative’s Chief Scientist Gavin McIntyre said. “The next logical step is to determine how to tap the power and potential of consortia of organisms to create the next generation of advanced biomaterials. Microbial communities power our bodies and ecosystems, and Ecovative is uniquely positioned to exploit a novel microbiome to propel material science into new frontiers.”


The U.S. Department of Defense, of which DARPA is a part, must at times establish shelters or other structures for military or civilian purposes in unfamiliar or under-resourced environments. Imagine a U.S. military unit arriving in a conflict zone, or a humanitarian disaster site, and creating its base of operations by literally growing building materials or the required structures themselves – shelter, barriers, furniture and more.


Ecovative envisions a day when U.S. forces will arrive at a location and be able to do just that. Ecovative Design says the contract with the Defense Advanced Research Projects Agency, or DARPA, is to develop living building materials. An idea being explored would be to literally grow shelters or other structures in places where they are needed.





DARPA demonstrates speeding up learning by 40%, enabling soldiers to learn foreign languages, intelligence analysis and cryptography techniques faster

In March 2016, DARPA launched the Targeted Neuroplasticity Training (TNT) program, which seeks to advance the pace and effectiveness of a specific kind of learning—cognitive skills training—through the precise activation of peripheral nerves that can in turn promote and strengthen neuronal connections in the brain. TNT will pursue development of a platform technology to enhance learning of a wide range of cognitive skills, with a goal of reducing the cost and duration of the Defense Department’s extensive training regimen while improving outcomes.


“Military personnel are required to utilize a wide variety of complex perceptual, motor and cognitive skills under challenging conditions,” said Dr. Robert Rennaker, Texas Instruments Distinguished Chair in Bioengineering, director of the TxBDC and chairman of the Department of Bioengineering. “Mastery of these difficult skills, including fluency in foreign language, typically requires thousands of hours of practice,” said Rennaker, who served in the U.S. Marine Corps. DARPA’s TNT program aims to develop an optimized strategy to accelerate acquisition of complex skills, which would significantly reduce the time needed to train foreign language specialists, intelligence analysts, cryptographers and others.



That kind of neural tuning can “influence cognitive state—how awake you are, or how much attention you’re paying to something you’re viewing or performing,” says Doug Weber, a bioengineer at DARPA who heads up the TNT project. If it works—if researchers can improve a person’s ability to learn—the DoD could reduce the amount of time spent training soldiers and intelligence agents. “Foreign language training is one of our primary application areas because it’s very time intensive,” says Weber. Language courses last more than a year, and only about 10 percent of trainees reach the level of proficiency needed for their jobs, he says.


Weber says he envisions intelligence agents or soldiers wearing some kind of noninvasive stimulation device that delivers precise electrical pulses as they practice their skills. And unlike caffeine or energy drinks, the stimulation can be turned off and, hopefully, causes fewer side effects.

Targeted Neuroplasticity Training (TNT)

The TNT program seeks to use peripheral nerve stimulation to speed up learning processes in the brain by boosting release of brain chemicals, such as acetylcholine, dopamine, serotonin, and norepinephrine. These so-called neuromodulators play a role in regulating synaptic plasticity, the process by which connections between neurons change to improve brain function during learning. By combining peripheral neurostimulation with conventional training practices, the TNT program seeks to leverage endogenous neural circuitry to enhance learning by facilitating tuning of neural networks responsible for cognitive functions.


DARPA is taking a layered approach to exploring this new terrain. Fundamental research will focus on gaining a clearer and more complete understanding of how nerve stimulation influences synaptic plasticity, how cognitive skill learning processes are regulated in the brain, and how to boost these processes to safely accelerate skill acquisition while avoiding potential side effects. The engineering side of the program will concentrate on developing non-invasive methods to deliver peripheral nerve stimulation that enhances plasticity in brain regions responsible for cognitive functions. The goal is to optimize training and stimulation protocols that expedite the rate of learning and maximize long-term retention of cognitive skills.


DARPA’s TNT efforts differ from the Agency’s previous neuroscience and neurotechnology endeavors by seeking not to restore lost function but to advance capabilities in healthy individuals. “DARPA is approaching the study of synaptic plasticity from multiple angles to determine whether there are safe and responsible ways to enhance learning and accelerate training for skills relevant to national security missions,” said Doug Weber, TNT program manager at DARPA.


TNT is part of a broader portfolio of programs within DARPA that support the White House BRAIN initiative.


DARPA demonstrated a technological breakthrough to raise learning speed by 40%

The research arm of the US military has discovered a way to increase learning speed by up to 40 per cent, in a move which could lead to a generation of super soldiers. DARPA has demonstrated a non-invasive method which could massively boost learning speed.


Their study, conducted with scientists from the HRL Laboratories in California, McGill University in Montreal, Canada, and Soterix Medical in New York, shows that using a transcranial direct current stimulation (tDCS) device – which delivers weak electrical currents to a small area of the brain – to stimulate the prefrontal cortex increased learning speed by 40 per cent.


The researchers tested their method on macaques and then prompted them to perform tasks in order to earn a reward. The monkeys that wore the device mastered the tasks to earn their reward in 12 trials, but those without took 21 trials, according to the research published in Current Biology.
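As a quick sanity check on the “up to 40 per cent” figure, the trial counts reported above imply roughly that speedup:

```python
# Trial counts from the macaque study reported above (Current Biology).
trials_with_stimulation = 12   # monkeys wearing the tDCS device
trials_without = 21            # monkeys without stimulation

# Fractional reduction in trials needed to master the task.
reduction = 1 - trials_with_stimulation / trials_without
print(f"{reduction:.0%} fewer trials to mastery")  # about 43%
```

That works out to roughly 43 per cent fewer trials, consistent with the headline claim, give or take rounding.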


Lead HRL researcher Praveen Pilly said: “In this experiment, we targeted the prefrontal cortex with individualised non-invasive stimulation montages. That is the region that controls many executive functions, including decision-making, cognitive control, and contextual memory retrieval. It is connected to almost all the other cortical areas of the brain, and stimulating it has widespread effects.”


TNT Program Partners

DARPA awarded multimillion-dollar contracts to eight university groups in April 2017 to develop technology that will enhance the brain’s ability to learn and speed up the training process using electrical stimulation. DARPA’s goal is to enhance learning rates by 30 percent by the end of the four-year program. Studies will be conducted on human volunteers and animals.


The teams awarded the research contracts will start with the vagus and trigeminal nerves. A team headed up by Stephen Helms Tillery, a neuroscientist at Arizona State University, for example, will study the anatomy and role of the trigeminal nerve—a cranial nerve responsible for sensations and motor function in the face.


Evidence suggests that this nerve complex has access to areas of the brain stem that release norepinephrine, a chemical associated with attention, and dopamine, a chemical linked to the brain’s ability to adapt. Helms Tillery’s team will study the anatomy and function of the trigeminal nerve in rhesus macaques.


Tillery’s team will also stimulate the trigeminal nerve in human volunteers to see how it affects behavior. In one experiment, with help from the U.S. Air Force Research Laboratory, volunteers will watch surveillance video and try to identify a person carrying a weapon. In another experiment, in partnership with a military research laboratory called USARIEM, volunteers will fire rifles at long ranges in a virtual shooting range while their behavior and performance are quantified.


Other TNT awardees are focusing on the vagus nerve—a major neural throughway that connects most of the body’s key organs. The vagus nerve travels from the base of the brain to the chest and abdomen, carrying a wide assortment of signals to and from the brain. It supplies the heart, lungs, digestive tract, pancreas and other organs.


Wright State is currently in collaboration with the Air Force Research Laboratory (AFRL), Vanderbilt University and Ibis Biosciences, and has received an award of up to $9.1 million from the Defense Advanced Research Projects Agency (DARPA) to improve learning using a handheld, low-power electrical stimulator, which is applied to the neck.


Researchers involved in the new Learning through Electrical Augmentation of Plasticity (LEAP) project believe vagal nerve stimulation can be used with healthy subjects to stimulate a change in neurons that increases the ability to learn. LEAP will improve the understanding of fundamental molecular mechanisms of nerve stimulation, in addition to studying the way genes are expressed, known as epigenetics. Epigenetics, meaning “upon genetics”, describes how environmental factors turn genes on and off, thereby changing traits.

DARPA is funding eight efforts at seven institutions in a coordinated research program that focuses initially on the fundamental science of brain plasticity and aims to conclude with human trials in healthy volunteers. To facilitate transition into real-world applications, some of the teams will work with intelligence analysts and foreign language specialists to understand how they train currently so that the TNT platform might be refined around their needs. The program will also compare the efficacy of invasive (via an implanted device) versus non-invasive stimulation, investigate how to avoid potential risks and side effects of stimulation, and hold a workshop on the ethics of using neurostimulation to enhance learning.

The first half of the TNT program focuses on deciphering the neural mechanisms underlying the influence of nerve stimulation on brain plasticity; discovering physiological indicators that can verify when stimulation is working effectively; and identifying and mitigating any potential side effects of nerve stimulation. The second half of the program will focus on using the technology in a variety of training exercises to measure improvements in the rate and extent of learning.

The institutions listed below are leading teams exploring aspects of using stimulation to activate plasticity.

  • An Arizona State University team led by Dr. Stephen Helms Tillery is targeting stimulation of the trigeminal nerve to promote synaptic plasticity in the sensorimotor and visual systems of the brain. Through partnerships with the Air Force Research Laboratory, the U.S. Air Force’s 711th Human Performance Wing, and the U.S. Army Research Institute of Environmental Medicine, the team will evaluate TNT stimulation protocols with two groups of volunteers—one studying intelligence, surveillance, and reconnaissance, and another practicing marksmanship and decision-making.


  • A Johns Hopkins University team led by Dr. Xiaoqin Wang is focusing on regions of the brain involved in speech and hearing to understand the effects of plasticity on language learning. The team will compare the efficacy of invasive versus non-invasive vagal nerve stimulation (VNS), testing the ability of volunteers to discriminate phonemes, learn words and grammar, and produce the unique sounds demanded by some foreign languages.


  • In one of two efforts DARPA is funding at the University of Florida, a team led by Dr. Kevin Otto is identifying which neural pathways in the brain VNS activates. The team will also conduct behavioral studies in rodents to determine the impact of VNS on perception, executive function, decision-making, and spatial navigation.


  • In the second University of Florida effort, a team led by Dr. Karim Oweiss will use an all-optical approach combining fluorescent imaging and optogenetics to interrogate the neural circuity that connects neuromodulatory centers in the deep brain to decision-making regions in the prefrontal cortex, and optimize VNS parameters around this circuitry to accelerate learning of auditory discrimination tasks by rodents.


  • A University of Maryland effort led by Dr. Henk Haarmann is studying the impact of VNS on foreign language learning. His team will use electroencephalography (EEG) to examine the effects of VNS on neural function during speech perception, vocabulary, and grammar training.


  • A University of Texas at Dallas team led by Dr. Mike Kilgard is identifying optimal stimulation parameters to maximize plasticity, and comparing the effects of invasive versus non-invasive stimulation in individuals with tinnitus as they perform complex skill-learning tasks such as acquiring a foreign language. The team will also investigate the longevity of stimulation effects to determine if follow-up training is needed for long-term retention of learned skills.


  • A University of Wisconsin team led by Dr. Justin Williams is using state-of-the-art optical imaging, electrophysiology, and neurochemical sensing techniques in animal models to measure the influence of vagal and trigeminal nerve stimulation on boosting activity of neuromodulatory neurons in the brain.


  • A Wright State University team led by Dr. Timothy Broderick is focusing on identifying epigenetic markers of neuroplasticity and indicators of an individual’s response to VNS. Through a partnership with the Air Force Research Laboratory and the U.S. Air Force’s 711th Human Performance Wing, the team will also work with volunteer intelligence analyst trainees studying object and threat recognition to determine the impact of non-invasive VNS on that training.


Recognizing that these new technologies for learning and training could raise social and ethical issues, the TNT program is funding Arizona State University to host a national ethics workshop within the first year of the program. The workshop will engage scientists, bioethicists, regulators, military specialists, and others in discussion of those issues, and will produce for wider consideration a report on potential ethical issues relating to cognitive enhancement for warfighters.



Media a critical element of National Security and a Force Multiplier for Military

In this new age of the information revolution, media has become an important pillar of national power, with the ability to influence public opinion. Media shapes the perceptions of decision-makers and the public, and based on these perceptions political decision-makers formulate policies and choose courses of action. Abraham Lincoln, the 16th President of the US, who led his country through its bloodiest civil war in history, stated, “Public opinion is everything. With it nothing can fail, without it nothing can succeed.”


Another complex issue in the media-national security connection is the new media, or so-called social media, considered by some specialists a challenge for democracies, because social media channels such as social networks and blogs are powerful tools for spreading information to the masses.


Social media platforms like YouTube, Facebook and Twitter have also become important for terrorists to further their goals and spread their message, because of the convenience, affordability and broad reach of social media. Al-Qaeda has an Internet presence spanning nearly two decades. Al-Qaeda terrorists use the internet to distribute material anonymously or ‘meet in dark spaces’. The Czech Military Intelligence Service commented that Al-Qaeda is spreading its ideology among the Muslim community in Europe, mainly through social media. ISIS’s use of social media platforms has been as phenomenal as its successes on the battlefield.


Social media has also become an element of psychological warfare, which involves the planned use of propaganda and other psychological operations to influence the opinions, emotions, motives, reasoning, attitudes, and behavior of opposition groups. Psychological operations target foreign governments, organizations, groups and individuals.


The most recent incident was the comprehensive psychological warfare campaign unleashed by China during the India-China Doklam crisis. Beijing tried to exploit political divisions and sow dissension in India by calling Sushma Swaraj a “liar”, reaching out to Modi’s opponents, including Rahul Gandhi, and attacking his “Hindu nationalism.” The aim was to use Indians to put pressure on the Indian government to withdraw, largely by casting doubt on India’s own assertions. In J&K, social media has been used to rally people against the state and the armed forces.


Social media is all-pervasive, and at the professional level it impacts operations, administration, motivation, morale, functioning and fighting at all levels from the lowest to the highest, said Lt Gen Vinod Bhatia, PVSM, AVSM, SM (Retd), Director CENJOWS. He wondered whether the armed forces are ready to face this new warfare and said that the truth is somewhere in between. He emphasized that for the Indian Armed Forces to effectively exploit social media, there is a requirement to review existing procedures, policies and structures.


Therefore, experts feel it is imperative to develop a coordinated strategic communications strategy and a specific agency to deal with these media threats and ensure efficient media management. There is also a need for the armed forces to use social media as an effective force multiplier in the security domain.


Media management

However, the management of media has become increasingly difficult due to the wide proliferation of media in the form of numerous TV channels, print and social media. There are also challenges due to the increasing commercialization of media, with business houses and politicians having a variety of interests of their own and set goals to be achieved. The news media also functions independently, without rules, regulations or a code of conduct, owing to a lack of clear legal guidelines and conflicting interests.


The leaders have called for countering radicalisation conducive to terrorism and the use of internet for terrorist purposes. The counter terrorism actions must continue to be part of a comprehensive approach, including combating radicalisation and recruitment, hampering terrorist movements and countering terrorist propaganda.


Management of media is also vital for the military in both peace and conflict situations. Armed forces want to control media since any leaks can be used by an adversary to gather intelligence and jeopardize a mission. Deception and surprise are the most potent weapons in the commander’s armoury, and they require careful media management. There is a need for greater understanding between the media and armed forces, in understanding each other’s objectives and working together in harmony.


Experts point to the requirement of clear and concise information dissemination policy and establishment of Department of Sentiment Analysis to gauge the sentiments and thereafter choose the best methodology to shape the sentiments favorably.


Public information, media management, information warfare and social media are closely inter-related; according to experts, they need close coordination and management by a dedicated agency.




The future of controlling anything from prosthetic arms to cars to UAVs to robotic armies at the speed of thought has arrived

The brain-computer interface (BCI) allows people to use their thoughts to control not only themselves, but the world around them. Every action our body performs begins with a thought, and with every thought comes an electrical signal. These electrical signals can be picked up by the brain-computer interface, through an electroencephalograph (EEG) or an implanted electrode, then translated and sent to the performing hardware to produce the desired action.

Brain-computer interfaces are being applied in neuroprosthetics, through which paralyzed persons are able to control robotic arms; in neurogaming, where one can control a keyboard, mouse, etc. with one’s thoughts and play games; in neuroanalysis (psychology); and now in military and defense applications to control robotic soldiers or fly planes by thought. The obvious military applications are mind-controlled weaponry and remotely piloted aircraft, which could make operation and reaction far faster.
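The signal-to-command chain described above (raw electrical signal, feature extraction, translation, hardware command) can be sketched in a few lines. This is a toy illustration only: the variance feature, the threshold, and the command names are all illustrative assumptions, not part of any real BCI system.

```python
import statistics

def extract_feature(samples):
    """Crude 'activity' feature: variance of the raw EEG-like signal."""
    return statistics.pvariance(samples)

def translate(feature, threshold=1.0):
    """Map the feature to a command for the performing hardware."""
    return "MOVE" if feature > threshold else "REST"

# Synthetic signals standing in for two mental states.
resting = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15]
imagining_movement = [2.0, -1.8, 2.2, -2.1, 1.9, -2.0]

print(translate(extract_feature(resting)))             # REST
print(translate(extract_feature(imagining_movement)))  # MOVE
```

Real systems replace the variance threshold with band-power features and trained classifiers, but the pipeline shape is the same.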

Russian scientists have developed the first electric car in the world that will be controlled by the brain, through an innovative system of direction by ‘telekinesis’ that will allow the exchange of information between the brain and the vehicle’s control systems. The ‘neuromobile’, as its creators call it, will allow people with limited mobility to cover long distances without help.

Eventually, brain-computer interfaces could let people control augmented reality and virtual reality experiences with their mind instead of a screen or controller. Facebook’s CEO and CTO teased these details of this “direct brain interface” technology over the last two days at F8.


Brain-Computer Interfaces

Facebook revealed it has a team of 60 engineers working on building a brain-computer interface that will let you type with just your mind, without invasive implants. The team plans to use optical imaging to scan your brain a hundred times per second to detect you speaking silently in your head, and translate it into text. Facebook tells Josh Constine: “This isn’t about decoding random thoughts. This is about decoding the words you’ve already decided to share by sending them to the speech center of your brain.” Regina Dugan, the head of Facebook’s R&D division Building 8, explained to conference attendees that the goal is to eventually allow people to type at 100 words per minute, 5X faster than typing on a phone, with just your mind.

“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world – speech – can only transmit about the same amount of data as a 1980s modem. We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today,” said Mark Zuckerberg.
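Zuckerberg’s comparison can be sanity-checked with rough bitrates. The figures below (5 Mbit/s per HD stream, 2.4 kbit/s for a 1980s modem) are illustrative ballpark assumptions, not numbers from Facebook:

```python
# Back-of-envelope check of the bandwidth comparison quoted above.
hd_stream_bps = 5_000_000              # assumed: one HD video stream
brain_output_bps = 4 * hd_stream_bps   # "4 HD movies every second"
speech_bps = 2_400                     # assumed: 1980s-modem ballpark

print(f"Implied gap: roughly {brain_output_bps // speech_bps:,}x")
```

Under those assumptions the gap between what the brain produces and what speech can carry is several thousandfold, which is the point of the quote.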

Earlier this year, in collaboration with Johns Hopkins Medicine, APL demonstrated the ability to decode semantic information — information about the meanings of words — from neural signals measured using electrodes placed on the surface of the brain in patients undergoing treatment for epilepsy. Similarly, APL has been designing noninvasive optical imaging methods to replace the use of implanted electrodes in order to make these technologies accessible beyond clinical applications. APL is working on a project that focuses on developing a silent speech interface that will allow users to type 100 words per minute — five times faster than typing on a smartphone — using only their thoughts. “The research agreement with Facebook has also allowed us to expand our pioneering brain–machine interface work, and further combine our expertise in neuroscience with our expertise in optical imaging.”

Researchers recorded ECoG while patients named objects from 12 different semantic categories, such as animals, foods and vehicles. “By learning the relationship between the semantic attributes associated with objects and the neural activity recorded when patients named these objects, we found that new objects could be decoded with very high accuracies,” said Michael Wolmetz, a cognitive neuroscientist at the Johns Hopkins Applied Physics Laboratory, and one of the paper’s authors. “Using these methods, we observed how different semantic dimensions — whether an object is manmade or natural, how large it typically is, whether it’s edible, for example — were organized in each person’s brain.”
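To make the decoding idea concrete, here is a minimal nearest-centroid sketch over hypothetical semantic attribute vectors (natural vs. manmade, typical size, edibility). The feature values, centroids, and the decoding rule itself are invented for illustration; this is not APL’s actual model.

```python
import math

# Toy semantic attribute space: [natural?, large?, edible?]
# Centroid values are made-up illustrative numbers.
centroids = {
    "animal":  [0.9, 0.3, 0.2],
    "food":    [0.8, 0.1, 0.9],
    "vehicle": [0.1, 0.8, 0.0],
}

def decode(features):
    """Return the category whose centroid is closest to the features."""
    return min(centroids, key=lambda c: math.dist(features, centroids[c]))

# A hypothetical neural feature vector for a named object.
print(decode([0.85, 0.15, 0.8]))  # food
```

The real study learned the mapping from neural activity to semantic attributes statistically, but the final classification step is conceptually similar to this distance comparison.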

Russian, Chinese and German Researchers develop cars controlled by mind alone

The mind-controlled full-scale electric car has been designed by specialists from Russia’s N.E. Lobachevsky University in Nizhny Novgorod. The creators of the prototype said they had to develop a system for recording brain signals of different modalities, along with special algorithms that allow the car to decipher commands. The most innovative part of the vehicle is its control system, which uses special algorithms capable of reading different signal modes and transferring them to the machine’s control system.

Specifically, the algorithms will classify “a set of data on the physical state of the person” received by the control system to “discern the mental instructions of the pilot”, as explained at the university, reports RG. In other words, the system will determine which manoeuvre the driver intended in a given traffic situation. Then the driver’s mental commands will be transmitted to the actuators of the car’s control system.

The developers believe that the unique car will go into mass production within three years and will be used primarily by handicapped citizens. The creators also stressed that the Russian neuro-mobile will prove to be not only advanced in terms of technology, but will also be economical and affordable.

“A significant advantage is the low cost, 550,000 to 990,000 rubles [$9,000 to $16,000], while the purchase of foreign cars with similar characteristics will cost 50%


Researchers at Nankai University in Tianjin, China are working alongside Chinese automaker Great Wall Motor to design a car which can be controlled by the mind alone. During the test, the subject, wearing a 16-sensor headset, was able to command the car to accelerate, brake, and open and shut the doors.

“There are two starting points of this project. The first one is to provide a driving method without using hands or feet for the disabled who are unable to move freely; and secondly, to provide healthy people with a new and more intellectualized driving mode,” researcher Zhang Zhao told Reuters.

According to the researchers, the ultimate plan could be to integrate this technology with driverless cars, so it is more of a complementary service than an alternative to physical driving. Professor Duan Feng, who led the project, told Reuters: “In the end, cars, whether driverless or not, and machines are serving people. Under such circumstances, people’s intentions must be recognized. In our project, it makes the cars better serve human beings.”


A team of researchers at the Free University of Berlin has also explored brain interfaces to steer vehicles. The German-based team, led by artificial intelligence professor Dr. Raul Rojas, used a headset and electroencephalography (EEG) sensors designed by bioinformatics company Emotiv. The system was able to interpret the driver’s thoughts, such as the desire to turn left, turn right, accelerate or brake, and convert them into computer commands.


DARPA’s Mind Controlled Prosthetic Arm

DARPA’s prosthetic arm is designed to take the place of a real arm, letting the wearer control it with their thoughts, driven by brain cells. The robotic arm is connected by wires that link up to the wearer’s motor cortex—the part of the brain that controls muscle movement—and sensory cortex, which identifies tactile sensations when you touch things. In essence, the researchers claim, it allows its user to feel things with their robotic hand.

In their research, the HAPTIX team is implanting electrodes in a patient’s muscles between the elbow and shoulder, as well as in individual nerve fascicles that correspond to wrist and finger control. According to the release, the researchers are also looking to develop minimally invasive procedures to implant electrodes in the spinal cord. The HAPTIX researchers seek to acquire and decode neural signals that could provide intuitive prosthetic control and restore sensory feedback using these neural interface systems.

“We want to re-establish communication between the motor parts of the nervous system and the prosthetic hand through the use of implantable electronics,” Weber said in a press release.

The HAPTIX program is in its second phase, which is scheduled to continue through 2018. The third phase is scheduled for 2019, when transradial amputees will be allowed to take home a HAPTIX-controlled system for extended trials outside the laboratory, the press release noted.

Researchers at the University of Pittsburgh were able to increase the maneuverability of the mind-controlled robotic arm from seven dimensions (7D) to ten dimensions (10D). The extra dimensions come from four hand movements (finger abduction, a scoop, thumb extension and a pinch), enabling the user to pick up, grasp and move a range of objects much more precisely than with the previous 7D control. This in turn helps paralyzed persons to control a robotic arm with a range of complex hand movements.

“The ultimate goal for HAPTIX is to create a device that is safe, effective, and reliable enough for use in everyday activities,” explains Doug Weber, the DARPA HAPTIX program manager.


California Institute of Technology’s BCI allows a Paralyzed Man to Drink Beer on His Own

A paralyzed man named Erik Sorto has been able to drink beer, shake hands and even play “rock, paper and scissors,” thanks to a robotic arm controlled solely by his mind.

For this experiment, California Institute of Technology neuroscientist Richard Andersen implanted the BCI electrodes in a different area of the brain: the posterior parietal cortex, located on top of the brain near the back. The parietal cortex is a center of higher-level cognition that processes the planning of movements, rather than the details of how movements are executed.

An implant in this area allows the goal of an action to be conveyed directly to the robotic limb, producing more natural, fluid motions while reducing the number of neural signals needed to control movement.

The implants differ from those in BrainGate, which placed electrodes in the motor cortex, the part of the brain that directs voluntary physical activity. Because the motor cortex directly controls many different muscles, patients had to painstakingly focus on which muscles to activate for each specific component of a gesture. With those implants the patients could still control a robotic limb, but the movement was delayed and jerky.

Entertainment and Gaming

Entertainment and gaming applications have opened the market for nonmedical brain-computer interfaces. Various games have been demonstrated, including one in which helicopters are flown to any point in a 2D or 3D virtual world. BrainArena allows two players to play a collaborative or competitive football game by means of two BCIs; they score goals by imagining left- or right-hand movements.

The Emotiv EPOC lets users control a laptop’s keyboard and mouse as well as move characters in games. MUSE is intended to let you control your iPhone or Android device with your mind. With ThynkWare, anyone can use their thoughts to control smartphones, tablets, the home, office, TV, robots, and even clothing.

Mind-Controlled Telepresence Robot

A relatively new field of research is telepresence, which allows a human operator an at-a-distance presence in a remote environment via a brain-actuated robot.

A telepresence robot developed at the École Polytechnique Fédérale de Lausanne (EPFL) that can be controlled by thought may give people with severe motor disabilities a greater level of independence. Successfully put through its paces by 19 people scattered around Central Europe – nine of whom are quadriplegic and all of whom were hooked up to a brain-machine interface – the robot handled obstacle detection and avoidance on its own while the person controlling it gave general navigation instructions.

Military Applications

The report “Neuroscience, Conflict and Security”, part of a series examining the impact of neuroscience on society, deals specifically with the potential application of advances in neuroscience to the armed forces and security personnel.

A key advance in neuroscience has been improvements in real-time neuro-imaging, which can indicate in great detail which parts of the brain “light up” when undertaking certain activities. One of its applications could be to screen potential recruits for a specific role, for example to see if they are temperamentally suited to be a commander, pilot or diver.

“At the moment it’s very much a case of taking people on and subjecting them to high-stress exercises and choosing the ones who make it,” says Flower. “If they could be subjected to imaging during assessment you could identify who has good risk-taking behaviour, strategy and planning ability, or 3D analytical skills.”

Brain scanning could also speed up and improve target recognition, or identify changes in surveillance satellite images, by tapping subconscious object identification rather than requiring an operator to consciously process and react.

“It has been discovered that when you show the brain different images, it spots the differences between them even though they may not reach conscious awareness,” says Flower. “Wearing a helmet like a hairnet can pick up a spike in brain activity which you can correlate to differences identified between two images, even if they were flashed up too quickly to process consciously.”

That potentially has the ability not only to speed up the process of target selection but also improve accuracy. It could also reduce problems associated with fatigue, which is a big issue facing people whose job involves scanning images for a long time, especially in the dark, such as surveillance UAV operators.
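The subconscious-detection idea Flower describes can be sketched as outlier detection on post-stimulus EEG responses: flag the images whose evoked response stands out from the run, even if the operator never consciously registered them. The epoch values and z-score threshold below are illustrative, not drawn from any fielded system.

```python
import statistics

def flag_targets(epochs, z_threshold=3.0):
    """RSVP-style target flagging (a sketch): each entry in `epochs` is the
    EEG amplitude ~300 ms after an image flash. Images whose response is an
    outlier relative to the run's distribution are flagged as probable
    subconscious 'hits'."""
    mean = statistics.mean(epochs)
    sd = statistics.pstdev(epochs)
    return [i for i, amp in enumerate(epochs)
            if sd > 0 and (amp - mean) / sd > z_threshold]

# 40 image flashes: mostly baseline responses, two large P300-like deflections.
responses = [1.0] * 40
responses[12] = 9.0
responses[31] = 8.5
print(flag_targets(responses))  # -> [12, 31]
```

An analyst would then review only the flagged frames, which is where the claimed speed-up over exhaustive manual scanning comes from.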

The obvious application for the military is mind-controlled weaponry and remotely-piloted aircraft, which could make operation and reactions far faster. “If you couple that with your subconscious mind being much faster at dealing with information you can see a situation sometime in the future where you’re not thinking about flying the aircraft, but your subconscious is doing it without interfering in any way,” says Flower. “You would probably have a much better appreciation of an incoming threat and fire off a couple of missiles without having to consciously think.”

Like automated weaponry and battlefield robotics, however, these new techniques could require an overhaul of ethical guidelines, especially with regards to civilian casualties. Currently the last person who gave the order to fire is responsible, but if it came from the operator’s subconscious, the line becomes blurred.


Flying Manned Aircraft and Weaponized UAVs by Mind

The University of Florida recently held an event organizers claimed was the “world’s first brain drone race,” featuring unmanned aerial vehicles powered by the brain activity of contestants. The race was billed as a “competition of one’s cognitive ability and mental endurance requiring competitors to out-focus an opponent in a drone drag race fueled by electrical signals emitted from the brain.”

Pilots don electroencephalogram headsets that are calibrated to each wearer’s brain. For example, neuron activity is recorded while the wearer is told to think about pushing something forward. This activity is then bound to the forward stick on the drone’s controller, so that future similar neuron activity will move the drone forward. Organizers of the event describe BCI as “the utilization of a brain imaging device for the purpose of controlling machines with the human brain and to understand the human’s emotional condition or state.”
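The calibrate-then-bind procedure described above can be sketched as a nearest-centroid classifier: store average feature vectors recorded during each instructed thought, then map live features to the nearest stored command. The feature vectors, command names, and stick mapping here are hypothetical.

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def calibrate(labelled_epochs):
    """labelled_epochs: {command: [feature_vector, ...]} -> {command: centroid}"""
    return {cmd: centroid(vs) for cmd, vs in labelled_epochs.items()}

def classify(model, features):
    """Return the bound command whose calibration centroid is closest."""
    return min(model, key=lambda cmd: math.dist(model[cmd], features))

# Hypothetical binding of decoded commands to controller stick positions.
COMMAND_TO_STICK = {"forward": (0.0, 1.0), "neutral": (0.0, 0.0)}

model = calibrate({
    "forward": [[0.9, 0.1], [0.8, 0.2]],   # features while thinking "push forward"
    "neutral": [[0.1, 0.9], [0.2, 0.8]],   # features while relaxing
})
print(COMMAND_TO_STICK[classify(model, [0.85, 0.15])])  # -> (0.0, 1.0)
```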

The University of Minnesota carried out a successful demonstration of a thought-controlled mini-helicopter capable of being piloted through obstacles with around 90% accuracy.

A team of engineers at Technische Universität München in Germany developed an algorithm that converts brain waves into flight commands. EEG cables sent electrical signals to a computer, where the mind-control algorithm isolated the pilot’s plane-control thoughts and converted them into actions that were carried out wirelessly.

In the future, soldiers may be able to control both manned aircraft and weaponized UAVs through all phases of flight.

A researcher from Arizona State University has found a way to control multiple drones using nothing but the power of thought. The controller wears a skull cap containing hundreds of electrodes wired to a computer. The wearer thinks specific commands, the computer translates them into instructions, and the robots obey.


Russian Scientists Develop Mind-Controlled Quadcopter

The Zelenograd-based company Neurobotics has designed a mind-controlled quadcopter that can not only fly in four directions (forward, backward, right and left) but can also reach a specific target point, the report said.

“Commands, or ‘conditions’ as we call them, are generated by the sensors on the head of an operator. The person thinks about certain actions at right moments which the system then recognizes and identifies,” Neurobotics director Vladimir Konyshev explained, as cited by the report.

The new technology has a great potential in the future. It would not only help a great deal to the Russian Armed Forces on the battlefield, but its interface could also be used to help people with limited mobility, Konyshev added.

However, a basic limitation of current interfaces is that they can decode only simple commands such as “left” and “right”. Future devices would need to capture and decode the brain signals responsible for small, precise movements in order to accomplish complex tasks like landing an aircraft. The accuracy of such systems also needs to be enhanced.

Brain Controlled Military robots

Brain-Robot Interaction (BRI) refers to the ability to control a robot system via brain signals and is expected to play an important role in the application of robotic devices in many fields.

China Developing mind controlled robot Army

At a recent demonstration at the People’s Liberation Army Information Engineering University in Zhengzhou, students showed that they could control the movement of small robots using only their minds, sending them trundling in different directions. They were also able to turn the robots’ heads and get them to pick up objects. The Chinese army of the future could see robot soldiers controlled by military commanders’ minds.

The technology faces three major engineering challenges that must be addressed before soldiers can seamlessly control remote military robots on the battlefield. First, the efficiency and accuracy of non-invasive BCIs, which are slow and somewhat uncertain at present, need to be improved; second, such interfaces tend to make high cognitive demands on the user; and finally, especially for tele-operation via the internet, variable communication delays are a significant problem.

Until then, most of the systems being developed currently are adopting a ‘shared-control’ approach, equipping the robot agent with a degree of intelligence to allow it to work semi-autonomously.
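A shared-control loop of the kind described might blend the operator’s coarse BCI command with the robot’s own obstacle-avoidance reflex. The blending weight, sensor model, and safety distance below are assumptions for illustration, not any specific system’s design.

```python
def shared_control(user_turn, obstacle_distances, safe_dist=1.0, autonomy=0.7):
    """Blend the operator's coarse BCI command with the robot's own
    obstacle avoidance. user_turn is in [-1, 1] (left..right);
    obstacle_distances is a (left, right) pair of range readings in metres."""
    left, right = obstacle_distances
    # Repulsion: steer away from whichever side is dangerously close.
    repulsion = 0.0
    if left < safe_dist:
        repulsion += (safe_dist - left)    # push right
    if right < safe_dist:
        repulsion -= (safe_dist - right)   # push left
    # When an obstacle is near, the robot's reflex outweighs the slow,
    # uncertain BCI command; otherwise the user is obeyed directly.
    blended = (1 - autonomy) * user_turn + autonomy * repulsion if repulsion else user_turn
    return max(-1.0, min(1.0, blended))

print(shared_control(0.0, (0.2, 5.0)))   # wall close on the left -> steers right
print(shared_control(-1.0, (5.0, 5.0)))  # clear path -> obeys the user fully
```

This is the trade-off the ‘shared-control’ approach makes: the BCI supplies low-bandwidth intent, while the onboard controller handles the fast, precise corrections.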


BNCI Horizon 2020 project

The European Commission has released a roadmap on BCI, the “BNCI Horizon 2020 project”, with the objective of providing a global perspective on the BCI field now and in the future. Many of the applications center on the needs of the disabled community following serious injuries. BCIs will give people greater awareness of their own biological and mental state, and will also help restore lost function.

Hence, interfacing with the brain directly will be on the forefront of both societal and medical evolution. Relevant application areas include social interaction and recreation, occupational safety, quality of life, independent living in old age, and (occupational) rehabilitation.

By 2025, according to the BCI roadmap, a broad range of brain-controlled applications is expected to be standard in medical treatment and therapy, as well as in personal health monitoring.

BCI technology has been advancing at such a rapid pace that it has now become possible to externally control computers, smartphones, and even vehicles with thought.


References and Resources also include: