US DOD’s JUMP program to develop high performance, energy efficient and secure microelectronics for dominance in future Battlefield Internet of Things

The Joint University Microelectronics Program (“JUMP”) is a collaborative effort between the Department of Defense, U.S. universities, and industry participants, with the goal of substantially increasing the performance, efficiency, and capabilities of broad classes of electronics systems for both commercial and military applications.

The collaborative, multidisciplinary, multi-university consortium will support long-term research focused on high performance, energy efficient microelectronics for end-to-end sensing and actuation, signal and information processing, communication, computing, and storage solutions that are cost-effective and secure.

These research and development efforts should provide the Department of Defense with an unmatched technological edge in advanced radar, communications, and weapons systems, and provide the U.S. economy with unique information technology and processing capabilities critical to commercial competitiveness and future economic growth.

The Consortium seeks to address existing and emerging challenges in electronics and systems technologies by concentrating resources on high-risk, high-payoff, long-range innovative research to accelerate the productivity growth and performance enhancement of electronic technologies and circuits, sub-systems, and multi-scale systems. To this end, JUMP is focused on exploratory research on an 8-12 year time horizon that is anticipated to lead to defense and commercial opportunities in the 2025-2030 timeframe.

Research will commence in January 2018 and continue for five years, with funding support coming from industry and government partners. Total JUMP funding for the five-year period will be in excess of $150M, including funds committed by DARPA (the Defense Advanced Research Projects Agency), IBM Corporation, Northrop Grumman Corporation, Micron Technology, Inc., Intel Corporation, EMD Performance Materials (a Merck KGaA affiliate), Analog Devices Inc., Raytheon Company, Taiwan Semiconductor Manufacturing Company Ltd., and Lockheed Martin Corporation.

Current planning supports six research themes across six JUMP centers, using vertical and horizontal centers to capture the intersections of ideas. While the vertical research centers emphasize breakthrough technologies and products, the horizontal research centers will drive foundational developments in a specific discipline and create disruptive breakthroughs in areas of interest.


“Vertical” Application-Focused Centers

“Vertical” research centers emphasize application-oriented goals that focus on key issues facing the industry by addressing the full span of multi-disciplinary science and engineering required to achieve breakthrough technologies and products. The centers will create complex systems with capabilities well beyond those available today, ready for transfer in a five-year time frame and implementation in roughly ten years. Technology areas of interest for the JUMP “vertical” centers include:

RF to THz Sensor and Communication Systems.

This theme seeks research in two general, synergistic application areas – RF Sensors and RF Communications Systems – that operate at microwave, millimeter-wave, or THz frequencies in support of consumer, military, industrial, scientific, and medical applications. System examples may include radar, communication, reconnaissance, and/or mmWave/THz imaging.

As an example, it is envisioned that future RF sensor systems will require novel, energy-efficient devices, circuits, algorithms, and architectures for adaptively sensing the environment, extracting/manipulating/processing information, and autonomously reacting/responding to the information.

Another example is cognitive communication systems – systems which will operate in complicated radio environments with interference, jamming and rapidly changing network topology, will obtain (sense) information about their environment (aware of their environment and the available resources ) and will dynamically adjust their operation (e.g., efficient spectrum use, interference mitigation, spectrum prioritization) to provide required services to end users.
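As an illustration only (not part of the JUMP solicitation), the channel-selection step of such a cognitive link can be sketched as picking the least-interfered channel from sensed spectrum readings, with a hysteresis margin so the radio does not thrash between nearly equal channels. The function name and margin value below are invented for this sketch:

```python
def select_channel(readings, current=None, margin_db=3.0):
    """Toy dynamic-spectrum-access step.

    readings: dict mapping channel id -> sensed interference power in dBm
              (lower is better, i.e. less interference).
    current:  channel the radio is presently using, if any.
    Picks the least-interfered channel, but only switches away from the
    current channel when the improvement exceeds a hysteresis margin.
    """
    best = min(readings, key=readings.get)
    if current is not None and current in readings:
        # Stay put unless the best alternative is clearly quieter.
        if readings[current] - readings[best] < margin_db:
            return current
    return best
```

A real cognitive radio would feed this from an energy detector or wideband spectrum sensor and combine it with policy constraints (allowed bands, priority users); the hysteresis margin is a common trick to avoid oscillating between channels with similar interference levels.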

These future systems should also be agile: reconfigurable, adaptive, multi-function, multi-mode, self-calibrating sensors with increased degrees of freedom for efficient use of the EM spectrum (including spectrum agility, instantaneous bandwidth/waveform agility, (very) wide bandwidth, and high dynamic range). Autonomous operation and decision making are also desirable (e.g., embedded real-time learning, the ability to recognize threat scenarios, and the ability to do local processing before transmitting data/information).
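The "local processing before transmitting" idea can be illustrated with a minimal sketch (the function name, threshold, and summary fields are invented for illustration): the node inspects a window of raw samples and transmits only a compact summary when an event is detected, rather than streaming everything back.

```python
def summarize_if_event(samples, threshold):
    """Edge pre-processing sketch: return a compact event summary to
    transmit when the sample window contains something noteworthy,
    or None to send nothing at all."""
    peak = max(samples)
    if peak < threshold:
        return None  # no event: stay silent, save link bandwidth and power
    return {"event": True, "peak": peak, "n_samples": len(samples)}
```

Transmitting a few bytes of summary instead of the full sample window is one of the main levers for the ultra-low-power, bandwidth-constrained links this theme targets.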

Also sought are super-linear communication links (enabling high-order modulation formats) and integrated communications components for IoT and distributed sensor systems that enable ultra-low-power, high-data-rate, long-range sensor communications with high linearity in up/down conversion.

To address these applications, centers focusing on this vertically integrated application must drive breakthrough research in materials, devices, components, circuits, integration and packaging, connectivity, architectures (e.g., subsystems/arrays), and algorithms that are aimed at efficiently generating, modulating, manipulating, processing (mainly in, or very closely coupled to, the RF/mmWave/THz domain), communicating (transmitting), and sensing/detecting radiated signals.


Distributed Computing and Networking.

Importantly, new application requirements coupled with physics-based implementation constraints on latency and energy call for novel architectural solutions to computing-at-scale, requiring innovations in interconnect and networking at all levels, from on-chip to between datacenters.

The purpose of this theme is to explore the challenges of extremely large-scale distributed architectures. Novel, multi-tier, wired and wirelessly-connected heterogeneous systems are expected; tiers may be sensor/actuator, aggregation, cloud/datacenter, or combinations thereof. All tiers are expected to be highly scalable, and heterogeneity is expected both within and across the tiers.

Dramatic advances over today’s systems (cloud, mobile, etc.) and capabilities are required. Proposers are expected to define and tackle a grand challenge in the Distributed Computing and Networking space; the grand challenge should focus attention on research issues that would benefit a broad range of civilian and defense applications (e.g., society-scale digital currencies; battlefield command-and-control in denied environments; smart grid optimization; disaster management in digital cities).

The theme also calls for the development of new distributed computing systems for applications beyond IoT and big data; novel computing architectures that reduce the energy and time used to process and transport data, locally and remotely, for hyperspectral sensing, data fusion, decision making, and safe effector actuation in a distributed computing environment; and cooperative, coordinated distributed-system concepts that are scalable and function in communications-challenged environments (where wired and wireless links are not guaranteed to be available, reliable, or safe). Approaches should allow for proper operation in isolation and intelligently synchronize when communications are restored, including cases of only partial restoration.
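One hedged sketch of the "operate in isolation, synchronize when communications are restored" behavior is a store-and-forward outbox: messages are buffered while the link is down and flushed in order when connectivity (even partial) returns. The `Outbox` class and the `send_fn` interface below are invented for this sketch, not taken from any named system:

```python
from collections import deque


class Outbox:
    """Minimal store-and-forward sketch for communications-challenged
    environments: buffer messages while the link is down, flush them in
    order when (possibly partial) connectivity returns."""

    def __init__(self, send_fn):
        self.send_fn = send_fn  # callable(msg) -> bool; True means delivered
        self.pending = deque()

    def submit(self, msg):
        """Queue a message and opportunistically try to deliver."""
        self.pending.append(msg)
        self.flush()

    def flush(self):
        """Attempt delivery in FIFO order; stop at the first failure so
        ordering is preserved and the remainder waits for the next
        restoration window (handles partial restoration naturally)."""
        while self.pending:
            if not self.send_fn(self.pending[0]):
                break  # link still down or degraded: keep buffering
            self.pending.popleft()
```

A fielded system would add durable storage, deduplication, and conflict resolution on the receiving side; this sketch only shows the buffering-and-resync skeleton the text describes.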

This theme will primarily focus on digital computing.


Cognitive Computing.

The Cognitive Computing theme aims to create cognitive computing systems that can learn at scale, perform reasoning and decision making with purpose, and interact with humans naturally and in real time. Realizing these novel systems may heavily leverage non-traditional computing methods, such as analog computing, stochastic computing, Shannon-inspired computing, approximate computing, and bio/brain-inspired models including neuromorphic computing for a broad application space.

This theme seeks to explore multiple approaches for building machine intelligent systems with both cognitive and autonomous characteristics. Such systems can be solely non-traditional, solely von-Neumann or a combination of both elements. A key goal is creating systems that, without explicit objectives, operate in the natural world on their own by forming and extending models of the world they perceive around them, and by interacting with local human decision makers and with global distributed intelligent networks in performing actions to achieve useful yet complex goals.

A full-system approach is required to achieve the goals of this theme. In addition, the proposed research should address the technology advances that are needed for fundamental improvements in performance, capabilities, and energy efficiency through improvements in programming paradigms, algorithms, architectures, circuits, and device technologies.


Intelligent Memory and Storage.

Advances in information technology have pushed data generation rates and quantities to a point where memory and storage are the focal point of optimization of computer systems. Transfer energy, latency, and bandwidth are critical to the performance and energy efficiency of these systems. The solutions to many modern computing problems involve many-to-many relationships that can benefit from the high cross-sectional bandwidth of a distributed computing platform. As an example, large-scale graph analytics involve the cross-data-set evaluation of numerous neighbor relationships, ultimately demanding the highest possible cross-sectional bandwidth of the system.

This research vector seeks a holistic, vertically-integrated, approach to high-performance Intelligent Storage systems encompassing the operating system, programming models, memory management technologies, and a prototype system architecture. A primary focus area for this center will be in establishing an operating system framework allowing run-time optimization of the system based on system configuration preferences, programmer preferences, and the current state of the system.

Goals include new architecture and programming paradigms; self-optimizing systems that allow for appropriate programmer control; a 10X more power-efficient computing platform, scalable from high-performance application processors to less-demanding processors for IoT/sensors with cost awareness; and small, probably low-cost compute+memory+sensor nodes capable of making basic decisions/observations and reporting to a larger system.

The technology can span materials, devices, packaging, circuit/system techniques, computer architecture (including but not limited to heterogeneous computing), memory technology (including NVM), and high-speed interfaces (on-chip and off-chip).


“Horizontal” Disciplinary-Focused Centers

“Horizontal” research centers will drive foundational developments in a specific discipline, or set of like-minded disciplines, will build expertise in and around key disciplinary building blocks, and create disruptive breakthroughs in areas of interest to JUMP sponsors. These centers have a mission to identify and accelerate progress for new technologies that look beyond traditional CMOS. Proposers are expected to define a set of key metrics that their center will use to benchmark and drive efforts in their research space. Technology areas of interest for our JUMP “horizontal” Centers include:

Advanced Architecture and Algorithms.

Today’s system architectures, including distributed clusters, symmetric multiprocessors (SMPs), and communications systems, are generally composed of homogeneous hardware components that are difficult to modify once deployed. Heterogeneous architectures and elements, such as accelerators, will increasingly be needed to enable scaling of performance, energy efficiency, and cost.

This theme must lay the foundations for new paradigms in scalable, heterogeneous architectures, co-designed with the algorithms they run and vice versa. A major goal of this theme is to address the design and integration challenges of a broad variety of accelerators, both on-chip and off-chip, along with the algorithmic and system software innovations needed to readily incorporate them into both existing and future systems (e.g., information processing, communications, sensing/imaging, etc.).

Centers should address the design and integration challenges of systems composed of on-chip and off-chip accelerators, computation in and/or near data, and non-traditional computing. Employing novel co-design to bridge the gap between architectures and algorithms for optimization, combinatorics, computational geometry, distributed systems, learning theory, online algorithms, cryptography, etc. is within scope. Benchmarking of the novel architectures is expected. Modeling and software innovations should be used to remove barriers to hardware implementation or mass adoption.


Advanced Devices, Packaging, and Materials.

This theme will address advanced active and passive devices, interconnect, and packaging concepts, based on physics of new materials and unconventional syntheses.

This technology is needed to enable the next breakthrough paradigms in computation (including analog) and information sensing, processing, and storage that will provide further scaling and energy efficiencies. These new materials and devices will provide new functionalities and properties that can augment and/or surpass conventional semiconductor technologies, and will potentially enable novel 3D options. Material development, device demonstration and viable process integration are all within scope. Experimental demonstrations as well as ab-initio material and process modeling are expected.

Energy harvesting and energy storage devices are also of interest: novel materials for high-efficiency energy harvesting, supercapacitors, integrated batteries, and power delivery.






US DOD and NATO plan Battlefield Internet of Things connecting sensors, wearables, weapons, munitions, platforms and networks for information dominance

The Internet of Things is an emerging revolution in the ICT sector: a shift from an “Internet used for interconnecting end-user devices” to an “Internet used for interconnecting physical objects that communicate with each other and/or with humans in order to offer a given service”.

The increasing miniaturization of electronics has enabled tiny sensors and processors to be integrated into everyday objects, making them “smart”: smart watches, fitness monitoring products, food items, home appliances, plant control systems, equipment monitoring and maintenance sensors, and industrial robots. By means of wireless and wired connections, these objects are able to interact and cooperate with each other to create new applications and services in pursuit of common goals. By 2025, it is predicted that there may be as many as 100 billion connected IoT devices: networks of everyday objects and sensors infused with intelligence and computing capability.

The rapid growth in IoT devices, however, will offer new opportunities for hacking, identity theft, disruption, and other malicious activities affecting people, infrastructure, and the economy. Some incidents have already occurred: the FDA issued an alert about a connected hospital medicine pump that could be compromised and have its dosage changed, and a Jeep Cherokee was sensationally remote-controlled by hackers in 2015.

Military operations will be significantly affected by the widespread adoption of IoT technologies. Analogous to the IoT, a Military Internet of Things (MIoT) comprising a multitude of platforms, ranging from ships to aircraft to ground vehicles to weapon systems, is expected to be developed. MIoT offers high potential for the military to achieve significant efficiencies, improve safety and the delivery of services, and produce major cost savings.

Some of the military applications include fully immersive virtual simulations for soldiers’ training; autonomous vehicles; the ability to use smart inventory systems to consolidate warehouses using a web-based delivery and inventory system; and business systems like the Army Strategic Management System to manage energy, utilities and environmental sensors. The military has begun taking steps towards implementing IoT technologies: some troops have been issued helmets containing built-in monitoring devices to detect potential concussions and other brain injuries.

“With strategy concepts such as ‘net centric,’ ‘information dominance,’ and the emergence of cyber as an entirely new domain of operations, information always has and will remain central to the military’s efficiency and effectiveness. Naturally, IoT technologies and architectures that are designed to move and process information more quickly and in distributed environments seem like natural fits for military applications,” write Joe Mariani, Brian Williams, and Brett Loubert.



Military Internet of Things

The vision of the Military Internet of Things (MIoT) is to realize “anytime, anyplace connectivity for anything, ubiquitous network with ubiquitous computing” in the military domain. Commanders make decisions based on real-time analysis generated by integrating data from unmanned sensors with reports from the field. These commanders will benefit from a wide range of information supplied by sensors and cameras mounted on the ground, on manned or unmanned vehicles, or on soldiers.

The DOD has been using IoT in various ways for years, Pellegrino noted, especially for managing its energy usage and physical infrastructure. Connected energy management solutions have allowed the military to reduce total energy consumption by 23 percent since 2002. The military has about 8,000 smart meters installed, with 66 percent of them reporting to an integrated management system. Connected water management has allowed the military to cut potable water use intensity by 27 percent since 2007, he said.

The University of Illinois is leading a $25 million initiative to develop an “internet of battlefield things.” Officials say the initiative aims to have humans and technology work together in a seamless network. They say the initiative will connect soldiers with smart technology in armor, radios, weapons and other objects to give troops a better understanding of battlefield situations and help them assess risks. Experts say future military operations will rely less on human soldiers and more on interconnected technology. They say unmanned systems and machine intelligence advances can be used to improve military capabilities.

Soldiers need a continual flow of information to make the best decisions possible in battle because they are constantly making quick decisions in the face of adverse conditions, UI computer science professor Tarek Abdelzaher said. “You need to connect to the right sensors, the right cameras, the right devices to collect the right pieces of information,” Abdelzaher said.

Present MIoT applications are largely limited to using IoT technologies to improve working efficiency in the logistics domain. In the future, MIoT application areas could include equipment maintenance, smart bases, personal sensing, soldier healthcare, battlefield awareness, C4ISR, and fire-control systems. Joe Mariani, Brian Williams, and Brett Loubert categorize IoT applications according to those that aim to improve cost efficiency, those that aim to improve warfighter effectiveness, and rare cases that aim for both.

Some of the applications of MIoT are:

  1. Military Equipment Logistics – IoT can be a huge enabler of efficiency and visibility, getting military equipment into the right hands at the right time. Deploying radio-frequency identification tags and standardized barcodes to track individual supplies down to the tactical level could provide real-time supply chain visibility and allow the military to order parts and supplies on demand. Smart inventory systems could also consolidate warehouses using a web-based delivery and inventory system.
  2. Equipment Maintenance: The harsh conditions and extended deployments put extensive wear and tear on equipment. IoT can enable enhanced equipment maintenance and management through monitoring, optimizing and appropriately allocating various resources and processes such as manpower, material, financial resources and maintenance personnel.
  3. Smart Bases – incorporating commercial IoT technologies in buildings and facilities; force protection at bases as well as in maritime and littoral environments; health and personnel monitoring; and monitoring and just-in-time equipment maintenance.
  4.  Personal Sensing, Soldier Healthcare – The combination of IoT sensors (temperature, blood pressure, heart rate, cholesterol levels and blood glucose) through body area networks will allow the health of the soldier to be monitored in real time. Soldiers can be alerted of abnormal states such as dehydration, sleep deprivation, elevated heart rate or low blood sugar and, if necessary, warn a medical response team in a base hospital.
  5. Battlefield Awareness – Situational awareness encompasses a wide range of activities in the battlefield to gain information on the enemy’s intent, capability, and actual position. IoT can play a vital role by collecting, analyzing, and delivering synthesized information in real time for expeditious decision making. IoT can enhance battlefield awareness from the global level, to company, platoon, and squad commanders, down to the single-soldier level.
  6. Fire-Control Systems: In fire-control systems, end-to-end deployment of sensor networks and digital analytics enable fully automated responses to real-time threats, and deliver firepower with pinpoint precision. Munitions can also be networked, allowing smart weapons to track mobile targets or be redirected in flight.
  7. Other use cases for IoT include fully immersive virtual simulations for soldiers’ training; autonomous vehicles; and business systems like the Army Strategic Management System to manage energy, utilities, and environmental sensors.
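As a toy illustration of the soldier-healthcare item above, a body-area-network gateway might flag out-of-range vitals with simple threshold checks. The limits below are hypothetical placeholders for illustration, not clinical guidance; a real system would use validated, per-soldier baselines:

```python
# Hypothetical limits for illustration only (not clinical guidance).
VITAL_LIMITS = {
    "heart_rate":    (40, 140),     # beats per minute
    "core_temp_c":   (35.0, 39.0),  # degrees Celsius
    "blood_glucose": (70, 180),     # mg/dL
}


def check_vitals(sample):
    """Return a list of (vital, value) pairs that fall outside the
    configured limits for one body-sensor reading; missing vitals
    (sensor dropout) are simply skipped."""
    alerts = []
    for vital, (lo, hi) in VITAL_LIMITS.items():
        value = sample.get(vital)
        if value is not None and not (lo <= value <= hi):
            alerts.append((vital, value))
    return alerts
```

A gateway could forward any non-empty alert list to a base-hospital response team, matching the dehydration/elevated-heart-rate warnings the text describes.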


Vulnerability of Military Internet of Things

Security equipment is also vulnerable to exploitation by politically and criminally motivated hackers. Security researchers Runa Sandvik and Michael Auger gained unauthorized access to a smart rifle’s software via its WiFi connection and exploited various vulnerabilities in its proprietary software; the TP750 was tricked into missing the target and into not firing at all. IoT devices have themselves recently been used to mount attacks, as when an internet-connected fridge was used as part of a botnet to send spam to tens of thousands of Internet users.

Military IoT networks will also need to deal with multiple threats from adversaries, said John Pellegrino, deputy assistant secretary of the Army for strategic integration, including physical attacks on infrastructure, directed-energy attacks, jamming of radio-frequency channels, attacks on power sources for IoT devices, electronic eavesdropping, and malware.

DARPA has launched Leveraging the Analog Domain for Security (LADS) Program for developing revolutionary approaches for securing Military Internet of things. LADS will develop a new protection paradigm that separates security-monitoring functionality from the protected system, focusing on low-resource, embedded and Internet of Things (IoT) devices.


US Army’s Internet of Battlefield Things (IoBT) Collaborative Research Alliance (CRA)

Through its Internet of Battlefield Things (IoBT) Collaborative Research Alliance, the Army has assembled a team to conduct basic and applied research involving the explosive growth of interconnected sensing and actuating technologies that include distributed and mobile communications, networks of information-driven devices, and artificially intelligent services, and how ubiquitous “things” present imposing adversarial challenges for the Army. Alliance members leading IoBT research areas include UIUC, University of Massachusetts, University of California-Los Angeles and University of Southern California. Other members include Carnegie Mellon University, University of California Berkeley and SRI International.

The ability of the Army to understand, predict, adapt to, and exploit the vast array of internetworked things that will be present on the future battlefield is critical to maintaining and increasing its competitive advantage. The explosive growth of technologies in the commercial sector that exploit the convergence of cloud computing, ubiquitous mobile communications, networks of data-gathering sensors, and artificial intelligence presents an imposing challenge for the Army. These Internet of Things (IoT) technologies will give our enemies ever-increasing capabilities that must be countered, but commercial developments do not address the unique challenges that the Army will face in using them.

The U.S. Army Research Laboratory (ARL) has established an Enterprise approach to address the challenges resulting from the Internet of Battlefield Things (IoBT) that couples multi-disciplinary internal research with extramural research and collaborative ventures. ARL intends to establish a new collaborative venture (the IoBT CRA) that seeks to develop the foundations of IoBT in the context of future Army operations. The Collaborative Research Alliance (CRA) will consist of private sector and government researchers working jointly to solve complex problems. The overall objective is to develop the fundamental understanding of dynamically-composable, adaptive, goal-driven IoBTs to enable predictive analytics for intelligent command and control and battlefield services.

For the purposes of this CRA, an Internet of Battlefield Things (IoBT) can be summarized as a set of interdependent and interconnected entities (e.g. sensors, small actuators, control components, networks, information sources, etc.) or “things” that are: dynamically composed to meet multiple mission goals; capable of adapting to acquire and analyze data necessary to predict behaviors/activities, and effectuate the physical environment; self-aware, continuously learning, autonomous, and autonomic, where the things interact with networks, humans, and the environment in order to enable predictive decision augmentation that delivers intelligent command and control and battlefield services.

The IoBT is the realization of pervasive computing, communication, and sensing where everything will be a sensor and potentially a processor (i.e. increased number of heterogeneous devices, connectivity, and communication) where subsequent information is of a scale unseen before. The battlespace itself will consist of active red (enemy), blue (friendly), and gray (non-participant) resources, where deception will be the norm, the environment (e.g. megacities and rural) will be dynamic, and ownership and other boundaries will be diverse and transient.

These IoBT characteristics all translate into increased complexity for the warfighter, particularly because current, commonly available, interconnected “things” will exist in the battlefield and be increasingly intelligent, obfuscated, and pervasive. They demand situation-adaptive responses, selective collection/processing, and real-time sensemaking over massive heterogeneous data.

The objective of the IoBT CRA is to develop the underlying science of pervasive, heterogeneous sensing and actuation to enhance tactical Soldier and Mission Command autonomy, miniaturization, and information analytic capabilities against adversarial influence and control of the information battlespace; delivering intelligent, agile, and resilient decisional overmatch at significant standoff and op-tempo.

The IoBT CRA consists of three main research areas: Device/Information Discovery, Composition, and Adaptation to establish theoretical foundations that facilitate goal-driven discovery, adaptation, and composition of devices and data at unprecedented scale, complexity, and rate of acquisition; Autonomous & Autonomic Actuation Enabling Intelligent Services to advance the theory and algorithms for complexity and nonlinear dynamics of real-time actuation and robustness with a focus on autonomic system properties (e.g. self-optimizing, self-healing and self-protecting behaviors); and Distributed Asynchronous Processing and Analytics of Things to enrich the theory and experimental methods for complex event processing, with compact representations and efficient pattern evaluation.

Distributed and Collaborative Intelligent Systems (DCIST) Collaborative Research Alliance (CRA)

Through its Distributed and Collaborative Intelligent Systems (DCIST) Collaborative Research Alliance (CRA), the Army will perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. Alliance members include the University of Pennsylvania as the lead research organization. Individual research area leads are MIT and Georgia Tech. Other consortium members are University of California San Diego, University of California Berkeley and University of Southern California.

DCIST concentrates its research into three main areas: distributed intelligence, led by MIT, where researchers will establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team; heterogeneous group control, led by Georgia Tech, to develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy; and adaptive and resilient behaviors, led by the University of Pennsylvania, to develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world. In addition to these three main research areas, research will be pursued along three underlying research themes in Learning, Autonomous Networking, and Cross Disciplinary Experimentation.

The U.S. Army’s operational competitive advantage in a multi-domain battle will be realized through technology dominance, said ARL Director Dr. Philip Perconti.

NATO task group to examine applicability of IoT to Military


NATO has set up an RTO task group (IST-147) that will select a scenario to examine the applicability of IoT to military operations, including base operations, situational awareness, boundary surveillance (including harbours), energy management, etc. It will also assess the risk of applying IoT technologies in the scenario. Based on this risk assessment, models for security and trust management that address the most significant risks will be proposed. Mitigation measures may include: managing identity, credentials, and rights of IoT devices and users; object-level protection and trust; and assessment of available or emerging commercial security solutions. The group will also define an IoT architecture or architectures that might be used in military situations, taking into account existing IoT architectures used in other domains.

Challenges and Requirements for the Military Internet of Things (MIOT)

There is great potential for IoT technologies to revolutionize modern warfare, leveraging data and automation to deliver greater lethality and survivability to the warfighter while reducing cost and increasing efficiency. However, the successful development and deployment of IoT technologies across the military requires many challenges to be solved:

  1. In contrast to commercial deployments, which mainly focus on systems with fixed sensors and devices, the Military Internet of Things (MIOT) will consist of a large number of mobile things such as UAVs, aircraft, and tanks. The mobile IoT paradigm invalidates many of the assumptions of traditional wireless sensor networks, especially with regard to wireless technologies and protocols. In particular, mobile IoT devices would find it quite difficult to connect with each other and with other components of the IoT network in the presence of mobility, intermittent connectivity and RF link variability.
  2. Deployment Features: One of the biggest constraints in a battlefield environment is power consumption. IoT devices are likely to be powered by batteries or solar power, and charged on the move from solar panels, trucks, or even by motion while walking. In any case, they should last for extended periods of time (at least for the duration of the mission), so devices and sensors need to be power-efficient.
  3. Challenges related to reliability and dependability, especially when IoT becomes mission critical. Equipment should fulfill the requirements imposed and be compliant with the considerations from military standards (e.g., MIL-STD 810G, MIL-STD 461F, MIL-STD-1275). IoT devices should be ruggedized and prepared to operate under extreme environmental conditions.
  4. Security challenges related to co-existence and interconnection of military and civilian IoT networks. Security concerns are the main issue holding back the military’s use of the Internet of Things. Some potential adversaries have advanced cyber and electronic warfare capabilities, and everything connected to the Internet is potentially vulnerable to attack.
  5. Node Capture Attacks: In a node capture attack, the adversary captures and controls a node or device in the IoT by physically replacing the entire node or by tampering with its hardware.
  6. Electronic Warfare: Another challenge to IoT implementation is that it makes systems vulnerable to electronic warfare. Most IoT technologies communicate wirelessly on radio frequencies. Adversaries can use relatively unsophisticated methods like RF jamming to block these signals, rendering the devices unable to communicate with backbone infrastructure.
  7. Information management challenges for military application of IoT – trustworthiness, pedigree, provenance, and enabling military commanders and missions to benefit from IoT generated information.

IoT can serve warfighters better with more intelligence and more ways to coordinate actions among themselves. In 20 years the IoT will be ubiquitous. Yet for the Army and the wider military to make the most of IoT, it will need to rely on heterogeneous and flexible networks that continue to operate in environments with spotty connectivity and don’t place burdens on soldiers, said Pellegrino, deputy assistant secretary of the Army for strategic integration.

Pellegrino said some connected devices will be intelligent, and others will be “marginally intelligent” but that connectivity will spread everywhere, from munitions to weapons, robotics, vehicles and wearable devices. All of these devices will generate an enormous amount of data, he said, and the military needs to figure out how to make that data useful.

The CIA and the Defense Information Systems Agency (DISA) are working with commercial companies to bring the cloud and software to secure government networks. Thus, the infrastructure for dealing with the data volume of tactical IoT applications is, potentially, already in place.

“All of these devices are going to be performing a massive variety of tasks,” Pellegrino said, including recommendations on where and when to attack and defend, and which of them will need to be coordinated.

New technologies required to power IoT

State-of-the-art (SOA) sensors use active electronics to monitor the environment for an external trigger, consuming power continuously and limiting sensor lifetime to durations of months or less. This also increases the cost of deployment, either by necessitating the use of large, expensive batteries or by demanding frequent battery replacement, and it increases warfighter exposure to danger.

DARPA’s N-ZERO program intends to extend the lifetime of remotely deployed communications and environmental sensors from months to years by supporting projects that demonstrate the ability to continuously and passively monitor the environment, waking an electronic circuit only upon the detection of a specific trigger signature. The N-ZERO program could also enable the billions of future Internet of Things (IoT) devices that will be deployed ‘everywhere’ and accessed ‘any time’ from ‘anywhere’.
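The payoff of wake-on-trigger sensing can be illustrated with simple battery arithmetic. The sketch below compares an always-on sensor with an event-driven one; all power and capacity figures are illustrative assumptions for the sketch, not N-ZERO specifications:

```python
# Illustrative battery-lifetime estimate for a remote sensor.
# All numbers below are assumptions chosen for the sketch.

def lifetime_years(battery_wh, avg_power_w):
    """Battery lifetime in years at a given average power draw."""
    hours = battery_wh / avg_power_w
    return hours / (24 * 365)

BATTERY_WH = 10.0      # e.g. a small lithium primary pack

# Always-on sensing: the active front end draws power continuously.
always_on_w = 5e-3     # 5 mW continuous draw

# Near-zero-power wake-up: a passive detector sips nanowatts and only
# wakes the main electronics for rare trigger events.
standby_w = 10e-9      # 10 nW passive listening
active_w = 50e-3       # 50 mW while processing a trigger
duty_cycle = 1e-4      # active 0.01% of the time
event_driven_w = standby_w + duty_cycle * active_w

print(f"always-on: {lifetime_years(BATTERY_WH, always_on_w):.2f} years")
print(f"event-driven: {lifetime_years(BATTERY_WH, event_driven_w):.0f} years")
```

Even with generous assumptions, the always-on sensor is exhausted within months, while the event-driven design becomes limited by battery shelf life rather than by the sensing load.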

Flexible Networks

Wireless Sensor Networks will play a major part in the IoT revolution, although other communication techniques are also used in IoT. Billions of IoT devices will be deployed ‘everywhere’ and accessed ‘any time’ from ‘anywhere’, connecting anything from large buildings, industrial plants, planes, cars and machines to any kind of goods. WSN technology will also be employed in smart cities for applications in the smart grid, smart water, intelligent transportation systems, and smart homes.

Pellegrino notes that the battlefield situations the military operates in “range from the moderately stable to very high dynamic situations.” To support IoT, the military’s networks will need to be flexible and interactive, he said, and still work despite limited bandwidth, intermittent connectivity and with a large number of devices on the network.

The arrangement of those networks needs to be done “totally autonomously,” he said. The military’s partners may be changing depending on the mission, and connected devices will need to work across networks with different network equipment and configurations.

“To achieve changing objectives with multiple complex tradeoffs, we have got to have highly adaptive management and organization leading to action, with no burden on the soldier, either cognitive or physical burden,” Pellegrino said.

DARPA has been experimenting with “mobile ad hoc networks,” designed to form a self-creating and self-healing mesh of communication nodes, with setup time measured in minutes instead of days. DARPA envisions networks of more than 1,000 nodes providing individual soldiers with streaming video from drones and other sensors, radio communications to higher headquarters, and advanced situational awareness of other soldiers’ location and status.
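The self-forming behavior described above can be sketched in a few lines: nodes within radio range of each other automatically become neighbors, and messages are relayed hop by hop along the shortest path. The node names and coordinates are invented for the sketch, and a real MANET protocol (e.g. OLSR or AODV) must additionally handle route maintenance, mobility and packet loss:

```python
# Toy sketch of a self-forming mesh: nodes within radio range become
# neighbors automatically, and a route is found by breadth-first search.
from collections import deque
from math import dist

def build_mesh(nodes, radio_range):
    """nodes: {name: (x, y)} -> adjacency {name: set of neighbor names}."""
    adj = {n: set() for n in nodes}
    for a in nodes:
        for b in nodes:
            if a != b and dist(nodes[a], nodes[b]) <= radio_range:
                adj[a].add(b)
    return adj

def route(adj, src, dst):
    """Shortest hop-path from src to dst, or None if unreachable."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

positions = {"HQ": (0, 0), "relay": (4, 0), "soldier": (8, 0)}
mesh = build_mesh(positions, radio_range=5)
print(route(mesh, "HQ", "soldier"))  # multi-hop path via the relay
```

Re-running `build_mesh` as positions change is what makes the mesh "self-healing" in this toy model: routes are recomputed from whatever links currently exist.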

DARPA’s Revolutionary Approach “LADS” for IoT Security 

DARPA, the Defense Advanced Research Projects Agency, issued a call for “innovative research proposals” for the Leveraging the Analog Domain for Security (LADS) Program. The program is directing $36 million into developing enhanced cyber defense through analysis of involuntary analog emissions, including “electromagnetic emissions, acoustic emanations, power fluctuations and thermal output variations.”

The program will explore technologies to associate the running state of a device with its involuntary analog emissions across different physical modalities including, but not limited to, electromagnetic emissions, acoustic emanations, power fluctuations and thermal output variations. This will allow a decoupled monitoring device to confirm the software that is running on the monitored device and what the current state of the latter is (e.g., which instruction, basic block, or function is executing, or which part of memory is being accessed).
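A heavily simplified illustration of the underlying idea: record an analog side-channel trace, then pick the stored per-state template it correlates with best. The templates and "observed" samples below are toy data invented for the sketch; a real LADS-style monitor would use far richer signal processing and machine learning:

```python
# Minimal sketch of analog-emission state matching: compare an observed
# trace (e.g. power-draw samples) against per-program-state templates
# and pick the best match by normalized cross-correlation.
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-length traces."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def classify_state(trace, templates):
    """Return the program state whose template best matches the trace."""
    return max(templates, key=lambda s: ncc(trace, templates[s]))

# Toy per-state power templates (made up for the sketch).
templates = {
    "idle":   [1, 1, 1, 1, 1, 1, 1, 1],
    "crypto": [1, 5, 1, 5, 1, 5, 1, 5],
    "memcpy": [3, 3, 3, 3, 1, 1, 1, 1],
}
observed = [1, 4, 1, 6, 1, 5, 2, 5]  # noisy version of the "crypto" pattern
print(classify_state(observed, templates))
```

The decoupled-monitor property follows from the fact that nothing here runs on the monitored device: only its analog emissions are consumed.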


DARPA’s Safe Genes program aims to prevent global bioerror and biothreats

CRISPR allows removing a single (defective) gene from a genome and replacing it with another one, to prevent genetic diseases.  CRISPR “has transformed labs around the world,” says Jing-Ruey Joanna Yeh, a chemical biologist at Massachusetts General Hospital’s Cardiovascular Research Center, in Charlestown, who contributed to the development of the technology. “Because this system is so simple and efficient, any lab can do it.” Editing with CRISPR is like placing a cursor between two letters in a word processing document and hitting “delete” or clicking “paste.” And the tool can cost less than US $50 to assemble.


Recently, China announced it was genetically engineering hyper-muscular SUPER-DOGS. The dogs, which are test tube bred in a lab, have twice the muscle mass of their natural counterparts and are considerably stronger and faster. An army of super-humans has been a staple of science fiction and superhero comics for decades – but the super-dog technology brings it closer to reality. The beagle puppy, one of 27, was genetically engineered by ‘deleting’ a gene called myostatin, giving it double the muscle mass of a normal beagle.


The advanced gene editing technology has been touted as a breakthrough that could herald the dawn of ‘superbreeds’, which could be stronger, faster, and better at running and hunting. The Chinese official line is that the dogs could potentially be deployed to frontline service to assist police officers. Dr Lai Liangxue, a researcher at the Guangzhou Institute of Biological Medicine and Health, said: “This is a breakthrough, marking China as only the second country in the world to independently master dog-somatic clone technology, after South Korea.”


The US DOD is also applying gene editing technology for military applications. During the second biennial Department of Defense Lab Day on May 18, 2017, one AFRL exhibit highlighted research into how geneticists and medical researchers edit parts of the genome by removing, adding or altering sections of the DNA sequence, in order to remove a virus or disease caused by harmful chemical, biological or environmental agents a warfighter may come into contact with.


Yet without careful precautions, a gene drive released into the wild could spread or change in unexpected ways. A lethal gene engineered into a pest species, say, might accidentally jump (or, as biologists put it, “horizontally transfer”) into another species that is a crucial part of an ecosystem.

Kevin Esvelt, head of the Sculpting Evolution lab at MIT Media Lab, which is applying for Safe Genes funding in collaboration with eight other research groups, predicts that eventually, perhaps around 15 years from now, an accident will allow a drive with potential to spread globally to escape laboratory controls. “It’s not going to be bioterror,” he says, “it’s going to be ‘bioerror.’”


This summer, the Daily Star  warned that the terrorist group ISIS is using gene drives to make “supercharged killer mosquitoes.” Experts regard that as unlikely. But the idea that gene drives pose a biosecurity threat is anything but. Because the technology to create a gene drive is widely accessible and inexpensive, biologist Kevin Esvelt of the Wyss Institute for Biologically Inspired Engineering at Harvard University warned the scientific panel at an earlier meeting, “We have never dealt with anything like this before,” as reported by Sharon Begley Senior Writer, Science and Discovery.


The possibilities for “weaponizing” gene drives range from suppressing pollinators, which could destroy an entire country’s agriculture system, to giving innocuous insects the ability to carry diseases such as dengue, said MIT political scientist Kenneth Oye, who briefed the bioweapons office. Gene drive is particularly worrisome because “it’s not just one or two labs that are capable of doing the work,” Oye said — and the “capable” could include do-it-yourself “garage biologists.”


The U.S. Defense Advanced Research Projects Agency (DARPA) has awarded a combined $65 million over four years to seven research teams for projects designed to make gene editing technologies safer, more targeted and potentially even reversible. DARPA’s Safe Genes program aims to deliver novel biological capabilities that facilitate the safe and expedient pursuit of advanced genome editing applications, while also providing the tools and methodologies to mitigate the risk of unintentional consequences or intentional misuse of these technologies.



Setting a Safe Course for Gene Editing Research: DARPA

Gene editing technologies have captured increasing attention from healthcare professionals, policymakers, and community leaders in recent years for their potential to selectively disable cancerous cells in the body, control populations of disease-spreading mosquitos, and defend native flora and fauna against invasive species, among other uses. The potential national security applications and implications of these technologies are equally profound, including protection of troops against infectious disease, mitigation of threats posed by irresponsible or nefarious use of biological technologies, and enhanced development of new resources derived from synthetic biology, such as novel chemicals, materials, and coatings with useful, unique properties, says DARPA.


Achieving such ambitious goals, however, will require more complete knowledge about how gene editors, and derivative technologies including gene drives, function at various physical and temporal scales under different environmental conditions, across multiple generations of an organism. In parallel, demonstrating the ability to precisely control gene edits, turning them on and off under certain conditions or even reversing their effects entirely, will be paramount to translation of these tools to practical applications. By establishing empirical foundations and removing lingering unknowns through laboratory-based demonstrations, the Safe Genes teams will work to substantially minimize the risks inherent in such powerful tools.


A new DARPA program could help unlock the potential of advanced gene editing technologies by developing a set of tools to address potential risks of this rapidly advancing field. The Safe Genes program envisions addressing key safety gaps by using those tools to restrict or reverse the propagation of engineered genetic constructs.


“Gene editing holds incredible promise to advance the biological sciences, but right now responsible actors are constrained by the number of unknowns and a lack of controls,” said Renee Wegrzyn, DARPA program manager. “DARPA wants to develop controls for gene editing and derivative technologies to support responsible research and defend against irresponsible actors who might intentionally or accidentally release modified organisms.”


Safe Genes was inspired in part by recent advances in the field of “gene drives,” which can alter the genetic character of a population of organisms by ensuring that certain edited genetic traits are passed down to almost every individual in subsequent generations. Scientists have studied self-perpetuating gene drives for decades, but the 2012 development of the genetic tool CRISPR-Cas9, which facilitates extremely precise genetic edits, radically increased the potential value of—and in some quarters the demand for—experimental gene drives.


Traditional biosafety and biosecurity measures including physical biocontainment, research moratoria, self-governance, and regulation are not designed for technologies that are, in fact, explicitly intended for environmental release and are widely available to users who operate outside of conventional institutions. The goal of Safe Genes is to build in biosafety for new biotechnologies at their inception, provide a range of options to respond to synthetic genetic threats, and create an understanding of what is possible, probable, and vulnerable with regard to emergent gene editing technologies. “DARPA is pursuing a suite of versatile tools that can be applied independently or in combination to support bio-innovation or combat bio-threats,” Wegrzyn said.


From a national security perspective, Safe Genes addresses the inherent risks that arise from the rapid democratization of gene editing tools. The steep drop in the costs of genomic sequencing and gene editing toolkits, along with the increasing accessibility of this technology, translates into greater opportunity to experiment with genetic modifications. This convergence of low cost and high availability means that applications for gene editing—both positive and negative—could arise from people or states operating outside of the traditional scientific community.


DARPA Awards $65M to Improve Gene-Editing Safety, Accuracy

The U.S. Defense Advanced Research Projects Agency (DARPA) has awarded a combined $65 million over four years to seven research teams toward projects designed to improve the safety and accuracy of gene editing.


The funding is being awarded under DARPA’s Safe Genes program, designed to gain fundamental understanding of how gene-editing technologies function; devise means to safely, responsibly, and predictably harness them for beneficial ends; and address potential health and security concerns related to their accidental or intentional misuse.


Efforts funded under the Safe Genes program fall into two broad categories: gene drive and genetic remediation technologies, and in vivo therapeutic applications of gene editors in mammals. Much of the research will look at ways to inhibit gene drive systems. The obvious concern with gene drive techniques is that it is impossible to know the full ramifications of releasing a genetic modification into the environment until it actually happens.


DARPA said the seven teams chosen for the funding will be pursuing one or more of three technical objectives:

  • Develop genetic constructs—biomolecular “instructions”—that provide spatial, temporal, and reversible control of genome editors in living systems;
  • Devise new drug-based countermeasures that provide prophylactic and treatment options to limit genome editing in organisms and protect genome integrity in populations of organisms; and
  • Create a capability to eliminate unwanted engineered genes from systems and restore them to genetic baseline states.


  1. A team led by Dr. Amit Choudhary (Broad Institute/Brigham and Women’s Hospital-Renal Division/Harvard Medical School) is developing means to switch on and off genome editing in bacteria, mammals, and insects, including control of gene drives in a mosquito vector for malaria, Anopheles stephensi. The team seeks to build a general platform for the rapid and cost-effective identification of chemicals that will block contemporary and next-generation genome editors. Such chemicals could propel the development of therapeutic applications of genome editors by limiting off-target effects or protect against future biological threats. The team will also construct synthetic genome editors for precision genome engineering.


  2. A Harvard Medical School team led by Dr. George Church seeks to develop systems to safeguard genomes by detecting, preventing, and ultimately reversing mutations that may arise from exposure to radiation. This work will involve creation of novel computational and molecular tools to enable the development of precise editors that can distinguish between highly similar genetic sequences. The team also plans to screen the effectiveness of natural and synthetic drugs to inhibit gene editing activity.


  3. A Massachusetts General Hospital (MGH) team led by Dr. Keith Joung aims to develop novel, highly sensitive methods to control and measure on-target genome editing activity—and limit and measure off-target activity—and apply these methods to regulate the activity of mosquito gene drive systems over multiple generations. State-of-the-art technologies for measuring on- and off-target activity require specialized expertise; the MGH team hopes to enable orders of magnitude higher sensitivity than what is available with existing methods and make this process routine and scalable. The team will also develop novel strategies to achieve control over genome editors, including drug-regulated versions of these molecules. The team will take advantage of contained facilities that simulate natural environments to study how drive systems perform in mosquitos under conditions approximating the real world.


  4. A Massachusetts Institute of Technology (MIT) team led by Dr. Kevin Esvelt has been selected to pursue modular “daisy drive” platforms with the potential to safely, efficiently, and reversibly edit local sub-populations of organisms within a geographic region of interest. Daisy drive systems are self-exhausting because they sequentially lose genetic elements until the drive system stops spreading. In one proposed variant, natural selection is anticipated to favor the edited or original version depending on which is in the majority, keeping genetic alterations confined to a specified region and potentially allowing targeted populations of organisms to be restored to wild-type genetics. MIT plans to conduct the majority of its work in nematodes, a simple type of worm that reproduces rapidly, enabling high-throughput testing of different drive configurations and predictive models over multiple generations. The team then aims to adapt this system in the laboratory for up to three key mosquito species relevant to human and animal health, gradually improving performance in mosquitos through an iterative cycle of model, test, and refine.


  5. A North Carolina State University (NCSU) team led by Dr. John Godwin aims to develop and test a mammalian gene drive system in rodents. The team’s genetic technique targets population-specific genetic variants found only in particular invasive communities of animals. If successful, the work will expand the tools available to manage invasive species that threaten biodiversity and human food security, and that serve as potential reservoirs of infectious diseases affecting native animal and human populations. The team also plans to develop mathematical models of how drives would function in mice, and then perform testing in contained, simulated natural environments to gauge the robustness, spatial limitation, and reversibility of the drives.


  6. A University of California, Berkeley team led by Dr. Jennifer Doudna will investigate the development of novel, safe gene editing tools for use as antiviral agents in animal models, targeting the Zika and Ebola viruses. The team will also aim to identify anti-CRISPR proteins capable of inhibiting unwanted genome-editing activity, while developing novel strategies for delivery of genome editors and inhibitors.


  7. A University of California, Riverside team led by Dr. Omar Akbari seeks to develop robust and reversible gene drive systems for control of Aedes aegypti mosquito populations, to be tested in contained, simulated natural environments. Preliminary testing will be conducted in high-throughput, rapidly reproducing populations of yeast as a model system. As part of this effort, the team will establish new temporal and environmental, context-dependent molecular strategies programmed to limit gene editor activity, create multiple capabilities to eliminate unwanted gene drives from populations through passive or active reversal, and establish mathematical models to inform design of gene drive systems and establish criteria for remediation strategies. In support of these goals, the team will sample the diversity of wild populations of Ae. aegypti.


“Part of our challenge and commitment under Safe Genes is to make sense of the ethical implications of gene-editing technologies, understanding people’s concerns, and directing our research to proactively address them so that stakeholders are equipped with data to inform future choices,” Renee Wegrzyn, Ph.D., manager of the Safe Genes program, said in a statement.


“As with all powerful capabilities, society can and should weigh the risks and merits of responsibly using such tools. We believe that further research and development can inform that conversation by helping people to understand and shape what is possible, probable, and vulnerable with these technologies.”



Free Space Optical communications for ultrafast secure communications from Aircraft, Satellites, the Moon and Mars

Free Space Optical (FSO), or laser, communication is creating a new communications revolution: by using visible and infrared light instead of radio waves for data transmission, it provides large bandwidth, high data rates, license-free spectrum, easy and quick deployability, low mass and low power requirements. It also offers lower-cost transmission than radio frequency (RF) and fiber-optic communication. FSO operates on the line-of-sight principle, with a laser at the source and a detector at the destination providing optical wireless communication between them.

Both military and civilian users have started planning laser communication systems, ranging from terrestrial short-range systems to high-data-rate aircraft and satellite communications, unmanned aerial vehicles (UAVs) and high-altitude platforms (HAPs), near-space communications relaying high data rates from the Moon, and deep-space communications from Mars.

US-based LGS Innovations has won a contract from NASA to provide a laser transmitter for a first-of-a-kind space mission. The Herndon, Virginia, company’s photonics technology will be one of the key elements in a high-bandwidth optical communications link that will beam data and high-resolution imagery back to Earth from a craft orbiting an unusual metal asteroid. Part of NASA’s Deep Space Optical Communications (DSOC) project, the laser transmitter will fly on the mission to the asteroid Psyche as a technology demonstration.

For military, FSO is the next frontier for net-centric connectivity, as it can provide low cost, large bandwidth, high speed and secure communications in space and inside the atmosphere. There are size, weight and power (SWAP) advantages as well. Intelligence, Surveillance, and Reconnaissance (ISR) platforms can deploy this technology as they require disseminating large amount of images and videos to the fighting forces, mostly in real time.

That’s why the Defense Department recently awarded a three-year, $45 million grant to a tri-service project for a laser communications system. Thomas and her collaborators have moved past the research equipment and are building a full-up prototype expected to be ready by 2019.

Growing employment of laser free space communication

One major NASA priority is to use lasers to make space communications for both near-Earth and deep-space missions more efficient. Laser wavelengths are 10,000 times shorter than radio waves, allowing data to be transmitted across narrower, tighter beams, so the energy is not spread out as much as it travels through space.

For example, a typical Ka-Band signal from Mars spreads out so much that the diameter of the energy when it reaches Earth is larger than Earth’s diameter. A typical optical signal, however, will only spread over the equivalent of a small portion of the United States; thus there is less energy wasted. This also leads to reduction in antenna size for both ground and space receivers, which reduces satellite size and mass. “The shorter wavelength also means there is significantly more bandwidth available for an optical signal, while radio systems have to increasingly fight for a very limited bandwidth,” explains NASA.
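The beam-spread comparison follows from the diffraction rule of thumb that a transmitter of aperture D and wavelength λ diverges by roughly λ/D radians, so the spot after distance L is about Lλ/D across. The aperture sizes and Earth-Mars distance in the sketch below are illustrative assumptions, not mission parameters:

```python
# Back-of-the-envelope diffraction spread: spot diameter ~ L * lambda / D.
# Apertures and the Earth-Mars distance are illustrative assumptions.

def spot_diameter_km(wavelength_m, aperture_m, distance_m):
    """Approximate beam spot diameter (km) after travelling distance_m."""
    return distance_m * wavelength_m / aperture_m / 1e3

L_MARS = 2.25e11  # ~1.5 AU Earth-Mars distance, in metres

ka_spot = spot_diameter_km(1e-2, 3.0, L_MARS)       # Ka-band (~30 GHz), 3 m dish
opt_spot = spot_diameter_km(1550e-9, 0.22, L_MARS)  # 1550 nm laser, 22 cm telescope

print(f"Ka-band spot at Earth: ~{ka_spot:,.0f} km")   # far wider than Earth (12,742 km)
print(f"Optical spot at Earth: ~{opt_spot:,.0f} km")  # ~1,600 km, a fraction of Earth's diameter
```

The roughly 500-fold smaller optical footprint is why much more of the transmitted energy lands on the receiver, and why smaller antennas suffice at both ends.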

This technology can provide gigabit Ethernet access for high-rise enterprise networks, bandwidth-intensive applications (e.g., medical imaging, HDTV, or hospitals transferring large digital imaging files), and intra-campus connections.

FSO technology provides a good solution for cellular carriers using 4G technology to meet their large bandwidth and multimedia requirements, by providing backhaul connections between cell towers. It can also provide backup protection for fiber-based systems in case of accidental fiber damage. FSO technology is widely seen as the ultimate solution for providing high-capacity last-mile connectivity for residential access.


Laser communications could also benefit a class of missions called CubeSats, which are about the size of a shoebox. These missions are becoming more popular and require miniaturized parts, including communications and power systems.

The main drawback of an FSO link is that its performance depends strongly on atmospheric attenuation. Atmospheric conditions such as snow, fog and rain scatter and absorb the transmitted signal, attenuating it before it reaches the receiver. This attenuation degrades the range and capacity of the wireless channel, restricting the potential of the FSO link by limiting the regions and times in which it can operate.
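The effect of weather can be sketched with a simple link budget: atmospheric loss is quoted in dB/km and subtracted linearly in decibels, so maximum range collapses as attenuation climbs. The transmit power, receiver sensitivity and per-weather attenuation values below are textbook orders of magnitude rather than measurements, and geometric and pointing losses are ignored:

```python
# Sketch of how weather limits an FSO link via a dB link budget.
# All figures are illustrative assumptions, not measured values.

def received_dbm(tx_dbm, atten_db_per_km, range_km, other_losses_db=0.0):
    """Received power (dBm) after atmospheric and other losses."""
    return tx_dbm - atten_db_per_km * range_km - other_losses_db

TX_DBM = 20.0            # 100 mW transmitter
SENSITIVITY_DBM = -36.0  # receiver sensitivity at the target data rate

for weather, atten in [("clear air", 0.5), ("haze", 4.0), ("moderate fog", 40.0)]:
    # Furthest range at which received power stays above sensitivity.
    max_range = (TX_DBM - SENSITIVITY_DBM) / atten
    print(f"{weather:12s}: {atten:5.1f} dB/km -> max range ~{max_range:.1f} km")
```

The exponential (in linear terms) nature of the loss is what shrinks a link from tens of kilometres in clear air to barely a kilometre in fog.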


Laser communications for global internet connectivity

Facebook aims to use a mix of solar-powered aircraft and low-orbit satellites to beam signals carrying the internet to hard-to-reach locations. ‘As part of our efforts, we’re working on ways to use drones and satellites to connect the billion people who don’t live in range of existing wireless networks,’ said Mark Zuckerberg.

The drones, flying at 65,000ft (19,800 metres), will be capable of staying in the air for months. ‘Our Connectivity Lab is developing a laser communications system that can beam data from the sky into communities. This will dramatically increase the speed of sending data over long distances.’

It is proposed that for sub-urban areas in limited geographical regions, solar-powered high altitude drones will be used to deliver reliable internet connections via FSO links. For places where deployment of drones is uneconomical or impractical (like in low population density areas), LEO and GEO satellites can be used to provide internet access to the ground using FSO.

Free-space laser communication was used to send data reliably between balloons flying on stratospheric winds in Project Loon. Now the Loon team is working with AP State FiberNet, a telecom company in Andhra Pradesh, a state in India that is home to more than 53 million people. Less than 20% of residents currently have access to broadband connectivity, so the state government has committed to connecting 12 million households and thousands of government organizations and businesses by 2019, an initiative called AP Fiber Grid.

AP State FiberNet announced that it will be rolling out two thousand FSOC links created by the team at X. These FSOC links will form part of the high-bandwidth backbone of its network, giving it a cost-effective way to connect rural and remote areas across the state. The links will plug critical gaps to major access points, such as cell towers and WiFi hotspots, that support thousands of people.


World record in free-space optical communications

Researchers at the German Aerospace Center (DLR) have set a new record in data transmission by laser: 1.72 terabits per second across a distance of 10.45 kilometres, equivalent to transmitting 45 DVDs per second. At such rates, large parts of the still under-served rural areas of Western Europe could be supplied with broadband Internet services.
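The DVD comparison is easy to sanity-check, assuming a standard 4.7 GB single-layer disc:

```python
# Sanity check of the "45 DVDs per second" figure for a 1.72 Tbit/s link.
link_bits_per_s = 1.72e12
dvd_bytes = 4.7e9  # single-layer DVD capacity, bytes

dvds_per_second = link_bits_per_s / 8 / dvd_bytes
print(f"{dvds_per_second:.1f} DVDs per second")
```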

“We have set ourselves the goal of enabling Internet access at high data rates outside major cities, and want to demonstrate how this is possible using satellites,” explains Christoph Günther, Director of the DLR Institute of Communications and Navigation. Fibre-optic links and other terrestrial systems offer high transmission rates, but are available predominantly in densely populated regions.

Outside the metropolitan centres, broadband supply via geostationary satellites is possible. As part of the DLR project THRUST (Terabit-throughput optical satellite system technology), scientists aim to connect satellites to the terrestrial Internet via a laser link. The envisaged data throughput is more than one terabit per second. Communication with the users is then carried out in the Ka-band, a standard radio frequency for satellite communications.

Within the framework of the experiments, a fibre-optic transmission system of the Fraunhofer Heinrich Hertz Institute was employed which operates at wavelengths of around 1550 nanometres and which is suitable for high data rates. This system was integrated into DLR’s newly developed free-space optic transmission system.


NASA’s Lunar Laser Communication Demonstration (LLCD) and Laser Communications Relay Demonstration (LCRD)

NASA’s Laser Communications Relay Demonstration (LCRD) mission has begun integration and testing at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. The LCRD mission continues the legacy of the Lunar Laser Communications Demonstration (LLCD), which flew aboard a moon-orbiting spacecraft in 2013.

LLCD demonstrated error-free communication from the Moon to the Earth under all conditions, including in broad daylight and even when the Moon was within 3° of the Sun as seen from Earth. It also proved that a space-based laser communications system was viable and could survive both launch and the space environment.

NASA’s Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from the Moon to Earth at a record-breaking download rate of 622 Mbps. The space laser terminal employed a 0.5 W IR laser at 1.55 microns (which is eye-safe as well as invisible to the eye) and a 4-inch (10.7 cm) telescope to transmit toward the selected ground terminal. The downlink beam was received by an array of telescopes coupled to novel and highly sensitive superconducting nanowire detector arrays that convert the photons in the beam to bits of data.
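A back-of-envelope sketch using the figures quoted above (with the mean Earth-Moon distance as an assumed input; this is our own estimate, not a figure from the LLCD team) shows why even a 4-inch telescope produces a usefully narrow beam, and why sensitive ground receivers are still needed:

```python
# Back-of-envelope estimate of the LLCD downlink beam geometry.
wavelength = 1.55e-6        # m, laser wavelength quoted above
aperture = 0.107            # m, space terminal telescope diameter quoted above
moon_distance = 3.844e8     # m, mean Earth-Moon distance (assumed value)

# Diffraction-limited full-angle divergence to the first Airy null:
# theta = 2.44 * lambda / D
theta = 2.44 * wavelength / aperture
footprint = theta * moon_distance   # beam footprint diameter at Earth

print(f"divergence ~{theta * 1e6:.0f} microradians")
print(f"footprint  ~{footprint / 1e3:.0f} km wide at Earth")
```

Even a microradian-class beam spreads to a footprint kilometres across at lunar range, which is why an array of receive telescopes feeding very sensitive photon-counting detectors is used on the ground.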

The Goddard team is now planning a follow-on mission, the Laser Communications Relay Demonstration (LCRD), that proposes to revolutionize the way we send and receive data, video and other information, using lasers to encode and transmit data at rates 10 to 100 times faster than today’s fastest radio-frequency systems while using significantly less mass and power. It will fly as a commercial satellite payload in 2019. It consists of two optical communications terminals in space and will enable real-time forwarding and storage of data at up to 1.25 Gbps (coded) / 2.880 Gbps (uncoded).

Mission operators at ground stations in California and Hawaii will test its invisible, near-infrared lasers, beaming data to and from the satellite as they refine the transmission process, study different encoding techniques and perfect tracking systems. While in operation, LCRD will also enable the gathering of information about the longevity and durability of space-based optical systems and their hardware, as well as ensuring the accuracy of the lasers that carry messages to the ground. Operators will also study the effects of clouds and other disruptions on communications, evaluating mitigation options including relay operations in orbit and backup receiving stations on the ground.

NASA is now planning laser communications from Mars. It is developing a new optical communications system that will reduce the time required to transmit high-resolution images from Mars from 90 minutes to a few minutes. The new optical communications system that NASA plans to demonstrate in 2016 will even allow the streaming of high-definition video from distances beyond the Moon.

The Deep Space Optical Communications project is developing three key technologies essential for operational deep-space optical communications: a spacecraft disturbance isolation platform, a photon-counting receiver for the spacecraft optical transceiver based on a radiation-tolerant indium gallium arsenide phosphide (InGaAsP) detector, and superconducting photon-counting detectors for the Earth-based optical receivers.

The team at Glenn is developing an idea called Integrated Radio and Optical Communications (iROC) to put a laser communications relay satellite in orbit around Mars that could receive data from distant spacecraft and relay their signal back to Earth. The system would use both RF and laser communications, promoting interoperability amongst all of NASA’s assets in space. By integrating both communications systems, iROC could provide services both for new spacecraft using laser communications systems and older spacecraft like Voyager 1 that use RF.

NASA’s upcoming Psyche mission, which will explore a unique metal asteroid orbiting the Sun between Mars and Jupiter, will also test new communication hardware that uses lasers instead of radio waves. In parallel with a more conventional X-band microwave link, it will send engineering and science data from the Psyche spacecraft, and is said to be the first such laser transmitter to support deep-space, high-bandwidth optical communication.

“Future deep space exploration missions, both manned and unmanned, will require high-bandwidth communications links to ground stations on Earth to support advanced scientific instruments, high-definition video, and high-resolution imagery,” states LGS, adding that its transmitter will enable much faster communication and help improve the efficiency of future solar system exploration missions.


Europe’s Global Laser Communications System

The first dedicated laser terminal forming a high-speed optical network in space is now in orbit, after a Proton rocket launch from Kazakhstan on January 29. Part of the future “European Data Relay System” (EDRS), which the European Space Agency (ESA) describes as its “most ambitious telecommunications program to date”, the laser was developed by key partner Tesat Spacecom, an Airbus subsidiary.

The European Space Agency (ESA) and partner Airbus Defence and Space are aiming to build out the European Data Relay System (EDRS) into a global laser communications network by 2020 and hope that the system will become an international standard. Sentinel satellites 1A, 1B, 2A and 2B all have Laser Communication Terminals (LCT) payloads.

ESA and Airbus completed a major test of the EDRS system in late 2014, linking the Sentinel 1A satellite built by Thales Alenia Space with the Airbus-built Alphasat satellite via Laser Communication Terminals (LCTs). The test beamed images from Sentinel 1A, which circles the planet at 700 kilometers in Low Earth Orbit (LEO), to Alphasat 36,000 kilometers up in Geostationary Earth Orbit (GEO) and back to the ground. Tesat boasts that its point-to-point data transfer covers about 28,000 miles with a transfer rate of 5 gigabits per second.

“[Sentinel 1A] produces around 1.8 terabytes of raw data every single day, and when we process this data it is even three terabytes, more or less, on average we produce every day. In 2017 we will have seven Sentinels working and roughly seven times the amount of data to download. Four of these Sentinels will have a laser communication terminal and can use EDRS,” he said.

Stefan Klein, head of the aviation division at General Atomics Spezialtechnik, said his company is eager to use EDRS laser communications for its Unmanned Aerial Vehicles (UAVs). Today the company’s drones achieve up to 40 hours of flight without refueling and require real-time data through secure communications. The company plans to build LCT payloads to leverage EDRS by the end of the decade.

Laser Light Communications’s Optical Satellite Systems

The world’s first optical-wave satellite communications system has been planned by Laser Light Communications, which intends to deploy it in the first quarter of 2017. The company plans to create a 12-satellite constellation in MEO with an operating system capacity of 4.8 terabits per second, with satellite-to-satellite optical crosslinks and satellite-to-ground optical up/down links of 200 gigabits per second. The company envisions integrating the Optical Satellite System with existing terrestrial and undersea fiber-optic networks.

DISA has signed a Cooperative Research and Development Agreement with Laser Light Communications to evaluate the feasibility of the underlying technology and the future potential of the all-optical system for DOD missions.

Laser communications is attractive for defence because of its high bandwidth and its freedom from issues like spectrum allocation and mutual interference due to satellite spacing. The system is also more secure, because optical communication links have enhanced resistance to interception and jamming.


Aircraft to Ground Communications

Free-space optical (FSO) communication links of 1 Gbps between aircraft and ground stations have been demonstrated by Christopher Schmidt and colleagues from the Institute of Communications and Navigation at the German Aerospace Center (DLR). This ultrafast movement of the large data volumes produced by high-resolution sensor systems has particular applications for disaster management, monitoring natural events, and traffic observation.

Their system’s optical transmitter, called the Free-space Experimental Laser Terminal II (FELT II) and installed in a Do228 aircraft, consists of a two-stage tracking system, an inertial measurement unit (for velocity and orientation), an optical bench inside the cabin of the aircraft, and a dome-shaped assembly below the cabin.

For data reception they designed a transportable optical ground station (TOGS) that consists of a pneumatically deployable Ritchey-Chrétien-Cassegrain telescope with a main mirror diameter of 60 cm. TOGS is equipped with an optical tracking system, a dual-antenna global positioning system and an inclination sensor to determine its own location, heading, and calibration, and it has supports to enable leveling of the station.


Optical LAN

Short-range laser communications has also started being utilized for tasks such as connecting campus or office buildings when an obstruction such as a river or road makes laying fiber infeasible. Northern Storm, a US-based enterprise, has partnered with Mostcom in Eastern Europe to develop the NS10G system, which can provide 10 Gbps throughput at ranges up to 1 km for roughly a quarter of the installed cost of a 10 Gbps fiber line.

Military FSO or Laser Communications

Spectrum congestion is a growing problem: it increasingly limits operational capabilities due to the growing deployment and bandwidth of wireless communications, the use of network-centric and unmanned systems, and the need for increased flexibility in radar and communications spectrum to improve performance and overcome sophisticated countermeasures.

Networks are said to be one of the U.S. military’s Achilles’ heels. Anthony Nigara, senior director for advanced systems at Exelis, which is working on a laser communications project for the Office of Naval Research, said in an interview that adversaries may want to block, degrade or eavesdrop on U.S. military communications. Cutting off communications through jamming or the destruction of infrastructure could be devastating to battlefield commanders. FSO communication cannot be easily intercepted, detected or jammed, as the FSO laser beam is highly directional with very narrow beam divergence. Unlike RF signals, FSO signals cannot penetrate walls, which helps prevent eavesdropping.

However, this technology is limited to LOS communications and is affected by atmospheric attenuation that is impossible to control. This is definitely a challenge that will impact mission capabilities.

Therefore, a viable future work would be to explore the possibility of implementing FSO relay capability as a solution for broadband communication over the horizon in tactical operations. Relay could be implemented from ground-to-air and air-to-ground paired links, with the device in the air acting as a repeater to avoid physical obstructions during required ground-to-ground communications. “This solution addresses the challenge involving LOS, and transmission away from the ground reduces the effect of atmospheric scintillation on the optical link. The solution could be cascaded to further increase the eventual ground-to-ground range,” proposes Lai Jin Wei of the Naval Postgraduate School, Monterey, California. Caution has to be exercised when using an FSO communication system, as the laser may cause damage to the human eye.

Office of Naval Research tests tactical line-of-sight operational network (TALON)

The Defense Department recently awarded a three-year, $45 million grant to a tri-service project for a laser communications system. “This is basically fiber optic communications without the fiber,” said lead researcher Linda Thomas, whose Naval Research Laboratory team takes home about a third of the grant money. Their TALON device transmits messages via laser over distances comparable to current Marine Corps tactical radios, but because it’s a narrow beam of light rather than a radio broadcast, it’s much harder for an enemy to pick up the transmission, let alone interfere with it.

ONR successfully tested Exelis’ tactical line-of-sight operational network (TALON) between two mountains 50 kilometers apart at Naval Air Weapons Station China Lake in California. Nigara said the TALON program has worked on synchronizing transmitters and terminals on the move, whether from ship to ship, or ship to shore. They must be able to find each other and link automatically.

Exelis’ ES division has developed TALON (Tactical Line-of-Sight Optical Network), “Our TALON product line is a free-space optical communications system that uses lasers to transmit mission-critical data to warfighters from distances of more than 30 miles and 1,000 times faster than RF technology,” says Andy Dunn, vice president of business development, integrated electronic warfare systems, Exelis ES.

“You can only push so much data, video and voice communications through the traditional RF space. When you move up to optics or laser-based communications, you can push a lot more data through the pipeline and that’s what the TALON line does.”

Because heavy weather can still block laser beams, especially over long distances, Thomas emphasized you’d never want to get rid of your radios and rely exclusively on lasers.


Market growth

According to a new market research report by MarketsandMarkets, the FSO market is expected to grow from USD 116.7 million in 2015 to USD 940.2 million by 2020, at a CAGR of 51.8% during the forecast period. The factors driving the FSO market are last-mile connectivity, license-free operation, and its role as an alternative to overburdened RF technology for outdoor networking.

“The global free-space optical communications market was valued at $41.9 million in 2013 and $59.2 million in 2014. This market is expected to reach $501.1 million in 2019, at a compound annual growth rate (CAGR) of 53.3% from 2014 through 2019,” according to a report by Reportlinker. The market is divided into segments such as data transmission, security, last-mile access, storage area networks, disaster recovery, healthcare facilities and others, among both civil and defence users.
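Both projections are internally consistent with the standard compound-annual-growth-rate formula, CAGR = (end/start)^(1/years) − 1, as a quick check shows:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# MarketsandMarkets: USD 116.7M (2015) -> USD 940.2M (2020)
print(f"{cagr(116.7, 940.2, 5):.1%}")   # → 51.8%, the quoted figure
# Reportlinker: USD 59.2M (2014) -> USD 501.1M (2019)
print(f"{cagr(59.2, 501.1, 5):.1%}")    # → 53.3%, the quoted figure
```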



Security agencies are employing data analytics and AI tools for Crime Prevention

Crime is down but it is changing, said the Rt Hon Theresa May MP, UK Home Secretary. While traditional high-volume crimes like burglary and street violence have more than halved, previously ‘hidden’ crimes like child sexual abuse, rape and domestic violence have all become more visible, if not more frequent, and there is growing evidence of the scale of online fraud and cyber crime.

As with so many of the challenges we face as a society, the prevention of crime is better than cure. Stopping crime before it happens, and preventing the harm caused to victims, must be preferable to picking up the pieces afterwards.

Data and data analytics tools have become critical to successfully preventing crime. Many police forces are already trialling forms of ‘predictive policing’, largely to forecast where there is a high risk of ‘traditional’ crimes like burglary happening, and to plan officers’ patrol patterns accordingly, says the UK’s Modern Crime Prevention Strategy. Data analytics can be used to identify vulnerable people and to ensure potential victims are identified quickly and consistently.

China, a surveillance state where authorities have unchecked access to citizens’ histories, is developing artificial intelligence based tools that they say will help them identify and apprehend suspects before criminal acts are committed.

China planning to use AI technology to predict and prevent crime

China’s crime-prediction technology relies on several AI techniques, including facial recognition and gait analysis, to identify people from surveillance footage, according to the Financial Times. In addition, “crowd analysis” can be used to detect “suspicious” patterns of behaviour in crowds, for example to single out thieves from ordinary passengers at a train station.

Facial recognition company Cloud Walk has been trialling a system that uses data on individuals’ movements and behaviour — for instance visits to shops where weapons are sold — to assess their chances of committing a crime. Its software warns police when a citizen’s crime risk becomes dangerously high, allowing the police to intervene.

“If we use our smart systems and smart facilities well, we can know beforehand . . . who might be a terrorist, who might do something bad,” said Li Meng, vice-minister of science and technology.

Another example of AI use in Chinese crime prediction is “personal re-identification” — matching someone’s identity even if spotted in different places wearing different clothes, a relatively recent technological achievement.

“We can use re-ID to find people who look suspicious by walking back and forth in the same area, or who are wearing masks,” said Leng Biao, professor of bodily recognition at the Beijing University of Aeronautics and Astronautics. “With re-ID, it’s also possible to reassemble someone’s trail across a large area.”


Durham Constabulary Deploy AI for Crime Prevention

Durham Constabulary is preparing to trial an artificially intelligent system to help officers decide whether or not to keep a suspect in custody.

The Force will use the Harm Assessment Risk Tool (Hart) to help officers decide if a suspect can be released from detention, based on the probability of offending once released. Hart has been trained on five years of the Force’s data (2008–2012), and will classify a suspect as at low, medium, or high risk of offending. The system was tested from 2013, with forecasts that a suspect was low risk accurate 98% of the time, while forecasts that suspects were high risk were accurate 88% of the time. The Hart system was developed in conjunction with the renowned Centre for Evidence-based Policing at the University of Cambridge.
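Those percentages describe the reliability of each forecast class, not overall accuracy: of all the times Hart said "low risk", 98% proved correct. A minimal sketch with hypothetical tallies (illustrative numbers only, not Durham's actual data) shows how such figures are computed:

```python
# Hypothetical forecast tallies (illustrative numbers, not Durham's data):
# each entry counts how often a forecast at that risk level proved right/wrong.
forecasts = {
    "low":  {"correct": 980, "wrong": 20},    # 98% of low-risk calls correct
    "high": {"correct": 880, "wrong": 120},   # 88% of high-risk calls correct
}

for risk, tally in forecasts.items():
    precision = tally["correct"] / (tally["correct"] + tally["wrong"])
    print(f"{risk}-risk forecasts correct {precision:.0%} of the time")
```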

The use of data analytics and AI to help inform police decision making is in line with the Home office’s aspirations outlined in last year’s Modern Crime Prevention Strategy. The Strategy acknowledges that better use of data and technology is one of the key pillars of effective modern crime prevention in the digital age, and outlines the Government’s role in “stripping away barriers to the effective use of data and data analytics, and helping others exploit new and existing technology to prevent crime.”

According to the Modern Crime Prevention Strategy, data analytics can:

  • Help police forces deploy officers to prevent crime in known hotspots (often called ‘predictive policing’)
  • Use information shared by local agencies on, for example, arrests, convictions, hospital admissions, and calls on children’s services to identify individuals who are vulnerable to abuse or exploitation
  • Spot suspicious patterns of activity that can provide new leads for investigators, such as large payments to multiple bank accounts registered at the same address
  • Show which products, services, systems or people are vulnerable to particular types of crime – for example that young women are disproportionately likely to have their smartphone stolen. This means system flaws can be addressed, or crime prevention advice (e.g. on mobile phone security measures) can be targeted more effectively.


SA Company to Use Artificial Intelligence to Predict Crime

Solution House Software has announced the launch of a new artificial intelligence (AI) module for Incident Desk, designed to predict and map potential crimes. The Incident Desk Predictive Analysis module uses machine learning technology developed by Solution House, together with aggregated data from multiple information sources, to determine the likelihood of different types of criminal activity in the Incident Desk management area.

“With the module installed, Incident Desk generates 7 and 30-day forecasts as heat maps based on crime types and incident probabilities that managers can use to optimise their finite security resources,” says Janse van Rensburg.

“Crime is notoriously difficult to predict, but given that Incident Desk can access so many different types of data – including weather patterns and forecasts and historical data – the results are based on fairly accurate and proven trending algorithms,” she says.

One of the biggest problems currently plaguing public safety and security is the ‘islands of data’ that are not being shared or centralised, which makes them difficult to data-mine and analyse.




Single photon detector (SPD) critical technology for quantum computers and communications, and submarine detection

Light is widely used for communications, carrying phone conversations and video signals through fiber-optic cables around the world in pulses composed of many photons. Light is also used in optical wireless communication, a form of free-space communications with a laser at the source and a detector at the destination. Both military and civilian users have started planning laser communication systems ranging from terrestrial short-range systems to high-data-rate aircraft and satellite communications, unmanned aerial vehicles (UAVs) and high-altitude platforms (HAPs), near-space communications relaying high data rates from the Moon, and deep-space communications from Mars.

The detectors that receive these signals are among the most critical elements determining the performance of a wide range of civilian and military systems. These include light or laser detection and ranging (LIDAR or LADAR), photography, astronomy, quantum information processing, advanced metrology, quantum optics, medical imaging, microscopy, quantum and classical optical communications including underwater blue-green communications, and environmental sensing.

As the state of the art in these fields has advanced, so have the performance requirements of the constituent detectors. A single photon is the indivisible minimum energy unit of light, and therefore detectors capable of single-photon detection are the ultimate tools for weak-light detection. Single-photon detectors have found application in various research fields such as quantum information, quantum optics, optical communication, and deep-space communications.

There has been a concerted effort to advance single-photon detection technologies to achieve higher efficiency, lower noise, higher speed and timing resolution, as well as to improve other properties such as photon-number resolution, imaging, and sensitivity to lower-energy photons. A high-bandwidth, high-sensitivity, compact and readily available photon-counting detector is a key technology for many future scientific developments and improved DoD application capabilities, according to DARPA.

Engineers have shown that a widely used method of detecting single photons can also count the presence of at least four photons at a time. The researchers say this discovery will unlock new capabilities in physics labs working in quantum information science around the world, while providing easier paths to developing quantum-based technologies.

Detector technologies

Depending on the wavelength regime of interest, different technologies have been utilized, such as silicon avalanche photodiodes (APDs) for visible wavelengths, photomultiplier tubes, or InGaAs-based APDs for the telecommunication range. In recent years, superconducting nanowire single-photon detectors (SNSPDs) have been shown to be promising alternatives, particularly when they are integrated directly onto waveguides and into photonic circuits. Apart from these, there are also some new technologies like hybrid photodetectors, visible light photon counters, frequency up-conversion, quantum dots & defects and carbon nanotubes.

Semiconductor Single-Photon Avalanche Photodiodes (SPAD)

SPADs are currently the mainstream solution for single-photon detection in practical applications. A SPAD is operated in Geiger mode, in which biasing above the breakdown voltage results in a self-sustaining avalanche in response to the absorption of just a single photon. This electron cascade and multiplication effect significantly amplifies the response and allows for easy measurement of the response pulses.

In the visible range, the best known and most widely used are Si avalanche photodiodes (APDs). Detection of single infrared (IR) photons remains a major technological challenge because IR photons carry significantly less energy than those of visible light, making it difficult to engineer an efficient electron cascade. The most successful Si APDs have their sensitivity restricted by the bandgap, while APDs based on narrow-gap semiconductors exhibit unacceptably large dark counts.
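The bandgap restriction follows directly from the photon energy E = hc/λ: a telecom-band photon carries less energy than silicon's ~1.12 eV bandgap, so it cannot promote a carrier across the gap in Si. A quick check using standard physical constants:

```python
# Photon energy E = h*c/lambda, expressed in electron-volts.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    return h * c / wavelength_m / eV

print(f"{photon_energy_ev(0.55e-6):.2f} eV")  # green light: ~2.25 eV
print(f"{photon_energy_ev(1.55e-6):.2f} eV")  # telecom IR: ~0.80 eV, below Si's ~1.12 eV bandgap
```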

The best quantum efficiency (QE) reported for InGaAs APDs is 16% at 1.2 µm, but their large 0.5 ns jitter and high dark count rates make them unattractive for several important applications, including practical quantum communication systems.

The typical structure of an InGaAs/InP single-photon detector uses separate absorption and multiplication (SAM) regions, where a low-bandgap material (InGaAs) absorbs NIR photons and a compatible high-bandgap material (InP) provides avalanche multiplication through a high electric field.

Some tasks require free-running operation of the detector because the arrival time of the photons is unknown or they are spread over a long time slot (tens of microseconds). Free-running operation of InGaAs/InP detectors is challenging due to afterpulsing effects, where spontaneous dark detections occur shortly after previous photon detections due to trapping phenomena.

To minimize the afterpulsing effect, the avalanche current must be reduced, since this reduces the probability that a trap gets filled in the first place. An appropriate circuit, referred to as quenching electronics, is necessary to rapidly suppress the avalanche by lowering the reverse bias and to restore the SPAD to its armed state to detect the next incoming photon. Rapid quenching also reduces afterpulsing, so the quenching electronics plays a key role in a SPAD system. The afterpulsing effects in InGaAs APDs make them ill-suited for applications requiring high duty-cycle, high-rate detection.
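The cost of holding the detector off after each avalanche can be sketched with the standard non-paralyzable dead-time model (a generic textbook model; the 10 µs hold-off below is an illustrative assumption, not a quoted specification): the measured count rate saturates at 1/τ no matter how bright the input.

```python
def measured_rate(true_rate, dead_time):
    """Non-paralyzable dead-time model: R_meas = R_true / (1 + R_true * tau)."""
    return true_rate / (1 + true_rate * dead_time)

tau = 10e-6  # assumed 10 us hold-off after each avalanche (illustrative value)
for r in (1e3, 1e4, 1e5, 1e6):
    print(f"true {r:>9,.0f} cps -> measured {measured_rate(r, tau):>7,.0f} cps")
# However bright the input, the measured rate can never exceed 1/tau = 100,000 cps.
```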

Usually, InGaAs APDs are operated in gated mode, in which a periodic short-duration bias, synchronized to the input photon timing, is applied. In gated mode, however, InGaAs APDs cannot detect photons arriving at random times.

The InGaAs APDs can be operated at temperatures accessible via thermoelectric cooling, making them ideal for applications requiring compact photon-counting solutions.

NIST Patents Single-Photon Detector for Potential Encryption and Sensing Apps

Individual photons of light can now be detected far more efficiently using a device patented by a team including the National Institute of Standards and Technology (NIST), whose scientists have overcome longstanding limitations of one of the most commonly used types of single-photon detector. Their invention could allow higher rates of transmission of encrypted electronic information and improved detection of greenhouse gases in the atmosphere.

The semiconductor single-photon avalanche photodiode (SPAD) based on indium gallium arsenide is widely used in quantum cryptography research because it can detect photons at the particular wavelengths (colors of light) that travel through fiber. Unfortunately, when the detector receives a photon and outputs a signal, an echo of electronic noise is sometimes induced within the detector. Traditionally, to reduce the chances of this happening, the detector must be disabled for some time after each detection, limiting how often it can detect photons.


The team, which also includes scientists working at the California Institute of Technology and the University of Maryland, has patented a method to detect the photons that arrive when the gates are either open or closed. The NIST team had developed a highly sensitive way to read tiny signals from the detector, a method that is based on electronic interferometry, or the combining of waves such that they cancel each other out.

The approach allows readout of tiny signals even when the voltage pulses that open the gate are large, and the team found that these large pulses allow the detector to be operated in a new way. The pulses turn on the detector during the gate as usual. But in between gate openings the pulses turn the detector off so well that signals produced by absorbing a photon can linger for a while in the device. Then the next time the gate opens, these lingering signals can be amplified and read out.

The added ability to detect photons that arrive when the gate is closed increases the detector’s efficiency, an improvement that would be particularly beneficial in applications in which photons could arrive at any moment, such as atmospheric scanning and topographic mapping.

The new detector can count individual photons at a very high maximum rate—several hundred million per second—and at higher than normal efficiency, while maintaining low noise. Its efficiency is at least 50 percent for photons in the near infrared, the standard wavelength range used in telecommunications. Commercial detectors operate with only 20 to 30 percent efficiency.

Superconductor single photon detectors

Superconducting SPDs include superconducting nanowire single-photon detectors (SNSPDs), transition-edge sensors and superconducting tunnel junctions.

Superconducting nanowire single-photon detector (SNSPD) has emerged as the fastest single-photon detector (SPD) for photon counting. The SNSPD consists of a thin (≈ 5 nm) and narrow (≈ 100 nm) superconducting nanowire. The nanowire is cooled well below its superconducting critical temperature and biased with a DC current that is close to but less than the superconducting critical current of the nanowire.

The absorption of a single photon in a superconducting nanowire creates a hotspot, and the local superconducting current density increases as the hotspot expands. Once the current density reaches the critical value, the nanowire switches from the superconducting state to the normal resistive state. This transition generates a voltage signal marking a single-photon detection.

The primary advantages of SNSPDs are low dark count rate, high photon count rate and very accurate time resolution. The detection efficiency was low (at the level of a few percent) for early generation devices, but recently, this parameter has been significantly improved through the efforts of the SNSPD community.

SNSPDs tend to be expensive because they need very low temperatures to operate, while photomultiplier tubes, though also costly, lack high detection efficiency. SNSPDs have a wide spectral range, from the visible to the mid-infrared, far beyond that of the silicon single-photon avalanche photodiode (SPAD), and the SNSPD is superior to the InGaAs SPAD in terms of signal-to-noise ratio.


Cooling technology challenges

Most SNSPDs are made of niobium nitride (NbN), which offers a relatively high superconducting critical temperature (≈ 10 K) and a very fast cooling time (<100 picoseconds). NbN devices have demonstrated device detection efficiencies as high as 67% at 1064 nm wavelength with count rates in the hundreds of MHz. NbN devices have also demonstrated jitter – the uncertainty in the photon arrival time – of less than 50 picoseconds, as well as very low rates of dark counts, i.e. the occurrence of voltage pulses in the absence of a detected photon.

These detectors operate at the boiling point of liquid helium (4.2 K), a temperature that can be reached by immersing the device in liquid helium (He) or by mounting it in a cryogenic probe station. Liquid He is expensive, hazardous, and demands trained personnel for correct use. This technique is satisfactory for testing superconducting devices in a low-temperature physics laboratory; however, if the ultimate goal is to provide a working device for users in other scientific fields or in military applications, alternative cooling methods must be sought.

Operating SNSPDs in a closed-cycle refrigerator offers a solution to this problem. The circulating fluid is high pressure, high purity He gas which is enclosed inside the refrigerator allowing continuous operation and eliminating repeated cryogenic handling.

The requirement for very low temperatures has so far restricted SNSPDs to ground-based applications. For example, in the National Aeronautics and Space Administration's Lunar Laser Communication Demonstration project, Gifford-McMahon (G-M) cryocooler-based SNSPD systems were adopted at the ground station, while semiconducting single-photon detectors, which require no complicated cryocoolers, were used on the satellite.

Researchers from the Chinese Academy of Sciences (CAS) have developed a hybrid cryocooler compatible with space applications, which combines a two-stage high-frequency pulse tube (PT) cryocooler with a 4He Joule–Thomson (JT) cooler.

“To make a practical SNSPD system for space applications, we chose a superconducting NbTiN ultrathin film, which can operate sufficiently well above 2 K, to fabricate the SNSPDs, instead of using WSi, which usually requires sub-1-K temperatures. The hybrid cryocooler successfully cooled an NbTiN SNSPD down to a minimum temperature of 2.8 K. The NbTiN SNSPD showed a maximum SDE of over 50% at a wavelength of 1550 nm and a SDE of 47% at a DCR of 100 Hz. Therefore, these results experimentally demonstrate the feasibility of space applications for this SNSPD system,” write the authors.


Single-photon detector can count to four

Researchers at Duke University, the Ohio State University, and industry partner Quantum Opus have discovered a new method for using a superconducting nanowire single-photon detector (SNSPD). In the new setup, the researchers pay special attention to the specific shape of the initial spike in the electrical signal, and show that they can extract enough detail to correctly count at least four photons traveling together in a packet.

“Here, we report multi-photon detection using a conventional single-pixel SNSPD, where photon-number resolution arises from a time- and photon-number-dependent resistance R_hs of the nanowire during an optical wavepacket detection event. The different resistances give rise to different rise times of the generated electrical signal, which can be measured using a low-noise read-out circuit.”
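As a rough illustration of the rise-time discrimination the quote describes, a threshold classifier might look like the following; the rise-time boundaries are hypothetical placeholders, since the real values depend on the device and read-out circuit:

```python
import bisect

# Hypothetical rise-time boundaries (ns), ordered fastest (many photons
# absorbed -> larger hotspot resistance -> faster rise) to slowest (one photon).
# These numbers are illustrative assumptions, not values from the paper.
RISE_TIME_BINS_NS = [0.55, 0.70, 0.85]

def photon_number(rise_time_ns):
    """Map a measured pulse rise time to an estimated photon number (1-4)."""
    # bisect counts how many boundaries the measured rise time exceeds;
    # slower rise times pass more boundaries and mean fewer photons.
    idx = bisect.bisect_left(RISE_TIME_BINS_NS, rise_time_ns)
    return 4 - idx

print(photon_number(0.50))  # 4 photons: fastest rise
print(photon_number(0.90))  # 1 photon: slowest rise
```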

“Photon-number-resolution is very useful for a lot of quantum information/communication and quantum optics experiments, but it’s not an easy task,” said Clinton Cahall, an electrical engineering doctoral student at Duke and first author of the paper. “None of the commercial options are based on superconductors, which provide the best performance. And while other laboratories have built superconducting detectors with this ability, they’re rare and lack the ease of our setup as well as its sensitivity in important areas such as counting speed or timing resolution.”


Chinese Superconducting Nanowire Single-Photon Detector Sets Efficiency Record

Researchers have demonstrated the fabrication and operation of a superconducting nanowire single-photon detector (SNSPD) with detection efficiency that they believe is the highest on record. The photodetector is made of polycrystalline NbN with system detection efficiency of 90.2 percent for 1550-nm-wavelength photons at 2.1 K. In experiments, the system detection efficiency saturated at 92.1 percent when the temperature was lowered to 1.8 K. The research team believes that such results could pave the way for the practical application of SNSPD for quantum information and other high-end applications.

For their SNSPD device, researchers from the Shanghai Institute of Microsystem and Information Technology and the Chinese Academy of Sciences used an integrated distributed Bragg reflector (DBR) cavity offering near unity reflection at the interface while performing systematic optimization of the NbN nanowire’s meandered geometry. This approach enabled researchers to simultaneously achieve the stringent requirements for coupling, absorption and intrinsic quantum efficiency.

The device exhibited timing jitter down to 79 picoseconds (ps), almost half that of previously reported WSi SNSPDs, promising additional advantages in applications requiring high timing precision. Extensive efforts have been made to develop SNSPDs based on NbN, targeted at operating temperatures above 2 K, which are accessible with a compact, user-friendly cryocooler. Achieving a detection efficiency of more than 90 percent has required the simultaneous optimization of many different factors, including near perfect optical coupling, near perfect absorption and near unity intrinsic quantum efficiency.

The device has been applied in quantum information frontier experiments at the University of Science and Technology of China.


Graphene single photon detectors

Current detectors are efficient at detecting incoming photons that have relatively high energies, but their sensitivity drastically decreases for low-frequency, low-energy photons. In recent years, graphene has been shown to be an exceptionally efficient photodetector for a wide range of the electromagnetic spectrum, enabling new types of applications for this field.

Thus, in a recent paper published in the journal Physical Review Applied, and highlighted in APS Physics, ICFO researcher and group leader Prof. Dmitri Efetov, in collaboration with researchers from Harvard University, MIT, Raytheon BBN Technologies and Pohang University of Science and Technology, have proposed the use of graphene-based Josephson junctions (GJJs) to detect single photons in a wide electromagnetic spectrum, ranging from the visible down to the low end of radio frequencies, in the gigahertz range.

In their study, the scientists envisioned a sheet of graphene placed between two superconducting layers. The Josephson junction created in this way allows a supercurrent to flow across the graphene when it is cooled down to 25 mK. Under these conditions, the heat capacity of the graphene is so low that a single photon hitting the graphene layer heats the electron bath significantly enough to drive the supercurrent resistive, giving rise to an easily detectable voltage spike across the device. They also found that this effect would occur almost instantaneously, enabling the ultrafast conversion of absorbed light into electrical signals and allowing for a rapid reset and readout.
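The detection principle can be illustrated with a back-of-envelope energy balance, ΔT ≈ E_photon / C_e. The electron-bath heat capacity used here is an assumed order-of-magnitude figure for illustration, not a number from the study:

```python
# Single-photon heating in a tiny electron bath: dT = E_photon / C_e.
H = 6.626e-34       # Planck constant, J*s
C_LIGHT = 2.998e8   # speed of light, m/s

def photon_energy_J(wavelength_m):
    """Energy of one photon, E = h*c / wavelength."""
    return H * C_LIGHT / wavelength_m

def temperature_rise_K(wavelength_m, electron_heat_capacity_J_per_K):
    """Temperature jump when one photon's energy lands in the electron bath."""
    return photon_energy_J(wavelength_m) / electron_heat_capacity_J_per_K

# A 1550 nm telecom photon into an ASSUMED 1e-21 J/K electron bath:
dT = temperature_rise_K(1550e-9, 1e-21)
print(f"dT = {dT:.0f} K")  # ~128 K under these assumptions
```

Even with generous assumptions, the jump dwarfs the 25 mK bath temperature, which is why a single photon suffices to break the supercurrent.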

The results of the study confirm that rapid progress can be expected in integrating graphene and other 2-D materials with conventional electronics platforms, such as CMOS chips, and show a promising path towards single-photon-resolving imaging arrays, quantum information processing applications of optical and microwave photons, and other applications that would benefit from the quantum-limited detection of low-energy photons.


DARPA’s Fundamental Limits of Photon Detection—or Detect—program

Current photon detectors, such as semiconductor, superconductor, and biological detectors, have various strengths and weaknesses as measured against eight technical metrics: timing jitter, dark count rate, maximum count rate, bandwidth, efficiency, photon-number resolution, operating temperature, and array size. No single detector currently excels at all eight characteristics simultaneously. The fully quantum model developed and tested in Detect will help determine the potential for creating such a device.

“We want to know whether the basic physics of photon detection allows us, at least theoretically, to have all of the attributes we want simultaneously, or whether there are inherent tradeoffs,” Kumar said. “And if tradeoffs are necessary, what combination of these attributes can I maximize at the same time?”

“The goal of the Detect program is to determine how precisely we can spot individual photons and whether we can maximize key characteristics of photon detectors simultaneously in a single system,” said Prem Kumar, DARPA program manager. “This is a fundamental research effort, but answers to these questions could radically change light detection as we know it and vastly improve the many tools and avenues of discovery that today rely on light detection.”

Photons in the visible range fill at the minimum a cubic micron of space, which might seem to make them easy to distinguish and to count. The difficulty arises when light interacts with matter. A cubic micron of conventional photon-detection material has more than a trillion atoms, and the incoming light will interact with many of those atoms simultaneously. That cloud of atoms has to be modeled quantum mechanically to conclude with precision that a photon was actually there. And modeling at that massive scale hasn’t been possible—until recently.

“For decades we saw few significant advances in photon detection theory, but recent progress in the field of quantum information science has allowed us to model very large and complicated systems,” Kumar said. Advances in nano-science have also been critical, he added. “Nano-fabrication techniques have come a long way. Now not only can we model, but we can fabricate devices to test those models.”

The Fundamental Limits of Photon Detection (Detect) Program will establish the first-principles limits of photon detector performance by developing new models of photon detection in a variety of technology platforms, and by testing those models in proof-of-concept experiments.


DARPA SBIR to improve upon nanowire single-photon detector performance

DARPA issued an SBIR solicitation in 2014 to further improve upon the current state of the art in nanowire single-photon detector performance while advancing the supporting technologies needed for a compact, turn-key commercial system.

New results in superconducting nanowire devices have shown that high detection rates, low dark-count rates (DCRs), and high efficiency are all possible simultaneously with operating temperatures between 1 and 4 K.

Despite these results, further performance improvements are needed. For example, a detection efficiency (DE) above 90% and a bandwidth (BW) approaching 1 GHz have yet to be achieved simultaneously. In addition, innovations leading to a reduction in the system footprint and improved operability will make such technologies more accessible to the relevant scientific and engineering communities.

The final system should provide multiple (>2) independent single-pixel detectors with performance superior to all current commercially available options (DE > 90%, BW ~ 1 GHz, DCR < 1 Hz) in a ~5U 19-inch rack-mount package. To achieve these goals, work under this SBIR may include: efforts to increase fabrication yields through the use of new materials or fabrication techniques; new device designs to improve bandwidth and sensitivity; and efforts to reduce system SWaP through compact, application-specific cooling systems, electronics, and packaging.

The detectors developed under this SBIR will have applications for the DoD which include secure communications and active stand-off imaging systems. The improved availability and SWaP will allow the use of these detectors in all relevant government labs and open the door to new fieldable systems. For example, low power, portable optical communication links exceeding RF system bandwidths by 10-100x may be possible using the technology developed under this SBIR.

Photomultiplier (PMT) Tubes

A PMT consists of a photocathode and a series of dynodes in an evacuated glass enclosure. When a photon of sufficient energy strikes the photocathode, it ejects a photoelectron due to the photoelectric effect. The photocathode material is usually a mixture of alkali metals, which make the PMT sensitive to photons throughout the visible region of the electromagnetic spectrum. The photocathode is at a high negative voltage, typically -500 to -1500 volts.

The photoelectron is accelerated towards a series of additional electrodes called dynodes, each maintained at a successively less negative potential. Additional electrons are generated at each dynode, and this cascading effect creates 10^5 to 10^7 electrons for each photoelectron ejected from the photocathode. The amplification depends on the number of dynodes and the accelerating voltage. The amplified electrical signal is collected at an anode at ground potential, where it can be measured.
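The cascade amplification described above follows a simple power law: if each dynode emits on average δ secondary electrons per incident electron, n dynodes give a total gain of δ^n. A minimal sketch with typical illustrative values (δ and n are not specs of any particular tube):

```python
# PMT dynode-chain gain: each of n dynodes multiplies the electron count
# by the secondary emission ratio delta, so total gain = delta ** n.

def pmt_gain(secondary_emission_ratio, n_dynodes):
    """Electrons reaching the anode per photoelectron from the photocathode."""
    return secondary_emission_ratio ** n_dynodes

# 10 dynodes with delta = 4 already lands in the quoted 1e5-1e7 range:
print(pmt_gain(4, 10))  # 1048576, i.e. ~1e6 electrons per photoelectron
```

Raising the accelerating voltage increases δ at each stage, which is why the gain depends on both the dynode count and the applied voltage.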

PMTs can have large active areas, but they suffer from low efficiency (~10%), high jitter (~150 ps) and a high dark count rate. They are fragile, bulky, sensitive to magnetic fields, require very high operating voltages, and are not conducive to making large-format detector arrays. Moreover, their sensitivity in the SWIR spectral band is poor. Although the PMT still plays an important role in some applications today, as with many vacuum-tube-based devices, this 80-year-old technology is gradually being replaced by newer solid-state devices.



DARPA developing high bandwidth neural interfaces for treating sensory disorders, and developing Brain Warfare systems

DARPA announced NESD in January 2016 with the goal of developing an implantable system able to provide precision communication between the brain and the digital world. Such an interface would convert the electrochemical signaling used by neurons in the brain into the ones and zeros that constitute the language of information technology, and do so at far greater scale than is currently possible. The work has the potential to significantly advance scientists’ understanding of the neural underpinnings of vision, hearing, and speech and could eventually lead to new treatments for people living with sensory deficits.

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

“The NESD program looks ahead to a future in which advanced neural devices offer improved fidelity, resolution, and precision sensory interface for therapeutic applications,” said Phillip Alvelda, the founding NESD Program Manager. “By increasing the capacity of advanced neural interfaces to engage more than one million neurons in parallel, NESD aims to enable rich two-way communication with the brain at a scale that will help deepen our understanding of that organ’s underlying biology, complexity, and function.”

Although the goal of communicating with one million neurons sounds lofty, Alvelda noted, “A million neurons represents a miniscule percentage of the 86 billion neurons in the human brain. Its deeper complexities are going to remain a mystery for some time to come. But if we’re successful in delivering rich sensory signals directly to the brain, NESD will lay a broad foundation for new neurological therapies.”

The research would enable highly efficient brain-computer interfaces that could be applied in neuroprosthetics, through which paralyzed persons can control robotic arms; in neurogaming, where one can control a keyboard, mouse and the like with one's thoughts and play games; in neuroanalysis (psychology); and in defense, to control robotic soldiers or fly planes with thoughts.

This would also result in efficient brain-control devices. Researchers at the University of Zurich have identified the brain mechanism that governs decisions between honesty and self-interest; using non-invasive brain stimulation, they could even increase honest behavior. Governments are also interested in mind control of people to spread propaganda while disrupting dissent, and militaries are interested in mind control of soldiers. A whistleblower has recently made claims about a secret DARPA military mind-control project at a major university.


DARPA has awarded contracts to five research organizations and one company that will support the Neural Engineering System Design (NESD) program: Brown University; Columbia University; Fondation Voir et Entendre (The Seeing and Hearing Foundation); John B. Pierce Laboratory; Paradromics, Inc.; and the University of California, Berkeley.

These organizations have formed teams to develop the fundamental research and component technologies required to pursue the NESD vision of a high-resolution neural interface and integrate them to create and demonstrate working systems able to support potential future therapies for sensory restoration. Four of the teams will focus on vision and two will focus on aspects of hearing and speech.


DARPA’s  “Neural Engineering System Design” program

DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program’s first year will focus on making fundamental breakthroughs in hardware, software, and neuroscience, and testing those advances in animals and cultured cells. Phase II of the program calls for ongoing basic studies, along with progress in miniaturization and integration, with attention to possible pathways to regulatory approval for human safety testing of newly developed devices. As part of that effort, researchers will cooperate with the U.S. Food and Drug Administration (FDA) to begin exploration of issues such as long-term safety, privacy, information security, compatibility with other devices, and the numerous other aspects regulators consider as they evaluate potential applications of new technologies.

The NESD call for proposals laid out a series of specific technical goals, including development of an implantable package that accounts for power, communications, and biocompatibility concerns. Part of the fundamental research challenge will be developing a deep understanding of how the brain processes hearing, speech, and vision simultaneously with individual neuron-level precision and at a scale sufficient to represent detailed imagery and sound. The selected teams will apply insights into those biological processes to the development of strategies for interpreting neuronal activity quickly and with minimal power and computational resources.

“Significant technical challenges lie ahead, but the teams we assembled have formulated feasible plans to deliver coordinated breakthroughs across a range of disciplines and integrate those efforts into end-to-end systems,” Alvelda said.

Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.

Successful NESD proposals must culminate in the delivery of complete, functional, implantable neural interface systems and the functional demonstration thereof. The final system must read at least one million independent channels of single-neuron information and stimulate at least one hundred thousand channels of independent neural action potentials in real-time. The system must also perform continuous, simultaneous full-duplex interaction with at least one thousand neurons. While DARPA desires a single 1 cm3 device that satisfies all of these capabilities (read, write, and full-duplex), proposers may propose a design wherein each capability is embodied in separate 1 cm3 devices. Proposed implementations must not require tethers or percutaneous connectors for powering or facilitating communication between the implanted and external portions of the system.

DARPA anticipates investing up to $60 million in the NESD program over four years. NESD is part of a broader portfolio of programs within DARPA that support President Obama's BRAIN Initiative.


Details of DARPA’s NESD awards

The teams’ approaches include a mix of fundamental research and applied science and engineering. The teams will either pursue development and integration of complete NESD systems, or advance particular aspects of the research, engineering, and mathematics required to achieve the NESD vision, providing new tools, capabilities, and understanding. Summaries of the teams’ proposed research appear below:

A Brown University team led by Dr. Arto Nurmikko will seek to decode neural processing of speech, focusing on the tone and vocalization aspects of auditory perception. The team’s proposed interface would be composed of networks of up to 100,000 untethered, submillimeter-sized “neurograin” sensors implanted onto or into the cerebral cortex. A separate RF unit worn or implanted as a flexible electronic patch would passively power the neurograins and serve as the hub for relaying data to and from an external command center that transcodes and processes neural and digital signals.

“What we’re developing is essentially a micro-scale wireless network in the brain, enabling us to communicate directly with neurons on a scale that hasn’t previously been possible,” Arto Nurmikko, a professor of engineering at Brown, said in a statement. “The understanding of the brain we can get from such a system will hopefully lead to new therapeutic strategies involving neural stimulation of the brain, which we can implement with this new neurotechnology.”

A Columbia University team led by Dr. Ken Shepard will study vision and aims to develop a non-penetrating bioelectric interface to the visual cortex that could eventually enable computers to see what we see, or potentially allow human brains to tap directly into video feeds. The team envisions layering over the cortex a single, flexible complementary metal-oxide semiconductor (CMOS) integrated circuit containing an integrated electrode array. A relay station transceiver worn on the head would wirelessly power and communicate with the implanted device.

A Fondation Voir et Entendre team led by Drs. Jose-Alain Sahel and Serge Picaud will study vision. The team aims to apply techniques from the field of optogenetics to enable communication between neurons in the visual cortex and a camera-based, high-definition artificial retina worn over the eyes, facilitated by a system of implanted electronics and micro-LED optical technology.

A John B. Pierce Laboratory team led by Dr. Vincent Pieribone will also study vision. The team will pursue an interface system in which modified neurons capable of bioluminescence and responsive to optogenetic stimulation communicate with an all-optical prosthesis for the visual cortex.

A Paradromics, Inc., team led by Dr. Matthew Angle aims to create a high-data-rate cortical interface using large arrays of penetrating microwire electrodes for high-resolution recording and stimulation of neurons. As part of the NESD program, the team will seek to build an implantable device to support speech restoration. Paradromics’ microwire array technology exploits the reliability of traditional wire electrodes, but by bonding these wires to specialized CMOS electronics the team seeks to overcome the scalability and bandwidth limitations of previous approaches using wire electrodes.

A University of California, Berkeley, team led by Dr. Ehud Isacoff aims to develop a novel “light field” holographic microscope that can detect and modulate the activity of up to a million neurons in the cerebral cortex. The team will attempt to create quantitative encoding models to predict the responses of neurons to external visual and tactile stimuli, and then apply those predictions to structure photo-stimulation patterns that elicit sensory percepts in the visual or somatosensory cortices, where the device could replace lost vision or serve as a brain-machine interface for control of an artificial limb.

DARPA structured the NESD program to facilitate commercial transition of successful technologies. Key to ensuring a smooth path to practical applications, teams will have access to design assistance, rapid prototyping, and fabrication services provided by industry partners whose participation as facilitators was organized by DARPA and who will operate as sub-contractors to the teams.





Metamaterials promise invisible armies, protection from earthquakes & tsunamis

“Invisibility” is a goal that has long been sought after by militaries. Currently, the military is also backing the creation of a “Quantum Stealth” camouflage material that makes its wearers completely invisible to the naked eye by bending light waves around them. A Canadian company is spearheading that effort, which is, like Quantum Stealth itself, shrouded in secrecy. Israeli researchers are also working on a “cloaking carpet” that uses a similar light-deflecting technology. Metamaterials are the primary materials used for cloaking, making platforms, weapons and personnel invisible to electro-optic sensors, radars and sonars.


Metamaterials are artificially structured materials designed to control and manipulate physical phenomena such as light and other electromagnetic waves, sound waves and seismic waves in unconventional ways, resulting in exotic behavior that is not found in nature. They are predicted to be able to protect buildings from earthquakes by bending seismic waves around them. Similarly, tsunami waves could be bent around towns, and sound waves could be bent around a room to make it soundproof.



NATO troops will soon become invisible to radar and thermal cameras thanks to a groundbreaking fabric developed in Turkey, officials said Sunday. The fabric, which has reportedly passed tests by the Turkish Armed Forces, spreads a person’s body heat to confuse thermal cameras. It also makes it easier for soldiers to hide from night vision scopes and other detectors. A team of researchers at Moscow’s National University of Science and Technology (NUST MISIS) have come up with a unique metamaterial which can make combat vehicles invisible, the authoritative scientific journal Physical Review wrote.


An operational cloaking chip could be an extension of technologies such as radar-absorbing dark paint used on stealth aircraft, local optical camouflage, surface cooling to minimize electromagnetic IR emissions, or electromagnetic wave scattering.

But an invisibility cloak needn’t be a sinister tool of war. Vanderbilt’s Valentine suggests architectural usage. “You could use this technology to hide supporting columns from sight, making a space feel completely open,” he said. Other potential uses include rendering parts of an aircraft invisible so that pilots can see below the cockpit, or eliminating the blind spot in a car. Toyota has recently patented a cloaking device designed to turn the A-pillars to the left and right of a car’s dashboard invisible, improving road visibility for the driver.


Invisibility Cloaks

“The idea behind metamaterials is to mimic the way atoms interact with light, but with artificial structures much smaller than the wavelength of light itself,” said Boris Kuhlmey from the University of Sydney. This way, their properties are derived from both the inherent properties from their base materials as well as the way they are assembled, such as the design of their shape, geometry, size, orientation and arrangement. Thus optical properties are no longer restricted to those of the constituent materials, and can be designed almost arbitrarily. Metamaterial-enabled devices have a wide range of applications in the RF, THz, IR, and visible spectrum.


Different metamaterials and methods are being tried for cloaking. Metamaterial cloaking is the use of such materials to build devices that can hide something, so that a defined region of space is invisibly isolated from passing electromagnetic fields (or sound waves). Researchers from the Max Planck Institute are working on mimicking the biology of moths’ eyes to turn lenses and glass invisible. Meanwhile, a group from the University of California, San Diego is controlling how light reflects off objects using a thin “carpet cloak” made from Teflon and ceramic particles.

Radar Cloaks

In 2006 researchers demonstrated it was possible to absorb or direct electromagnetic waves around an object through a coating and make it “invisible”. However, it only worked on microwaves and in two dimensions. Moreover previous cloaking efforts required materials as much as 10 times thicker than the wavelength being dodged. Missile guidance and marine radar wavelengths measure roughly 3 centimeters; that would require about a foot of coating.

Russian scientists develop metamaterial to make combat vehicles invisible in radio and infrared waves

“The experimental part of our research was the creation of a one-of-a-kind metamaterial consisting of a small flat grid of so-called meta-molecules cut out from a solid piece of ordinary steel,” the project’s director Alexei Basharin was quoted as saying by the NUST MISIS press service. The NUST MISIS team worked closely with colleagues from the University of Crete, Greece.


Basharin said that thanks to the special shape and configuration of these cells the scientists managed to obtain metamaterial with absolutely unique properties. This metamaterial can be used to make supersensitive sensors to detect explosives and chemical weapons.


“An addition of a nonlinear semiconductor will turn the metamaterial into an adjustable screen for stealth technologies, which make fighting vehicles less visible in radio, infrared and other bands,” the NUST MISIS press service said in a statement. The newly obtained metamaterial could also become a vital element of the latest types of lasers and serve as the basis for quantum computers.


Ultra-thin Dielectric metasurface cloak of University of California-San Diego

Boubacar Kante, a professor at the University of California, San Diego, and his colleagues have developed the first effective “dielectric metasurface cloak”, based on a new material consisting of a Teflon substrate with tiny ceramic cylinders embedded in it.


Kante said his material requires a thickness of only one-tenth of the wavelength. Hiding from that same 3 cm wavelength would thus require only about a 3 mm coat. Thinner versions could be used for electromagnetic waves as short as those of visible light (which ranges from about 400 to 700 nanometers). The cloak could be useful for unmanned aerial vehicles and other aircraft, ships, and anything else that needs to evade radar.
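The scaling above is easy to sanity-check in a few lines of Python. Only the 10x and one-tenth wavelength fractions come from the text; the 10 GHz X-band frequency is an illustrative assumption:

```python
# Back-of-the-envelope check of the coating thicknesses quoted above.
# The 10x and 1/10 wavelength fractions come from the article; the
# 10 GHz X-band frequency is an illustrative assumption.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength for a given frequency."""
    return C / freq_hz

def coating_thickness_m(wavelength: float, fraction: float) -> float:
    """Coating thickness expressed as a fraction of the wavelength."""
    return wavelength * fraction

lam = wavelength_m(10e9)  # ~3 cm, typical of missile-guidance/marine radar
print(f"wavelength:         {lam * 100:.1f} cm")
print(f"early cloak (10x):  {coating_thickness_m(lam, 10.0) * 100:.0f} cm")
print(f"metasurface (1/10): {coating_thickness_m(lam, 0.1) * 1000:.0f} mm")
```

Running this reproduces the article’s figures: roughly a foot of coating for the early approach versus a coat of about 3 mm for the dielectric metasurface.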


Iowa State engineers develop flexible skin that traps radar waves, cloaks objects

Iowa State University engineers have developed a new flexible, stretchable and tunable “meta-skin” that uses rows of small, liquid-metal devices to cloak an object from radar. By stretching and flexing the polymer meta-skin, it can be tuned to reduce the reflection of a wide range of radar frequencies.


“It is believed that the present meta-skin technology will find many applications in electromagnetic frequency tuning, shielding and scattering suppression,” lead authors Liang Dong, an associate professor, and Jiming Song, a professor, wrote in the journal Scientific Reports.


The meta-skin is composed of rows of split-ring resonators embedded inside layers of silicone sheets. The electric resonators are filled with galinstan, a metal alloy that is liquid at room temperature and less toxic than other liquid metals such as mercury. The resonators are small rings with an outer radius of 2.5 millimeters and a thickness of half a millimeter. Each has a 1-millimeter gap, essentially creating a small, curved segment of liquid wire.


The rings create electric inductors and the gaps create electric capacitors. Together they create a resonator that can trap and suppress radar waves at a certain frequency. Stretching the meta-skin changes the size of the liquid metal rings inside and changes the frequency the devices suppress.
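The ring-and-gap structure behaves like an LC circuit, so its resonant frequency follows f0 = 1/(2π√(LC)). A hedged Python sketch of the tuning mechanism; the inductance and capacitance values below are invented to land near the 8-10 gigahertz band the paper reports, and are not measurements of the real device:

```python
import math

def resonant_freq_hz(inductance_h: float, capacitance_f: float) -> float:
    """LC resonance: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values chosen to land near the reported 8-10 GHz band;
# not measurements of the actual meta-skin.
L0 = 2.0e-9     # ring inductance, henries
C0 = 0.14e-12   # gap capacitance, farads

f0 = resonant_freq_hz(L0, C0)

# Stretching the skin enlarges the rings; loop inductance grows roughly
# with ring size, so model a 20% stretch as a 20% rise in inductance.
f_stretched = resonant_freq_hz(L0 * 1.2, C0)

print(f"relaxed:   {f0 / 1e9:.2f} GHz")
print(f"stretched: {f_stretched / 1e9:.2f} GHz")  # resonance shifts lower
```

The point of the sketch is the direction of the effect: enlarging the rings raises L, so the suppressed frequency slides downward as the skin is stretched.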


Tests showed radar suppression was about 75 percent in the frequency range of 8 to 10 gigahertz, according to the paper. When objects are wrapped in the meta-skin, the radar waves are suppressed in all incident directions and observation angles.


Optical Cloaks

Using a metasurface and an integrated photonics platform, researchers have conceived a method of achieving invisibility cloaks by tailoring evanescent fields. The approach deflects and scatters light away from a ‘cloaking’ chip surface so it is not detected. The scattering fields of the object located on the cloak do not interact with the evanescent field, rendering the object invisible.

To design a plasmonic waveguide-based invisibility cloaking scheme, Ben-Gurion University of the Negev (BGU) researchers performed an analysis of the modal distribution and surface intensity in a channel photonic waveguide with a metasurface overlayer. The spatial distribution of the metasurface permittivity was analytically calculated based on transformation optics principles. The spatial distribution was then imported into a commercial Maxwell solver using the finite-difference time-domain (FDTD) method.
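FDTD solvers march Maxwell’s equations forward in time on a staggered (Yee) grid, with electric and magnetic fields updated in alternating half-steps. A minimal one-dimensional sketch of that leapfrog update, with an invented permittivity profile standing in for the imported metasurface map; this illustrates the method in general, not the BGU simulation:

```python
import math

# Minimal 1D sketch of the FDTD (Yee) leapfrog scheme that commercial
# Maxwell solvers generalize to 3D. The grid, source and permittivity
# profile are invented for illustration; this is not the BGU setup.

N = 400                      # grid cells
eps = [1.0] * N              # relative permittivity profile
for i in range(200, 260):
    eps[i] = 4.0             # a dielectric slab standing in for the imported map

Ez = [0.0] * N               # electric field (normalized units)
Hy = [0.0] * N               # magnetic field (normalized units)
S = 0.5                      # Courant number; stable for S <= 1 in 1D

for n in range(600):
    for i in range(N - 1):                       # H half-step
        Hy[i] += S * (Ez[i + 1] - Ez[i])
    for i in range(1, N):                        # E half-step, scaled by eps
        Ez[i] += (S / eps[i]) * (Hy[i] - Hy[i - 1])
    Ez[50] += math.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source

peak = max(abs(e) for e in Ez)
print(f"peak |Ez| after 600 steps: {peak:.3f}")
```

Commercial solvers generalize these same two update loops to three dimensions, add absorbing boundaries, and read the permittivity array from the design tool, which is what the imported metasurface map provides.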

Researchers demonstrated cloaking for a cylindrical object with a diameter equal to 70 percent of the waveguide width, on a high-index ridge waveguide structure with a silicon nitride guiding layer on a silica substrate. “We showed that it is possible to bend the light around an object located on the cloak on an optical chip. The light does not interact with the object, thus resulting in the object’s invisibility,” said Alina Karabchevsky, head of BGU’s Light-on-a-Chip Group.
“These results open the door to new integrated photonic devices, harnessing electromagnetic fields of light at nanoscale for a variety of applications from on-chip optical devices to all-optical processing,” Karabchevsky said. The researchers’ next step will be to develop a prototype.

Turkey-made invisible fabric to be sold to NATO countries

A Turkey-made invisible fabric that cannot be spotted by radars and thermal cameras will soon be sold to NATO countries.

The fabric, developed at the Sun Textile and Research Development Center, passed tests by the Turkish Armed Forces and is now awaiting approval from Turkey’s Defense Ministry for export.

Sabri Ünlütürk, chairman of the executive board of Sun Holding, told state-run Anadolu Agency on Nov. 16 that the fabric was invented by two scientists at Teknokent of Hacettepe University in Ankara.

He added that they began producing the fabric in their factory in the western province of İzmir, and came third after the U.S. and Israel in this particular technology.

“We are proud that the Turkish army is using this fabric. The previous products were only for visual camouflage,” Ünlütürk added.

He said the fabric spreads body heat in a way that makes the person wearing it impossible to be spotted by thermal cameras.

The tests for the camouflage uniforms are currently underway.

“These uniforms are designed for our soldiers to hide themselves from night vision scopes. Military units are testing them,” Ünlütürk said.

3D invisibility “skin” cloak at Berkeley

Berkeley researchers have devised an ultra-thin invisibility “skin” cloak that can conform to the shape of an object and hide it from detection with visible light. Although this cloak is only microscopic in size, the principles behind the technology should enable it to be scaled-up to conceal macroscopic items as well.


They demonstrated a metasurface cloak made from an ultrathin, 80-nanometer-thick layer of gold nanoantennas that was wrapped around a three-dimensional object about the size of a few biological cells, arbitrarily shaped with multiple bumps and dents. The surface of the skin cloak was meta-engineered to reroute reflected light waves so that the object was rendered invisible to optical detection when the cloak was activated.
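The rerouting works by phase compensation: each nanoantenna adds the reflection phase needed to cancel the extra round-trip path its local bump introduces, so the reflected wavefront looks as if it came off a flat mirror. A sketch of that geometry, using the roughly 730 nm operating wavelength reported for the experiment and a made-up bump profile:

```python
import math

# Hedged sketch of the phase-compensation idea behind the skin cloak:
# each nanoantenna adds the reflection phase needed to cancel the extra
# round-trip path its local bump introduces. The bump heights are made
# up; the ~730 nm operating wavelength is as reported for the experiment.

WAVELENGTH_NM = 730.0
K = 2.0 * math.pi / WAVELENGTH_NM  # free-space wavenumber, rad/nm

def compensating_phase(height_nm: float, theta_rad: float = 0.0) -> float:
    """Phase (rad, wrapped to [0, 2*pi)) cancelling the extra round-trip
    path 2*h*cos(theta) introduced by a bump of height h."""
    return (2.0 * K * height_nm * math.cos(theta_rad)) % (2.0 * math.pi)

for h in (0.0, 100.0, 200.0, 300.0):
    print(f"bump {h:5.1f} nm -> antenna adds {compensating_phase(h):.2f} rad")
```

Because the needed phase depends on local height, a cloak built this way only works for the one shape it was patterned for, which is why the Berkeley device must be “meta-engineered” to its specific object.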


When the cloak is turned “on,” the bump-shaped object being illuminated in the center white spot disappears from view. The object reappears when the cloak is turned “off.” This is the first time a 3D object of arbitrary shape has been cloaked from visible light.


Invisibility in diffusive light scattering media

In 2014, scientists demonstrated good cloaking performance in murky water, showing that an object shrouded in fog can disappear completely when appropriately coated with metamaterial. This is due to the random scattering of light, such as that which occurs in clouds, fog, milk, frosted glass, etc., combined with the properties of the metamaterial coating. When light is diffused, a thin coat of metamaterial around an object can make it essentially invisible under a range of lighting conditions.


While metamaterials may not yet make objects invisible to the eye, they could be used to redirect other kinds of waves, including mechanical waves such as sound and ocean waves. Ong points to the possibility of using what has been learnt in reconfiguring the geometry of materials to divert tsunamis from strategic buildings. French researchers earlier this year, for example, diverted seismic waves around specially placed holes in the ground, reflecting the waves backward.


Toyota Patents a “Cloaking Device”

Toyota is developing a device that would allow objects to turn invisible, or at least transparent. The Japanese car maker recently received a patent from the U.S. Patent and Trademark Office for a device meant to improve the visibility of drivers.

According to Toyota, such a technology is already possible — like the Rochester Cloak — but it would require video cameras and other expensive equipment for it to work in cars. This cloaking device, on the other hand, would be a less expensive solution. It would use mirrors to bend visible light around the A-pillars to allow the driver to “see” through them. This would give drivers a wider view of the road and their surroundings. It also benefits pedestrians, as drivers would see them better.

“Light from an object on an object-side of the cloaking device [i.e., facing the road] is directed around an article [the A-pillars] within the cloaking region and forms an image on an image-side of the cloaking device [i.e., facing the driver’s seat] such that the article appears transparent to an observer looking towards the object,” according to a description of the device in the patent.


Acoustic Cloak

In 2014 researchers created a 3D acoustic cloak from stacked plastic sheets dotted with repeating patterns of holes. The pyramidal geometry of the stack and the hole placement provide the effect.


Prof Martin Wegener of the Karlsruhe Institute of Technology works on cloaking, but his aim is not to make things invisible. He wants to hide them from physical forces, and last year his lab produced a honeycomb-like material that made an object beneath it unfeelable. This particular metamaterial was a solid lattice that acts like a fluid in certain ways, deflecting pressure around its hidden cargo.


The hidden cylinder in that case was very small (less than 1 mm), but related work by Prof Wegener’s team was picked up by French physicists and engineers, who showed that a careful pattern of drilled holes could divert damaging earthquake vibrations.


But an invisibility cloak needn’t be a sinister tool of war. Metamaterials could also absorb and emit light with extremely high efficiency — for example in a high-resolution ultrasound — or redirect light over a very small distance. This, says Anthony Vicari of Lux Research, “could be used to improve fibre optical communications networks, or even for optical communications within microchips for faster computing.”


References and Resources also include:

Psychological warfare: an essential element from Russia’s Gerasimov doctrine to China’s Three Warfares to DARPA’s mind control

Psychological warfare consists of attempts to make your enemy lose confidence, give up hope, or feel afraid, so that you can win. It involves the planned use of propaganda and other psychological operations to influence the opinions, emotions, motives, reasoning, attitudes, and behavior of opposition groups. Psychological operations target foreign governments, organizations, groups and individuals. They are used to induce confessions or reinforce attitudes and behaviors favorable to the originator’s objectives, and are sometimes combined with black operations or false flag tactics.



According to U.S. military analysts, attacking the enemy’s mind is an important element of the People’s Republic of China’s military strategy. This type of warfare is rooted in the Chinese Stratagems outlined by Sun Tzu in The Art of War and Thirty-Six Stratagems.


It is also used to destroy the morale of enemies through tactics that aim to depress troops’ psychological states. Civilians of foreign territories can also be targeted by technology and media so as to cause an effect in the government of their country. Psychological warfare (PSYWAR), or the basic aspects of modern psychological operations (PSYOP), have been known by many other names or terms, including MISO, Psy Ops, Political Warfare, “Hearts and Minds”, and propaganda.


In 2016, Russia was accused of using thousands of covert human agents and robot computer programs to spread disinformation referencing the stolen campaign emails of Hillary Clinton, amplifying their effect. Social media has recently become an important medium for conducting psychological warfare, for actors ranging from terrorists to nation states. Russian influence operations on social media have been reported to alter the course of events in the U.S. by manipulating public opinion.


Facebook – which testified in front of Congress alongside Google and Twitter – admitted in October that Russia-backed content reached as many as 126 million Americans on the social network during the 2016 presidential election. In October, Twitter released to the US Congress a list of 2,752 accounts it believes were created by Russian actors in an attempt to sway the election. In October, British MP Damian Collins asked Facebook to investigate its own records for evidence that Russia-linked accounts were used to interfere in the EU referendum, and later asked Twitter to do the same.


Facebook has now launched a new tool to allow users to see if they have liked or followed Russian propaganda accounts. The social network says its tool will allow users to see whether they interacted with a Facebook page or Instagram account created by the Internet Research Agency (IRA), a state-backed organisation based in St Petersburg that carries out online misinformation operations.

Psychological warfare

The US DOD categorizes PSYWAR as a type of information operation (IO), previously referred to as command and control warfare (C2W). IO consists of five core capabilities that are used in concert, and with any related capabilities, to influence, disrupt, corrupt, or take over an enemy’s decision-making process. They include psychological operations (PsyOp), military deception (MILDEC), operations security (OPSEC), electronic warfare (EW), and computer network operations (CNO). IO is basically a way of interfering with the various systems that a person uses to make decisions.


DOD defines PSYOP as planned operations to convey selected information to targeted foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals. For example, during Operation Iraqi Freedom (OIF), broadcast messages were sent from Air Force EC-130E aircraft and from Navy ships operating in the Persian Gulf, along with a barrage of e-mail, faxes, and cell phone calls to numerous Iraqi leaders encouraging them to abandon support for Saddam Hussein.


At the same time, the civilian Al Jazeera news network, based in Qatar, beams its messages to well over 35 million viewers in the Middle East, and is considered by many to be a “market competitor” for U.S. PSYOP. Terrorist groups can also use the Internet to quickly place their own messages before an international audience.


Some observers have stated that the U.S. will continue to lose ground in the global media wars until it develops a coordinated strategic communications strategy to counter competitive civilian news media, such as Al Jazeera. Partly in response to this observation, DOD now emphasizes that PSYOP must be improved and focused against potential adversary decision-making, sometimes well in advance of times of conflict. Products created for PSYOP must be based on in-depth knowledge of the audience’s decision-making processes. Using this knowledge, PSYOP products must then be produced rapidly and disseminated directly to targeted audiences throughout the area of operations.


Neocortical warfare is RAND’s version of PsyOp that controls the behavior of the enemy without physically harming them. RAND describes the neocortical system as consciousness, perception, and will. Neocortical warfare regulates the enemy’s neocortical system by interfering with their continuous cycle of observation, orientation, decision, and action. It presents the enemy with perceptions, sensory, and cognitive data designed to result in a narrow set of conclusions, and ultimately actions.


The success of psychological warfare is due to peculiarities of our minds. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued: the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Another study found that, once formed, impressions are remarkably perseverant. Stanford researchers noted that even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs.”


China’s three Warfares strategy

In 2003, the Central Military Commission (CMC) approved the guiding conceptual umbrella for information operations for the People’s Liberation Army (PLA) – the “Three Warfares” (san zhong zhanfa). The concept is based on three mutually reinforcing strategies: (1) the coordinated use of strategic psychological operations; (2) overt and covert media manipulation; and (3) legal warfare designed to manipulate strategies, defense policies, and perceptions of target audiences abroad.


At the operational level, the “Three Warfares” became the responsibility of the PLA’s General Political Department Liaison Department (GPD/LD), which conducts diverse political, financial, military, and intelligence operations.


Traditionally, the primary target for China’s information and political warfare campaigns has been Taiwan, with the GPD-LD activities and operations attempting to exploit political, cultural, and social frictions inside Taiwan, undermining trust between varying political-military authorities, delegitimizing Taiwan’s international position, and gradually subverting Taiwan’s public perceptions to “reunite” Taiwan on Beijing’s terms. In the process, the GPD-LD has directed, managed, or guided a number of political, military, academic, media, and intelligence assets that have either overtly or covertly served as agents of influence.


In 2016, this concept was at work after the UNCLOS tribunal ruled against China in a comprehensive verdict dismissing China’s claims in the South China Sea. Despite the fact that the Philippines achieved a major international victory against the depredations of a more powerful and more aggressive neighbour, China, with its application of the Three Warfares, was able to successfully co-opt Philippine President Rodrigo Duterte to its side.


The most recent incident was the comprehensive psychological warfare campaign unleashed by China during the India-China Doklam crisis. Beijing tried to exploit political divisions and sow dissension in India by calling Sushma Swaraj a “liar”, reaching out to Modi’s opponents, including Rahul Gandhi, and attacking his “Hindu nationalism”. The aim was to use Indians to put pressure on the Indian government and get it to withdraw, largely by casting doubt on India’s own assertions.


The daily threats to teach India a lesson; the intimidation with aggressive warnings of escalation; the military exercises and dire warnings about great losses in war; the reminders of India’s defeat in 1962 and of how weak it is; the warnings that China would rescind its decision on Sikkim, “free” Sikkim from Indian oppression, or interfere in J&K — all were intended to “undermine India’s ability to conduct combat operations through psychological operations aimed at deterring, shocking and demoralizing enemy military personnel.”

Russia’s Gerasimov doctrine

US media has discovered the “Gerasimov Doctrine”, based on a 2013 essay in which the Chief of the General Staff of the Armed Forces of Russia, Valery Gerasimov, described different types of modern warfare that could loosely be termed “hybrid war.”


In roughly 2,000 words, Mr. Gerasimov outlines a new theory of modern warfare, which turns hackers, media, social networks, and businessmen into weapons of war — and keys to victory. “The role of nonmilitary means of achieving political and strategic goals has grown,” Mr. Gerasimov writes, “and, in many cases, they have exceeded the power of force of weapons in their effectiveness. … All this is supplemented by military means of a concealed character.”


The Russian military, according to experts, practices a repertoire of lethal tricks known as maskirovka, or masking: operations of deceit and disguise. The idea behind maskirovka is to keep the enemy guessing, never admitting your true intentions, always denying your activities and using all means, political and military, to maintain an edge of surprise for your soldiers. The doctrine, military analysts say, is in this sense “multilevel.” It draws no distinction between disguising a soldier as a bush or a tree with green and patterned clothing, a lie of a sort, and high-level political disinformation and cunning evasions.


However, RT, calling it a hoax, writes: “The FT attempts to back up its argument with mentions of Crimea, allegations of US election hacking and information war, using these as examples of a sudden Russian discovery of non-linear methods. Yet the author is not self-aware enough to realize that the US has been using composite techniques like sanctions and revolutions, whether color or otherwise, to achieve strategic goals for decades. Economic penalties or the removal of legitimate governments are clearly forms of ‘hybrid war’ which pre-date Gerasimov, Makarov, and Putin himself.”

Use of Social media for psychological warfare

Social media has enabled the use of disinformation on a wide scale. Analysts have found evidence of doctored or misleading photographs spread by social media in the Syrian Civil War and 2014 Russian military intervention in Ukraine, possibly with state involvement.


The 15-member UN Security council body expressed its grave concern at the increase of foreign fighters joining the Islamic State in Iraq and the Levant/Sham (ISIL/ISIS or Da’esh), Al-Qaida and other groups to over 25,000. BAN KI-MOON, Secretary-General of the United Nations, said that the 70 per cent increase in foreign terrorist combatants between the middle of 2014 and March 2015 meant more fighters on the front lines in Syria and Iraq, as well as in Afghanistan, Yemen and Libya.


One of the reasons for the large increase in foreign fighters is these groups’ successful use of social media to recruit, radicalise and raise funds. Terrorist groups are increasingly using platforms like YouTube, Facebook and Twitter to further their goals and spread their message, because of the convenience, affordability and broad reach of social media.


DARPA Is Using Mind Control Techniques to Manipulate Social Media

DARPA launched its SMISC program in 2011 to examine ways social networks could be used for propaganda under Military Information Support Operations (MISO), formerly known as psychological operations.


“With the spread of blogs, social networking sites and media-sharing technology, and the rapid propagation of ideas enabled by these advances, the conditions under which the nation’s military forces conduct operations are changing nearly as fast as the speed of thought. DARPA has an interest in addressing this new dynamic and understanding how social network communication affects events on the ground as part of its mission of preventing strategic surprise.”


The general goal of the Social Media in Strategic Communication (SMISC) program is to develop a new science of social networks built on an emerging technology base. Through the program, DARPA seeks to develop tools to help identify misinformation or deception campaigns and counter them with truthful information, reducing adversaries’ ability to manipulate events.


To accomplish this, SMISC will focus research on linguistic cues, patterns of information flow and detection of sentiment or opinion in information generated and spread through social media. Researchers will also attempt to track ideas and concepts to analyze patterns and cultural narratives. If successful, they should be able to model emergent communities and analyze narratives and their participants, as well as characterize generation of automated content, such as by bots, in social media and crowd sourcing.
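As a purely illustrative example of what a cue for automated content might look like (the heuristics and thresholds below are invented for this sketch, not SMISC’s actual methods), consider flagging accounts that post near-identical text at suspiciously regular intervals:

```python
# Toy illustration of behavioral cues for automated content: duplicate
# posts and unnaturally regular posting intervals. Both heuristics and
# thresholds are invented; real research combines many such signals.

from statistics import pstdev

def duplicate_ratio(posts):
    """Fraction of posts that exactly repeat an earlier post."""
    seen, dupes = set(), 0
    for p in posts:
        key = p.strip().lower()
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes / len(posts) if posts else 0.0

def interval_regularity(timestamps):
    """Std deviation of gaps between posts; near zero looks scripted."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) if len(gaps) > 1 else float("inf")

def looks_automated(posts, timestamps):
    return duplicate_ratio(posts) > 0.5 or interval_regularity(timestamps) < 1.0

bot_posts = ["Buy now!", "Buy now!", "Buy now!", "Buy now!"]
bot_times = [0.0, 60.0, 120.0, 180.0]        # exactly every 60 seconds
print(looks_automated(bot_posts, bot_times))  # True
```

Production systems would treat each heuristic as one weak signal among many and feed them into a learned classifier rather than using fixed thresholds.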


SMISC researchers will create a closed and controlled environment where large amounts of data are collected, with experiments performed in support of development and testing. One example of such an environment might be a closed social media network of 2,000 to 5,000 people who have agreed to conduct social media-based activities in this network and agree to participate in required data collection and experiments. This network might be formed within a single organization, or span several. Another example might be a role-player game where use of social media is central to that game and where players have again agreed to participate in data collection and experiments.


Some of the research projects funded by the SMISC program included studies that analyzed the Twitter followings of Lady Gaga and Justin Bieber among others; investigations into the spread of Internet memes; a study by the Georgia Tech Research Institute into automatically identifying deceptive content in social media with linguistic cues; and “Modeling User Attitude toward Controversial Topics in Online Social Media”—an IBM Research study that tapped into Twitter feeds to track responses to topics like “fracking” for natural gas.

Defense Advanced Research Projects Agency psychological warfare tool: “Sonic Projector”

“The Air Force has experimented with microwaves that create sounds in people’s heads (which they’ve called a possible psychological warfare tool), and American Technologies can ‘beam’ sounds to specific targets with their patented HyperSound,” wrote Sharon Weinberger, adding, “yes, I’ve heard/seen them demonstrate the speakers, and they are shockingly effective”.


DARPA had earlier launched its “Sonic Projector” program. The goal of the Sonic Projector program is to provide Special Forces with a method of surreptitious audio communication at distances over 1 km. Sonic Projector technology is based on the non-linear interaction of sound in air, translating an ultrasonic signal into audible sound. The Sonic Projector will be designed to be a man-deployable system, using high-power acoustic transducer technology and signal processing algorithms which result in no, or unintelligible, sound everywhere but at the intended target. The Sonic Projector system could be used to conceal communications for special operations forces and hostage rescue missions, and to disrupt enemy activities.
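The “non-linear interaction of sound in air” is the parametric-array effect: a quadratic nonlinearity in the medium mixes two ultrasonic tones and produces their audible difference frequency at the target. A toy simulation of that demodulation; the frequencies and sample rate are illustrative, not program parameters:

```python
import math

# Toy demonstration of the parametric-array effect the Sonic Projector
# relies on: a quadratic nonlinearity in air mixes two ultrasonic tones
# and yields their audible difference frequency. All numbers here are
# illustrative.

f1, f2 = 200_000.0, 201_500.0   # Hz; both tones are ultrasonic
fs = 2_000_000.0                # sample rate for the sketch, Hz
n = 4000                        # 2 ms of signal
t = [i / fs for i in range(n)]

beam = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x) for x in t]
demod = [s * s for s in beam]   # quadratic nonlinearity of the medium

# Correlate the demodulated signal against the expected difference tone.
fd = abs(f2 - f1)               # 1.5 kHz, well within human hearing
score = sum(d * math.cos(2 * math.pi * fd * x) for d, x in zip(demod, t))
print(f"difference tone at {fd:.0f} Hz detected: {score > n / 10}")
```

Because the audible sound is generated only where the tightly collimated ultrasonic beams overlap, listeners outside the beam hear nothing, which is what makes the communication surreptitious.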


Changing Characteristics of Psychological warfare past to present

Psychological warfare is as ancient as warfare itself. Genghis Khan, leader of the Mongol Empire in the 13th century AD, believed that defeating the will of the enemy before having to attack, and reaching a consented settlement, was preferable to actually fighting. The Mongol generals demanded submission to the Khan and threatened the initially captured villages with complete destruction if they refused to surrender. If they had to fight to take the settlement, the Mongol generals fulfilled their threats and massacred the survivors. Tales of the encroaching horde spread to the next villages and created an aura of insecurity that undermined the possibility of future resistance.


The Khan also employed tactics that made his numbers seem greater than they actually were. During night operations he ordered each soldier to light three torches at dusk to give the illusion of an overwhelming army and deceive and intimidate enemy scouts. He also sometimes had objects tied to the tails of his horses, so that riding on open and dry fields raised a cloud of dust that gave the enemy the impression of great numbers. His soldiers used arrows specially notched to whistle as they flew through the air, creating a terrifying noise. Another tactic favoured by the Mongols was catapulting severed human heads over city walls to frighten the inhabitants and spread disease in the besieged city’s closed confines.


Militaries employ many methods of psychological warfare, such as demoralization by distributing pamphlets that encourage desertion or supply instructions on how to surrender, and “shock and awe” strategies, such as that used in the Iraq War by the United States to psychologically maim and break the will of the Iraqi Army to fight.


Other methods include projecting repetitive and annoying sounds and music for long periods at high volume towards groups under siege, as during Operation Nifty Package, and propaganda radio stations, such as Lord Haw-Haw in World War II on the “Germany calling” station. The CIA has extensively used propaganda broadcasts against the Cuban government through TV Marti, based in Miami, Florida; however, the Cuban government has been successful at jamming its signal.


Still other methods include renaming captured cities and other places, such as the renaming of Saigon to Ho Chi Minh City after the Vietnamese victory in the Vietnam War; false flag events; the use of loudspeaker systems to communicate with enemy soldiers; terrorism; and the threat of chemical weapons.


More recently, it has been used by totalitarian regimes such as Fascist Italy, Nazi Germany, and militaristic Japan. It was used during WWII by both the US and Germany. It was used by US forces in Panama and Cuba, where pirated TV broadcasts were transmitted, as well as Guatemala, Iran, the first Gulf War, Vietnam, and other places.


“One of the most famous examples was Colin Powell’s speech in the UN in 2003 where he presented false information about the so-called weapons of mass destruction in Iraq, which led to the disastrous war on Iraq. Norway’s war on Libya, which the whole Parliament supported, and which destroyed that country, was, as is well known, built on lies that Moammar Gaddafi was about to kill his own people,” writes Pål Steigan.


USAF’s ISR vision of Full-Spectrum Awareness for Distributed Targeting, Space Control and Cyber Warfare

Intelligence, surveillance, and reconnaissance (ISR) capabilities enable the U.S. Air Force (USAF) to be aware of developments related to adversaries worldwide and to conduct a wide variety of critical missions, both in peacetime and in conflict. It involves a networked system of systems operating in space, cyberspace, air, land, and maritime domains. These systems include planning and direction, collection, processing and exploitation, analysis and production, and dissemination (PCPAD) capabilities linked together by communications architecture.

The US Air Force has released “AF ISR 2023: Delivering Decision Advantage,” which lays out a strategic vision of “Full-Spectrum Awareness” and “World-Class Expertise”, which combine into the ultimate vision of “Delivering Decision Advantage.” AF ISR Vision 2023 demands an “…ISR enterprise that seamlessly ingests data from an even wider expanse of sources, swiftly conducts multi- and all-source analysis, and rapidly delivers decision advantage to war fighters and national decision makers.”

ISR is one of the Air Force’s five enduring core missions along with air and space superiority, rapid global mobility, global strike, and command and control. AF ISR is integral to Global Vigilance for the nation and is foundational to Global Reach and Global Power.

“We will not be able to maintain the size and composition of the current ISR force, yet we must prepare for operations which will range from humanitarian assistance to major contingency operations in highly contested environments. This strategic vision enables us to achieve national goals while tailoring our ISR force to best meet future challenges.”

Intelligence gathering in the future will also involve monitoring and mining social media in real time via automated artificial intelligence, another way the Air Force and other military branches can obtain information, said the head of the service. The Air Force already monitors social media on some level, through the service’s only non-offensive air operations center, known as “America’s AOC”, at Tyndall Air Force Base, Florida.

But social media is just one aspect, said Col. Robert Bloodworth, chief of combat operations. The technology of “refining the analysis” through AI so that it reaches the operator, pilot, or airman in a decisive and streamlined way is what the Air Force desperately needs to conduct missions in the future. “Before you get to artificial intelligence, you have to get to automation, and what does that mean? It means we’re really developing algorithms, so we then have to build trust in the algorithms,” said Lt. Gen. VeraLinn “Dash” Jamieson, the service’s deputy chief of staff for intelligence, surveillance and reconnaissance on the Air Staff, during an interview.

AF ISR 2023

The challenge for AF ISR is to maintain the impressive tactical competencies developed and sustained over the past 12 years, while rebuilding the capability and capacity to provide the air component commander and subordinate forces with the all-source intelligence required to conduct full-spectrum cross-domain operations in volatile, uncertain, complex, and ambiguous environments around the globe.

Our ability to provide dominant ISR depends on well-trained, well-led professional Airmen who have strong analytical skills along with a high state of readiness, agility, and responsiveness. These characteristics, along with continued innovation and integration of technological advancements, will combine to make our Airmen experts in their trade.

Additionally, we will not rely solely on our own capabilities; it is imperative that we fully leverage the vast array of national capabilities along with those of the Total Force, our sister Services, the Intelligence Community (IC), and our international partners.


World-Class Expertise

Providing world-class expertise as an integral part of air component and joint operations requires ISR Airmen who are masters of threat characterization, analysis, collection, targeting, and operations-intelligence integration. Empowered to innovate, ISR Airmen will lead the way in the development of tactics, techniques, and procedures (TTP) that will compress OODA loops, produce actionable intelligence, and provide the intelligence needed to complete the kinetic or nonkinetic targeting equation.


Delivering Decision Advantage

The fundamental job of AF ISR Airmen is to analyze, inform, and provide commanders at every level with the knowledge they need to prevent surprise, make decisions, command forces, and employ weapons. Maintaining decision advantage empowers leaders to protect friendly forces and hold targets at risk across the depth and breadth of the battlespace—on the ground, at sea, in the air, in space, and in cyberspace. It also enables commanders to apply deliberate, discriminate, and deadly kinetic and non-kinetic combat power. To deliver decision advantage, we will seamlessly present, integrate, command and control (C2), and operate ISR forces to provide Airmen, joint force commanders, and national decision makers with utmost confidence in the choices they make.


Distributed Targeting

Over the past two decades, our deliberate targeting competence has stagnated. To ensure AF readiness across the full range of military operations, we will refocus on satisfying the air component commander’s air, space, and cyberspace deliberate targeting requirements by: adopting a distributed targeting concept of operations and TTPs; integrating and automating targeting capabilities across the enterprise; integrating kinetic and non-kinetic targeting TTPs; and establishing more comprehensive targeting training. Targeting is a critical enabler of Global Vigilance, Global Reach and Global Power; we will ensure that AF ISR is ready to provide this highly perishable skill when required.


Multi- and All-Source Intelligence

In addition to the tactical intelligence mission, the AF ISR force of 2023 must also conduct strategic intelligence collection in peacetime—Phase 0—and provide world-class, multi- and all-source intelligence in highly contested, communications-degraded environments across all domains.

Since 9/11, there has been an explosion in space and cyberspace capabilities, with corresponding prominence on the national stage. Additionally, the conflicts in Iraq and Afghanistan resulted in renewed, sustained emphasis on human-derived intelligence (HUMINT and open sources) by all of the Services. To execute the AF ISR mission, we must be better collectors, enablers, and integrators of information derived from space, cyberspace, human, and open sources.


Cyber Warfare

Cyberspace, a relatively new and rapidly evolving operational domain for the Department of Defense (DoD) and the military services, is defined as “a global domain within the information environment consisting of the interdependent network of information technology infrastructures, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.”

ISR sensors can be augmented by the ability of cyber information to provide geolocation and movement information on adversarial and friendly systems. This capability can free sparse assets for deployment elsewhere or allow information to be obtained more effectively through rapid, minimal observations.

There is a multidimensional relationship between the ISR and cyber missions and capabilities. There are three missions from a cyberspace perspective: support, defense, and force application. ISR is a crosscutting capability that can be applied holistically with other core functions to enable cyberspace missions. Conversely, Cyberspace Superiority supports and is supported by all of the other Air Force core functions. In the case of the Global Integrated ISR (GIISR) core function, these relationships could be characterized as “Cyber for ISR” and “ISR from Cyber.”

The “Cyber for ISR” relationship is illustrated by the mission assurance requirement for the cyber domain in support of an ISR mission. Cyberspace mission assurance ensures the availability and defense of a secured network to support a military operation.

Conversely, the “ISR from Cyber” relationship is illustrated by considering how ISR can be executed during cyberspace operations, particularly during cyberspace force application (exploitation). This can be characterized as situational awareness during and in support of cyberspace operations.

By 2023, AF ISR and cyber forces will be an integral partner to the joint team that operates in cyberspace to meet air component commander, joint force commander, and national needs. We will also forge service-specific cyber capabilities that provide specialized applications across the domains.

Computer Network Exploitation (CNE) will continue to be a crucial enabler for Offensive Cyber Operations (OCO), Defensive Cyber Operations (DCO), and Department of Defense Information Network (DoDIN) operations, but ISR will also be a prominent and critical product of those operations, meeting Air Force, joint, and national decision maker requirements.


Space Control and Protection

AF ISR relies heavily on space-based assets for collection and global airborne ISR operations; ISR collected from space greatly enhances our ability to characterize the battlespace through all domains and is critical to success across the full spectrum of operations.

In the early stages of conflict in a contested, degraded environment, ISR from space may represent our most viable collection capabilities. But the space domain is increasingly congested and contested. Therefore, to maintain this capability, we need to identify non-kinetic and kinetic threats to space assets and architecture; identify adversary intent and capabilities to use space; and conduct target analysis that enables offensive and defensive counterspace operations.

Protecting space assets is critical to AF ISR operations and the nation’s full spectrum joint operations. Purposefully developing ISR Airmen who understand ISR for and from space is the initial step we will take to ensure this critical capability. To solidify the value of space ISR, we will also broaden and improve our ability to integrate space-based ISR capabilities across the AF ISR Enterprise.


USAF ISR enabled by data science

The characteristics of the intelligence environment since 2000 suggest fundamental change is occurring: an ever-larger volume of data; widening variety (classic intelligence sources, new sensors and types of data, and open sources); increasing velocity (more data and information in motion, every day); and more complex veracity (data duplication, identity, authenticity, and the resolution of each).

The ability of Air Force ISR analysts or “Analyst Airmen,” to deliver in this new era of intelligence analysis will be predicated in great part on a strategy to shape AF ISR Big Data into a manageable form to meet tactical, operational, and strategic mission needs.

The IC Cloud is a main feature of the Office of the Director of National Intelligence (ODNI) “IC IT Enterprise” (IC ITE) program, which represents a mass migration of IC data to a common ecosystem. As described by the ODNI, “…IC ITE moves the IC from an agency-centric IT architecture to a common platform where the Community easily and securely shares information, technology, and services. By managing and providing the Community’s IT infrastructure and services as a single enterprise, the IC will not only be more efficient, but will also establish a powerful platform to deliver more innovative and secure technology to desktops at all levels across the intelligence enterprise.”

As the AF ISR community integrates into IC ITE, the Joint Information Environment (JIE), and the Defense Intelligence Information Environment (DI2E), while simultaneously maintaining its own large enterprises that collect, exploit, and disseminate data, the Data Science discipline and the need for embedded talent will become more important. Technological advances in live data streaming and correlation allow for real-time decision making on a scale never before experienced in AF ISR. We now have the ability to ingest disparate data sets, put relevant conditions and rules in place, and derive insights and prescriptive intelligence in an unprecedented fashion.

This transformation presents both challenges and opportunities for AF ISR in adopting a Data Science strategy and capitalizing on the wealth of information available from the IC Cloud.
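The pattern of ingesting disparate data sets, putting conditions and rules in place, and deriving insights can be sketched as a minimal rule-based correlation engine. This is an illustrative sketch only; the event fields, rule, and entity names below are hypothetical and do not represent any actual AF ISR system or data schema:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical event record arriving from one of several disparate feeds
@dataclass
class Event:
    source: str                       # e.g. "sigint", "imagery", "open_source"
    entity: str                       # the object or site the report concerns
    attributes: Dict[str, str] = field(default_factory=dict)

# A rule pairs a condition over accumulated evidence with the insight it yields
@dataclass
class Rule:
    name: str
    condition: Callable[[List[Event]], bool]
    insight: str

class CorrelationEngine:
    """Ingest streaming events, group them by entity, and emit the insight
    of every rule whose condition the accumulated evidence satisfies."""
    def __init__(self, rules: List[Rule]):
        self.rules = rules
        self.by_entity: Dict[str, List[Event]] = {}

    def ingest(self, event: Event) -> List[str]:
        history = self.by_entity.setdefault(event.entity, [])
        history.append(event)
        return [r.insight for r in self.rules if r.condition(history)]

# Example rule: flag an entity reported by two or more independent source types
multi_source = Rule(
    name="multi-source corroboration",
    condition=lambda evts: len({e.source for e in evts}) >= 2,
    insight="entity corroborated by multiple sources",
)

engine = CorrelationEngine([multi_source])
engine.ingest(Event("sigint", "site-A"))            # single source: no rule fires
alerts = engine.ingest(Event("imagery", "site-A"))  # second source type arrives
print(alerts)  # ['entity corroborated by multiple sources']
```

Real deployments would sit this kind of logic on top of a streaming platform rather than in-process dictionaries, but the core idea is the same: conditions and rules applied continuously to correlated evidence, with insights emitted the moment the evidence threshold is met.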



References and Resources also include:

Capability Planning and Analysis to Optimize Air Force Intelligence, Surveillance, and Reconnaissance Investment, National Academy of Sciences.