US DOD and NATO plan Battlefield Internet of Things connecting sensors, wearables, weapons, munitions, platforms and networks for information dominance

The Internet of Things is an emerging revolution in the ICT sector, marking a shift from an “Internet used for interconnecting end-user devices” to an “Internet used for interconnecting physical objects that communicate with each other and/or with humans in order to offer a given service”.

The increasing miniaturization of electronics has enabled tiny sensors and processors to be integrated into everyday objects, making them “smart”: smart watches, fitness monitors, food items, home appliances, plant control systems, equipment monitoring and maintenance sensors, and industrial robots. By means of wireless and wired connections, these objects can interact and cooperate with each other to create new applications and services in pursuit of common goals. By 2025, it is predicted that there could be as many as 100 billion connected IoT devices: networks of everyday objects and sensors infused with intelligence and computing capability.

The rapid growth in IoT devices, however, will also offer new opportunities for hacking, identity theft, disruption, and other malicious activities affecting people, infrastructure and the economy. Some incidents have already occurred: the FDA issued an alert about a connected hospital medicine pump that could be compromised and have its dosage changed, and a Jeep Cherokee was sensationally remote-controlled by hackers in 2015.

Military operations will also be significantly affected by the widespread adoption of IoT technologies. Analogous to the commercial IoT, a Military Internet of Things (MIoT) comprising a multitude of platforms, ranging from ships to aircraft to ground vehicles to weapon systems, is expected to be developed. MIoT offers high potential for the military to achieve significant efficiencies, improve safety and delivery of services, and produce major cost savings.

Some military applications include fully immersive virtual simulations for soldiers’ training; autonomous vehicles; smart inventory systems that consolidate warehouses using web-based delivery and inventory management; and business systems like the Army Strategic Management System to manage energy, utilities and environmental sensors. The military has begun taking steps towards implementing IoT technologies: some troops have been issued helmets containing built-in monitoring devices to detect potential concussions and other brain injuries.

“With strategy concepts such as “net centric,” “information dominance,” and the emergence of cyber as an entirely new domain of operations, information always has and will remain central to the military’s efficiency and effectiveness. Naturally, IoT technologies and architectures that are designed to move and process information more quickly and in distributed environments seem like natural fits for military applications,” write Joe Mariani, Brian Williams, and Brett Loubert.



Military Internet of Things

The vision of the Military Internet of Things (MIoT) is to realize “anytime, anyplace connectivity for anything, ubiquitous network with ubiquitous computing” in the military domain. Commanders make decisions based on real-time analysis generated by integrating data from unmanned sensors with reports from the field. These commanders will benefit from a wide range of information supplied by sensors and cameras mounted on the ground, on manned or unmanned vehicles, or on soldiers themselves.

The DOD has been using IoT in various ways for years, Pellegrino noted, especially for managing its energy usage and physical infrastructure. Connected energy management solutions have allowed the military to reduce total energy consumption by 23 percent since 2002. The military has about 8,000 smart meters installed, with 66 percent of them reporting to an integrated management system. Connected water management has allowed the military to cut potable water use intensity by 27 percent since 2007, he said.

The University of Illinois is leading a $25 million initiative to develop an “internet of battlefield things.” Officials say the initiative aims to have humans and technology work together in a seamless network. They say the initiative will connect soldiers with smart technology in armor, radios, weapons and other objects to give troops a better understanding of battlefield situations and help them assess risks. Experts say future military operations will rely less on human soldiers and more on interconnected technology. They say unmanned systems and machine intelligence advances can be used to improve military capabilities.

Soldiers need a continual flow of information to make the best decisions possible in battle because they are constantly making quick decisions in the face of adverse conditions, UI computer science professor Tarek Abdelzaher said. “You need to connect to the right sensors, the right cameras, the right devices to collect the right pieces of information,” Abdelzaher said.

Present applied research on MIoT is largely limited to improving efficiency in the logistics domain using IoT technologies. Future MIoT applications could span equipment maintenance, smart bases, personal sensing, soldier healthcare, battlefield awareness, C4ISR and fire-control systems. Joe Mariani, Brian Williams and Brett Loubert categorize IoT applications into those that aim to improve cost efficiency, those that aim to improve warfighter effectiveness, and rare cases that aim for both.

Some of the applications of MIoT are:

  1. Military Equipment Logistics – IoT can be a huge enabler of efficiency and visibility, getting military equipment into the right hands at the right time. Deploying radio frequency identification tags and standardized barcodes to track individual supplies down to the tactical level could provide real-time supply chain visibility and allow the military to order parts and supplies on demand. Smart inventory systems could also consolidate warehouses using a web-based delivery and inventory system.
  2. Equipment Maintenance: The harsh conditions and extended deployments put extensive wear and tear on equipment. IoT can enable enhanced equipment maintenance and management through monitoring, optimizing and appropriately allocating various resources and processes such as manpower, material, financial resources and maintenance personnel.
  3. Smart Bases – incorporating commercial IoT technologies in buildings and facilities; force protection at bases as well as in maritime and littoral environments; health and personnel monitoring; and just-in-time equipment maintenance.
  4. Personal Sensing, Soldier Healthcare – The combination of IoT sensors (temperature, blood pressure, heart rate, cholesterol levels and blood glucose) connected through body-area networks will allow the health of the soldier to be monitored in real time. Soldiers can be alerted to abnormal states such as dehydration, sleep deprivation, elevated heart rate or low blood sugar and, if necessary, a medical response team in a base hospital can be warned.
  5. Battlefield Awareness – Situational awareness encompasses a wide range of activities in the battlefield to gain information on the enemy’s intent, capability and actual position. IoT can play a vital role by collecting, analyzing, and delivering synthesized information in real time for expeditious decision making. IoT can enhance battlefield awareness at every echelon, from global command to company, platoon and squad commanders, down to the single soldier.
  6. Fire-Control Systems: In fire-control systems, end-to-end deployment of sensor networks and digital analytics enable fully automated responses to real-time threats, and deliver firepower with pinpoint precision. Munitions can also be networked, allowing smart weapons to track mobile targets or be redirected in flight.
  7. Other use cases for IoT include fully immersive virtual simulations for soldiers’ training; autonomous vehicles; and business systems like the Army Strategic Management System to manage energy, utilities and environmental sensors.
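As an illustration of the personal-sensing use case above, a body-area-network gateway might apply simple threshold rules to streamed vitals before alerting a medical team. A minimal sketch; the thresholds and field names here are hypothetical, not medical guidance or any fielded system:

```python
# Hypothetical threshold-based vital-sign monitor for a body-area network.
# Thresholds and field names are illustrative only.

THRESHOLDS = {
    "heart_rate_bpm":     (40, 140),    # (low, high) bounds
    "core_temp_c":        (35.0, 39.5),
    "blood_glucose_mgdl": (70, 180),
}

def check_vitals(sample: dict) -> list:
    """Return a list of alert strings for any out-of-range reading."""
    alerts = []
    for metric, (low, high) in THRESHOLDS.items():
        value = sample.get(metric)
        if value is None:
            continue  # sensor dropout: no reading, no alert
        if value < low:
            alerts.append(f"{metric} LOW ({value})")
        elif value > high:
            alerts.append(f"{metric} HIGH ({value})")
    return alerts

reading = {"heart_rate_bpm": 155, "core_temp_c": 38.2}
print(check_vitals(reading))  # elevated heart rate triggers an alert
```

In a real system the alert would be forwarded over the tactical network rather than printed, but the pattern of edge-side filtering before transmission is the same.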


Vulnerability of Military Internet of Things

Security equipment is also vulnerable to exploitation by politically and criminally motivated hackers. Security researchers Runa Sandvik and Michael Auger gained unauthorized access to a smart rifle’s software via its WiFi connection and exploited various vulnerabilities in its proprietary software: the TP750 was tricked into missing the target and into not firing at all. IoT devices themselves have recently been used for attacks, as when an internet-connected fridge was used as part of a botnet to send spam to tens of thousands of Internet users.

Military IoT networks will also need to deal with multiple threats from adversaries, said John Pellegrino, deputy assistant secretary of the Army for strategic integration, including physical attacks on infrastructure, directed-energy attacks, jamming of radio-frequency channels, attacks on power sources for IoT devices, electronic eavesdropping and malware.

DARPA has launched the Leveraging the Analog Domain for Security (LADS) program to develop revolutionary approaches for securing the Military Internet of Things. LADS will develop a new protection paradigm that separates security-monitoring functionality from the protected system, focusing on low-resource, embedded and Internet of Things (IoT) devices.


 US Army’s Internet of Battlefield Things (IoBT) Collaborative Research Alliance (CRA)


Through its Internet of Battlefield Things (IoBT) Collaborative Research Alliance, the Army has assembled a team to conduct basic and applied research on the explosive growth of interconnected sensing and actuating technologies, including distributed and mobile communications, networks of information-driven devices, and artificially intelligent services, and on how ubiquitous “things” present imposing adversarial challenges for the Army. Alliance members leading IoBT research areas include UIUC, the University of Massachusetts, the University of California, Los Angeles and the University of Southern California. Other members include Carnegie Mellon University, the University of California, Berkeley and SRI International.

The ability of the Army to understand, predict, adapt, and exploit the vast array of internetworked things that will be present on the future battlefield is critical to maintaining and increasing its competitive advantage. The explosive growth of technologies in the commercial sector that exploit the convergence of cloud computing, ubiquitous mobile communications, networks of data-gathering sensors, and artificial intelligence presents an imposing challenge for the Army. These Internet of Things (IoT) technologies will give our enemies ever-increasing capabilities that must be countered, and commercial developments do not address the unique challenges the Army will face in using them.

The U.S. Army Research Laboratory (ARL) has established an Enterprise approach to address the challenges resulting from the Internet of Battlefield Things (IoBT) that couples multi-disciplinary internal research with extramural research and collaborative ventures. ARL intends to establish a new collaborative venture (the IoBT CRA) that seeks to develop the foundations of IoBT in the context of future Army operations. The Collaborative Research Alliance (CRA) will consist of private sector and government researchers working jointly to solve complex problems. The overall objective is to develop the fundamental understanding of dynamically-composable, adaptive, goal-driven IoBTs to enable predictive analytics for intelligent command and control and battlefield services.

For the purposes of this CRA, an Internet of Battlefield Things (IoBT) can be summarized as a set of interdependent and interconnected entities (e.g. sensors, small actuators, control components, networks, information sources, etc.) or “things” that are: dynamically composed to meet multiple mission goals; capable of adapting to acquire and analyze data necessary to predict behaviors/activities and effectuate the physical environment; and self-aware, continuously learning, autonomous, and autonomic, where the things interact with networks, humans, and the environment in order to enable predictive decision augmentation that delivers intelligent command and control and battlefield services.

The IoBT is the realization of pervasive computing, communication, and sensing where everything will be a sensor and potentially a processor (i.e. increased number of heterogeneous devices, connectivity, and communication) where subsequent information is of a scale unseen before. The battlespace itself will consist of active red (enemy), blue (friendly), and gray (non-participant) resources, where deception will be the norm, the environment (e.g. megacities and rural) will be dynamic, and ownership and other boundaries will be diverse and transient.

These IoBT characteristics all translate into increased complexity for the warfighter, particularly because current, commonly available, interconnected “things” will exist in the battlefield and be increasingly intelligent, obfuscated, and pervasive. Coping with this complexity requires situation-adaptive responses, selective collection and processing, and real-time sensemaking over massive heterogeneous data.

The objective of the IoBT CRA is to develop the underlying science of pervasive, heterogeneous sensing and actuation to enhance tactical Soldier and Mission Command autonomy, miniaturization, and information analytic capabilities against adversarial influence and control of the information battlespace, delivering intelligent, agile, and resilient decisional overmatch at significant standoff and op-tempo.

The IoBT CRA consists of three main research areas:

  1. Device/Information Discovery, Composition, and Adaptation – to establish theoretical foundations that facilitate goal-driven discovery, adaptation, and composition of devices and data at unprecedented scale, complexity, and rate of acquisition;
  2. Autonomous & Autonomic Actuation Enabling Intelligent Services – to advance the theory and algorithms for the complexity and nonlinear dynamics of real-time actuation and robustness, with a focus on autonomic system properties (e.g. self-optimizing, self-healing and self-protecting behaviors); and
  3. Distributed Asynchronous Processing and Analytics of Things – to enrich the theory and experimental methods for complex event processing, with compact representations and efficient pattern evaluation.

Distributed and Collaborative Intelligent Systems (DCIST) Collaborative Research Alliance (CRA)

Through its Distributed and Collaborative Intelligent Systems (DCIST) Collaborative Research Alliance (CRA), the Army will perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. Alliance members include the University of Pennsylvania as the lead research organization. Individual research area leads are MIT and Georgia Tech. Other consortium members are University of California San Diego, University of California Berkeley and University of Southern California.

DCIST concentrates its research into three main areas: distributed intelligence, led by MIT, where researchers will establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team; heterogeneous group control, led by Georgia Tech, to develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy; and adaptive and resilient behaviors, led by the University of Pennsylvania, to develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world. In addition to these three main research areas, research will be pursued along three underlying research themes in Learning, Autonomous Networking, and Cross-Disciplinary Experimentation.

The U.S. Army’s operational competitive advantage in a multi-domain battle will be realized through technology dominance, said ARL Director Dr. Philip Perconti.

NATO task group to examine applicability of IoT to military operations


NATO has set up an RTO task group (IST-147) that will select a scenario to examine the applicability of IoT to military operations, including base operations, situational awareness, boundary surveillance (including harbour protection), and energy management. It shall also assess the risk of applying IoT technologies in the scenario. Based on this risk assessment, models for security and trust management that address the most significant risks will be proposed. Mitigation measures may include: managing identity, credentials and rights of IoT devices and users; object-level protection and trust; and assessment of available or emerging commercial security solutions. The group shall also define an IoT architecture or architectures that might be used in military situations, taking into account existing IoT architectures used in other domains.

Challenges and Requirements for the Military Internet of Things (MIoT)

There is great potential for IoT technologies to revolutionize modern warfare, leveraging data and automation to deliver greater lethality and survivability to the warfighter while reducing cost and increasing efficiency. However, the successful development and deployment of IoT technologies across the military requires many challenges to be solved:

  1. In contrast to commercial deployments that mainly focus on systems with fixed sensors/devices, the Military Internet of Things (MIoT) will consist of a large number of mobile things such as UAVs, aircraft, tanks, etc. The mobile IoT paradigm invalidates many of the assumptions of traditional wireless sensor networks, especially with regard to wireless technologies and protocols. In particular, mobile IoT devices would find it quite difficult to connect with each other and with other components of the IoT network in the presence of mobility, intermittent connectivity and RF link variability.
  2. Deployment Features: One of the biggest constraints in a battlefield environment is power consumption. IoT devices are likely to be powered by batteries or solar power, and charged on the move from solar panels, trucks, or even by motion while walking. In any case, they should last for extended periods of time (at least for the duration of the mission), so devices and sensors need to be power-efficient.
  3. Challenges related to reliability and dependability, especially when IoT becomes mission critical. Equipment should fulfill the requirements imposed and be compliant with the considerations from military standards (e.g., MIL-STD 810G, MIL-STD 461F, MIL-STD-1275). IoT devices should be ruggedized and prepared to operate under extreme environmental conditions.
  4. Security challenges related to co-existence and interconnection of military and civilian IoT networks. Security concerns are the main issue holding back the military’s use of the Internet of Things. Some potential adversaries have advanced cyber and electronic warfare capabilities, and everything connected to the Internet is potentially vulnerable to attack.
  5. Node Capture Attacks: In a node capture attack, the adversary can capture and control the node or device in IoT via physically replacing the entire node, or tampering with the hardware of the node or device.
  6. Electronic Warfare: Another challenge to IoT implementation is that it makes systems vulnerable to electronic warfare. Most IoT technologies communicate wirelessly on radio frequencies. Adversaries can use relatively unsophisticated methods like RF jamming to block these signals, rendering the devices unable to communicate with backbone infrastructure.
  7. Information management challenges for military application of IoT – trustworthiness, pedigree, provenance, and enabling military commanders and missions to benefit from IoT generated information.
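Challenge 1 above, intermittent connectivity, is commonly addressed with delay-tolerant, store-and-forward messaging: a node queues readings while the radio link is down and flushes them when it returns. A minimal sketch; the class and method names are illustrative, not taken from any military system:

```python
from collections import deque

class StoreAndForwardNode:
    """Buffers messages while the radio link is down; flushes on reconnect."""

    def __init__(self):
        self.outbox = deque()
        self.link_up = False
        self.delivered = []  # stands in for the actual transmit path

    def send(self, msg):
        if self.link_up:
            self.delivered.append(msg)   # link available: send immediately
        else:
            self.outbox.append(msg)      # link down: queue for later

    def set_link(self, up):
        self.link_up = up
        while up and self.outbox:        # drain the backlog in arrival order
            self.delivered.append(self.outbox.popleft())

node = StoreAndForwardNode()
node.send("pos-report-1")                # queued: link starts down
node.send("pos-report-2")
node.set_link(True)                      # reconnect: backlog flushes
node.send("pos-report-3")
print(node.delivered)                    # all three reports, in order
```

Real delay-tolerant networking stacks add expiry, priorities and custody transfer on top of this basic buffering idea.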

IoT can serve the warfighter better, providing more intelligence and more ways to coordinate actions. In 20 years the IoT will be ubiquitous. Yet for the Army and the wider military to make the most of IoT, it will need to rely on heterogeneous and flexible networks that continue to operate in environments with spotty connectivity and that don’t place burdens on soldiers, said Pellegrino.

Pellegrino said some connected devices will be intelligent, and others will be “marginally intelligent” but that connectivity will spread everywhere, from munitions to weapons, robotics, vehicles and wearable devices. All of these devices will generate an enormous amount of data, he said, and the military needs to figure out how to make that data useful.

The CIA and the Defense Information Systems Agency (DISA) are working with commercial companies to bring the cloud and software to secure government networks. Thus, the infrastructure for dealing with the data volume of tactical IoT applications is, potentially, already in place.

“All of these devices are going to be performing a massive variety of tasks,” Pellegrino said, including recommendations on where and when to attack and defend, and which of them will need to be coordinated.

New technologies required to power IoT

State-of-the-art (SOA) sensors use active electronics to monitor the environment for an external trigger, consuming power continuously and limiting sensor lifetime to durations of months or less. This also increases the cost of deployment, either by necessitating large, expensive batteries or by demanding frequent battery replacement, which in turn increases Warfighter exposure to danger.

DARPA’s N-ZERO program intends to extend the lifetime of remotely deployed communications and environmental sensors from months to years, by supporting projects that demonstrate the ability to continuously and passively monitor the environment, waking an electronic circuit only upon the detection of a specific trigger signature. DARPA’s N-ZERO program can also enable the future billions of Internet of Things (IoT) devices that shall be deployed ‘everywhere’ and to be accessed ‘any time’ from ‘anywhere’.
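The lifetime gain from wake-on-trigger sensing can be estimated from a simple energy budget. All of the figures below (battery capacity, listening power, duty cycle) are illustrative assumptions, not N-ZERO specifications:

```python
# Rough battery-lifetime comparison: always-on listening vs. near-zero-power
# wake-on-trigger sensing. Every number here is an illustrative assumption.

def lifetime_days(battery_wh, avg_power_w):
    """Ideal battery lifetime, ignoring self-discharge and temperature."""
    return battery_wh / avg_power_w / 24

BATTERY_WH = 10.0                       # a few primary cells (assumed)

always_on = lifetime_days(BATTERY_WH, 10e-3)    # 10 mW continuous receiver
# Wake-on-trigger: ~10 nW standby plus 10 mW active for ~10 s/day (assumed)
avg_trigger_w = 10e-9 + 10e-3 * 10 / 86400
triggered = lifetime_days(BATTERY_WH, avg_trigger_w)

print(f"always-on : {always_on:9.0f} days")     # weeks to months
print(f"triggered : {triggered:9.0f} days")     # years; shelf life now dominates
```

Under these assumptions the always-on sensor lasts about six weeks while the triggered one is limited by battery shelf life rather than load, which is the essence of the N-ZERO goal of moving lifetimes from months to years.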


Flexible Networks

Wireless sensor networks (WSNs) will play a major part in the IoT revolution, although other communication techniques are also used in IoT. The future billions of IoT devices are expected to be deployed ‘everywhere’ and accessed ‘any time’ from ‘anywhere’, in anything from large buildings, industrial plants, planes, cars and machines to any kind of goods. WSN technology will also be employed in smart cities for applications in the smart grid, smart water, intelligent transportation systems, and smart homes.

Pellegrino notes that the battlefield situations the military operates in “range from the moderately stable to very high dynamic situations.” To support IoT, the military’s networks will need to be flexible and interactive, he said, and still work despite limited bandwidth, intermittent connectivity and with a large number of devices on the network.

The arrangement of those networks needs to be done “totally autonomously,” he said. The military’s partners may be changing depending on the mission, and connected devices will need to work across networks with different network equipment and configurations.

“To achieve changing objectives with multiple complex tradeoffs, we have got to have highly adaptive management and organization leading to action, with no burden on the soldier, either cognitive or physical burden,” Pellegrino said.

DARPA has been experimenting with “mobile ad hoc networks,” designed to form a self-creating and self-healing mesh of communication nodes, with setup time measured in minutes instead of days. DARPA envisions networks of more than 1,000 nodes providing individual soldiers with streaming video from drones and other sensors, radio communications to higher headquarters, and advanced situational awareness of other soldiers’ location and status.
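The self-healing property means that when a relay node drops out, traffic is re-routed over surviving links. A toy sketch using breadth-first search over an ad hoc topology; the node names and links are invented for illustration:

```python
from collections import deque

def shortest_route(links, src, dst):
    """Breadth-first search for a hop-minimal route; None if partitioned."""
    graph = {}
    for a, b in links:                        # radio links are bidirectional
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                               # no surviving path

mesh = [("soldier", "relay1"), ("relay1", "hq"), ("soldier", "relay2"),
        ("relay2", "relay3"), ("relay3", "hq")]
print(shortest_route(mesh, "soldier", "hq"))        # 2 hops via relay1
# relay1 is destroyed: drop its links and route again ("self-healing")
survivors = [l for l in mesh if "relay1" not in l]
print(shortest_route(survivors, "soldier", "hq"))   # 3 hops via relay2, relay3
```

Fielded MANET protocols such as OLSR or AODV do this rediscovery continuously and distributedly rather than with a central graph, but the effect is the same: the mesh routes around losses.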

DARPA’s Revolutionary Approach “LADS” for IoT Security 

DARPA, the Defense Advanced Research Projects Agency, issued a call for “innovative research proposals” for the Leveraging the Analog Domain for Security (LADS) program. The program is directing $36 million into developing enhanced cyber defense through analysis of involuntary analog emissions, including things like “electromagnetic emissions, acoustic emanations, power fluctuations and thermal output variations.”

The program will explore technologies to associate the running state of a device with its involuntary analog emissions across different physical modalities including, but not limited to, electromagnetic emissions, acoustic emanations, power fluctuations and thermal output variations. This will allow a decoupled monitoring device to confirm the software that is running on the monitored device and what the current state of the latter is (e.g., which instruction, basic block, or function is executing, or which part of memory is being accessed).
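One way to picture this decoupled monitoring: record reference emission traces (e.g. power draw) for each known program state, then match a live trace to the nearest reference and flag anything too far from all of them. A deliberately simplified sketch; the profiles, readings and threshold are invented, and real LADS research involves far richer signal processing:

```python
# Toy nearest-mean classifier over analog "traces" (lists of power samples).
# Reference profiles and readings are invented for illustration.

def distance(a, b):
    """Euclidean distance between two equal-length traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

REFERENCE = {                       # mean power trace per known program state
    "idle":    [1.0, 1.0, 1.1, 1.0],
    "crypto":  [3.0, 3.2, 3.1, 3.0],
    "sensing": [2.0, 1.9, 2.1, 2.0],
}

def classify(trace, max_dist=0.5):
    """Label a live trace by its nearest reference, or flag it as anomalous."""
    state, dist = min(((s, distance(trace, ref))
                       for s, ref in REFERENCE.items()), key=lambda p: p[1])
    return state if dist <= max_dist else "ANOMALY"

print(classify([1.0, 1.1, 1.0, 1.0]))   # close to the idle profile
print(classify([5.0, 5.0, 5.0, 5.0]))   # unlike any profile: flagged
```

The key point this illustrates is architectural: the classifier never touches the monitored device's software, only its emissions, so compromising the device does not compromise the monitor.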








Free Space Optical communications for ultrafast secure communications from Aircrafts, Satellites, Moon and Mars

Free Space Optical (FSO) or laser communications is creating a new communications revolution: by using visible and infrared light instead of radio waves for data transmission, it provides large bandwidth, high data rates, license-free spectrum, easy and quick deployability, low mass and low power requirements. It also offers lower-cost transmission than radio frequency (RF) communication technology and fiber-optic communication. FSO operates on the line-of-sight principle, with a laser at the source and a detector at the destination providing optical wireless communication between them.

Both military and civilian users have started planning laser communication systems, from terrestrial short-range systems to high-data-rate aircraft and satellite communications, unmanned aerial vehicles (UAVs) and high-altitude platforms (HAPs), near-space communications relaying high data rates from the Moon, and deep-space communications from Mars.

US-based LGS Innovations has won a contract from NASA to provide a laser transmitter for a first-of-a-kind space mission. The Herndon, Virginia, company’s photonics technology will be one of the key elements in a high-bandwidth optical communications link that will beam data and high-resolution imagery back to Earth from a craft orbiting an unusual metal asteroid. Part of NASA’s Deep Space Optical Communications (DSOC) project, the laser transmitter will fly on the mission to the asteroid Psyche as a technology demonstration.

For the military, FSO is the next frontier for net-centric connectivity, as it can provide low-cost, large-bandwidth, high-speed and secure communications in space and inside the atmosphere. There are size, weight and power (SWaP) advantages as well. Intelligence, Surveillance, and Reconnaissance (ISR) platforms can deploy this technology, as they must disseminate large amounts of imagery and video to the fighting forces, mostly in real time.

That’s why the Defense Department recently awarded a three-year, $45 million grant to a tri-service project for a laser communications system. Thomas and her collaborators have moved past the research equipment and are building a full-up prototype expected to be ready by 2019.

Growing employment of laser free space communication

One major NASA priority is to use lasers to make space communications for both near-Earth and deep-space missions more efficient. Laser wavelengths are 10,000 times shorter than radio waves, allowing data to be transmitted across narrower, tighter beams, so the energy spreads out less as it travels through space.

For example, a typical Ka-Band signal from Mars spreads out so much that the diameter of the energy when it reaches Earth is larger than Earth’s diameter. A typical optical signal, however, will only spread over the equivalent of a small portion of the United States; thus there is less energy wasted. This also leads to reduction in antenna size for both ground and space receivers, which reduces satellite size and mass. “The shorter wavelength also means there is significantly more bandwidth available for an optical signal, while radio systems have to increasingly fight for a very limited bandwidth,” explains NASA.
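The beam-spread comparison above follows from diffraction: a beam of wavelength λ from an aperture of diameter D diverges at roughly λ/D radians, so its footprint at range R is on the order of (λ/D)·R. A back-of-the-envelope check, with assumed aperture sizes (a 3 m Ka-band dish, a 22 cm optical telescope) at an assumed typical Earth-Mars distance:

```python
# Order-of-magnitude beam footprint at range R: footprint ~ (lam / D) * R.
# Aperture sizes and the Earth-Mars distance are illustrative assumptions.

C = 3.0e8                        # speed of light, m/s
MARS_RANGE_M = 2.25e11           # ~225 million km, a typical Earth-Mars distance
EARTH_DIAMETER_KM = 12_742

def footprint_km(wavelength_m, aperture_m, range_m=MARS_RANGE_M):
    return (wavelength_m / aperture_m) * range_m / 1000

ka_band = footprint_km(C / 32e9, 3.0)     # 32 GHz signal from a 3 m dish
optical = footprint_km(1550e-9, 0.22)     # 1550 nm laser from a 22 cm telescope

print(f"Ka-band footprint: {ka_band:,.0f} km")   # hundreds of thousands of km
print(f"Optical footprint: {optical:,.0f} km")
print(ka_band > EARTH_DIAMETER_KM)               # True: far wider than Earth
```

With these assumptions the Ka-band footprint is roughly 700,000 km, dozens of times Earth's diameter, while the optical footprint is under 2,000 km, consistent with the "small portion of the United States" comparison.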

This technology is capable of providing gigabit Ethernet access for high-rise enterprise networks and bandwidth-intensive applications (e.g., medical imaging, HDTV, hospitals transferring large digital imaging files, telecommunications), as well as intra-campus connections.

FSO technology provides a good solution for cellular carriers using 4G technology to meet their large bandwidth and multimedia requirements by providing backhaul connections between cell towers. It can also provide backup protection for fiber-based systems in case of accidental fiber damage. FSO technology is believed to be the ultimate solution for providing high-capacity last-mile connectivity to residential access.


Laser communications could also benefit a class of missions called CubeSats, which are about the size of a shoebox. These missions are becoming more popular and require miniaturized parts, including communications and power systems.

The drawback of an FSO link is that its performance is strongly dependent on atmospheric attenuation. Atmospheric conditions such as snow, fog and rain scatter and absorb the transmitted signal, attenuating it before it reaches the receiver. This attenuation degrades the range and capacity of the wireless channel, restricting the regions and times in which the FSO link can operate.
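Atmospheric loss is commonly modeled as an attenuation of α dB per kilometer of path, so in decibel terms the received power is simply transmit power minus α·L. The coefficients and link parameters below are rough illustrative values, chosen only to show why fog is the limiting case:

```python
# FSO link-budget sketch: received power after L km of atmosphere at
# alpha dB/km. Attenuation coefficients are rough illustrative values.

ALPHA_DB_PER_KM = {"clear air": 0.2, "haze": 4.0, "moderate fog": 40.0}

def received_dbm(tx_dbm, alpha_db_km, link_km):
    return tx_dbm - alpha_db_km * link_km

TX_DBM, SENSITIVITY_DBM, LINK_KM = 10.0, -30.0, 2.0   # assumed link parameters

for weather, alpha in ALPHA_DB_PER_KM.items():
    rx = received_dbm(TX_DBM, alpha, LINK_KM)
    status = "OK" if rx >= SENSITIVITY_DBM else "LINK DOWN"
    print(f"{weather:12s}: {rx:7.1f} dBm  {status}")
```

Under these assumed numbers the 2 km link closes comfortably in clear air and haze but fails in fog, which is why FSO deployments are often paired with an RF backup channel.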


Laser communications for global internet connectivity

Facebook aims to use a mix of solar-powered aircraft and low-orbit satellites to beam signals carrying the internet to hard-to-reach locations. ‘As part of our efforts, we’re working on ways to use drones and satellites to connect the billion people who don’t live in range of existing wireless networks,’ said Mark Zuckerberg.

The drones, flying at 65,000ft (19,800 metres), will be capable of staying in the air for months. ‘Our Connectivity Lab is developing a laser communications system that can beam data from the sky into communities. This will dramatically increase the speed of sending data over long distances.’

It is proposed that for sub-urban areas in limited geographical regions, solar-powered high altitude drones will be used to deliver reliable internet connections via FSO links. For places where deployment of drones is uneconomical or impractical (like in low population density areas), LEO and GEO satellites can be used to provide internet access to the ground using FSO.

Free-space laser communication was used to send data reliably between balloons riding stratospheric winds in Project Loon. The Loon team is now working with AP State FiberNet, a telecom company in Andhra Pradesh, an Indian state that is home to more than 53 million people. Fewer than 20% of residents currently have access to broadband connectivity, so the state government has committed to connecting 12 million households and thousands of government organizations and businesses by 2019, an initiative called AP Fiber Grid.

AP State FiberNet announced that it will roll out two thousand FSOC links created by the team at X. These FSOC links will form part of the high-bandwidth backbone of its network, giving it a cost-effective way to connect rural and remote areas across the state. The links will plug critical gaps to major access points, such as cell towers and WiFi hotspots, that serve thousands of people.


World record in free-space optical communications

Researchers at the German Aerospace Center (DLR) have set a new record in laser data transmission: 1.72 terabits per second across a distance of 10.45 kilometres, equivalent to transmitting 45 DVDs per second. This means that large parts of the still under-served rural areas in Western Europe could be supplied with broadband Internet services.
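The DVD comparison checks out with simple arithmetic, assuming a 4.7 GB single-layer DVD:

```python
# Sanity check of the "45 DVDs per second" comparison,
# assuming a 4.7 GB single-layer DVD.
dvd_capacity_gb = 4.7
terabits_per_second = 45 * dvd_capacity_gb * 8 / 1000  # GB -> Gb -> Tb
print(round(terabits_per_second, 2))  # -> 1.69, close to the quoted 1.72
```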

“We have set ourselves the goal of enabling Internet access at high data rates outside major cities, and want to demonstrate how this is possible using satellites,” explains Christoph Günther, Director of the DLR Institute of Communications and Navigation. Fibre-optic links and other terrestrial systems offer high transmission rates, but are available predominantly in densely populated regions.

Outside the metropolitan centres, a broadband supply via geostationary satellites is possible. As part of the DLR THRUST (Terabit-throughput optical satellite system technology) project, scientists intend to connect satellites to the terrestrial Internet via a laser link. The envisaged data throughput is more than one terabit per second. Communication with users is then carried out in the Ka-band, a standard radio frequency for satellite communications.

Within the framework of the experiments, a fibre-optic transmission system of the Fraunhofer Heinrich Hertz Institute was employed which operates at wavelengths of around 1550 nanometres and which is suitable for high data rates. This system was integrated into DLR’s newly developed free-space optic transmission system.


NASA’s Lunar Laser Communication Demonstration (LLCD) and Laser Communications Relay Demonstration (LCRD)

NASA’s Laser Communications Relay Demonstration (LCRD) mission has begun integration and testing at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. The LCRD mission continues the legacy of the Lunar Laser Communications Demonstration (LLCD), which flew aboard a moon-orbiting spacecraft in 2013.

LLCD demonstrated error-free communication from the Moon to the Earth under all conditions, including in broad daylight and even when the Moon was within 3° of the Sun as seen from Earth. It also proved that a space-based laser communications system was viable and could survive both launch and the space environment.

NASA’s Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from the Moon to Earth at a record-breaking download rate of 622Mb/s. The space laser terminal employed a 0.5W infrared laser at 1.55 microns (eye-safe as well as invisible to the eye) and a 10.7cm (4in) telescope to transmit toward the selected ground terminal. The downlink beam was received by an array of telescopes coupled to novel, highly sensitive superconducting nanowire detector arrays that convert the photons in the beam into bits of data.
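A rough photon budget, using the transmitter figures quoted above, shows why photon-counting receivers matter: even at the transmitter only a few billion photons are available per bit, and nearly all of them are lost over the Earth-Moon path. This back-of-envelope calculation is my own, not NASA's:

```python
# Rough photon budget at the transmitter for the LLCD figures quoted above
# (0.5 W at 1.55 microns, 622 Mb/s).
h, c = 6.626e-34, 3.0e8            # Planck constant (J*s), speed of light (m/s)
wavelength_m = 1.55e-6
power_w, rate_bps = 0.5, 622e6
photon_energy_j = h * c / wavelength_m
photons_per_bit = power_w / photon_energy_j / rate_bps
print(f"{photons_per_bit:.1e}")    # on the order of 6e9 photons per bit sent
```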

The Goddard team is now planning the follow-on Laser Communications Relay Demonstration (LCRD), which proposes to revolutionize the way we send and receive data, video and other information, using lasers to encode and transmit data at rates 10 to 100 times faster than today’s fastest radio-frequency systems while using significantly less mass and power. It will fly as a commercial satellite payload in 2019. It consists of two optical communications terminals in space and will enable real-time forwarding and storage of data at up to 1.25 Gbps (coded) / 2.880 Gbps (uncoded).

Mission operators at ground stations in California and Hawaii will test its invisible, near-infrared lasers, beaming data to and from the satellite as they refine the transmission process, study different encoding techniques and perfect tracking systems. While in operation, LCRD will also gather information about the longevity and durability of space-based optical systems and their hardware, and verify the accuracy of the lasers that carry messages to the ground. The team will also study the effects of clouds and other disruptions on communications, examining mitigation options including relay operations in orbit and backup receiving stations on the ground.

NASA is now planning laser communications from Mars. It is developing a new optical communications system that will reduce the time required to transmit high-resolution images from Mars from 90 minutes to a few minutes. The system, which NASA plans to demonstrate in 2016, will even allow the streaming of high-definition video from distances beyond the Moon.
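The quoted improvement is simply a ratio of link rates. As a hypothetical illustration (the image size and link rates below are assumed for the arithmetic, not NASA figures):

```python
def transmit_minutes(data_gigabits, rate_mbps):
    """Minutes needed to send data_gigabits of payload at rate_mbps."""
    return data_gigabits * 1000 / rate_mbps / 60

# Hypothetical figures chosen only to reproduce the quoted ratio:
# the same 1.35 Gb image over a slow RF link vs a 20x faster optical link.
print(transmit_minutes(1.35, 0.25))  # ~90 minutes at 0.25 Mb/s
print(transmit_minutes(1.35, 5.0))   # ~4.5 minutes at 5 Mb/s
```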

The Deep Space Optical Communications project is developing three key technologies essential for operational deep-space optical communications: a spacecraft disturbance isolation platform; a photon-counting receiver for the spacecraft optical transceiver, based on a radiation-tolerant indium gallium arsenide phosphide (InGaAsP) detector; and superconducting photon-counting detectors for the Earth-based optical receivers.

The team at Glenn is developing an idea called Integrated Radio and Optical Communications (iROC) to put a laser communications relay satellite in orbit around Mars that could receive data from distant spacecraft and relay their signal back to Earth. The system would use both RF and laser communications, promoting interoperability amongst all of NASA’s assets in space. By integrating both communications systems, iROC could provide services both for new spacecraft using laser communications systems and older spacecraft like Voyager 1 that use RF.

NASA’s upcoming Psyche mission, which will explore a unique metal asteroid orbiting the Sun between Mars and Jupiter, will also test new communication hardware that uses lasers instead of radio waves. Operating in parallel with a more conventional X-band microwave link, it will send engineering and science data from the Psyche spacecraft and is said to be the first laser transmitter to support deep-space, high-bandwidth optical communication.

“Future deep space exploration missions, both manned and unmanned, will require high-bandwidth communications links to ground stations on Earth to support advanced scientific instruments, high-definition video, and high-resolution imagery,” states LGS, adding that its transmitter will enable much faster communication and help improve the efficiency of future solar system exploration missions.


Europe’s Global Laser Communications System

The first dedicated laser terminal forming a high-speed optical network in space is now in orbit, after a Proton rocket launch from Kazakhstan on January 29. Part of the future “European Data Relay System” (EDRS), which the European Space Agency (ESA) describes as its “most ambitious telecommunications program to date”, the laser was developed by key partner Tesat Spacecom, an Airbus subsidiary.

The European Space Agency (ESA) and partner Airbus Defence and Space are aiming to build out the European Data Relay System (EDRS) into a global laser communications network by 2020 and hope that the system will become an international standard. Sentinel satellites 1A, 1B, 2A and 2B all have Laser Communication Terminals (LCT) payloads.

ESA and Airbus completed a major test of the EDRS system in late 2014, linking the Sentinel 1A satellite built by Thales Alenia Space with the Airbus-built Alphasat satellite via Laser Communication Terminals (LCTs). The test beamed images from Sentinel 1A, which circles the planet at 700 kilometers in Low Earth Orbit (LEO), to Alphasat 36,000 kilometers up in Geostationary Earth Orbit (GEO) and back to the ground. Tesat boasts that its point-to-point data transfer covers about 28,000 miles with a transfer rate of 5 gigabits per second.

“[Sentinel 1A] produces around 1.8 terabytes of raw data every single day, and when we process this data it is even three terabytes, more or less, on average, that we produce every day. In 2017 we will have seven Sentinels working and roughly seven times the amount of data to download. Four of these Sentinels will have a laser communication terminal and can use EDRS,” he said.
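Combining the figures quoted in this article (1.8 TB of raw data per day, a 5 Gbit/s link) gives a feel for the required relay time; protocol overheads and visibility windows are ignored in this sketch:

```python
# Time for EDRS to relay one day of Sentinel-1A raw data, using the
# figures quoted in the article (1.8 TB/day, 5 Gbit/s link).
daily_bits = 1.8e12 * 8          # 1.8 terabytes -> bits
rate_bps = 5e9                   # 5 gigabits per second
minutes = daily_bits / rate_bps / 60
print(round(minutes))            # -> 48 minutes of link time per day
```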

Stefan Klein, head of the aviation division at General Atomics Spezialtechnik, said his company is eager to use EDRS laser communications for its Unmanned Aerial Vehicles (UAVs). Today the company’s drones achieve up to 40 hours of flight without refueling and require real-time data through secure communications. The company plans to build LCT payloads to leverage EDRS by the end of the decade.

Laser Light Communications’s Optical Satellite Systems

The world’s first optical-wave satellite communications system has been planned by Laser Light Communications, which intends to deploy it in the first quarter of 2017. The company plans to create a 12-satellite constellation in MEO with an operating system capacity of 4.8 terabits/sec, satellite-to-satellite optical crosslinks, and satellite-to-ground optical up/down links of 200 gigabits/sec. The company envisions integrating the Optical Satellite System with existing terrestrial and undersea fibre-optic networks.

DISA has signed a Cooperative Research and Development Agreement with Laser Light Communications to evaluate the feasibility of the underlying technology and the future potential of the all-optical system for DOD missions.

Laser communication is attractive for defence because of its high bandwidth and freedom from issues such as spectrum allocation and mutual interference due to satellite spacing. The system is also more secure because of the enhanced resistance of optical communication links to interception and jamming.


Aircraft to Ground Communications

Free-space optical (FSO) communication links of 1 Gbps between aircraft and ground stations have been demonstrated by Christopher Schmidt and colleagues at the Institute of Communications and Navigation of the German Aerospace Center (DLR). Such ultrafast movement of the high data volumes produced by high-resolution sensor systems has particular applications in disaster management, monitoring of natural events, and traffic observation.

Their system’s optical transmitter, called the Free-space Experimental Laser Terminal II (FELT II), is installed in a Do228 aircraft and consists of a two-stage tracking system, an inertial measurement unit (for velocity and orientation), an optical bench inside the cabin, and a dome-shaped assembly below the cabin.

For data reception they designed a transportable optical ground station (TOGS), which consists of a pneumatically deployable Ritchey-Chrétien-Cassegrain telescope with a main mirror diameter of 60cm. TOGS is equipped with an optical tracking system, a dual-antenna global positioning system and an inclination sensor to determine its own location, heading, and calibration, and it has supports to enable leveling of the station.


Optical LAN

Short-range laser communication has also begun to be used for tasks such as connecting campus or office buildings when an obstruction such as a river or road makes laying fibre infeasible. Northern Storm, a US-based enterprise, has partnered with Mostcom in Eastern Europe to develop the NS10G system, which can provide 10-gigabit throughput up to 1 km at about one quarter of the installed cost of a 10-gigabit fibre line.

Military FSO or Laser Communications

Spectrum congestion is a growing problem that increasingly limits operational capabilities, driven by the growing deployment and bandwidth of wireless communications, the use of network-centric and unmanned systems, and the need for greater flexibility in radar and communications spectrum to improve performance and overcome sophisticated countermeasures.

Networks are said to be one of the U.S. military’s Achilles’ heels. Anthony Nigara, senior director for advanced systems at Exelis, which is working on a laser communications project for the Office of Naval Research, said in an interview that adversaries may want to block, degrade or eavesdrop on U.S. military communications. Cutting off communications through jamming or the destruction of infrastructure could be devastating to battlefield commanders. FSO communication cannot be easily intercepted, detected or jammed because the FSO laser beam is highly directional with very narrow beam divergence. Unlike an RF signal, an FSO signal cannot penetrate walls, which helps prevent eavesdropping.
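The interception argument can be made concrete with the small-angle beam-spread approximation (footprint ≈ range × divergence). The divergence figures below are typical values assumed for illustration, not from any specific system:

```python
import math

def beam_diameter_m(range_km, divergence_mrad):
    """Small-angle approximation: footprint diameter = range x full divergence."""
    return range_km * 1000 * divergence_mrad * 1e-3

# Assumed, typical values: a 1 mrad FSO beam vs a 5-degree RF beam, both at 50 km.
fso_m = beam_diameter_m(50, 1.0)                     # ~50 m footprint
rf_m = beam_diameter_m(50, math.radians(5) * 1000)   # ~4.4 km footprint
print(round(fso_m), round(rf_m))
```

An eavesdropper must sit inside a ~50 m spot to intercept the FSO link, versus a multi-kilometre RF footprint.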

However, this technology is limited to line-of-sight (LOS) communications and is affected by atmospheric attenuation that is impossible to control. This is a challenge that will impact mission capabilities.

Therefore, a viable future work would be to explore the possibility of implementing FSO relay capability as a solution for broadband communication over the horizon in tactical operations. Relays could be implemented as paired ground-to-air and air-to-ground links, with the airborne device acting as a repeater to avoid physical obstructions in ground-to-ground communications. “This solution addresses the challenge involving LOS, and transmission away from the ground reduces the effect of atmospheric scintillation on the optical link. The solution could be cascaded to further increase the eventual ground-to-ground range,” proposes Lai Jin Wei of the Naval Postgraduate School, Monterey, California. Caution has to be exercised when using an FSO communication system, as the laser may cause damage to the human eye.
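The over-the-horizon gain from an airborne relay can be estimated with the smooth-Earth horizon formula d ≈ √(2Rh). Using the 19,800 m drone altitude quoted earlier in this article (my own illustration, not from the cited thesis):

```python
import math

EARTH_RADIUS_M = 6.371e6

def horizon_km(altitude_m):
    """Distance to the optical horizon from a given altitude
    (smooth Earth, atmospheric refraction ignored)."""
    return math.sqrt(2 * EARTH_RADIUS_M * altitude_m) / 1000

# A relay at 19,800 m sees ground terminals out to roughly this
# radius on each side of the link:
print(round(horizon_km(19800)))  # ~500 km, so ~1000 km ground-to-ground
```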

Office of Naval Research tests tactical line-of-sight operational network (TALON)

The Defense Department recently awarded a three-year, $45 million grant to a tri-service project for a laser communications system. “This is basically fiber optic communications without the fiber,” said lead researcher Linda Thomas, whose Naval Research Laboratory team receives about a third of the grant money. Their TALON device transmits messages via laser over distances comparable to current Marine Corps tactical radios, but because it uses a narrow beam of light rather than a radio broadcast, it is much harder for an enemy to pick up the transmission, let alone interfere with it.

ONR successfully tested Exelis’ tactical line-of-sight operational network (TALON) between two mountains 50 kilometers apart at Naval Air Weapons Station China Lake in California. Nigara said the TALON program has worked on synchronizing transmitters and terminals on the move, whether from ship to ship, or ship to shore. They must be able to find each other and link automatically.

Exelis’ ES division has developed TALON (Tactical Line-of-Sight Optical Network), “Our TALON product line is a free-space optical communications system that uses lasers to transmit mission-critical data to warfighters from distances of more than 30 miles and 1,000 times faster than RF technology,” says Andy Dunn, vice president of business development, integrated electronic warfare systems, Exelis ES.

“You can only push so much data, video and voice communications through the traditional RF space. When you move up to optics or laser-based communications, you can push a lot more data through the pipeline and that’s what the TALON line does.”

Because heavy weather can still block laser beams, especially over long distances, Thomas emphasized you’d never want to get rid of your radios and rely exclusively on lasers.


Market growth

According to a new market research report by Markets and Markets, the FSO market is expected to grow from USD 116.7 Million in 2015 to USD 940.2 Million by 2020, at a CAGR of 51.8% during the forecast period. The factors driving the FSO market are last-mile connectivity, license-free operation, and its role as an alternative to overburdened RF technology for outdoor networking.

“The global free-space optical communications market was valued at $41.9 million in 2013 and $59.2 million in 2014. This market is expected to reach $501.1 million in 2019, at a compound annual growth rate (CAGR) of 53.3% from 2014 through 2019,” according to a report by Reportlinker. The market is divided into segments such as data transmission, security, last-mile access, storage area networks, disaster recovery, and healthcare facilities, among both civil and defence users.
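Both quoted growth figures can be verified directly from the compound annual growth rate formula:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# Markets and Markets: USD 116.7M (2015) -> 940.2M (2020)
print(round(cagr(116.7, 940.2, 5) * 100, 1))  # -> 51.8
# Reportlinker: USD 59.2M (2014) -> 501.1M (2019)
print(round(cagr(59.2, 501.1, 5) * 100, 1))   # -> 53.3
```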



Security agencies are employing data analytics and AI tools for Crime Prevention

Crime is down but it is changing, said the Rt Hon Theresa May MP, UK Home Secretary. While traditional high-volume crimes like burglary and street violence have more than halved, previously ‘hidden’ crimes like child sexual abuse, rape and domestic violence have all become more visible, if not more frequent, and there is growing evidence of the scale of online fraud and cyber crime.

As with so many of the challenges we face as a society, the prevention of crime is better than cure. Stopping crime before it happens, and preventing the harm caused to victims, must be preferable to picking up the pieces afterwards.

Data and data analytics tools have become critical to successfully preventing crime. Many police forces are already trialling forms of ‘predictive policing’, largely to forecast where there is a high risk of ‘traditional’ crimes like burglary and to plan officers’ patrol patterns accordingly, says the UK’s Modern Crime Prevention Strategy. Data analytics can also be used to identify vulnerable people, and to ensure potential victims are identified quickly and consistently.

China, a surveillance state where authorities have unchecked access to citizens’ histories, is developing artificial intelligence based tools that they say will help them identify and apprehend suspects before criminal acts are committed.

China planning to use AI technology to predict and prevent crime

China’s crime-prediction technology relies on several AI techniques, including facial recognition and gait analysis, to identify people from surveillance footage, according to The Financial Times. In addition, “crowd analysis” can be used to detect “suspicious” patterns of behaviour in crowds, for example to single out thieves from normal passengers at train stations.

Facial recognition company Cloud Walk has been trialling a system that uses data on individuals’ movements and behaviour — for instance visits to shops where weapons are sold — to assess their chances of committing a crime. Its software warns police when a citizen’s crime risk becomes dangerously high, allowing the police to intervene.
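The article gives no details of Cloud Walk's actual model, so the following is a toy illustration only: hypothetical event weights accumulate into a risk score, and an alert fires above a threshold.

```python
# Toy illustration only; the event names, weights and threshold are invented.
WEIGHTS = {"weapon_shop_visit": 0.3, "restricted_area_loitering": 0.2}
ALERT_THRESHOLD = 0.7

def risk_score(events):
    """Sum per-event weights, capped at 1.0."""
    return min(1.0, sum(WEIGHTS.get(e, 0.0) for e in events))

events = ["weapon_shop_visit", "weapon_shop_visit", "restricted_area_loitering"]
score = risk_score(events)
print(round(score, 1), score >= ALERT_THRESHOLD)  # -> 0.8 True: alert fires
```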

“If we use our smart systems and smart facilities well, we can know beforehand . . . who might be a terrorist, who might do something bad,” said Li Meng, vice-minister of science and technology.

Another example of AI use in Chinese crime prediction is “personal re-identification” — matching someone’s identity even if spotted in different places wearing different clothes, a relatively recent technological achievement.

“We can use re-ID to find people who look suspicious by walking back and forth in the same area, or who are wearing masks,” said Leng Biao, professor of bodily recognition at the Beijing University of Aeronautics and Astronautics. “With re-ID, it’s also possible to reassemble someone’s trail across a large area.”


Durham Constabulary Deploy AI for Crime Prevention

Durham Constabulary is preparing to trial an artificially intelligent system to help officers decide whether or not to keep a suspect in custody.

The force will use the Harm Assessment Risk Tool (Hart) to help officers decide if a suspect can be released from detention, based on the probability of offending once released. Hart has been trained on five years of the force’s data (2008 to 2012) and classifies a suspect as at low, medium, or high risk of offending. The system was tested from 2013: forecasts that a suspect was low risk were accurate 98% of the time, while forecasts that suspects were high risk were accurate 88% of the time. The Hart system was developed in conjunction with the renowned Centre for Evidence-Based Policing at the University of Cambridge.
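The 98% and 88% figures are per-forecast-class accuracies. A minimal sketch of how such figures are computed (the data below is synthetic, not Durham's):

```python
def per_class_accuracy(pairs):
    """pairs: list of (forecast, outcome) labels.
    Returns {forecast_label: fraction of those forecasts that matched}."""
    totals, hits = {}, {}
    for forecast, outcome in pairs:
        totals[forecast] = totals.get(forecast, 0) + 1
        hits[forecast] = hits.get(forecast, 0) + (forecast == outcome)
    return {k: hits[k] / totals[k] for k in totals}

# Synthetic data: 49 of 50 "low" forecasts correct, 44 of 50 "high" correct.
toy = [("low", "low")] * 49 + [("low", "high")] * 1 \
    + [("high", "high")] * 44 + [("high", "low")] * 6
print(per_class_accuracy(toy))  # -> {'low': 0.98, 'high': 0.88}
```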

The use of data analytics and AI to help inform police decision making is in line with the Home office’s aspirations outlined in last year’s Modern Crime Prevention Strategy. The Strategy acknowledges that better use of data and technology is one of the key pillars of effective modern crime prevention in the digital age, and outlines the Government’s role in “stripping away barriers to the effective use of data and data analytics, and helping others exploit new and existing technology to prevent crime.”

According to Modern Crime Prevention Strategy data analytics can:

  • Help police forces deploy officers to prevent crime in known hotspots (often called ‘predictive policing’)
  • Use information shared by local agencies on, for example, arrests, convictions, hospital admissions, and calls on children’s services to identify individuals who are vulnerable to abuse or exploitation
  • Spot suspicious patterns of activity that can provide new leads for investigators, such as large payments to multiple bank accounts registered at the same address
  • Show which products, services, systems or people are vulnerable to particular types of crime – for example that young women are disproportionately likely to have their smartphone stolen. This means system flaws can be addressed, or crime prevention advice (e.g. on mobile phone security measures) can be targeted more effectively.
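The first bullet, the "hotspot" idea behind predictive policing, can be sketched in a few lines: bucket past incident coordinates into grid cells and rank cells by count. The coordinates below are made up for illustration.

```python
from collections import Counter

def hotspot_grid(incidents, cell_size=0.01):
    """Bucket (lat, lon) incident coordinates into grid cells of
    cell_size degrees and return cells ranked by incident count."""
    counts = Counter(
        (round(lat // cell_size), round(lon // cell_size))
        for lat, lon in incidents
    )
    return counts.most_common()

# Hypothetical burglary reports: three cluster in one neighbourhood.
reports = [(51.501, -0.141), (51.502, -0.142), (51.503, -0.149), (54.97, -1.61)]
top_cell, n = hotspot_grid(reports)[0]
print(n)  # -> 3 incidents in the densest cell
```

Real predictive-policing systems add time decay, covariates and much more, but the heat-map output rests on this kind of spatial aggregation.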


SA Company to Use Artificial Intelligence to Predict Crime

Solution House Software has announced the launch of a new artificial intelligence (AI) module for Incident Desk, designed to predict and map potential crimes. The Incident Desk Predictive Analysis module uses machine learning technology developed by Solution House, together with aggregated data from multiple information sources, to determine the likelihood of different types of criminal activity in the Incident Desk management area.

“With the module installed, Incident Desk generates 7- and 30-day forecasts as heat maps based on crime types and incident probabilities, which managers can use to optimise their finite security resources,” says Janse van Rensburg.

“Crime is notoriously difficult to predict, but given that Incident Desk can access so many different types of data – including weather patterns, forecasts and historical data – the results are based on fairly accurate and proven trending algorithms,” he says.

One of the biggest problems currently plaguing public safety and security is the ‘islands of data’ that are not shared or centralised, which makes the data difficult to mine and analyse.




USAF’s ISR vision of Full-Spectrum Awareness for Distributed Targeting, Space Control and Cyber Warfare

Intelligence, surveillance, and reconnaissance (ISR) capabilities enable the U.S. Air Force (USAF) to be aware of developments related to adversaries worldwide and to conduct a wide variety of critical missions, both in peacetime and in conflict. It involves a networked system of systems operating in space, cyberspace, air, land, and maritime domains. These systems include planning and direction, collection, processing and exploitation, analysis and production, and dissemination (PCPAD) capabilities linked together by communications architecture.

The US Air Force has released “AF ISR 2023: Delivering Decision Advantage,” which lays out a strategic vision of “Full-Spectrum Awareness” and “World-Class Expertise,” combining into the ultimate vision of “Delivering Decision Advantage.” AF ISR Vision 2023 calls for an “…ISR enterprise that seamlessly ingests data from an even wider expanse of sources, swiftly conducts multi- and all-source analysis, and rapidly delivers decision advantage to war fighters and national decision makers.”

ISR is one of the Air Force’s five enduring core missions along with air and space superiority, rapid global mobility, global strike, and command and control. AF ISR is integral to Global Vigilance for the nation and is foundational to Global Reach and Global Power.

“We will not be able to maintain the size and composition of the current ISR force, yet we must prepare for operations which will range from humanitarian assistance to major contingency operations in highly contested environments. This strategic vision enables us to achieve national goals while tailoring our ISR force to best meet future challenges.”

Intelligence gathering in the future will also involve monitoring and mining social media in real time via automated artificial intelligence, another way the Air Force and other military branches can obtain information, said the head of the service. The Air Force already monitors social media at some level, for instance at the service’s only non-offensive air operations center, known as “America’s AOC,” at Tyndall Air Force Base, Florida.

But social media is just one aspect, said Col. Robert Bloodworth, chief of combat operations. What the Air Force desperately needs to conduct future missions is the technology to “refine the analysis” through AI so that it reaches the operator, pilot or airman in a decisive and streamlined way. “Before you get to artificial intelligence, you have to get to automation, and what does that mean? It means we’re really developing algorithms, so we then have to build trust in the algorithms,” said Lt. Gen. VeraLinn “Dash” Jamieson, the service’s deputy chief of staff for intelligence, surveillance and reconnaissance on the Air Staff, during an interview.

AF ISR 2023

The challenge for AF ISR is to maintain the impressive tactical competencies developed and sustained over the past 12 years, while rebuilding the capability and capacity to provide the air component commander and subordinate forces with the all-source intelligence required to conduct full-spectrum cross-domain operations in volatile, uncertain, complex, and ambiguous environments around the globe.

Our ability to provide dominant ISR depends on well-trained, well-led professional Airmen who have strong analytical skills along with a high state of readiness, agility, and responsiveness. These characteristics, along with continued innovation and integration of technological advancements, will combine to make our Airmen experts in their trade.

Additionally, we will not rely solely on our own capabilities; it is imperative that we fully leverage the vast array of national capabilities along with those of the Total Force, our sister Services, the Intelligence Community (IC), and our international partners.


World-Class Expertise

Providing world-class expertise as an integral part of air component and joint operations requires ISR Airmen who are masters of threat characterization, analysis, collection, targeting, and operations-intelligence integration. Empowered to innovate, ISR Airmen will lead the way in the development of tactics, techniques, and procedures (TTP) that will compress OODA loops, produce actionable intelligence, and provide the intelligence needed to complete the kinetic or nonkinetic targeting equation.


Delivering Decision Advantage

The fundamental job of AF ISR Airmen is to analyze, inform, and provide commanders at every level with the knowledge they need to prevent surprise, make decisions, command forces, and employ weapons. Maintaining decision advantage empowers leaders to protect friendly forces and hold targets at risk across the depth and breadth of the battlespace—on the ground, at sea, in the air, in space, and in cyberspace. It also enables commanders to apply deliberate, discriminate, and deadly kinetic and non-kinetic combat power. To deliver decision advantage, we will seamlessly present, integrate, command and control (C2), and operate ISR forces to provide Airmen, joint force commanders, and national decision makers with utmost confidence in the choices they make.


Distributed Targeting

Over the past two decades, our deliberate targeting competence has stagnated. To ensure AF readiness across the full range of military operations, we will refocus on satisfying the air component commander’s air, space, and cyberspace deliberate targeting requirements by: adopting a distributed targeting concept of operations and TTPs; integrating and automating targeting capabilities across the enterprise; integrating kinetic and non-kinetic targeting TTPs; and establishing more comprehensive targeting training. Targeting is a critical enabler of Global Vigilance, Global Reach and Global Power; we will ensure that AF ISR is ready to provide this highly perishable skill when required.


Multi- and All-source intelligence

In addition to the tactical intelligence mission, the AF ISR force of 2023 must also conduct strategic intelligence collection in peacetime—Phase 0—and provide world-class, multi- and all-source intelligence in highly contested, communications-degraded environments across all domains.

Since 9/11, there has been an explosion in space and cyberspace capabilities, with corresponding prominence on the national stage. Additionally, the conflicts in Iraq and Afghanistan resulted in renewed, sustained emphasis on human-derived intelligence (HUMINT and open sources) by all of the Services. To execute the AF ISR mission, we must be better collectors, enablers, and integrators of information derived from space, cyberspace, human, and open sources.


Cyber Warfare

Cyberspace, a relatively new and rapidly evolving operational domain for the Department of Defense (DoD) and the military services, is defined as “a global domain within the information environment consisting of the interdependent network of information technology infrastructures, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.”

ISR sensors can be augmented by the ability of cyber information to provide geolocation information and movement information on adversarial and friendly systems. This capability can allow sparse assets to be deployed elsewhere or to obtain information more effectively, allowing rapid, minimal observations.

There is a multidimensional relationship between the ISR and cyber missions and capabilities. There are three missions from a cyberspace perspective: support, defense, and force application. ISR is a crosscutting capability that can be applied holistically with other core functions to enable cyberspace missions. Conversely, Cyberspace Superiority supports and is supported by all of the other Air Force core functions. In the case of the Global Integrated ISR (GIISR) core function, these relationships could be characterized as “Cyber for ISR” and “ISR from Cyber.”

The “Cyber for ISR” relationship is illustrated by the mission assurance requirement for the cyber domain in support of an ISR mission. Cyberspace mission assurance ensures the availability and defense of a secured network to support a military operation.

Conversely, the “ISR from Cyber” relationship is illustrated by considering how ISR can be executed during cyberspace operations, particularly during cyberspace force application (exploitation). This can be characterized as situational awareness during and in support of cyberspace operations.

By 2023, AF ISR and cyber forces will be an integral partner to the joint team that operates in cyberspace to meet air component commander, joint force commander, and national needs. We will also forge service-specific cyber capabilities that provide specialized applications across the domains.

Computer Network Exploitation (CNE) will continue to be a crucial enabler for Offensive Cyber Operations (OCO), Defensive Cyber Operations (DCO), and Department of Defense Information Network (DoDIN) operations, but ISR will also be a prominent and critical product of those operations, meeting Air Force, joint, and national decision maker requirements.


Space Control and Protection

AF ISR relies heavily on space-based assets for collection and global airborne ISR operations; ISR collected from space greatly enhances our ability to characterize the battlespace through all domains and is critical to success across the full spectrum of operations.

In the early stages of conflict in a contested, degraded environment, ISR from space may represent our most viable collection capabilities. But the space domain is increasingly congested and contested. Therefore, to maintain this capability, we need to identify non-kinetic and kinetic threats to space assets and architecture; identify adversary intent and capabilities to use space; and conduct target analysis that enables offensive and defensive counterspace operations.

Protecting space assets is critical to AF ISR operations and the nation’s full spectrum joint operations. Purposefully developing ISR Airmen who understand ISR for and from space is the initial step we will take to ensure this critical capability. To solidify the value of space ISR, we will also broaden and improve our ability to integrate space-based ISR capabilities across the AF ISR Enterprise.


USAF ISR enabled by data science

The characteristics of the intelligence environment since 2000 suggest fundamental change is occurring: an ever-larger volume of data; widening variety (classic intelligence sources, new sensors and types of data, and open sources); increasing velocity (more data and information in motion every day); and more complex veracity (data duplication, identity, authenticity, and the resolution of each).

The ability of Air Force ISR analysts, or “Analyst Airmen,” to deliver in this new era of intelligence analysis will be predicated in great part on a strategy to shape AF ISR Big Data into a manageable form to meet tactical, operational, and strategic mission needs.

The IC Cloud is a main feature of the Office of the Director of National Intelligence (ODNI) “IC IT Enterprise” (IC-ITE) program, which represents a mass migration of IC data to a common ecosystem. As the ODNI describes it, “IC-ITE moves the IC from an agency-centric IT architecture to a common platform where the Community easily and securely shares information, technology, and resources. By managing and providing the Community’s IT infrastructure and services as a single enterprise, the IC will not only be more efficient, but will also establish a powerful platform to deliver more innovative and secure technology to desktops at all levels across the intelligence enterprise.”

As the AF ISR community integrates into IC-ITE, the Joint Information Environment (JIE), and the Defense Intelligence Information Environment (DI2E), while simultaneously maintaining its own large enterprises that collect, exploit, and disseminate data, the Data Science discipline and the need for embedded talent will become more important. Technological advances in live data streaming and correlation allow for real-time decision making on a scale never before experienced in AF ISR. We now have the ability to ingest disparate data sets, put relevant conditions and rules in place, and derive insights and prescriptive intelligence in an unprecedented fashion.

This transformation presents both challenges and opportunities for AF ISR in adopting a Data Science strategy and capitalizing on the wealth of information available from the IC Cloud.



References and Resources also include:

Capability Planning and Analysis to Optimize Air Force Intelligence, Surveillance, and Reconnaissance Investment, National Academy of Sciences.

DARPA’s 1000X efficient graph analytics processor enables real-time identification of cyber threats, and vastly improved situational awareness

Today, large amounts of data are collected from numerous sources, such as social media, sensor feeds (e.g., cameras), and scientific data. There are over 1 billion websites on the world wide web today, and annual global IP traffic is projected to reach 3.3 ZB per year, or 278 exabytes (EB) per month, by 2021. In 2016, the annual run rate for global IP traffic was 1.2 ZB per year, or 96 EB per month. The notion of Big Data emerges from the observation that 90 percent of the data available today was created in just the past two years. From devices at the edge to large data centers crunching everything from corporate clouds to future energy technology simulations, the world is awash in data being stored, indexed, and accessed, says Intel. The goal of DARPA’s Hierarchical Identify Verify Exploit (HIVE) program is to explore new and more efficient methods of processing large amounts of complex data.

In the big data era, information is often linked to form large-scale graphs. Graph analytics has emerged as a way to understand the relationships between these heterogeneous types of data, allowing analysts to draw conclusions from the patterns in the data and to answer previously unthinkable questions.  By understanding the complex relationships between different data feeds, a more complete picture of the problem can be understood and some amount of causality may be inferred.
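As a toy illustration of drawing indirect relationships out of linked data, the short Python sketch below (all entity names invented) finds everything connected to a starting entity within a given number of hops, the kind of multi-layer relationship query graph analytics is built around:

```python
from collections import deque

# Toy heterogeneous graph: entities of different types (people, places,
# events) stored as an adjacency list. Names are illustrative only.
graph = {
    "person:A": ["place:X", "person:B"],
    "person:B": ["event:E1"],
    "place:X": ["event:E1"],
    "event:E1": ["person:C"],
    "person:C": [],
}

def related_within(graph, start, max_hops):
    """Return all entities reachable from `start` in at most `max_hops`
    edges, i.e. direct and indirect relationships, with hop distances."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    seen.pop(start)
    return seen  # entity -> hop distance

print(related_within(graph, "person:A", 2))
```

Here `event:E1` surfaces only through two-hop (indirect) links, which is exactly the kind of connection a “one to one” analytic would never examine.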

There is also an increasing need to make decisions in real time, which requires understanding how the inherent relationships in the graph evolve over time. This emerging data analytics technology is also used for applications like cyber defense and critical infrastructure protection that require analyzing huge data sets in real time.

The DoD has to make sense of, and make decisions based on, the large amounts of data it collects, such as communications; intelligence, surveillance, and reconnaissance feeds from drones; and automated cybersecurity systems. Real-time predictive large-scale data analytics can provide a decisive advantage to commanders across a range of military operations in the homeland and abroad, contribute to information supremacy, enhance autonomy technologies, and deliver vastly improved situational awareness to warfighters and intelligence analysts, according to ARL.

Currently, much of graph analytics is performed in large data centers on large cached or static data sets, and the amount of processing required is a function not only of the size of the graph but also of the type of data being processed. DARPA’s Hierarchical Identify Verify Exploit (HIVE) program seeks to develop a generic and scalable graph processor that specializes in processing sparse graph primitives and achieves a 1,000-times improvement in processing efficiency over standard processors.

In combination with emerging machine learning and other artificial intelligence techniques that can categorize raw data elements, and by updating the elements in the graph as new data becomes available, a powerful graph analytics processor could discern otherwise hidden causal relationships and stories among the data elements in the graph representations.

Most graph processing problems require large server-class computers with large size, weight, and power (SWaP) requirements. But the scale required limits what can be done in a tactical environment. HIVE is expected to overcome that challenge and enable processing of information at the tactical edge, said Vern Boyle of Northrop Grumman. “The hard problem is getting the processor down into a form factor and a SWaP footprint that is compatible with a tactical environment and then using it in an environment where you are really working towards this future of cognitive autonomy and intelligent systems,” he added.

“You are seeing companies shift from general-purpose computing devices to purpose built; that is what the HIVE chip is, it is a purpose-built chip just for graph processing,” he said. “This is a key enabler to the future that you hear the customers talking about. It is not just about being able to process graphs, this is one of the core technologies required for cognitive systems. That technology more broadly speaking will impact just about every aspect of war fighting in the future.”

The concept phase of the HIVE program extends through next year, with initial prototyping beginning in fiscal 2019. Chip fabrication could begin as early as fiscal 2020, the agency said.

Current Hardware inefficient for graph analytics

Unlike traditional analytics that are tools to study “one to one” or “one to many” relationships, graph analytics can use algorithms to construct and process the world’s data organized in a “many to many” relationship – moving from immediate connections to multiple layers of indirect relationships.  Examples of these relationships among data elements and categories include person-to-person interactions as well as seemingly disparate links between, say, geography and changes in doctor visit trends or social media and regional strife. This applies to a wide array of applications such as transportation routing, genomics processing, financial transaction optimization, and consumer purchasing analysis.

Processing connected big data has been a major challenge. With the emergence of data and network science, graph computing is becoming one of the most important techniques for processing, analyzing, and visualizing connected data. Georgia Tech researchers led by Lifeng Na noted in a paper delivered at the 2015 supercomputing conference: “The challenges in graph computing come from multiple key issues like frameworks, data representations, computation types, and data sources.”

Previous research has been done on streaming graph analytics, but has been hampered by the amount of processing required to pinpoint which part of the graph needs to be updated based on the new data. This update has to be done at the speed of the incoming data and cannot be done as an offline process because the nature of the graph is either developing or changing in real time.

The graph can be very sparse, as the number of relationships between entities is not known or clear. The sparseness of the data and the requirement to process it in real time make graph analytics on standard processors extremely inefficient, DARPA officials say. Graph analytics shifts the processing workload to locating and moving the data; only about 4 percent of processing time and power goes to the computation itself. Such inefficiency either limits the size of the graph to what the chip can hold or requires an extremely large cluster of computers.


Graph Analytics Processor

To take on that technology shortfall, MTO last summer unveiled its Hierarchical Identify Verify Exploit (HIVE) program, which seeks to develop a generic and scalable graph processor that specializes in processing sparse graph primitives and achieves a 1,000-times improvement in processing efficiency over standard processors.

If HIVE is successful, it could deliver a graph analytics processor that achieves a thousand fold improvement in processing efficiency over today’s best processors, enabling the real-time identification of strategically important relationships as they unfold in the field rather than relying on after-the-fact analyses in data centers.

“This should empower data scientists to make associations previously thought impractical due to the amount of processing required,” said Tran. These could include the ability to spot, for example, early signs of an Ebola outbreak, the first digital missives of a cyberattack, or even the plans to carry out such an attack before it happens.


HIVE is a non-von Neumann architecture

The classical von Neumann architecture, in which the processing of information and the storage of information are kept separate, now faces a performance bottleneck. Data travels back and forth between the processor and memory, but the computer cannot process and store at the same time. By the nature of the architecture it is a linear process, which ultimately leads to the von Neumann “bottleneck.”

Trung Tran, a DARPA program manager, said that while CPUs and GPUs have gone parallel, their cores are still von Neumann. HIVE is non-von Neumann in that it simultaneously performs different processes on different areas of memory. This approach allows one big map that can be accessed by multiple processors at the same time, Tran said.
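As a software-level analogy for that model (and only an analogy: HIVE is a hardware architecture, not a Python library), the sketch below keeps one shared graph in memory while two workers, with threads standing in for HIVE’s parallel processors, run different computations over different regions of it at the same time:

```python
from concurrent.futures import ThreadPoolExecutor

# One big shared "map": the graph is held once in memory, and two
# workers each run a different computation over a different region of
# it concurrently, instead of streaming everything through one core.
graph = {n: [(n + 1) % 8, (n + 3) % 8] for n in range(8)}  # toy graph

def degree_sum(nodes):       # one kind of work on one region
    return sum(len(graph[n]) for n in nodes)

def max_neighbor(nodes):     # a different kind of work on another region
    return max(m for n in nodes for m in graph[n])

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(degree_sum, range(0, 4))    # region 1, task 1
    f2 = pool.submit(max_neighbor, range(4, 8))  # region 2, task 2
    print(f1.result(), f2.result())  # prints "8 7"
```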


Hierarchical Identify Verify Exploit (HIVE) program

The program has now signed on five performers to carry out HIVE’s mandate: to develop a powerful new data-handling and computing platform specialized for analyzing and interpreting huge amounts of data with unprecedented deftness.

The program includes the development of chip prototypes, software tools to support programming of the new hardware, and a system architecture to support efficient multi-node scaling. Specifically, the chip development will focus on improving the efficiency of random-access memory transactions to limit data movement, efficient parallelism to improve scalability, and new accelerators designed specifically for graph computation.

The HIVE project will be performed in three phases over the next four and a half years, with three technical areas:

  • Graph analytics processor: The role of TA1 is to research and design a new chip architecture from scratch. Performers are intended to tackle the twin challenges of the memory wall and of true parallelization of multi-node systems. The memory wall has vexed programmers for the last 20 years and has forced them to come up with new and creative ways to deal with memory access and memory bandwidth bottlenecks, bottlenecks caused by serial memory access patterns relying on uniform memory placement. New memory architectures are anticipated to allow for non-uniform memory access (NUMA).
  • True parallelization has also been hampered by the difficulty of allowing coherent memory accesses between nodes and of supporting multi-master, multi-drop bus architectures. This leads to machines running in parallel but running independently. True parallelization would allow those machines to work more closely in concert. In essence, TA1 has to move from today’s single instruction, multiple data (SIMD) world to one that allows for multiple instruction, multiple data (MIMD) execution.
  • Graph analytics toolkits


DARPA has outlined the HIVE architectural goals as follows:

  • Create an accelerator architecture and processor pipeline that supports the processing of identified graph primitives in a native sparse-matrix format.
  • Develop a chip architecture that supports the rapid and efficient movement of data from memory or I/Os to the accelerators based on an identified data-flow model. Emphasis should be on redefining cache-based architectures so that they address both sparse and dense data sets.
  • Develop an external memory controller designed to ensure efficient use of the identified data-mapping tools. The controller should be able to efficiently handle random and sequential memory accesses on memory transfers as small as 8 to 32 bytes.
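To make the “native sparse matrix format” goal concrete, the illustrative Python sketch below stores a toy adjacency matrix in Compressed Sparse Row (CSR) form, one common sparse layout (which format HIVE performers actually chose is not specified here), and runs the sparse matrix-vector product that underlies many graph primitives:

```python
# Compressed Sparse Row (CSR): store only the nonzeros of an adjacency
# matrix, so a graph with V nodes and E edges costs O(V + E) memory
# instead of O(V^2) for a dense matrix.
# Adjacency matrix of a 4-node toy graph (rows = source, cols = dest):
#   0 1 0 1
#   0 0 1 0
#   1 0 0 0
#   0 0 1 0
indptr  = [0, 2, 3, 4, 5]  # row i's entries sit at indices[indptr[i]:indptr[i+1]]
indices = [1, 3, 2, 0, 2]  # column index of each nonzero
data    = [1, 1, 1, 1, 1]  # value of each nonzero (1 for unweighted edges)

def spmv(indptr, indices, data, x):
    """Sparse matrix-vector product y = A @ x, touching only nonzeros."""
    y = [0] * (len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Multiplying by a ones-vector yields each node's out-degree:
print(spmv(indptr, indices, data, [1, 1, 1, 1]))  # [2, 1, 1, 1]
```

Note the access pattern in the inner loop, `x[indices[k]]`: these small, random reads are exactly the 8-to-32-byte memory transfers the goals above ask the memory controller to handle efficiently.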


According to Dhiraj Mallick, vice president of the Data Center Group and general manager of the Innovation Pathfinding and Architecture Group at Intel, by the middle of 2021, they and their HIVE contract partners will deliver “a 16-node demonstration platform showcasing 1,000x performance-per-watt improvement over today’s best-in-class hardware and software for graph analytics workloads.”


There are two initial challenges:

The first is a static graph problem focused on subgraph isomorphism. This provides the ability to search a large graph in order to identify a particular subsection of that graph.

The second is a dynamic graph problem focused on trying to find optimal clusters of data within the graph. Both will include a small-graph problem in the billions of nodes and a large-graph problem in the trillions of nodes.
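For intuition about the static-graph challenge, the toy Python sketch below does brute-force subgraph isomorphism: it searches a small directed graph for a copy of a three-node pattern. This exhaustive approach is precisely what does not scale to billions of nodes, which is why specialized hardware is being pursued:

```python
from itertools import permutations

# Brute-force subgraph isomorphism on toy graphs (sets of directed
# edges). Illustrative only: the search space grows factorially.
def find_subgraph(pattern_nodes, pattern_edges, big_nodes, big_edges):
    """Return a mapping pattern->big under which every pattern edge
    exists in the big graph, or None if no such mapping exists."""
    for perm in permutations(big_nodes, len(pattern_nodes)):
        mapping = dict(zip(pattern_nodes, perm))
        if all((mapping[u], mapping[v]) in big_edges
               for u, v in pattern_edges):
            return mapping
    return None

big_edges = {(0, 1), (1, 2), (2, 0), (2, 3)}
# Pattern: a directed 3-cycle a -> b -> c -> a.
print(find_subgraph(["a", "b", "c"],
                    [("a", "b"), ("b", "c"), ("c", "a")],
                    range(4), big_edges))  # {'a': 0, 'b': 1, 'c': 2}
```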


HIVE Partners

The quintet of performers includes a mix of large commercial electronics firms, a national laboratory, a university, and a veteran defense-industry company: Intel Corporation (Santa Clara, California), Qualcomm Intelligent Solutions (San Diego, California), Pacific Northwest National Laboratory (Richland, Washington), Georgia Tech (Atlanta, Georgia), and Northrop Grumman (Falls Church, Virginia).

HIVE is centered on three areas; the first has two teams, one led by Intel, the other by Qualcomm, who are developing the specialised graph processor chip. Two other teams, one led by Pacific Northwest National Laboratory and the other by Georgia Tech, are developing the software and analytic tools piece.

Qualcomm Intelligent Solutions, Inc. (QISI) is one of just two silicon technology providers selected by DARPA to perform breakthrough architectural work on a graph analytics processor as a part of the HIVE (Hierarchical Identify Verify Exploit) project.

QISI has kicked off an initiative called Project Honeycomb to support this important effort. QISI’s goal with Project Honeycomb is to develop a domain-specific processor design and scalable multi-node architecture for the HIVE project. The work is intended to produce a hardware accelerator for graph computation primitives, a memory controller that optimizes data movement based on sparse mapping, and network architecture to avoid congestion in data movement. QISI plans to deliver the Project Honeycomb architecture specification and simulator to DARPA and other HIVE project performers in 12 months. The next two phases entail the design and fabrication of the graph analytics processor and delivery of a functioning 16-node system to DARPA for evaluation.

We are excited about the innovation potential of this research project that we believe will help define future architectures for advanced deep learning. In addition, we expect Project Honeycomb and the HIVE project as a whole will help accelerate the development of commercial products using a new innovative architectural approach for many areas related to data analytics and artificial intelligence.

The third area is led by Northrop Grumman, which will integrate the hardware and software and test it against a variety of relevant use cases and other technologies, Vern Boyle, vice-president for cyber and advanced processing at Northrop Grumman, told Jane’s. The company has been setting up the test environment and working with both the hardware and software providers to look at the designs and algorithms to understand how those will be tested, measured, and evaluated, Boyle said.

In order to evaluate HIVE processors, the programme intends to compare the performance of prototypes to current state-of-the-art, multi-GPU systems, Wade Shen, DARPA programme manager for HIVE, told Jane’s. “We are working with [US Department of Defense (DoD) and US government] partners to collect and benchmark how these systems will compare on real graph problems,” Shen said.





US, Russia and China in race to develop autonomous and intelligent guided missiles to strike targets in anti-access, area-denial environment

The new buzzword in militaries across the world today is ‘artificial intelligence’ (AI): the ability of combat platforms to self-control, self-regulate and self-actuate, using inherent computing and decision-making capabilities. AI is also enabling autonomous military missiles that can identify and strike hostile targets without a human decision. The U.S., Russia and China, the world’s leading military powers, are all applying artificial intelligence to missiles, drones and other deadly devices.

Lockheed Martin has successfully carried out a controlled flight test of the US Navy’s long-range anti-ship missile (LRASM) surface-launch variant. With a range of at least 200 nautical miles, LRASM is designed to use next-generation guidance technology to help track and eliminate targets such as enemy ships, shallow submarines, drones, aircraft and land-based targets.  According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

In August this year, a Chinese daily reported that China’s aerospace industry was developing tactical missiles with inbuilt intelligence that would help seek out targets in combat. The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Post Graduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers. “They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

China has overtaken the United States to become the world leader in deep learning research, a branch of artificial intelligence (AI) inspired by the human brain, according to White House reports that aim to help prepare the US for the growing role of artificial intelligence in society.

Now Russia has claimed to be developing new missiles and drones that will use artificial intelligence to think for themselves, according to weapons manufacturers and defense officials, in a bid to match military might against the United States and China.

LRASM highly autonomous missile

The LRASM is a long-range precision-guided, anti-ship standoff missile designed to meet the needs of U.S. Navy and Air Force warfighters in anti-access/area-denial threat environments. The LRASM boasts a range of well over 200 nautical miles, a payload of 1,000 pounds, and the ability to strike at nearly the speed of sound.

What really makes LRASM stand out is that all of this is completely autonomous. Human beings tell the missile where the enemy fleet is, which ship to strike, and provide it with a continuous stream of data; the missile takes care of everything else. Using artificial intelligence, the missile takes data and makes decisions all on its own. Using AI and datalinks, multiple LRASMs can launch a coordinated attack on an enemy fleet, writes Kyle Mizokami.

LRASM is first guided by the ship that launched it, then by satellite. The missile is jam-resistant and can carry on even if it loses contact with the Global Positioning System. As part of the targeting system, the missile can be set to fly to a series of waypoints, flying around static threats, land features, and commercial shipping. LRASM can detect threats between waypoints and navigate around them. If it decides it would be entering the engagement range of an enemy ship not on the target list, LRASM will fly around the ship, even skipping waypoints that might lie within enemy range and going on to the next one.
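The waypoint behavior described above can be caricatured in a few lines of Python. Everything here is invented for illustration (LRASM’s actual routing logic is not public): a planner simply drops any waypoint that falls inside a known threat’s engagement circle and proceeds to the next one:

```python
import math

# Illustrative waypoint filter: skip any planned waypoint that falls
# inside the engagement range of a known threat. Coordinates and
# ranges below are invented.
def safe_route(waypoints, threats):
    """Keep only waypoints outside every threat's engagement circle.
    waypoints: list of (x, y); threats: list of (x, y, range)."""
    route = []
    for wx, wy in waypoints:
        if all(math.hypot(wx - tx, wy - ty) > r for tx, ty, r in threats):
            route.append((wx, wy))   # waypoint is clear, keep it
        # else: skip it and go on to the next one, as described above
    return route

waypoints = [(0, 0), (10, 0), (20, 0), (30, 0)]
threats = [(20, 2, 5)]  # one ship with a 5 km engagement ring
print(safe_route(waypoints, threats))  # [(0, 0), (10, 0), (30, 0)]
```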

After locating the enemy fleet, it dives to sea-skimming altitude to avoid close-in defenses. LRASM then sizes up the enemy fleet, locates its target, and calculates the desired “mean point of impact”—the exact spot the missile should aim for, taking into account the accuracy of the missile—to ensure the missile does not miss. In most instances that is the exact center of the ship, with the angle of the ship in relation to the missile taken into consideration, reported Kyle Mizokami in PM.


China’s next-gen cruise missiles will have a high level of artificial intelligence

China is looking to create a new generation of cruise missiles that will have a high level of artificial intelligence and will be multifunctional and reconfigurable based on a modular design, according to a senior designer from China’s Aerospace and Industry Corp. The Chinese military is looking to adapt its technology with the belief that future combat missions will require weapons to be both cost-efficient and flexible.

“We plan to adopt a ‘plug and play’ approach in the development of new cruise missiles, which will enable our military commanders to tailor-make missiles in accordance with combat conditions and their specific requirements,” Wang Changqing of the China Aerospace and Industry Corp told China Daily newspaper. Meanwhile Wang Ya’nan, the editor in chief of the Aerospace Knowledge magazine, said that missiles will be multi-functional. He mentioned that their payload can be changed, while they will also be suitable for striking targets both on land and at sea.

“Moreover, our future cruise missiles will have a very high level of artificial intelligence and automation,” he told China Daily. “They will allow commanders to control them in a real-time manner, or to use a fire-and-forget mode, or even to add more tasks to in-flight missiles.”


Russia’s Military developing highly autonomous missile for its stealth fighter

Tactical Missiles Corporation CEO Boris Obnosov said Thursday that the new weapon, which he did not name, would be released within the next few years and would take inspiration from Russia’s greatest military rival, the U.S. Speaking at the annual Zhukovsky-based MosAeroShow (MAKS-2017), Obnosov told attendees that he had studied the U.S. use of the Raytheon Block IV Tomahawk cruise missile against Russia’s allies in Syria and sought to emulate its advanced technology, such as the ability to switch targets mid-flight, in an upcoming weapon.

Earlier this year, General Viktor Bondarev, commander-in-chief of Russia’s air force, discussed equipping such smart missiles to the proposed next-generation Russian stealth fighter, the Tupolev PAK DA. What the PAK DA lacks in supersonic speed, it would reportedly make up for in stealth, electronic innovations and the artificial intelligence-capable missile, which Bondarev said was already in the works as of February.

“It is impossible to build a missile-carrying bomber invisible to radars and supersonic at the same time. This is why focus is placed on stealth capabilities. The PAK DA will carry AI-guided missiles with a range of up to 7,000 kilometers (about 4,350 miles) Such a missile can analyze the aerial and radio-radar situation and determine its direction, altitude and speed. We’re already working on such missiles,” Bondarev told Russia’s official Rossiyskaya Gazeta newspaper in comments translated and analyzed by The Aviationist.


Intelligent Guided Missile

With the escalating cost of missiles and the potential damage an intruding aircraft can cause, there is a need to improve a missile’s single-shot kill probability to nearly one hundred percent. Present guided missiles using conventional algorithms, like the proportional navigation algorithm and its variants, are optimal when the speed of the missile is very high and the maneuvering capability of the target is low.

However, a missile’s efficiency may be degraded on the battlefield for many reasons, for example against highly maneuverable fifth-generation aircraft with speeds between Mach 2 and Mach 3. The radar data link is also vulnerable to jamming by the adversary, so an autonomous missile is highly effective in such scenarios.
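For reference, the proportional navigation law mentioned above commands lateral acceleration proportional to the closing speed and the rotation rate of the line of sight, a = N * Vc * λ̇. The minimal 2-D Python sketch below (all numbers invented; real guidance loops add autopilot lag, seeker noise, and acceleration limits) steers a missile onto a constant-velocity target with this law:

```python
import math

# Minimal 2-D proportional navigation: commanded lateral acceleration
# a = N * Vc * lambda_dot, with lambda the line-of-sight (LOS) angle
# to the target and Vc the closing speed. All numbers are illustrative.
N = 4.0       # navigation constant, typically 3 to 5
dt = 0.01     # integration time step, s

mx, my, mvx, mvy = 0.0, 0.0, 300.0, 0.0        # missile position/velocity
tx, ty, tvx, tvy = 2000.0, 1000.0, 0.0, -50.0  # target position/velocity

miss = float("inf")   # smallest separation seen so far
for _ in range(2000):
    rx, ry = tx - mx, ty - my
    r = math.hypot(rx, ry)
    miss = min(miss, r)
    if r < 5.0:       # within 5 m: call it an intercept
        break
    # LOS rotation rate and closing speed from relative motion:
    los_dot = (rx * (tvy - mvy) - ry * (tvx - mvx)) / (r * r)
    vc = -(rx * (tvx - mvx) + ry * (tvy - mvy)) / r
    a = N * vc * los_dot             # PN lateral acceleration command
    # Apply the command perpendicular to the missile's velocity:
    v = math.hypot(mvx, mvy)
    ax, ay = -a * mvy / v, a * mvx / v
    mvx += ax * dt
    mvy += ay * dt
    mx += mvx * dt
    my += my * 0 + mvy * dt
    tx += tvx * dt
    ty += tvy * dt

print(f"closest approach: {miss:.1f} m")
```

Against a hard-maneuvering fifth-generation aircraft, the LOS-rate term changes too quickly for this simple law to keep up, which is the degradation the paragraph above describes.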

Recent advances in distributed Artificial Intelligence, such as deploying intelligent agents (IA), hold promise for improving performance and decreasing the miss distance (the distance between the target and the missile’s closest point of approach). Intelligent agents are software entities that fall under the category of distributed Artificial Intelligence and are associated with problem-solving functions. They are characterized by general attributes such as autonomy and social ability.

M.S. Vinoth and others from the Department of Computer Sciences, Vellore Institute of Technology, Tamil Nadu, India, have proposed incorporating an IA system on board a missile that would enhance the kill probability or even achieve the coveted fire-and-forget capability.

When the missile’s on-board radar-based sensors detect hostile ground or air activity, the missile will break from the wireless ground-based link, control will shift to the intelligent agent, and a series of counter-moves will be executed to shoot down the enemy intruder. With this modification, the airborne missile will have a much shorter reaction time than traditional radar-based, ground-stationed SAM (surface-to-air missile) systems, effectively saving time and increasing the kill probability of the missile. The missile needs either a much higher speed advantage or a combination of artificial intelligence and modern control algorithms, the authors say.



US Army’s Warfighter Information Network-Tactical (WIN-T) enables mission command and secure reliable voice, video and data communications anytime, anywhere without the need for fixed infrastructure.

Today’s soldiers expect to have network access anywhere, anytime. With the Warfighter Information Network-Tactical (WIN-T), commanders can communicate on-the-move and soldiers can have their voices heard, their texts received, and their location displayed on a map.

The WIN-T network allows all Army commanders, and other communications network users, at all echelons, to exchange information internal and external to the theater, from wired or wireless telephones, computers (internet-like capability) or video terminals. WIN-T is the Army’s tactical communications network backbone that enables mission command and secure, reliable voice, video and data communications anytime, anywhere, without the need for fixed infrastructure.

By connecting soldiers with their commanders, WIN-T is changing the way the U.S. Army fights by providing life-saving information on-the-move, anywhere in the world. WIN-T enables soldiers to: stream real-time video over the network, view a topographical map of friendly forces, send texts requesting medical assistance, digitally call for artillery support, and access mission command apps like CPOF and TIGR.

Command Post of the Future (CPOF) enables warfighters to visualize the battlefield and plan the mission through a dynamic view of critical resources and events. Collaborators across echelons and distances can maintain situational awareness while automating many of their daily tasks. TIGR – Tactical Ground Reporting System – provides updated intelligence such as maps showing insurgent or roadside bomb locations and incident reports from certain high-risk locations.

With WIN-T, commanders and soldiers can leverage mission command applications at any location, from traditional command posts, to network-equipped vehicles crossing the battlefield, even from the belly of a C-17 aircraft en route to an objective.

It is the Army’s 21st Century C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) infrastructure that provides capabilities that are mobile, secure, survivable, seamless, and capable of supporting multimedia tactical information systems.

WIN-T employs a combination of terrestrial, airborne, and satellite-based transport options, to provide robust, redundant connectivity. Leveraging satellite and line-of-sight capabilities for optimum efficiency, effectiveness and operational flexibility, the WIN-T network provides the data “pipe” that other communication and mission command systems need to connect into in order to operate.

The Army is conducting a wide-ranging review of “a whole series of vulnerabilities” in its communications systems that extends far beyond the troubled WIN-T program, Gen. Mark Milley and Acting Army Secretary Robert Speer told reporters after a Senate appropriations hearing.

What’s the motivation? “In the electromagnetic spectrum…there’s a whole series of vulnerabilities,” said Milley, who has criticized Army electronics as too easy for enemies to hack, jam, or triangulate for artillery bombardments. “What we want to make sure is that the Army, as part of a joint force, has the ability to effectively conduct mission command,” he went on, using the Army’s more initiative-friendly concept for what used to be called command and control. “A key component of that is to be able to communicate — voice, digital, video, and so on — in any environment, globally…and against any foe.”

Location tracking services depend on GPS, which is vulnerable to jamming attacks: typical military jammers can affect GPS receivers over many tens of kilometers by line of sight. This is a serious problem because GPS/GNSS offers the best accuracy, availability and global coverage for PNT data.

Engineers and WIN-T developers at GD are currently working on a range of “hardening” strategies, tactics and technological adjustments to address changing threats, Bill Weiss, Vice President and General Manager, Ground Systems, General Dynamics Mission Systems, told Scout Warrior.

Also, WIN-T developers are making progress with an emerging strategy described as “keep-out zones,” a method of deliberately emanating electromagnetic signals toward friendly forces and, by design, away from the enemy. “Some radios broadcast in an omni-directional fashion. The antenna in WIN-T is sophisticated. It has the option to stream a beam and only radiate in a certain direction,” Paul Bristow, chief network architect for General Dynamics Mission Systems, told Scout Warrior.
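At its simplest, a directional keep-out zone is a geometry check: radiate only when the receiver’s bearing falls inside the steered beam. The sketch below is a minimal illustration of that idea; the function name, coordinates and beamwidth are invented for illustration, not WIN-T parameters.

```python
import math

def within_beam(tx, rx, beam_azimuth_deg, beamwidth_deg):
    """Return True if the receiver lies inside the radiated sector.

    tx, rx: (x, y) positions; beam_azimuth_deg: direction the antenna
    steers its main lobe; beamwidth_deg: total angular width of the lobe.
    """
    bearing = math.degrees(math.atan2(rx[1] - tx[1], rx[0] - tx[0]))
    # Smallest signed angle between the bearing and the beam centre
    offset = (bearing - beam_azimuth_deg + 180) % 360 - 180
    return abs(offset) <= beamwidth_deg / 2

# A friendly node due east is illuminated; a listener due west is not.
print(within_beam((0, 0), (10, 0), beam_azimuth_deg=0, beamwidth_deg=30))   # True
print(within_beam((0, 0), (-10, 0), beam_azimuth_deg=0, beamwidth_deg=30))  # False
```

The same test, run in reverse against a suspected enemy bearing, is what keeps energy out of the zone an adversary could exploit for direction finding.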

WIN-T developers also say technical progress is being made in efforts to refine and operationalize emerging “position, navigation and timing” (PNT) technologies able to maintain relevant connectivity in the event that GPS is compromised. Militaries are taking two approaches. One is integration of GPS with complementary technologies such as chip-scale atomic clocks and small micro-electro-mechanical systems (MEMS) inertial measurement units. The other is developing entirely new PNT technologies. The latest PNT technology, being developed through DARPA and the Army Emerging Technologies Office, will no longer depend on GPS, which requires receiving signals from satellites and can therefore be disabled by electromagnetic and cyberspace interference.
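The first approach works by dead reckoning between GPS fixes: integrating inertial measurements to carry position forward while the receiver is jammed. A minimal 2-D sketch of the principle (all names and numbers are illustrative, not an actual Army PNT implementation):

```python
def dead_reckon(start, velocity, accel_samples, dt):
    """Integrate accelerometer samples to propagate position when GPS is out.

    start: last good GPS fix (x, y) in metres; velocity: (vx, vy) in m/s;
    accel_samples: list of (ax, ay) readings in m/s^2; dt: sample period (s).
    """
    x, y = start
    vx, vy = velocity
    for ax, ay in accel_samples:
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return (x, y), (vx, vy)

# Constant 1 m/s eastward motion, no acceleration, for 10 one-second steps:
pos, vel = dead_reckon((0.0, 0.0), (1.0, 0.0), [(0.0, 0.0)] * 10, dt=1.0)
print(pos)  # (10.0, 0.0)
```

Because sensor bias is integrated twice, the position error grows rapidly with time, which is why chip-scale atomic clocks and better MEMS units matter: they extend how long a platform can coast without a GPS fix.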

WIN-T is used by every echelon on the battlefield (Theater through Unit of Action) and consists of infrastructure and network components from the maneuver battalion to the theater rear boundary. Major components of the WIN-T network infrastructure include switching, routing, transmission media, network management, information assurance (IA), subscriber services and user interfaces to support user multimedia (voice, data, messaging, and video) requirements.


WIN-T Increment 1: Communications At The Halt

Originally known as the Joint Network Node Network (JNN-N) program, WIN-T Increment 1 began fielding in 2004 to support combat missions during Operation Enduring Freedom and Operation Iraqi Freedom. With WIN-T Inc. 1, for the first time, soldiers on the battlefield had a high-speed, interoperable voice and data communications network down to the battalion level. Fielding of WIN-T Inc. 1 to the U.S. Army, National Guard and Reserves was completed in June 2012.


Similar to most Americans’ Internet connections at home, but with added security and the ability to network in the most remote environments, WIN-T Inc. 1 provides the U.S. Army’s tactical force with secure, high-speed, high-capacity voice, data and video communications “at the halt,” letting soldiers quickly communicate with their operations center.


It has three types of transportable network nodes: the Tactical Hub Node (THN) that supports division headquarters, the Joint Network Node (JNN) that supports brigade level headquarters and the Battalion Command Post Node (BnCPN) that supports battalion level headquarters.


A communications network management software solution (PacStar’s IQ-Core Software) deployed across the U.S. Army in 2016 has proven to drastically reduce network downtime as soldiers operate in an increasingly complex command post environment. An estimated 80 percent of network downtime in combat zones is caused by equipment misconfiguration, not equipment failure, says PacStar’s Chief Technical Officer Charlie Kawasaki. “By eliminating those misconfigurations, we’re keeping the critical network services available,” Kawasaki says. “That’s what we’re talking about—the ability to potentially reduce downtime by hours.”
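The core idea behind such tooling can be sketched as a diff against a known-good baseline: flag drifted settings before they take a node down. The toy below is only an illustration of that concept; the field names and values are invented and do not reflect IQ-Core Software’s actual interface or checks.

```python
def config_drift(baseline, current):
    """Report fields that drifted from the known-good baseline.

    Returns {field: (expected, actual)} for every mismatched or
    missing setting, since misconfiguration, not hardware failure,
    causes most tactical network downtime.
    """
    return {k: (baseline[k], current.get(k))
            for k in baseline if current.get(k) != baseline[k]}

baseline = {"mtu": 1500, "crypto": "type1", "route_proto": "ospf"}
current = {"mtu": 1400, "crypto": "type1", "route_proto": "ospf"}
print(config_drift(baseline, current))  # {'mtu': (1500, 1400)}
```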


WIN-T Inc. 1 is currently in use by soldiers in the Army, National Guard and Army Reserves. WIN-T Increments 2 and 3 build on the capabilities of Inc. 1 with on-the-move networking and further security, bandwidth and connectivity.


WIN-T Increment 2: Communications On-The-Move

Combat vehicles integrated with WIN-T Increment 2 provide the on-the-move communications, mission command and situational awareness that commanders need to lead from anywhere on the battlefield. WIN-T Increment 2 enables deployed Soldiers down to the company level operating in remote and challenging terrain to maintain voice, video and data communications while on patrol, with connectivity rivaling that found in a stationary command post.


Increment 2 enables mission command from brigade to division to company through completely ad-hoc, self-forming, self-healing networks. Commanders and select staff can now maneuver anywhere on the battlefield and maintain connectivity to the network without having to stop and set up communications, which would leave them vulnerable to attack. Army Chief of Staff Gen. Mark Milley has repeatedly stated that nothing stationary will survive long in the high-intensity conflicts of the future.
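A self-healing network reroutes around lost nodes automatically. A toy sketch of the idea, using breadth-first routing over a small mesh (the node names and topology are invented for illustration and have no relation to actual WIN-T routing):

```python
from collections import deque

def route(links, src, dst, down=frozenset()):
    """Breadth-first route through a mesh, skipping failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

mesh = {"BDE": ["BN1", "BN2"], "BN1": ["CO_A"], "BN2": ["CO_A"], "CO_A": []}
print(route(mesh, "BDE", "CO_A"))                # ['BDE', 'BN1', 'CO_A']
print(route(mesh, "BDE", "CO_A", down={"BN1"}))  # ['BDE', 'BN2', 'CO_A']
```

The second call shows the “self-healing” property: when a relay node fails, traffic finds the surviving path without any operator intervention.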


The Army’s mobile satcom and high-bandwidth communications network, WIN-T Increment 2, has been fielded to at least 16 Brigade Combat Teams and has performed well in combat during ongoing ground wars.


WIN-T Increment 2 enables high-capacity voice, video and data transmission in an electronically-contested environment. It is resistant to jamming and cyber attacks. Its transmissions are double-encrypted, so they can’t be intercepted and understood by the enemy. It can be modified to minimize signatures that adversaries might exploit for targeting. And most important, it is fully functional from tactical vehicles on the move.


The system provides on-the-move network capability and a mobile infrastructure by employing military and commercial satellite connectivity and line-of-sight (terrestrial) radios and antennas to achieve end-to-end connectivity and dynamic networking operations. The 10th Mountain Division was the first to have this new capability when it deployed for Afghanistan in July 2013. WIN-T Inc. 2’s unique value was immediately recognized, as it provided soldiers with communications even as fixed infrastructure was removed.


On-The-Move: The tactical communication nodes in Inc. 2 are the first step toward a mobile infrastructure on the battlefield. The capability consists of mobile Points of Presence (installed on select vehicles at battalion level and above, which includes four companies of up to 200 soldiers and about 10 to 30 vehicles each), vehicle wireless packages, and the Soldier Network Extension (for company-level connectivity).


WIN-T Increment 3: Simplifying, Securing and Expanding the Network

WIN-T Inc. 3 advancements simplify WIN-T “network operations” for greater soldier utility and ease of use. And as threats within cyber space continue to evolve and grow, Inc. 3 ensures the entire WIN-T portfolio remains cyber secure with ongoing upgrades and development of Type 1 encryption for the network.


WIN-T Inc. 3 will also expand the reach of the network to provide a fully mobile and flexible tactical networking capability needed to support a highly dispersed force over isolated areas. This is especially important as the Army transitions to a faster, leaner force to handle future threats and missions across the globe.


With continued network enhancements, Inc. 3 provides a leap forward in network capacity, as well as improvements to the overall reliability and robustness of the network.


WIN-T Inc 3 develops the Network Operations (NetOps) software to meet the Army’s Network Convergence goals. NetOps provides the monitoring, control and planning tools to ensure management of the voice, data and internet transport networks. The NetOps software will also provide Information Assurance and Network Centric Enterprise Services.


Inc 3 also develops the enhanced Net Centric Waveform (NCW) version 10.x for increased-throughput beyond-line-of-sight (BLOS) satellite communication and the Highband Networking Waveform (HNW) version 3.0 for line-of-sight (LOS) communications. NCW version 10.x testing will support Army Strategic Command certification of the waveform for use on Wideband Global Satellites and subsequent insertion into WIN-T Inc 1 and Inc 2. HNW version 3.0 will be delivered to the Joint Tactical Networking Center (JTNC) Information Repository for commercial development application. Both NCW and HNW provide improved network capacity and robustness.


Cyber: Protecting and Defending the Network

Cybersecurity and anti-jam capabilities are a critical part of WIN-T. With the amount of voice and data information that can now flow between soldiers on the ground and back up to commanders at higher echelons, protecting and defending the integrity of the network is a paramount concern.




Technical breakthroughs in Soft Robotics promise to bring robots into all aspects of our daily lives

Robots have already become an indispensable part of our lives. Currently, however, most robots are relatively rigid machines that make unnatural movements. Inspired by living organisms, soft material robotics holds great promise for areas where robots need to contact and interact with humans, such as manufacturing and healthcare. Unlike rigid robots, soft robots can replicate natural motion – grasping and manipulation – to provide medical and other types of assistance, perform delicate tasks, or pick up soft objects.

Soft robots differ from their traditional counterparts in some important ways. They have little or no hard internal structure; instead they use a combination of muscularity and deformation to grasp things and move about. Rather than using motors, cables or gears, soft robots are often animated by pressurized air or liquids. In many cases soft robotics designs mimic natural, evolved biological forms, which is why they are also called bio-inspired robots. This, combined with their soft exteriors, can make soft robots more suitable for interaction with living things or even for use as human exoskeletons.

The emerging field of soft robotics aims to improve robot/human interactivity, promising to bring robots into all aspects of our daily lives, including wearable robotics, surgical robots, micromanipulation, search and rescue, and others. Soft robots can become aides for the disabled or the elderly if they can be trusted not to hurt the people they come into contact with. Miniature soft robots could even serve as surgical tools inside the body. Robots with greater flexibility could also help in military operations, where level terrain and unobstructed areas are rare, whether as a fully intact robot or as, say, a strap-on arm with a pneumatically controlled hand that could extend the reach, strength or capability of what a person could do.

Soft robotic arms can also come in handy for carrying wounded soldiers without causing injury. “We have lost medics throughout the years because they have the courage to go forward and rescue their comrades under fire. With the newer technology, with the robotic vehicles we are using even today to examine and to detonate IEDs [improvised explosive devices], those same vehicles can go forward and retrieve casualties,” Major General Steve Jones, commander of the Army Medical Department Center, said. Evacuating casualties was only one of the roles for robots in battlefield medicine that Jones discussed. Another option is delivering medical supplies to dangerous areas, supporting troops operating behind enemy lines.

“Despite its importance and considerable demands, the field of Soft Robotics faces a number of fundamental scientific challenges: the studies of unconventional materials are still in their exploration phase, and it has not been fully clarified what materials are available and useful for robotic applications; tools and methods for fabrication and assembly are not established; we do not have broadly agreed methods of modeling and simulation of soft continuum bodies; it is not fully understood how to achieve sensing, actuation and control in soft bodied robots; and we are still exploring what are the good ways to test, evaluate, and communicate the soft robotics technologies,” says IEEE Robotics and Automation Society.

Researchers are experimenting with different materials and designs to allow rigid, jerky machines to bend and flex in ways that mimic living organisms and interact more naturally with them. However, increased flexibility and dexterity come at the cost of reduced strength, as softer materials are generally not as strong or resilient as inflexible ones, which limits their use.

Scientists are also studying how soft robots could lead to major breakthroughs in the development of self-repairing, growing and self-replicating robots, according to the IEEE Robotics and Automation Society. Borgatti explained how soft robots can react to their environments – a major factor for future government use. For example, soft robots can be designed to navigate difficult terrain like shifting sand and fall without being damaged – picking themselves up and correcting their course.

Soft Robotics


They are crucial in systems that deal with uncertain and dynamic task environments, e.g. grasping and manipulation of unknown objects, locomotion on rough terrain such as the ocean floor, and physical contact with living cells and human bodies. These robots must move over rough terrain without getting stuck, and need manipulators that can grab whatever strangely shaped, soft and deformable objects they encounter.

A number of researchers have been investigating unconventional materials for robotic systems, in which soft materials such as polymer-based materials are examined for novel sensory devices and actuators. The newly developed smart materials, sensors and actuators were then integrated into micro-robots of various kinds. The flexible body structures of animals have also been replicated in reconfigurable robots.

“There is a great need in the health care system for lightweight, lower-cost wearable exoskeleton designs to support stroke patients, individuals diagnosed with multiple sclerosis and senior citizens who require mechanical mobility assistance,” said Larry Jasinski, CEO of ReWalk. Currently in the United States, there are an estimated 3 million stroke patients and 400,000 MS patients who are suffering from limited mobility due to lower limb disabilities.

Many industries are searching for new ways to use robots, including machines that can work alongside humans and ones more versatile than the single-task assembly-line bots of years past. The company Soft Robotics has developed finger-like grippers made of flexible material, such as silicone, and powered by compressed air. They are especially useful in the warehouse and assembly-line markets, particularly in the food industry, where robots aren’t typically trusted to handle delicate items like fresh produce.



Scientists develop robot that can feel

A group of roboticists in the Department of Biomedical Engineering at the Georgia Institute of Technology in Atlanta has developed a robot arm that moves and finds objects by touch. In a paper published in the International Journal of Robotics Research, the Georgia Tech group described a robot arm that was able to reach into a cluttered environment and use “touch,” along with computer vision, to complete exacting tasks.

Dr. Kemp said that, using digital simulations and a simple set of primitive robot behaviors, the researchers were able to develop algorithms that gave the arm qualities seeming to mimic human behavior. For example, the robot was able to bend, compress and slide objects. Also, given parameters designed to limit how hard it could press on an object, the arm was able to pivot around objects automatically.

The arm was designed to essentially have “springs” at its joints, making it “compliant,” a term roboticists use to describe components that are more flexible and less precise than conventional robotic mechanisms. Compliance has become increasingly important as a new generation of safer robots has emerged. The robot also has a fabric-based artificial “skin” equipped with force sensors and thermal sensors that can sense pressure or touch, enabling the home-care robot to lightly touch different materials and identify them.

According to Georgia Tech, Charles C. Kemp, director of the Healthcare Robotics Lab, said: “These environments tend to have clutter. In a home, you can have lots of objects on a shelf, and the robot can’t see beyond that first row of objects.” The combination of sensors can help the home-care robot tell the difference between wood and metal. Experts at IEEE Spectrum note that the technique copies the way human skin uses thermal conductivity to classify different materials.
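The thermal-conductivity trick can be sketched as a nearest-match lookup on how quickly a touched surface drains heat from a warmed sensor: metal pulls heat away far faster than wood or fabric. The reference rates below are invented for illustration and are not Georgia Tech’s measurements.

```python
def classify_material(temp_drop_rate, reference):
    """Return the material whose known heat-drain rate is closest.

    temp_drop_rate: observed cooling of the heated skin sensor (deg C/s).
    reference: material -> typical drop rate; illustrative values only.
    """
    return min(reference, key=lambda m: abs(reference[m] - temp_drop_rate))

reference = {"metal": 1.8, "wood": 0.3, "fabric": 0.1}
print(classify_material(1.6, reference))  # metal
print(classify_material(0.25, reference))  # wood
```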

New robot has a human touch

A group led by Robert Shepherd, assistant professor of mechanical and aerospace engineering and principal investigator of Organic Robotics Lab, has published a paper describing how stretchable optical waveguides act as curvature, elongation and force sensors in a soft robotic hand.

“Most robots today have sensors on the outside of the body that detect things from the surface,” said doctoral student Huichan Zhao, lead author of “Optoelectronically Innervated Soft Prosthetic Hand via Stretchable Optical Waveguides.” “Our sensors are integrated within the body, so they can actually detect forces being transmitted through the thickness of the robot, a lot like we and all organisms do when we feel pain, for example.”

Optical waveguides have been in use since the early 1970s for numerous sensing functions, including tactile, position and acoustic. Fabrication was originally a complicated process, but the advent over the last 20 years of soft lithography and 3-D printing has led to development of elastomeric sensors that are easily produced and incorporated into a soft robotic application.

Shepherd’s group employed a four-step soft lithography process to produce the core (through which light propagates), and the cladding (outer surface of the waveguide), which also houses the LED (light-emitting diode) and the photodiode.

The more the prosthetic hand deforms, the more light is lost through the core. That variable loss of light, as detected by the photodiode, is what allows the prosthesis to “sense” its surroundings.

“If no light was lost when we bend the prosthesis, we wouldn’t get any information about the state of the sensor,” Shepherd said. “The amount of loss is dependent on how it’s bent.”
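Turning that light loss back into a bend estimate is, in principle, a calibration-lookup problem: measure the photodiode power at a few known bends, then interpolate. The sketch below uses invented numbers and is not the paper’s actual calibration or method.

```python
def estimate_bend(power_uW, calibration):
    """Linearly interpolate bend angle (deg) from photodiode power (uW).

    calibration: (power, angle) pairs from known bends. More bending
    loses more light from the core, so power falls as angle rises.
    """
    pts = sorted(calibration, reverse=True)  # descending power
    if power_uW >= pts[0][0]:
        return pts[0][1]  # brighter than "unbent": clamp to zero bend
    for (p1, a1), (p2, a2) in zip(pts, pts[1:]):
        if p2 <= power_uW <= p1:
            frac = (p1 - power_uW) / (p1 - p2)
            return a1 + frac * (a2 - a1)
    return pts[-1][1]  # dimmer than max calibrated bend: clamp

# Hypothetical calibration: 100 uW unbent, 40 uW at a 90-degree bend.
cal = [(100.0, 0.0), (70.0, 45.0), (40.0, 90.0)]
print(estimate_bend(85.0, cal))  # 22.5
```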

The group used its optoelectronic prosthesis to perform a variety of tasks, including grasping and probing for both shape and texture. Most notably, the hand was able to scan three tomatoes and determine, by softness, which was the ripest.

This work was supported by a grant from Air Force Office of Scientific Research, and made use of the Cornell NanoScale Science and Technology Facility and the Cornell Center for Materials Research, both of which are supported by the National Science Foundation.

Robot Octopus Points the Way to Soft Robotics with Eight Wiggly Arms

Cecilia Laschi, professor at the BioRobotics Institute at the Scuola Superiore Sant’Anna in Pisa, Italy, and her team are investigating soft robots that mimic the form of the octopus. “The octopus has neither an internal nor external skeleton, and its eight arms can bend at any point, elongate and shorten, and stiffen to apply force. It can twist its arms around objects and manipulate them with great dexterity,” she writes. The team has already built a robot octopus that can crawl along the seafloor, mimicking the animal’s locomotion.

The team is creating artificial muscles using materials called shape-memory alloys (SMAs). “When heated, SMAs deform to a predefined shape, which they “remember.” We fashioned SMA wires into springs and ran electric current through them to heat them, causing the springs to scrunch up in a way that imitates muscular contractions.”

“For the Octopus project, my team constructed a prototype arm using SMA springs to stand in for the longitudinal and transverse muscles found in the limbs of a real octopus. By sending current through different sets of springs, we made the underwater arm bend at multiple points, shorten and elongate, even grasp things,” she explains.

“Our work is primarily meant to demonstrate the potential of soft robotics, and much work remains before a robot octopus will be ready to crawl out of the lab.”

Harvard researchers have created the first soft octopus robot that is completely self-contained. It is essentially a pneumatic tube: it has no hard electronic components, no batteries or computer chips, and it moves without being tethered to a computer.

The octobot is powered by hydrogen peroxide, which is pumped into two reservoirs inside the middle of the octobot’s body. Pressure pushes the liquid through tubes inside the body, where it eventually hits a line of platinum, catalyzing a reaction that produces a gas. From there, the gas expands and moves through a tiny chip known as a microfluidic controller.

The controller alternately directs the gas down one half of the octobot’s tentacles at a time, making the tentacles wiggle. The octobot can move for about eight minutes on one milliliter of fuel.

“You have to make all the parts yourself,” says Ryan Truby, a graduate student in Jennifer Lewis’s lab at Harvard, where the materials half of this research is taking place. The mold for the octopus shape and the microfluidic chip were among the things developed nearby in Robert Wood’s lab.

Harvard Engineers Create a 3D Printed Autonomous Robot

SEAS researchers have built one of the first 3-D printed soft robots that moves autonomously. The design offers a new solution to an engineering challenge that has plagued soft robotics for years: the integration of rigid and soft materials.

The robot is constructed of two main parts: a soft, plunger-like body with three pneumatic legs, and a rigid core module containing power and control components, protected by a semisoft shield created with a 3-D printer. Integrating the rigid components with the soft body through a gradient of material properties eliminates the abrupt hard-to-soft transition that is often a failure point.

This design combines the autonomy and speed of a rigid robot with the adaptability and resiliency of a soft robot and, because of 3-D printing, is relatively cheap and fast.

The robot is combustion-powered: to initiate movement, it inflates its pneumatic legs to tilt its body in the direction it wants to go. Then butane and oxygen are mixed and ignited, catapulting the robot into the air. It is a powerful jumper, reaching up to six times its body height in vertical leaps and half its body width in lateral jumps. In the field, the hopping motion could be an effective way to move quickly and easily around obstacles.

“The wonderful thing about soft robots is that they lend themselves nicely to abuse,” said Nicholas Bartlett, first author of the paper and a graduate student at SEAS. “The robot’s stiffness gradient allows it to withstand the impact of dozens of landings and to survive the combustion event required for jumping. Consequently, the robot not only shows improved overall robustness but can locomote much more quickly than traditional soft robots.”

The robot’s jumping ability and soft body would come in handy in harsh and unpredictable environments or disaster situations, allowing it to survive large falls and other unexpected developments.

3D-Printed ‘Bionic Skin’ Could Give Robots the Sense of Touch

Engineering researchers at the University of Minnesota have developed a revolutionary process for 3D printing stretchable electronic sensory devices that could give robots the ability to feel their environment. The discovery is also a major step forward in printing electronics on real human skin.

This ultimate wearable technology could eventually be used for health monitoring or by soldiers in the field to detect dangerous chemicals or explosives. “While we haven’t printed on human skin yet, we were able to print on the curved surface of a model hand using our technique,” McAlpine said. “We also interfaced a printed device with the skin and were surprised that the device was so sensitive that it could detect your pulse in real time.”

McAlpine and his team made the unique sensing fabric with a one-of-a-kind 3D printer they built in the lab. The multifunctional printer has four nozzles to print the various specialized “inks” that make up the layers of the device: a base layer of silicone, top and bottom electrodes made of a conducting ink, a coil-shaped pressure sensor, and a sacrificial layer that is washed away later in the manufacturing process.

“This is a completely new way to approach 3D printing of electronics,” McAlpine said. “We have a multifunctional printer that can print several layers to make these flexible sensory devices. This could take us in so many directions, from health monitoring to energy harvesting to chemical sensing.”


Soft Robotic Fingers Recognize Objects by Feel

Rus and her team at the Distributed Robotics Lab at CSAIL have created bendable and stretchable robotic fingers, made of silicone rubber, that can lift and handle objects as thin as a piece of paper and as delicate as an egg.

Rus incorporated “bend sensors” into the silicone fingers so that they send back information on the location and curvature of the object being grasped. The robot can then pick up an unfamiliar object and compare the data to existing clusters of data points from past objects.

“By embedding flexible bend sensors into each finger, we got an idea of how much the finger bends, and we can close the loop from how much pressure we apply,” says Katzschmann. “In our case, we were using a piston based closed pneumatic system.”

Currently, the robot can acquire three data points from a single grasp, meaning the robot’s algorithms can distinguish between objects which are very similar in size. The researchers hope that further advances in sensors will someday enable the system to distinguish between dozens of diverse objects.
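Matching a grasp to “already existing clusters of data points” can be sketched as nearest-centroid classification over the three bend readings. The object names and centroid values below are invented for illustration and are not the CSAIL team’s data.

```python
import math

def recognize(grasp, centroids):
    """Match a 3-sensor grasp reading to the nearest known object cluster.

    grasp: (bend1, bend2, bend3) from one grasp; centroids: object name
    -> mean readings collected from past grasps of that object.
    """
    return min(centroids, key=lambda name: math.dist(grasp, centroids[name]))

# Hypothetical cluster centroids learned from previous grasps:
centroids = {
    "egg": (0.20, 0.25, 0.22),
    "cup": (0.60, 0.55, 0.60),
    "paper": (0.05, 0.04, 0.06),
}
print(recognize((0.21, 0.24, 0.20), centroids))  # egg
```

With only three data points per grasp, clusters for similarly sized objects overlap, which is exactly why the researchers are looking to richer sensing to tell dozens of objects apart.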

Research Challenges

Due to the soft materials used, these robots can not only squeeze into tight spaces, but also recover more easily from collisions and pick up and handle irregularly-shaped objects. However, because of soft robots’ flexibility, they often struggle with correctly measuring where an object is, or whether they actually picked the object up.

Characterizing and predicting the behavior of soft multi-material actuators is challenging due to the nonlinear nature of both the hyper-elastic material and the large bending motions they produce. Key challenges in the design and manufacture of soft robots include the complex fabrication processes and the interfacing of soft and rigid components.

Fully soft sensors that can be incorporated into the actuator design during the manufacturing process are needed to control soft actuators; they provide means of monitoring their kinematics, interaction forces with objects in the environment and internal pressure.

The university research is focused on developing new materials like dielectric elastomers, carbon nanotube yarn and self-healing materials and on designing the controllers and actuators that animate them. New actuator technologies and fabrication approaches will bring about better force-speed operating points, variable impedance, more convenient form factors, and actuators without transmission mechanisms.

Polymer Embedded With Metallic Nanoparticles Enables Soft Robotics

Researchers at North Carolina State University (NCSU) in Raleigh have developed a technique by which movement can be induced in a polymer through the application of a magnetic field, by embedding nanoparticles of magnetite, an iron oxide, into the polymer.

“Using this technique, we can create large nanocomposites, in many different shapes, which can be manipulated remotely,” said Sumeet Mishra, lead author of the paper, in a press release. “The nanoparticle chains give us an enhanced response, and by controlling the strength and direction of the magnetic field, you can control the extent and direction of the movements of soft robots.”

In research described in a paper published in the journal Nanoscale, the NCSU researchers describe a process that starts with dispersing the nanoparticles in a solvent. Next, a polymer is dissolved into the mixture and the resulting fluid is poured into a mold. Then a magnetic field is applied that arranges the magnetite nanoparticles into parallel chains. Once the solution dries in the mold, the chains of nanoparticles are locked into place.

“The key here is that the nanoparticles in the chains and their magnetic dipoles are arranged head-to-tail, with the positive end of one magnetic nanoparticle lined up with the negative end of the next, all the way down the line,” said Joe Tracy, an associate professor at NCSU and corresponding author of the paper, in the press release. “When a magnetic field is applied in any direction, the chain re-orients itself to become as parallel as possible to the magnetic field, limited only by the constraints of gravity and the elasticity of the polymer.”
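The balance Tracy describes, magnetic torque pulling the chain parallel to the field against the elasticity of the polymer, can be illustrated with a toy energy minimization. Units and constants below are arbitrary; this is not the NCSU group’s model.

```python
import math

def chain_angle(field_deg, mB=1.0, k=0.2):
    """Equilibrium rotation (deg) of a nanoparticle chain in an applied field.

    Grid-minimizes U(t) = -mB*cos(field - t) + 0.5*k*t**2: the magnetic
    term pulls the chain parallel to the field, the elastic term (polymer
    stiffness k) resists rotation away from the as-cured orientation.
    """
    field = math.radians(field_deg)
    best = min((-mB * math.cos(field - t) + 0.5 * k * t * t, t)
               for t in (math.radians(d) for d in range(-180, 181)))
    return math.degrees(best[1])

print(round(chain_angle(90, k=0.0)))        # 90 (no stiffness: fully parallel)
print(round(chain_angle(90, k=2.0)) < 90)   # True (stiff polymer limits rotation)
```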



DARPA’s N-ZERO extends the lifetime of IoT devices and remote sensors from months to years

Today U.S. soldiers are being killed because the Defense Department cannot deploy all the sensors it would like to. DoD would like to be able to deploy sensors every few yards to detect buried improvised explosive devices (IEDs). As it is, every sensor deployed today has to be battery-powered, so even if vast sensor nets were deployed, soldiers would be put in jeopardy, forced to expose themselves to ambush attacks while changing sensor batteries.

By 2018, DARPA’s N-ZERO initiative aims to have deployable sensor networks that require near-zero standby power, a goal the teams quickly found to be impossible without microelectromechanical systems (MEMS). The teams also discovered an extra benefit of MEMS, an advantage they had never imagined possible: MEMS provides not just near-zero standby power but can be configured for absolutely zero standby power, by using the power of the signal to be detected itself to power up the transmitter. And in some situations, the transmitter too can be powered without a battery, by storing energy in a super-capacitor charged from renewable sources, from solar to vibration harvesters.

The Department of Defense has an unfilled need for persistent, event-driven sensing capabilities, where physical, electromagnetic and other sensors can remain dormant, with near zero-power consumption, until awakened by an external trigger or stimulus. Current state-of-the-art sensors use active electronics to monitor the environment for the external trigger, consuming power continuously and limiting the sensor lifetime to months or less.

The N-ZERO program intends to extend the lifetime of remotely deployed communications and environmental sensors from months to years by supporting projects that demonstrate the ability to continuously and passively monitor the environment, waking an electronic circuit only upon the detection of a specific trigger signature. Specifically, N-ZERO seeks to extend unattended sensor lifetime from weeks to years, cutting maintenance costs and the need for redeployments. Alternatively, N-ZERO could reduce the battery size of a typical ground-based sensor by a factor of 20 or more while keeping its current operational lifetime.

“We wanted to learn how to reduce our sensors power envelope so that we could deploy them right at the tactical edge with a battery that does not need to be replaced for a long period of time,” said DARPA program manager Roy (Troy) Olsson in his keynote address titled Event Driven Persistent Sensing.

A team of researchers at Northeastern University has developed a new sensor powered by the very infrared energy it's designed to detect. The device, which was commissioned as part of DARPA's Near Zero Power RF and Sensor Operation (N-ZERO) program, consumes zero standby power until it senses infrared (IR) wavelengths. The sensor could have many military applications: it can detect vehicles and tanks and even identify whether a target is a truck, a car, or an aircraft from the heat they emit in the IR spectrum. Engines burning gasoline or diesel fuel produce emissions made up of different chemical compounds, so each produces a distinct heat, or IR, signature.

Requirement of new technologies to power IoT and wireless sensor networks

DARPA’s N-ZERO program can also enable the future billions of Internet of Things (IoT) devices that are expected to be deployed ‘everywhere’ and accessed ‘any time’ from ‘anywhere’. “What we can do today really doesn’t fulfill the vision of the Internet of Things,” Troy Olsson, DARPA’s N-ZERO program manager, told SIGNAL. “We can either connect devices that have power already, like your refrigerator, or devices that you can recharge every day or every couple of days, like a cellular phone. You can connect and interconnect those, and some people call that the Internet of Things.” For Olsson, true IoT will involve sensors everywhere that are untethered from a power supply and from having to be recharged constantly.

Powering the future billions of Internet of Things (IoT) devices with batteries would require billions of batteries to be purchased, maintained, and disposed of. Energy harvesting from ambient sources presents the best alternative for large-scale, self-contained IoT. State-of-the-art (SOA) sensors use active electronics to monitor the environment for the external trigger, consuming power continuously and limiting the sensor lifetime to durations of months or less. This also increases the cost of deployment, either by necessitating the use of large, expensive batteries or by demanding frequent battery replacement, and it increases Warfighter exposure to danger. Researchers have evolved many approaches to tackle the energy consumption of battery-powered devices; wireless sensor network standards, for instance, have been specifically designed to take into account the scarce resources of nodes.

Sensor devices combine sensing elements, data processing, and communication components. The sensor nodes gather information or detect special events and send the data to a base station for processing. The radio module is the main cause of battery depletion. To reduce the energy spent on wireless communications, researchers have tried to optimise radio parameters such as coding and modulation schemes, transmission power and antenna direction.

Another category of solutions aims to reduce the amount of data delivered to the sink. Two methods can be adopted jointly: limiting unneeded samples and limiting sensing tasks, since both data transmission and data acquisition are costly in terms of energy.

Idle listening is a major source of energy consumption in the radio component. Sleep/wakeup schemes save energy by adapting node activity, putting the radio into sleep mode whenever possible.
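As a rough illustration of why duty cycling matters, the sketch below (with assumed battery and radio figures, not values from any fielded node) estimates node lifetime as a function of radio duty cycle:

```python
# Hypothetical illustration: estimate sensor-node lifetime for a
# duty-cycled radio. All numbers are assumptions for the sketch.

def lifetime_years(battery_wh, active_mw, sleep_uw, duty_cycle):
    """Average power draw -> lifetime in years for a duty-cycled node."""
    avg_w = duty_cycle * active_mw * 1e-3 + (1 - duty_cycle) * sleep_uw * 1e-6
    hours = battery_wh / avg_w
    return hours / (24 * 365)

# A coin cell (~0.675 Wh), a 30 mW active radio, 5 uW sleep-mode draw.
always_on = lifetime_years(0.675, 30, 5, 1.0)    # radio never sleeps
duty_1pct = lifetime_years(0.675, 30, 5, 0.01)   # 1% duty cycle
print(f"always on: {always_on:.4f} years, 1% duty: {duty_1pct:.2f} years")
```

Even at a 1% duty cycle the idle-listening floor dominates, which is why N-ZERO targets the standby power itself rather than the schedule.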

DARPA seeks to transform the energy efficiency of these unattended sensors through elimination or substantial reduction of the standby power consumed by the sensors while they await a signature of interest. The improved energy efficiency is expected to result in an increase in the sensor mission lifetime from months to years.



DARPA’s N-ZERO program

The program intends to exploit the energy in the signal signature itself to detect and discriminate the events of interest while rejecting noise and interference. N-ZERO program intends to develop the underlying technologies and demonstrate the capability to continuously and passively monitor the environment, and wake-up an electronic circuit upon detection of a specific trigger signature. Thus, sensor lifetime will be limited only by processing and transmission of confirmed events, or ultimately by the battery self-discharge.

The N-ZERO program has three phases. The first, which ended December 2016, took 15 months to complete. The second and third phases will each take one year. Some research teams achieved goals in the program’s first phase that they were expected to reach much later.

Ultimately, the goal of the N-ZERO program is to design, build, and test intelligent sensors and microsystems that exploit the energy in, and the unique features of, a signature of interest to process and detect the signature’s presence while rejecting noise and interference, all while consuming less than 10 nanowatts (nW) during the sensor’s asleep-yet-aware phase. That power draw is roughly equivalent to the self-discharge (battery discharge during storage) of a typical watch battery, and at least 1,000 times lower than state-of-the-art sensors.
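A back-of-the-envelope calculation shows why a 10 nW standby budget effectively removes the battery from the lifetime equation. The cell capacity used here is an assumed coin-cell figure, not a program specification:

```python
# Sanity check on the 10 nW standby budget (assumed cell values).
CAPACITY_WH = 0.675          # ~225 mAh coin cell at 3 V (assumption)
STANDBY_W = 10e-9            # N-ZERO asleep-yet-aware power target

seconds = CAPACITY_WH * 3600 / STANDBY_W
years = seconds / (3600 * 24 * 365)
print(f"{years:.0f} years of standby on one cell")
```

The result is thousands of years, far beyond any battery's shelf life, which is why the text says lifetime becomes limited by self-discharge and by processing confirmed events rather than by standby drain.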

It should also attain a low false alarm rate of 1 per hour or better in an urban environment. Upon detection of a signal having the signature of interest, the N-ZERO component devices must produce a logic state capable of waking up commercial-off-the-shelf (COTS) electronics for further (post wake-up) processing and signal communication.

There are two primary challenges for the N-ZERO program in developing an “OFF-but-Alert” sensor technology. The first challenge is to close the sizable gap between the extremely small signal levels measured by RF and physical sensors and the relatively large threshold voltages required by state-of-the-art comparators. N-ZERO aims to bridge that gap without supplying any active power (≤ 10 nW) in the standby state when the signatures of interest are absent.

The second challenge is discriminating the events or signatures of interest from noise and interference, again without supplying active power. The critical technologies created by the N-ZERO program are intended to establish methods to provide large passive voltage gain, develop passive signal processing circuits to prevent false detection, and realize comparators operating at extremely low threshold voltages with near zero power consumption, enabled by steep sub-threshold swing. This three-pronged approach is intended to result in microsystems capable of detecting and processing signals with near zero power consumption (≤ 10 nW).
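The three-pronged chain can be pictured as a toy model: passive gain lifts a weak in-band signal, a passive filter rejects out-of-band interference, and a low-threshold comparator flips the wake-up logic state. All component values below are hypothetical, chosen only to make the idea concrete:

```python
# Toy signal chain (all values hypothetical) for the "OFF-but-Alert" idea:
# passive gain lifts a microvolt-level sensor signal, a passive filter
# rejects out-of-band interference, and a low-threshold comparator flips
# the wake-up logic state, with no active power spent while waiting.

PASSIVE_GAIN = 50                 # e.g. a transformer turns ratio
COMPARATOR_THRESHOLD_V = 0.01     # comparator trip point
SIGNATURE_BAND = (90.0, 110.0)    # Hz band the passive filter passes

def wake(signal_v, freq_hz):
    """Return True when the wake-up comparator would trip."""
    in_band = SIGNATURE_BAND[0] <= freq_hz <= SIGNATURE_BAND[1]
    boosted = signal_v * PASSIVE_GAIN if in_band else 0.0
    return boosted >= COMPARATOR_THRESHOLD_V

print(wake(0.0005, 100.0))   # True: in-band signature trips the comparator
print(wake(0.0005, 400.0))   # False: interference outside the band
print(wake(0.0001, 100.0))   # False: too weak even after passive gain
```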

DARPA has been able to create zero-power receivers that can detect very weak radio-frequency (RF) transmissions, below −70 dBm, a sensitivity better than originally expected. The system has also been able to detect objects correctly without raising false alarms, which crimp battery life. In the program’s current phase, the sensors need to distinguish between cars, trucks and generators in an urban environment at close range, and in the final phase they will be required to classify those same targets from 10 meters (33 feet).

“The ability to sense and classify cars, trucks and generators in … both rural and urban backgrounds from a distance of a little over 5 meters away and being able to do that with almost 10 nanowatts of power consumption is a big accomplishment in phase one of the program,” Olsson says.


DARPA to develop new IR-based sensor technology

A team of researchers at Northeastern University in Boston will develop a sensor powered by IR energy, as part of DARPA’s Near Zero Power RF and Sensor Operation (N-ZERO) programme.

DARPA Microsystems Technology Office N-ZERO Program manager Troy Olsson said: “What is really interesting about the Northeastern IR sensor technology is that, unlike conventional sensors, it consumes zero stand-by power when the IR wavelengths to be detected are not present.

“When those IR wavelengths are present and impinge on the Northeastern team’s IR sensor, the energy from the IR source heats the sensing elements which, in turn, causes physical movement of key sensor components. These motions result in the mechanical closing of otherwise open circuit elements, thereby leading to signals that the target IR signature has been detected.”

The IR sensor technology features multiple sensing elements, each of which is adapted to absorb a specific IR wavelength.

These elements combine into complex logic circuits capable of analysing IR spectra, which allows the sensors to detect IR energy in the environment and determine whether that energy derives from a fire, vehicle, person or some other IR source.

The sensor also includes a grid of nanoscale patches whose specific dimensions limit them to absorb only particular IR wavelengths, DARPA stated.
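One way to picture the wavelength-selective patch logic is the toy classifier below. The bands, thresholds, and labels are invented for illustration and are not the Northeastern team's actual design:

```python
# Toy model (assumed bands and thresholds) of the wavelength-selective
# switch array: each "patch" closes a contact only when its target IR band
# carries enough power, and simple logic over the closed contacts labels
# the source.

def closed_contacts(spectrum, patches, threshold=1.0):
    """spectrum: {band_um: power}; patches: bands the grid is tuned to."""
    return {band for band in patches if spectrum.get(band, 0.0) >= threshold}

PATCHES = (3.5, 4.3, 9.5)            # hypothetical target bands in microns

def classify(spectrum):
    closed = closed_contacts(spectrum, PATCHES)
    if {3.5, 4.3} <= closed:         # hot exhaust: strong mid-IR bands
        return "vehicle"
    if closed == {9.5}:              # body-temperature long-wave IR only
        return "person"
    return "unknown"

print(classify({3.5: 2.0, 4.3: 3.1}))   # vehicle
print(classify({9.5: 1.4}))             # person
```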

Northeastern University Electrical and Computer Engineering associate professor Matteo Rinaldi said: “The charge-based excitations, called plasmons (that can be thought of somewhat like ripples on the surface of water), are highly localised below the nanoscale patches and effectively trap specific wavelengths of light into the ultra-thin structure, inducing a relatively large and swift spike in its temperature.”

 DARPA Award Funds Richard Shi’s Work to Develop New Low-Power Sensors

Under a Defense Advanced Research Projects Agency (DARPA) grant to the University of Washington, EE Professor Richard Shi will be developing specialized sensors that are able to operate with minimal power and remain dormant until triggered.

Through the Near Zero Power RF and Sensor Operations (N-ZERO) program, Shi will develop specialized sensors that are capable of continuously and passively monitoring the environment, with the ability to fully activate in response to specific triggers. Current sensors consume power continuously, which in turn limits the sensor lifetime to months. Expensive batteries must also be frequently replaced.

In addition to current sensors consuming power continuously, a considerable amount of energy is also used by the electronic devices that communicate with the sensors. Therefore, the project will also entail developing radio receivers that can be activated by a radio-frequency trigger. Like the sensors, the radio receivers will then expend power only when useful information is being communicated.
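The RF-triggered receiver idea can be sketched as follows; the threshold and induced voltages are made-up illustrative numbers, not measurements from any N-ZERO device:

```python
# Sketch (assumed values) of an RF wake-up receiver: the main radio stays
# powered down until rectified energy from incoming trigger bursts
# accumulates past a comparator threshold.

class WakeUpReceiver:
    def __init__(self, threshold_v=0.1):
        self.threshold_v = threshold_v
        self.stored_v = 0.0          # voltage on the harvesting capacitor
        self.radio_on = False

    def rf_burst(self, induced_v):
        """Each burst adds rectified energy; crossing the threshold wakes
        the radio. No battery power is drawn while accumulating."""
        self.stored_v += induced_v
        if self.stored_v >= self.threshold_v:
            self.radio_on = True     # hand off to the COTS radio
        return self.radio_on

rx = WakeUpReceiver()
print(rx.rf_burst(0.04))   # False: still accumulating trigger energy
print(rx.rf_burst(0.07))   # True: threshold crossed, radio woken
```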


DARPA awards $1.8 million for ‘near-zero’ power sensors at UC Davis

The U.S. Defense Advanced Research Projects Agency (DARPA) has presented a $1.8 million grant to a project headed by David Horsley, a professor in the UC Davis Department of Mechanical and Aerospace Engineering. The project, “Ultralow Power Microsystems Via an Integrated Piezoelectric MEMS-CMOS Platform,” includes the participation of co-PIs Xiaoguang “Leo” Liu and Rajeevan Amirtharajah, both professors in the UC Davis Department of Electrical and Computer Engineering.

Horsley’s group has teamed up with InvenSense, the company that makes the motion sensors — gyro and accelerometer — in everybody’s smart phones. “DARPA likes to have technology that can be translated into a practical application,” Horsley said. “One strength of our program is that we’re working directly with a high-volume manufacturer, so the chips we are designing are being made at a production facility and can be rapidly transitioned to production for DoD use at the end of the program.”

The program goal is to develop an acoustic sensor and an acceleration sensor that run on near-zero power, producing a wake-up signal when a particular signature is detected: say, a car or truck driving by, or a generator being switched on. “But we don’t have to be able to distinguish between any of those vehicles,” Horsley noted. “In Phase Two, however, we will have to be able to say, ‘This was a truck’ or ‘This was a car.'”

Horsley said the sensors are “kind of like having the ultimate geophone, where you’re sensing for earthquakes, sensing vibrations in the earth.”

“We have sensors that we’re testing now that are running at below 10 nanowatts,” he said. By way of comparison, the existing sensors in smart phones, although already operating on low power, nonetheless require about 10 milliwatts: roughly 1 million times more power than the sensors being developed by Horsley’s team.

“At the end of Phase One, which will be coming up toward the end of this year, we’re going to deliver the hardware,” Horsley said. “We have a very-low-power acceleration sensor and a microphone that we’re going to deliver to the government, and they’re going to have this independently evaluated at Lincoln Lab at MIT.”

Horsley believes that in the not-too-distant future an ultra-low-powered remote sensor could be triggered by events other than ground noise. One could, for example, have a microphone that’s on all the time listening for a specific keyword. “So one vision for this technology is that … you wouldn’t have to fire up a processor, like an applications processor, or get connected to the cloud to be able to have it do keyword recognition,” Horsley said. “That’s pretty far from where we are now, but it certainly seems like we’re in the right direction to get there.”



DARPA’s Fast Lightweight Autonomy (FLA) program is advancing autonomy to aid military operations in dense urban areas or heavily wooded forests

“The goal of Fast Lightweight Autonomy (FLA)  is to develop advanced algorithms to allow unmanned air or ground vehicles to operate without the guidance of a human tele-operator, GPS, or any datalinks going to or coming from the vehicle,” said JC Ledé, the DARPA FLA program manager. Autonomous flight capabilities are being developed and demonstrated using custom payloads on a commercial quadrotor platform (DJI Flamewheel 450 airframe, E600 motors with 12″ propellers, and 3DR Pixhawk autopilot).

A traditional approach to operating small UAVs uses a human operator as the pilot. The air vehicles are typically remotely controlled with the operator watching the vehicle or teleoperated with the operator watching data from on-board sensors. These techniques work only when a highly skilled operator is coupled with a communications channel having high availability and manageable latency. However, the approach breaks down when obstacles are added to the environment, as communications degrade, and as vehicle speed increases.

Birds and flying insects maneuver easily at high speeds near obstacles. The FLA program asks the question “How can autonomous flying robotic systems achieve similar high-speed performance?”

Another traditional approach to controlling small, unmanned air vehicles uses Global Positioning System (GPS) coordinates to specify a flight path as a series of predetermined waypoints. This method for navigation has proven effective only in situations where GPS is available. It fails when GPS is lost due to interference such as jamming or poor reception indoors; as well as in settings in which GPS bounds on accuracy are not adequate for the size and speed of the platform.

Birds and flying insects are able to perform well without using predetermined waypoints or an external position reference system.

“Most people don’t realize how dependent current UAVs are on either a remote pilot, GPS, or both. Small, low-cost unmanned aircraft rely heavily on tele-operators and GPS not only for knowing the vehicle’s position precisely, but also for correcting errors in the estimated altitude and velocity of the air vehicle, without which the vehicle wouldn’t know for very long if it’s flying straight and level or in a steep turn. In FLA, the aircraft has to figure all of that out on its own with sufficient accuracy to avoid obstacles and complete its mission.”
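The point about drifting altitude and velocity estimates comes down to integration: a tiny constant accelerometer bias, integrated twice, becomes a large position error within a minute. The bias value below is illustrative of a good MEMS IMU, not a figure from the FLA platforms:

```python
# Why GPS-denied flight is hard: a small constant accelerometer bias,
# integrated twice by dead reckoning, grows quadratically into position
# error. Numbers are illustrative only.

def drift_m(bias_ms2, seconds, dt=0.01):
    """Euler-integrate a constant acceleration bias into position error."""
    vel = pos = 0.0
    for _ in range(int(seconds / dt)):
        vel += bias_ms2 * dt      # first integration: velocity error
        pos += vel * dt           # second integration: position error
    return pos

# A 0.01 m/s^2 bias after one minute of uncorrected dead reckoning:
print(f"{drift_m(0.01, 60):.1f} m of position error")
```

Roughly 0.5·b·t² ≈ 18 m after 60 s, which is why FLA vehicles must estimate their state from onboard perception rather than inertial sensing alone.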

The technology is intended to support unmanned aerial vehicle flights in GPS-denied or GPS-unavailable environments and aid military operations or search and rescue missions, among others, DARPA said. Potential applications for the technology include safely and quickly scanning for threats inside a building before military teams enter, searching for a downed pilot in a heavily forested area or jungle in hostile territory where overhead imagery can’t see through the tree canopy, or locating survivors following earthquakes or other disasters when entering a damaged structure could be unsafe.

The Defense Advanced Research Projects Agency put unmanned quadcopters through a series of tests to demonstrate autonomous flight without the aid of human operators or global positioning systems. DARPA said three research teams under the Fast Lightweight Autonomy program flew small unmanned quadcopters through various environments using onboard cameras and sensors as well as smart algorithms for autonomous navigation.

“I was impressed with the capabilities the teams achieved in Phase 1,” Ledé said. “We’re looking forward to Phase 2 to further refine and build on the valuable lessons we’ve learned. We’ve still got quite a bit of work to do to enable full autonomy for the wide-ranging scenarios we tested, but I think the algorithms we’re developing could soon be used to augment existing GPS-dependent UAVs for some applications. For example, existing UAVs could use GPS until the air vehicle enters a building, and then FLA algorithms would take over while indoors, while ensuring collision-free flight throughout. I think that kind of synergy between GPS-reliant systems and our new FLA capabilities could be very powerful in the relatively near future.”

If successful, FLA would reduce operator workload and stress and allow humans to focus on higher-level supervision of multiple formations of manned and unmanned platforms as part of a single system.


Fast Lightweight Autonomy (FLA) program

The program focuses on autonomy and not on the flight platform, where “autonomy” includes sensing, perception, planning, and control. The goal of the FLA program is to explore non-traditional perception and autonomy methods that could enable a new class of algorithms for minimalistic high-speed navigation in cluttered environments. The FLA program will demonstrate a sequence of capabilities, beginning with lower-clutter, fly-by missions and progressing to higher-clutter, fly-through missions.


Through this exploration, the program aims to develop and demonstrate the capability for small (i.e., able to fit through windows) autonomous UAVs to fly at speeds up to 20 m/s with no communication to the operator and without GPS waypoints.


The FLA program is focused on developing a new class of algorithms that enables UAVs to operate in GPS-denied or GPS-unavailable environments—like indoors, underground, or intentionally jammed—without a human tele-operator.


Under the FLA program, the only human input required is the target or objective for the UAV to search for—which could be in the form of a digital photograph uploaded to the onboard computer before flight—as well as the estimated direction and distance to the target. A basic map or satellite picture of the area, if available, could also be uploaded. After the operator gives the launch command, the vehicle must navigate its way to the objective with no other knowledge of the terrain or environment, autonomously maneuvering around uncharted obstacles in its way and finding alternative pathways as needed.


Phase 1 of DARPA’s Fast Lightweight Autonomy (FLA) program concluded recently following a series of obstacle-course flight tests in central Florida. During the tests, researchers provided targets or objectives for the UAVs by uploading images or estimated direction and distance. The quadcopters had to self-navigate through various obstacle-strewn locations such as building interiors, wooded areas and a hangar before flying back to the starting point.


The recent four days of testing combined elements from three previous flight experiments that together tested the robustness of the teams’ algorithms to real-world conditions such as quickly adjusting from bright sunshine to dark building interiors, sensing and avoiding trees with dangling masses of Spanish moss, navigating a simple maze, or traversing long distances over feature-deprived areas. On the final day, each aircraft had to fly through a thickly wooded area and across a bright aircraft parking apron, find the open door to a dark hangar, maneuver around walls and obstacles erected inside the hangar, locate a red chemical barrel as the target, and fly back to its starting point, completely on its own.


Each team showed strengths and weaknesses as they faced the varied courses, depending on the sensors they used and the ways their respective algorithms tackled navigation in unfamiliar environments. Some teams’ UAVs were stronger in maneuvering indoors around obstacles, while others excelled at flying outdoors through trees or across open spaces.


Success was largely a matter of superior programming. “FLA is not aimed at developing new sensor technology or to solve the autonomous navigation and obstacle avoidance challenges by adding more and more computing power,” Ledé said. “The key elements in this effort, which make it challenging, are the requirements to use inexpensive inertial measurement units and off-the-shelf quadcopters with limited weight capacity. This puts the program emphasis on creating novel algorithms that work at high speed in real time with relatively low-power, small single board computers similar to a smart phone.”


