DARPA developing collaborative intelligent Wireless Networks and Adaptive Radios to cooperatively share or dominate increasingly congested spectrum

The ongoing wireless revolution is fueling a voracious demand for access to the radio frequency (RF) spectrum around the world. In the civilian sector, consumer devices from smartphones to wearable fitness trackers to smart kitchen appliances are competing for bandwidth. Around 50 billion wireless devices are projected to be vying for access to mobile communications networks within the next few years, and by 2030 the demand for wireless access could be 250 times what it is today. However, as the use of wireless technology proliferates, radios and communication devices increasingly interfere with and disrupt one another.


Military spectrum requirements are also increasing exponentially, as military operations increasingly rely on access to the wireless spectrum to assess the tactical environment and to coordinate and execute critical missions. The demand for more, and more timely, information at every echelon is driving an increase in DoD’s need for spectrum: “Increasingly lower echelons, including individual soldiers, require situational awareness information, resulting in more spectrum-enabled network links.” Managing this increasing demand while combating a looming scarcity of RF spectrum is a serious problem for the nation, both militarily and economically, says DARPA.


However, spectrum is a finite resource, and DOD must also free up 500 MHz of its spectrum for commercial use by 2020, further constraining the spectrum available for military use. Spectrum congestion is a growing problem, DARPA officials explain. It increasingly limits operational capabilities due to the growing deployment and bandwidth of wireless communications, the use of network-centric and unmanned systems, and the need for greater flexibility in radar and communications spectrum to improve performance and overcome sophisticated countermeasures.



In March 2016, DARPA launched the Spectrum Collaboration Challenge (SC2), an initiative designed to ensure that the exponentially growing number of military and civilian wireless devices will have full access to the increasingly crowded electromagnetic spectrum. The envisioned networks will be capable of intelligently optimizing spectrum use by collaborating with, and learning from, the other systems that share the spectrum with them.


A Vanderbilt team of researchers and alumni, dubbed MarmotE, won Round 1 of the U.S. Defense Advanced Research Projects Agency’s Spectrum Collaboration Challenge (SC2) in mid-December, leading the top 10 teams, each of which was awarded $750,000 in prize money. This was the first event of the three-year tournament. Round 2 is set for December 2018, and the ultimate SC2 winners will walk away in 2019 with $2 million in prize money.


Spectrum Collaboration Challenge program manager Paul Tilghman said: “SC2 sets out to bring the software-defined radio and artificial intelligence communities together to fundamentally rethink 100 years of spectrum practice, and tackle the original and enduring spectrum grand challenge: efficient coexistence of all wireless communications.”

DARPA Spectrum Challenge

The Defense Advanced Research Projects Agency (DARPA) Spectrum Collaboration Challenge (SC2) will use a series of tournament events to spur development of next-generation wireless networks that make more effective use of the RF spectrum. Today the spectrum is managed by a nearly century-old technique: wireless systems are isolated from one another by dividing the spectrum into exclusively licensed bands allocated over large, geographically defined regions. This approach rations access to the spectrum in exchange for a guarantee of interference-free communication. However, allocation is human-driven and not adaptive to the dynamics of supply and demand. At any given time, many allocated bands sit unused by their licensees while other bands are overwhelmed, squandering the spectrum’s enormous capacity and unnecessarily creating conditions of scarcity.


The current situation also poses potential security risks for the military, creating the impression of reliable and unfettered access to the spectrum while in actuality creating a well-defined target for adversaries that may wish to disrupt wireless operations. First responder radios need to be able to communicate reliably in such congested and contested environments and to share radio spectrum without direct coordination or spectrum preplanning. SC2 competitors will reimagine spectrum access strategies and develop a new wireless paradigm in which radio networks autonomously collaborate and reason about how to share the RF spectrum, avoid interference, and jointly exploit opportunities to achieve the most efficient use of the available spectrum.


DARPA has selected 30 teams for Phase 1 of the Spectrum Collaboration Challenge. The competition will unfold in three year-long phases, beginning in 2017 and finishing, for those teams that survive the two Preliminary Events, in a high-profile Championship Event in late 2019. All 30 teams must meet several requirements throughout the year to prepare for the Preliminary Event #1 competition in December 2017. Top performers in Phase 1 will proceed to Phase 2 the following year, which culminates in another competition event in December 2018.


The third and final phase, running up to the final competition at the end of 2019, will award $2 million, $1 million, and $750,000, respectively, to the top three finishers. The team whose advanced software-defined radios collaborate most effectively with a diversity of simultaneously operating radios, in a manner that optimizes spectrum usage for the entire communicating ensemble, will walk away with the grand prize of $2 million.


The DARPA Spectrum Challenge aims to stimulate the development of innovative approaches to adaptive, software-based radio communications in such multi-user environments. Teams will compete to create protocols for software-defined radios that best use communication channels in the presence of other dynamic users and interfering signals. The Challenge is not focused on developing new radio hardware, but instead seeks algorithmic strategies for guaranteeing successful communication in the presence of other radios without explicit coordination.


To host the new Challenge, DARPA aims to construct the largest-of-its-kind wireless testbed, which will serve during and after the SC2 as a national asset for evaluating spectrum-sharing strategies, tactics, and algorithms for next-generation radio systems.


This new breed of collaborative intelligent radio networks could give rise to a rich spectral ecosystem able to accommodate an enormous diversity of communicating devices while operating 100 to 1,000 times more efficiently than today’s wireless networks.


New DARPA Grand Challenge to Focus on Spectrum Collaboration

The win is especially significant for Peter Volgyesi, a research scientist, and Miklos Maroti, a research associate professor, at Vanderbilt’s Institute for Software Integrated Systems. For the preliminary event, 475 fully autonomous matches were run with the 19 qualified teams’ radio designs in SC2’s custom testbed environment, known as Colosseum. The final matches for the first event were carried out across six different communications scenarios designed to mirror real-world congested environments, but with more complexity than existing commercial radios are equipped to handle.


The competing teams faced fluctuating bandwidths and interference from other competitors as well as from DARPA-designed bots that tested and challenged their radio designs. Each team’s radio performance was scored based on its collaborative spectrum-sharing abilities.


“Central management of the spectrum is simply not scalable and pretty wasteful, but ad-hoc sharing as implemented in WiFi is not working either,” said Maroti. “The best solution to spectrum management would be a combination of distributed cooperation and adaptation driven by the latest advances of machine learning.”


The Round 1 competition found that when two radio networks were asked to share the spectrum, the top performing teams were successful at adapting their spectrum usage so that both networks could successfully transmit with minimal interference.


Fully autonomous sharing of the spectrum among three simultaneous wireless technologies, however, remains a difficult challenge. When three different technologies attempt to coexist, there is a smaller set of overlapping strategies that will fulfill each individual radio network’s needs. This creates conflict and demands a higher degree of agility and reasoning, which will be needed to succeed in the next phase.


The next preliminary event will further challenge competitors with an interference environment beyond what existing commercial and military radios can handle—upping the number of simultaneous wireless network types from three to five, and raising the total number of radios from 15 to 50.


SC2 teams will take advantage of recent advances in artificial intelligence and machine learning and the expanding capacities of software-defined radios to develop breakthrough capabilities that can help bring about spectrum abundance.


After two intense days of competition, teams from Tennessee Technological University and Georgia Tech Research Institute and an independent team of individuals emerged as the overall winners, earning a total of $150,000 in prize money. The agency’s Spectrum Collaboration Challenge (SC2) will reward teams for developing smart systems that collaboratively, rather than competitively, adapt in real time to today’s fast-changing, congested spectrum environment—redefining the conventional spectrum management roles of humans and machines in order to maximize the flow of radio frequency (RF) signals.



The challenge is expected to both take advantage of recent significant progress in the fields of artificial intelligence and machine learning and also spur new developments in those research domains, with potential applications in other fields where collaborative decision-making is critical.



Both the preliminary and final events included two separate tournaments, each with its own goals:


Cooperative tournament:

In each match, three teams attempted to effectively share the spectrum while transmitting random data files from their source radio to their destination radio over the same 5 MHz UHF band. A team’s match score was its total packets delivered plus the higher of the two other teams’ delivered packets—thus motivating cooperative behavior.
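The scoring rule above can be sketched as a small function (a minimal illustration; the function name and example packet counts are hypothetical):

```python
# Cooperative-tournament scoring as described above: a team's match score is
# its own delivered packets plus the HIGHER of the other two teams' counts,
# so helping at least one other network succeed raises your own score.

def match_score(own_delivered: int, other1: int, other2: int) -> int:
    return own_delivered + max(other1, other2)

# Example: Team A delivers 120 packets while Teams B and C deliver 90 and 100.
score_a = match_score(120, 90, 100)
print(score_a)  # 120 + 100 = 220
```

Because only the higher of the two rivals' counts is added, a team gains nothing by starving both competitors, which is what motivates cooperative behavior.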


Teams could not coordinate in advance on how to share the spectrum; instead, they had to develop and implement algorithms to enable their assigned software-defined radios to dynamically communicate at a high rate while leaving spectrum available for the other two teams to do the same. This event tested conditions encountered during military operations involving multiple units and coalition partners, and also has possible future commercial applications.


Competitive tournament:

In each match, two teams sought to dominate the spectrum, with the winner being the first to transmit all its files of random data (or to successfully transmit the most packets in three minutes) from a source radio to a destination radio. Teams had to develop and implement algorithms to enable their assigned software-defined radio to dynamically communicate at a high rate in the presence of competitors’ signals within the same 5 MHz UHF band. This event tested conditions directly applicable to military communications, where radios must deliver high-priority data in congested and often contested electromagnetic environments.



DARPA’s neXt Generation Communications (XG) Program

The XG program is developing technology and system concepts for military radios to dynamically access spectrum in order to establish and maintain communications. The goal is to demonstrate the ability to access 10 times more spectrum with near-zero setup time; simplify RF spectrum planning, management, and coordination; and automatically de-conflict operational spectrum usage.


XG technology assesses the spectrum environment and dynamically uses spectrum across frequency, space and time. XG is designed to be successful in the face of jammers and without harmful interference to commercial, public service, and military communications systems. XG is transitioning to the Army to solve spectrum challenges in-theater.


In 2005, Shared Spectrum Company was awarded the prime contract for Phase III of the neXt Generation Communications (XG) program, funded by the Department of Defense’s (DoD) Defense Advanced Research Projects Agency (DARPA) and managed by the Air Force Research Laboratory (AFRL).


DARPA’s Advanced RF Mapping (Radio Map)

For warfighters, efficiently managing the congested RF spectrum has become critical to ensuring effective communications and intelligence gathering.


DARPA’s Advanced RF Mapping (RadioMap) program seeks to provide real-time awareness of radio spectrum use across frequency, geography and time. The goal is to provide a map that gives an accurate picture of spectrum use in complex environments. With this information, spectrum managers and automatic spectrum allocation systems can operate much more efficiently, reducing the problems caused by spectrum congestion and enabling better mitigation of interference problems.


The program plans to provide this information in part by using radios deployed for other purposes, like data and voice communications systems. The program aims to develop ways to use the capabilities of modern radios to sense the spectrum when they are not communicating.


“RadioMap adds value to existing radios, jammers and other RF electronic equipment used by our military forces in the field,” said John Chapin, DARPA program manager. “This program doesn’t require purchasing new spectrum-sensing devices. Rather, it uses existing radios and jammers that do double-duty. In the ‘down’ time when they aren’t performing their primary function, the devices sense the spectrum around them and, through RadioMap technology, provide an accurate picture of what frequencies are currently in use and where.”


The map can be likened to traffic cams in busy cities that show the flow of traffic at different times of the day, giving real-time awareness of whether a section of road is jammed with traffic or clear, helping drivers plan their commute. Such systems aren’t designed to show specific license plates or vehicle types, but rather to help drivers see and avoid congestion, resulting in smoother traffic flow.


RadioMap isn’t designed to deal with the specifics of transmissions; rather, its purpose is to identify frequency usage—where and when the radio frequency “highway” is jammed or clear—allowing better planning and allocation of the spectrum for warfighters operating overseas in RF-congested environments.


Another goal of the RadioMap program is to assist small tactical units such as platoons or companies that rarely carry equipment for monitoring radio emissions. With RadioMap, the radios already carried by these units would do double duty to inform the troops about nearby threats and opportunities that are visible in the RF spectrum.


RadioMap contracts

Vencore, Inc. announced that its innovative research arm, Vencore Labs, received a $5.0 million award for phase three of the Advanced Radio Frequency (RF) Mapping (RadioMap) program, which is supported by the Air Force Research Laboratory (AFRL) and the Defense Advanced Research Projects Agency (DARPA). Vencore Labs will be responsible for developing software associated with (i) distributed command and control (C2), (ii) management of RadioMap tasking, and (iii) software agents for C2 of RadioMap tasks on RF platforms. Vencore Labs performs as a subcontractor to Lockheed Martin on this program.


“Vencore Labs plans to continue expanding the capabilities and maturing the technology we developed for phases one and two of this effort. This third phase includes plans for testing RadioMap on RF-capable equipment that is typical in current forward-deployed environments, along with a field test and demonstration,” the company said.


RadioMap intends to develop technology that visually overlays spectrum information on a map, enabling rapid frequency deconfliction and maximizing use of available spectrum for communications and intelligence, surveillance and reconnaissance (ISR) systems.


DARPA gave Lockheed Martin an $11.8 million contract for Phase 3 of RadioMap, during which the contractor is to build on the work of phases 1 and 2 to develop a working system capable of transitioning to the military services. During Phase 2 of the program, DARPA conducted successful tests at Quantico Marine Base, and program manager John Chapin said the Marines will likely participate in Phase 3. “The Marine Corps is an ideal transition partner for RadioMap,” he said. “They have in place the doctrine, organizational structure, and information systems framework that can effectively integrate RadioMap software.”



DOD’s Electromagnetic Spectrum Strategy

“DoD’s growing requirements to gather, analyze, and share information rapidly; to control an increasing number of automated Intelligence, Surveillance, and Reconnaissance (ISR) assets; to command geographically dispersed and mobile forces to gain access into denied areas; and to ‘train as we fight’ require that DoD maintain sufficient spectrum access,” says DoD’s Electromagnetic Spectrum Strategy, unveiled in February 2014.


However, adversaries are aggressively developing and fielding electronic attack (EA) and cyberspace technologies that significantly reduce DoD’s ability to access the spectrum and conduct military operations. Countering them requires the development, fielding, and integration of complex EA, electronic support (ES), and electronic protection (EP) technologies to attack adversaries’ command, control, communications, and computers; ISR; improvised explosive devices (IEDs); and area-denial weapon systems, all of which require access to spectrum.


Concurrently, unprecedented consumer demand for wireless mobility and data consumption has resulted in the reduction and fragmentation of spectrum available for defense. Only 1.4 percent of the RF spectrum from 0 to 300 GHz is available exclusively to the U.S. government. Additionally, the Defense Department is under a mandate to give up 500 MHz of bandwidth for civilian use by 2020.


This has resulted in complex defense spectrum management within and between the armed services, where any errors in the spectrum management plan may result in the denial of critical strategic and tactical links. It is also relatively easy for adversaries to target such a small part of the RF spectrum, allocated exclusively to the government, through jamming or electronic attack.


DOD’s Electromagnetic Spectrum Strategy called for ensuring access to the congested and contested electromagnetic environment of the future by adopting agile and opportunistic spectrum operations; fielding systems that are more efficient, flexible, and adaptable; and adopting new technologies capable of more efficient use of the spectrum with reduced risk of interference.


The DoD EMS Strategy will focus on the following goals:

Advancing spectrum-dependent technologies that are more efficient, flexible, and adaptable in their use of spectrum.

This will include:

  • Expediting development of technologies that increase a spectrum-dependent system’s ability to access wider frequency ranges, exploit spectrum efficiency gains, utilize less congested bands, and adapt to changing electromagnetic environments;
  • Pursuing spectrum sharing opportunities;
  • Evaluating commercial service capabilities (such as smartphones) for mission use; and
  • Improving DoD’s oversight of spectrum use.


Increasing the agility of DoD’s spectrum operations. This will include:

  • Managing spectrum-dependent systems in near-real-time by developing tools and techniques to quantify spectrum requirements and identify and mitigate spectrum issues;
  • Improving the ability to identify, predict, and mitigate harmful interference; and
  • Pursuing access to spectrum allocated for non-federal use and spectrum sharing technologies.


Encouraging DoD participation in changing national and international spectrum policy and regulation. In particular, DoD will focus on:

  • Developing innovative alternatives that consider both DoD and commercial interests; and
  • Improving its ability to adapt and implement regulatory and policy changes while maintaining full military capability


Opportunistic use of the spectrum is one of the promising approaches being pursued by both DoD and the wireless community. To increase the options available to mission planners, DoD systems must become more spectrally efficient, flexible, and adaptable, and DoD spectrum operations must become more agile in their ability to access spectrum.

“DoD will also continue to adopt new tools and techniques to manage the spectrum more effectively, making our spectrum operations more agile,” says DOD’s spectrum strategy. Cognitive radio systems, improved spectrum sensing, and geo-location databases are among new opportunistic use technologies being considered.


References and Resources also include:

  1. http://www.darpa.mil/news-events/2014-04-02a
  2. http://www.darpa.mil/program/spectrum-challenge
  3. http://www.darpa.mil/news-events/2016-07-19a
  4. https://www.fbo.gov/index?s=opportunity&mode=form&id=bbaf4ae8ac8e438e353dc30c74ab56af&tab=core&_cview=1
  5. http://www.prnewswire.com/news-releases/vencore-labs-extends-its-involvement-to-third-phase-of-darpas-radiomap-program-300331396.html
  6. https://engineering.vanderbilt.edu/news/2018/vanderbilt-wins-top-prize-in-first-round-of-darpa-spectrum-collaboration-challenge/

Underwater acoustic communications being improved, solving many challenges for Navy and commercial customers

Radio waves do not propagate well underwater due to high attenuation. In fact, radio waves propagate over long distances through conductive salt water only at extremely low frequencies (30–300 Hz), which require large antennas and high transmission power.
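The attenuation can be made concrete with the standard skin-depth formula for a conductor, which gives the distance over which an electromagnetic field's amplitude falls by a factor of 1/e (a back-of-the-envelope sketch; the seawater conductivity value is a typical textbook figure, not from the source):

```python
import math

# Skin depth in a conductor: delta = sqrt(2 / (omega * mu * sigma)).
MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
SIGMA_SEAWATER = 4.0       # typical conductivity of seawater, S/m (assumed)

def skin_depth(freq_hz: float) -> float:
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU0 * SIGMA_SEAWATER))

print(f"{skin_depth(100):.1f} m")   # roughly 25 m at 100 Hz (ELF)
print(f"{skin_depth(1e9):.4f} m")   # only ~8 mm at 1 GHz
```

At 100 Hz the field penetrates tens of meters, which is why ELF links work at all; at typical communication frequencies the signal is absorbed within centimeters.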

Underwater acoustic communication is a technique of sending and receiving messages below water. Acoustic waves are low-frequency waves that offer small bandwidth but have long wavelengths. They can therefore travel long distances and are used for relaying information over kilometers. There are several ways of implementing such communication, but the most common is the use of hydrophones.

Acoustic communications also suffer from the large propagation delays of acoustic waves, high bit error rates on the underwater acoustic channel, multipath propagation, and time variations of the channel. In addition, acoustic waves are affected by turbulence caused by tidal flows and suspended sediments, acoustic noise, and pressure gradients.

In 1996, Stojanovic et al. proposed an underwater acoustic communication (UWAC) system at 40 kbps. In 2002, Zielinski et al. constructed an 8-kbps digital UWAC system over a 13-km link at 20 m depth. In 2005, Ochi et al. employed 32-quadrature amplitude modulation (QAM) to construct a 125-kbps UWAC system with a symbol error rate of 10⁻⁴. Zakharov et al. demonstrated a UWAC system with an orthogonal frequency division multiplexing (OFDM) data stream.

In addition, Li et al. proposed a UWAC system applying the multiple-input multiple-output (MIMO) technique. Song et al. demonstrated a UWAC system with 60-kbps 32-QAM data covering a bandwidth of 32 kHz in a seawater environment more than 100 m deep, over a distance of more than 3 km. Despite this research, however, the transmission rate of UWAC systems remains limited by their narrow modulation bandwidth.

UWAC systems can be applied only in low-noise environments for low-speed content. This is because of the strong attenuation of acoustic waves in seawater, which is inversely proportional to wavelength, as well as significant propagation delay and the low signal-to-noise ratio (SNR) of data against background ocean noise.

Acoustic modems

Underwater acoustic communication is relatively slow compared with radio communication, largely because of the speed of sound in water, roughly 1,500 meters per second. The result is a relatively low baud rate (typically 9600 baud).
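The difference is easy to quantify. Using the ~1500 m/s figure above, a short back-of-the-envelope comparison of acoustic versus RF one-way propagation delay (the 3 km link distance is an illustrative choice):

```python
# One-way propagation delay: acoustic in water vs. electromagnetic in free space.
SPEED_OF_SOUND_WATER = 1500.0   # m/s, approximate value quoted in the text
SPEED_OF_LIGHT = 3.0e8          # m/s

def one_way_delay(distance_m: float, speed_mps: float) -> float:
    return distance_m / speed_mps

d = 3000.0  # an illustrative 3 km link
print(f"acoustic: {one_way_delay(d, SPEED_OF_SOUND_WATER):.2f} s")    # 2.00 s
print(f"RF:       {one_way_delay(d, SPEED_OF_LIGHT) * 1e6:.0f} us")   # 10 us
```

A two-second one-way delay on a 3 km link is why acknowledgment-heavy protocols designed for radio perform poorly underwater.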

Not only is the medium slow but there are complications with the transmission due to signal absorption, geometric spreading losses, boundary effects, and multipath to name a few. Manufacturers have several techniques they employ to handle these challenges. The techniques come in the form of signal processing, data packaging, and coding schemes. These techniques, which are not the same for all manufacturers, help ensure reliable communication and possibly identify bit loss and/or repair these lost portions of data at the receiver end.

Two things are required for acoustic communication. First, a modulation stage (in the transmitter) and a demodulation stage (in the receiver) using a carrier wave are needed to maximize the quantity of information sent and to decrease the effects of noise and interference. Second, a medium is needed to transport that carrier wave.

Water is a good medium for transmitting the carrier wave with a low noise rate. These facts make acoustic communication the most useful way to transfer data under the sea. However, acoustic modems also have problems, such as transmission loss, propagation delay, the Doppler effect, refraction due to variations in temperature and pressure, multipath, and frequency-dependent attenuation. Currently available acoustic modems can only support point-to-point, low-data-rate, delay-tolerant applications.

There are several methods of transmitting data acoustically (i.e., modulation schemes), but the most common is spread spectrum: briefly, sending data on several different frequencies (multiple frequency-shift keying, MFSK) in order to increase data throughput. Another modulation scheme is phase-shift keying (PSK); it permits higher baud rates but is more susceptible to error sources.
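A minimal sketch of the MFSK idea, mapping each 2-bit symbol to one of four tones (the sample rate, tone frequencies, and symbol duration are illustrative, not those of any particular modem):

```python
import math

# Toy MFSK modulator: every 2-bit symbol selects one of four tone frequencies,
# and each symbol is transmitted as a fixed-length burst of that tone.
SAMPLE_RATE = 48_000                      # samples/s (assumed)
SYMBOL_LEN = 480                          # samples per symbol, i.e. 10 ms
TONES = [10_000, 11_000, 12_000, 13_000]  # Hz, one tone per 2-bit symbol

def mfsk_modulate(bits):
    samples = []
    for i in range(0, len(bits), 2):
        symbol = bits[i] * 2 + bits[i + 1]      # pack 2 bits into 0..3
        freq = TONES[symbol]
        for n in range(SYMBOL_LEN):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

waveform = mfsk_modulate([0, 1, 1, 0])  # two symbols -> 960 samples
print(len(waveform))
```

A receiver would run a bank of filters (or an FFT) over each symbol period and pick the strongest tone, which is what makes FSK schemes relatively robust to the amplitude fading common underwater.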

The data are packaged to ensure that a few errors will not corrupt the entire message; large amounts of data are sent as a series of data packages. A typical data package is approximately 4 kb and contains the payload plus additional bytes identifying the package boundaries and modem identity, a checksum, and error correction codes.
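The framing just described can be sketched as follows (a hedged illustration: the field names, sizes, and the choice of CRC-32 are hypothetical, since each manufacturer uses its own format):

```python
import zlib

# Illustrative packet framing: header fields (modem id, sequence number marking
# package boundaries, payload length) plus a CRC-32 checksum so the receiver
# can detect corrupted packages.

def build_packet(modem_id: int, seq: int, payload: bytes) -> bytes:
    header = (modem_id.to_bytes(2, "big")
              + seq.to_bytes(2, "big")
              + len(payload).to_bytes(2, "big"))
    crc = zlib.crc32(header + payload).to_bytes(4, "big")
    return header + payload + crc

def verify_packet(packet: bytes) -> bool:
    body, crc = packet[:-4], packet[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

pkt = build_packet(modem_id=7, seq=1, payload=b"hello")
print(verify_packet(pkt))                 # True
corrupted = pkt[:8] + b"X" + pkt[9:]      # corrupt one payload byte in transit
print(verify_packet(corrupted))           # False: checksum mismatch detected
```

A failed checksum is what would trigger the retransmission request described below for modems that support it.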

Some modems can be configured so that the receiver sends a retransmission request if errors are detected in a data package. Lost data must then be retransmitted, which reduces the effective baud rate even when a modem is operating at a high acoustic baud rate.

Apart from modulation schemes and packaging techniques, there are also techniques to minimize the effects of multipath. Multipath is the reception of the same signal several times, each copy slightly delayed from the others. Since the copies are at the same frequency and arrive at more or less the same time, it is challenging to separate the original signal from the time-delayed versions overlapping it.

As the name suggests, multipath is the source of these “different” signals that are reflections of the original signal from boundaries that lie between the transmitter and receiver. Multipath is most prominent over long ranges and shallow water, whereby the original signal can bounce between the surface and bottom before arriving at the receiver. There are a few tricks in use to reduce the effects of multipath. These are convolutional coding, multipath guard period, and data redundancy.
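A toy simulation makes the effect concrete: the receiver hears the direct signal plus attenuated, delayed reflections summed together (the delays and gains below are illustrative values, not measurements):

```python
# Multipath as a sum of delayed, attenuated copies of the transmitted signal.

def received(signal, paths):
    """paths: list of (delay_in_samples, gain). Returns the summed arrivals."""
    length = len(signal) + max(delay for delay, _ in paths)
    out = [0.0] * length
    for delay, gain in paths:
        for i, s in enumerate(signal):
            out[delay + i] += gain * s
    return out

tx = [1.0, 0.0, 0.0, 0.0]  # a single transmitted pulse
# direct path, plus surface and bottom bounces arriving later and weaker
rx = received(tx, [(0, 1.0), (3, 0.5), (5, 0.3)])
print(rx)  # the echo copies land where later symbols would be, causing overlap
```

With real symbol streams instead of a lone pulse, the delayed copies land on top of subsequent symbols, which is exactly the inter-symbol interference that guard periods and coding are meant to mitigate.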

Convolutional coding places data in a following frame that is capable of correcting up to one-bit errors in the previously sent data frame.

Multipath guard is a time delay inserted between data frames. Increasing the delay between frames reduces the interference from multipath.

Data redundancy is simply the process by which data is retransmitted in the same data frame.

All of these methods improve the reliability of a transmission, however they also reduce the data transmission rate. This means there is a trade off between reliability and data rate.
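The trade-off can be shown with the simplest possible redundancy scheme, a 3x repetition code (this stands in for the redundancy techniques above as a minimal sketch; real modems use stronger codes such as convolutional codes):

```python
# A 3x repetition code: each data bit is sent three times. Majority voting at
# the receiver corrects any single flipped bit per triple, but the effective
# data rate drops to one third of the channel rate.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

data = [1, 0, 1]
coded = encode(data)          # 9 channel bits carry 3 data bits
coded[4] ^= 1                 # the channel flips one bit
print(decode(coded) == data)  # True: the single error is corrected
```

Tripling every bit buys single-error correction at the cost of two thirds of the throughput, a concrete instance of the reliability-versus-rate trade-off.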

In essence, an underwater modem consists of:

1) A power unit, which has a battery and a set of DC/DC converters,

2) A processing unit, which usually consists of a small processor and memory (sometimes, it can be added as an external memory),

3) The physical hydrophone and loudspeaker,

4) Circuitry (used to adapt the digital signals to the processor) and the analog to digital converter and the digital to analog converter to adapt changes between the medium and the electric circuit.


Breakthrough in Full-Duplex Acoustic Underwater Data Communications

QinetiQ North America (QNA) has announced that it has successfully demonstrated full-duplex underwater acoustic data communications, using its new, proprietary DOLPHIN technology.  Developed in partnership with Optimal Systems Laboratory, Inc. (OSL), located in Cambridge, MA, the DOLPHIN Communication technology will enable groundbreaking capabilities for the undersea domain. QNA and OSL have developed a unique way of using patented cancelation technology that will enable simultaneous transmit and receive (STAR) acoustic communications.  This technology will make it possible to create extensive undersea data and communication wireless networks, solving many challenges for Navy and commercial customers.

Dolphin benefits include:

  • Enables self-forming acoustic underwater networks to operate similarly to wireless land networks with nodes in motion
  • Multi-component control networks for fixed and mobile assets anywhere underwater
  • Frequency independence, allowing DOLPHIN Comms to be configurable on most systems and platforms (unmanned underwater vehicles, submarines, ships, etc.)
  • Full-duplex communications greatly improve acoustic data transfer performance over other technology available today
  • Dolphin Comms STAR enables extremely low power communications


Use of vector sensor receivers

A vector sensor is capable of measuring important non-scalar components of the acoustic field such as the wave velocity, which cannot be obtained by a single scalar pressure sensor.

In recent decades, extensive research has been conducted on the theory and design of vector sensors. Many vector sensor signal processing algorithms have been designed. They have been mainly used for underwater target localization and sonar applications.

Earlier underwater acoustic communication systems relied on scalar sensors only, which measure the pressure of the acoustic field. Vector sensors measure the scalar and vector components of the acoustic field at a single point in space, and can therefore serve as a compact multichannel receiver. This differs from existing multichannel underwater receivers, which are composed of spatially separated pressure-only sensors and may therefore require large arrays.

In general, there are two types of vector sensors: inertial and gradient. Inertial sensors truly measure the velocity or acceleration by responding to the acoustic medium motion, whereas gradient sensors employ a finite-difference approximation to estimate the gradients of the acoustic field such as velocity and acceleration.

In a 1×3 single-input multiple-output (SIMO) vector sensor communication system, there is one pressure transducer at the transmitter, whereas at the receiver a vector sensor measures the pressure and the y and z components of the velocity. With more pressure transmitters, one can also build a multiple-input multiple-output (MIMO) system.


References and Resources also include:



Wu, T.-C. et al. Blue Laser Diode Enables Underwater Communication at 12.4 Gbps. Sci. Rep. 7, 40480; doi: 10.1038/srep40480 (2017)

Sendra, S., Lloret, J., Jimenez, J. M. & Parra, L. Underwater Acoustic Modems. IEEE Sensors Journal 16(11), 4063 (June 1, 2016)

New class of Electronics: Biodegradable, Reconfigurable, Transient, Self Destructing for Security and Biomedical applications

Consumer electronics constitute a rapidly increasing source of waste. Cell phones, tablets and the like are typically made of non-renewable, non-biodegradable, partly environmentally toxic materials. A report from United Nations University (UNU) found that the world produced 41.8 million metric tons of e-waste in 2014 – an amount that would fill 1.15 million 18-wheel trucks. Lined up, those trucks would stretch from New York to Tokyo and back. The Environmental Protection Agency estimates that only 15-20% of e-waste is recycled; the rest goes directly into landfills and incinerators.

Electronic waste isn’t just waste; it contains some very toxic substances, such as mercury, lead, cadmium, arsenic, beryllium and brominated flame retardants. When the latter are burned at low temperatures they create additional toxins, such as halogenated dioxins and furans – some of the most toxic substances known to humankind. The toxic materials in electronics can cause cancer, reproductive disorders, endocrine disruption, and many other health problems if this waste stream is not properly managed. To overcome this challenge, researchers have started developing nontoxic, biodegradable materials and vanishing electronics that will be better for the environment.

While traditional electronic devices are non-renewable, non-biodegradable, partly toxic, non-biocompatible, and fixed in form and function, researchers are now designing a new class of electronics, called transient electronics, enabled by new materials. Their key attribute is the ability to physically dissolve into the surrounding environment at a well-controlled rate, with minimal or non-traceable remains, after a period of stable operation. Some are capable of self-destruction on command or in response to environmental conditions, such as temperature. Others are reconfigurable devices and circuits whose electronic structures continuously change over time.

In the future, these transient materials could find many applications, including zero-waste electronics, bioelectronics, military and defense data security, hardware-secure memory modules, and sensors.

Graphene-enhanced technology created electronics that vaporize in response to radio waves

Researchers from Cornell University and Honeywell Aerospace have designed a graphene-enhanced transient electronics technology in which the microchip self-destructs by vaporizing – an action that can be remotely triggered – without releasing harmful byproducts. In addition to transient electronics, the technology might find application in environmental sensors that can be remotely vaporized once they’re no longer needed.

A silicon-dioxide microchip is attached to a polycarbonate shell. Microscopic cavities within the shell contain rubidium and sodium bifluoride. When triggered remotely by using radio waves, these chemicals thermally react and decompose the microchip. The radio waves open graphene-on-nitride valves that keep the chemicals sealed in the cavities, allowing the rubidium to oxidize, release heat and vaporize the polycarbonate shell. The sodium bifluoride releases hydrofluoric acid to etch away the electronics.

“Our team has also demonstrated the use of the technology as a scalable micro-power momentum and electricity source, which can deliver high peak powers for robotic actuation,” said the researchers.


Researchers produce Electronic Chip ‘As safe as fertiliser’

A research team led by professor Zhenqiang ‘Jack’ Ma of the University of Wisconsin-Madison has succeeded in developing electronic chips whose support is based on cellulose nanofibril (CNF), a material that is perfectly biodegradable. In their paper, published in the journal Nature Communications, the researchers proved that CNF has the electronic properties required for the support.

In a typical semiconductor electronic chip, the active region comprises the top thin layer and is only a small portion of the chip, whereas more than 99% of the semiconductor materials are in the support. And in microwave chips for wireless functions, only a tiny fraction of the lateral chip area is used for the required active transistors/diodes, the rest being used only for carrying other non-active components. Therefore, a chip with wood fibrils as its support might reduce environmental pollution from discarded consumer electronics by more than 99%.
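The >99% figure follows from the chip geometry: only a few micrometers of active layer sit on a support that makes up essentially the whole chip thickness. A back-of-the-envelope check, with illustrative thicknesses (not taken from the paper):

```python
# Back-of-the-envelope check of the ">99% in the support" claim, using an
# illustrative ~2 um active layer on a ~200 um-thick chip.
active_um = 2.0    # active device layer thickness (assumption)
total_um = 200.0   # overall chip thickness (assumption)

support_fraction = 1.0 - active_um / total_um
print(f"support share of semiconductor material: {support_fraction:.1%}")
```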

‘The majority of material in a chip is support. We only use less than a couple of micrometres for everything else,’ Ma says. ‘Now the chips are so safe you can put them in the forest and fungus will degrade it. They become as safe as fertiliser.’ That said, the remaining 1% of semiconductor materials might still prove to be a good reason for proper recycling.


Scientists develop dissolving battery

Iowa State University mechanical engineering professor Reza Montazami and a team of scientists have developed a self-destructing lithium-ion battery that is capable of dissolving when exposed to heat or liquid. Montazami said it was the first practical transient battery. It could be used to keep military secrets confidential and in environmental monitoring devices.

It delivers 2.5 volts and can power a desktop calculator for 15 minutes. It measures 5mm in length, is 1mm thick and 6mm wide, and is similar to commercial batteries in terms of its components, structure and electrochemical reactions.

It contains an anode, cathode and an electrolyte separator within two layers of polyvinyl alcohol-based polymer. When dropped in water, the battery’s polymer casing swells and the electrodes are broken apart, causing it to dissolve. The entire process takes around half an hour. However, it contains nanoparticles that do not degrade, so it does not dissolve entirely.

“Unlike conventional electronics that are designed to last for extensive periods of time, a key and unique attribute of transient electronics is to operate over a typically short and well-defined period, and undergo fast and, ideally, complete self-deconstruction and vanish when transiency is triggered,” the scientific paper stated.

While this particular battery could not be used in the human body, as it contains lithium, researchers have for several years been examining how batteries could dissolve harmlessly within the human body and spare patients the pain of removal.


Biodegradable materials for Medical Implants

Electronic systems entirely built with biodegradable materials are of growing interest for their potential applications in systems that can be integrated with living tissue and used for diagnostic and/or therapeutic purposes during certain physiological processes. The devices can be degraded and resorbed in the body, so no operation is needed to remove them and adverse long-term side effects are avoided.


University of Illinois’s Biodegradable Battery

Scientists at the University of Illinois at Urbana–Champaign, led by Rogers, have been developing a new class of electronics devices that can dissolve completely into the environment after carrying out the desired function.

Researchers have developed a biodegradable battery, which degrades completely in water after three weeks and could be used to power temporary medical implants and other limited-duration electronics.

The key enabling technologies for the development of transient electronics are circuits made from extremely thin sheets of silicon, electrodes made from water soluble metals like magnesium, zinc and tungsten, polymers like cellulose and rice paper as insulators and silk for packaging.

The biodegradable demonstration battery developed by John Rogers and his colleagues utilized magnesium foil anode and phosphate-buffered saline electrolyte. The researchers are now conducting further studies, centered on developing degradable polymer-based materials that would make suitable platforms for other electronic components, including work on transient LED transistor technology.

The technology has the potential to create a revolution in medical devices: in place of present biological implants that require risky surgery to remove them, future implants will simply degrade once their function has been fulfilled. Many other medical applications are being researched, from temporary sensors that can monitor conditions inside the body to sensors that can be stored with food to indicate when it is spoiling. The technology is also promising for commercial applications such as environmentally friendly wireless sensors and cellular phones.


Tiny electronic Implants Monitor Brain Injury, Then Melt Away

John A. Rogers, at the University of Illinois at Urbana-Champaign, and Wilson Ray, at the Washington University School of Medicine, are developing a new class of small, thin brain implants that can function as electronic sensors to monitor critical health parameters like temperature and pressure within the skull after a brain injury or surgery, and then melt away when they are no longer needed, eliminating the need for additional surgery and reducing the risk of infection and hemorrhage.

“This is a new class of electronic biomedical implants,” said Rogers, who directs the Frederick Seitz Materials Research Laboratory at Illinois. “These kinds of systems have potential across a range of clinical practices, where therapeutic or monitoring devices are implanted or ingested, perform a sophisticated function, and then resorb harmlessly into the body after their function is no longer necessary.”

After a traumatic brain injury or brain surgery, it is crucial to monitor the patient for swelling and pressure on the brain. Current monitoring technology is bulky and invasive, Rogers said, and the wires restrict the patient’s movement and hamper physical therapy during recovery. Because they require continuous, hard-wired access into the head, such implants also carry the risk of allergic reactions, infection and hemorrhage, and could even exacerbate the inflammation they are meant to monitor.

“If you simply could throw out all the conventional hardware and replace it with very tiny, fully implantable sensors capable of the same function, constructed out of bioresorbable materials in a way that also eliminates or greatly miniaturizes the wires, then you could remove a lot of the risk and achieve better patient outcomes,” Rogers said. “We were able to demonstrate all of these key features in animal models, with a measurement precision that’s just as good as that of conventional devices.”

The new devices incorporate dissolvable silicon technology developed by Rogers’ group at the U. of I. The sensors, smaller than a grain of rice, are built on extremely thin sheets of silicon – which are naturally biodegradable – that are configured to function normally for a few weeks, then dissolve away, completely and harmlessly, in the body’s own fluids.

“The ultimate strategy is to have a device that you can place in the brain – or in other organs in the body – that is entirely implanted, intimately connected with the organ you want to monitor and can transmit signals wirelessly to provide information on the health of that organ, allowing doctors to intervene if necessary to prevent bigger problems,” said Rory Murphy, a neurosurgeon at Washington University and co-author of the paper. “After the critical period that you actually want to monitor, it will dissolve away and disappear.”


Biodegradable Power Generators Could Power Medical Implants

Now researchers have developed a biodegradable power source they call a biodegradable triboelectric nanogenerator (BD-TENG), which harnesses the phenomenon known as triboelectricity, the most common cause of static electricity. When two different materials repeatedly touch and then separate, the surface of one material can steal electrons from the surface of the other.

They have designed a multilayer structure composed of biodegradable polymers (BDPs) and resorbable metals; the BD-TENG can be degraded and resorbed in an animal body after completing its work cycle without any adverse long-term effects. One BDP layer is a thin flat film, while the other is a sheet coated with rods up to 300 nanometers high. The layers are separated from one another by blocks of biodegradable polymer and generate electricity when they are pushed together and pulled apart. The electricity-generating process relies on the relative contact separation between the two BDP friction layers, in which a unique coupling between triboelectrification and electrostatic induction gives rise to an alternating flow of electrons between electrodes.

In the lab, the researchers found that their biodegradable nanogenerator could achieve a power density of 32.6 milliwatts per square meter. They discovered that it could successfully power a neuron-stimulation device that helps control neuron growth. “Our results open the gate to fully degradable electronic devices,” says study co-author Zhong Lin Wang, a materials scientist at the Beijing Institute of Nanoenergy and Nanosystems. “A whole device can be absorbed in body and would not need to be removed through additional surgery.”
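At the reported power density of 32.6 milliwatts per square meter, the power available from an implant-scale device is straightforward to estimate (the device area below is an assumption, not from the study):

```python
# Power available from the BD-TENG at the reported density, for an
# assumed implant-scale device of 2 cm x 2 cm.
power_density_mw_m2 = 32.6          # reported power density, mW/m^2
area_m2 = 0.02 * 0.02               # 4 cm^2 device area (assumption)

power_uw = power_density_mw_m2 * area_m2 * 1000.0  # convert mW to microwatts
print(f"available power: {power_uw:.2f} uW")
```

Tens of microwatts is in the range of low-duty-cycle stimulation and sensing electronics, which is consistent with the neuron-stimulation demonstration described above.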

By fabricating the BD-TENG from different materials, the researchers can tune the lifetime of their nanogenerator from hours to years, depending on the needs of the implantable electronics it is designed to power. They suggest that future devices could be powered by the mechanical energy of heartbeats or respiration.


Future Robots could be made from Biodegradable smart materials

Researchers at the Italian Institute of Technology (IIT) in Genoa are working to develop biodegradable smart materials for humanoid robots’ skin. So far they have utilized bioplastics manufactured from food waste. What makes their material unique is that, unlike normal plastics, which are made from petroleum products, these are made from industrial food waste.

‘These biodegradable materials, natural materials, they are very flexible so they can be used for robotic skins,’ explained Dr Athanassiou. ‘But they can be also very hard so they can be used for internal parts of a robot. And also, in this flexible skin – robotic skin let’s say – we can incorporate sensors so they have this tactile sensing that the robots need, but with biodegradable materials.’

The group claims its bioplastics are non-toxic and will be better for the environment as they use less energy and water resources to manufacture. Developing similar materials for electronic components could one day make entire machines biodegradable.

The team is using a ‘mix and match’ approach to developing so-called ‘smart materials’, combining different nanomaterials to generate products with new properties. ‘What we are doing apart from making these new composite materials – smart materials – we’re also using them to change the properties of other materials, other existing materials like paper or cotton or different foams; from synthetic foams like polyurethane or forms of cotton.

‘So like this, in all these existing materials we are giving new properties that these materials don’t have, so we can open up their application range.’ Dr Athanassia Athanassiou, who leads the Smart Materials Group at IIT, told Reuters: ‘We are infusing any material with nanotechnology.’


Biodegradable polymer films

Iowa State’s research team is experimenting with a blend of programmable, biodegradable, transient insulating polymer films. It found it could control the rate of transiency through additives: adding gelatin to the mix slows the dissolution, while adding sucrose speeds it up. Using these special polymers, the team was able to build and test an antenna that was capable of sending data and then completely dissolving when a trigger was activated.


Reconfigurable Electronics: Disappearing Carbon Circuits on Graphene

Using carbon atoms deposited on graphene with a focused electron beam process, Fedorov and collaborators have demonstrated a technique for creating dynamic patterns on graphene surfaces. The patterns could be used to make reconfigurable electronic circuits, which evolve over a period of hours before ultimately disappearing into a new electronic state of the graphene.

“We will now be able to draw electronic circuits that evolve over time,” said Andrei Fedorov, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “You could design a circuit that operates one way now, but after waiting a day for the carbon to diffuse over the graphene surface, you would no longer have an electronic device. Today the device would do one thing; tomorrow it would do something entirely different.”

The change usually occurs over tens of hours, and ultimately converts positively-charged (p-doped) surface regions to surfaces with a uniformly negative charge (n-doped) while forming an intermediate p-n junction domain in the course of this evolution.

“There are multiple ways to modulate the dynamic state, through changing the temperature because that controls the diffusion rate of carbon, by directing the atomic flow, or by changing the carbon phase,” Fedorov said. “The carbon deposited through the focused electron beam induced deposition (FEBID) process is linked to graphene very loosely through van der Waals interactions, so it is mobile.”

“The electronic structures continuously change over time,” Fedorov explained. “That gives you a reconfigurable device, especially since our carbon deposition is done not using bulk films, but rather an electron beam that is used to draw where you want a negatively-doped domain to exist.”

Beyond allowing fabrication of disappearing circuits, the technology could be used as a form of timed release in which the dissipation of the carbon patterns could control other processes, such as the release of biomolecules.

Fedorov and his collaborators have so far shown only the ability to create simple patterns of charged domains in the graphene. Their next step will be to use their p-n junctions to create devices that would operate for specific periods of time.

Reported in the journal Nanoscale, the research was primarily supported by the U.S. Department of Energy Office of Science, and involved collaboration with researchers from the Air Force Research Laboratory (AFRL), supported by the Air Force Office of Scientific Research.


Vanishing Electronics that Self Destructs in response to heat exposure

Researchers led by aerospace engineering professor Scott R. White and John A. Rogers have developed a new type of “transient” electronic device that self-destructs in response to heat exposure rather than to water.

The technology involves first printing magnesium circuits on thin, flexible materials. The devices are coated with a wax that contains microscopic droplets of a weak acid. When exposed to heat, the wax melts and releases the acid, which completely dissolves the device. The researchers were also able to trigger self-destruction remotely by embedding a radio-frequency receiver and an inductive heating coil in the device. In response to a radio signal, the coil heats up and melts the wax, leading to the destruction of the device.

The team is also exploring the potential for other triggers, such as ultraviolet light and mechanical stress. The team’s work was supported by the National Science Foundation and DARPA.

DARPA’s Vanishing Programmable Resources (VAPR) program

DARPA’s Vanishing Programmable Resources (VAPR) program is investigating the development of special electronics that are as rugged and functional as conventional electronics, but also capable of self-destruction on command or in response to environmental conditions, such as temperature. This would prevent classified technology from being leaked, reverse engineered, or used to develop countermeasures if it fell into enemy hands.

The main distinction between transient materials and conventional degradable materials is that transient materials maintain their full characteristics and functionality until transiency is triggered, and the dissolution rate is often designed to be very fast. The military benefits more from a fast response to the trigger than from waiting for a device to dissolve slowly in the open environment or in the human body.

Sophisticated electronics are increasingly pervasive on the battlefield for a range of applications that include remote sensing and communications. However, it is nearly impossible to track and recover every device, resulting in their unintended accumulation in the environment, potential recovery and use by unauthorized individuals, and compromise of intellectual property and technological advantage.

The Vanishing Programmable Resources (VAPR) program seeks electronic systems capable of physically disappearing in a controlled, triggerable manner. These transient electronics should have performance comparable to commercial-off-the-shelf electronics, but with limited device persistence that can be programmed, adjusted in real-time, triggered, and/or be sensitive to the deployment environment.

VAPR aims to enable transient electronics as a deployable technology. To achieve this goal, researchers are pursuing new concepts and capabilities to enable the materials, components, integration and manufacturing that could together realize this new class of electronics.

Transient electronics may enable a number of revolutionary military capabilities including degradable environmental sensors or medical devices for diagnosis, treatment and health monitoring in the field.

Large-area distributed networks of sensors that can decompose in the natural environment (eco-resorbable) could provide critical data for a specified duration, but no longer. Alternatively, devices that resorb into the body may aid in continuous health monitoring and treatment in the field.


References and Resources also include:

Kim, et al., “Dynamic modulation of electronic properties of graphene by localized carbon doping using focused electron beam induced deposition,” (Nanoscale 7, 14946-14952, 2015). http://dx.doi.org/10.1039/c5nr04063a





Terahertz for Next generation terabits per second Military Wireless, Aircraft and Space Communications

Wireless data traffic has experienced unprecedented growth in recent years. There is an expectation that everyone will be permanently connected to the Internet, no matter where they are. Internet protocol traffic was expected to grow beyond 130 exabytes per month by 2018. At the same time, the massive use of mobile connections is pushing the need for wide bandwidth delivered to end users wirelessly. People expect more information of higher quality to be delivered immediately, and newer services require ever higher data volumes and transfer rates.

Among the various kinds of data traffic, video is expected to be dominant. Video traffic has already posed severe challenges to mobile networks, including the forthcoming 5G networks. For instance, at least 10 Gbps of traffic is expected to be needed for a single virtual reality (VR) device. Moreover, full High Definition video is becoming increasingly important for mobile devices, and devices using Ultra High Definition (UHD) (4K and 8K) and 3-D rendering are expected to become widely available in the not-so-distant future. An uncompressed UHD video may reach a 24 Gbps rate, and uncompressed 3-D video with UHD can reach 100 Gbps. Ultimately, it is predicted that data rates will reach terabit-per-second (Tbps) levels within the next five to ten years.
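These figures follow from first principles: an uncompressed video bitrate is simply resolution × frame rate × bits per pixel. A quick sanity check (the frame rates and bit depths below are illustrative assumptions; actual formats vary):

```python
# Back-of-the-envelope uncompressed video bitrates:
# width * height * frames-per-second * bits-per-pixel.
def bitrate_gbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e9

print(f"4K UHD, 60 fps, 24 bpp: {bitrate_gbps(3840, 2160, 60, 24):.1f} Gbps")
print(f"8K UHD, 60 fps, 24 bpp: {bitrate_gbps(7680, 4320, 60, 24):.1f} Gbps")
```

At higher frame rates or bit depths (e.g. 120 fps, 30-bit color), the 4K figure roughly reaches the 24 Gbps quoted above, and 8K comfortably exceeds it.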

However, existing wireless technology can hardly support Tbps links. State-of-the-art communication systems in the ultra-wideband (UWB) or millimeter wave (mmWave) bands can only achieve gigabit-per-second (Gbps) rates, while communications over infrared (IR) or visible light (VLC) are restricted by several technical and safety limitations.

One way to achieve this is to move wireless links to higher frequencies. Among others, the terahertz (THz) band, 0.1–10 THz, stands out as one of the most promising alternatives. Terahertz can provide a hundredfold increase in frequency compared to mmWave, addressing spectrum scarcity and capacity limitations in current wireless systems. Terahertz Wi-Fi could in theory support data rates up to 100 Gb/s within ranges of about 10 m.
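The appeal of the wide THz band can be framed with the Shannon capacity, C = B·log2(1 + SNR): at fixed SNR, capacity scales linearly with bandwidth. A sketch with illustrative bandwidths and SNR (not measurements from any specific system):

```python
import math

# Shannon capacity C = B * log2(1 + SNR), reported in Gbps.
def capacity_gbps(bandwidth_hz, snr_db):
    snr = 10 ** (snr_db / 10)           # dB to linear
    return bandwidth_hz * math.log2(1 + snr) / 1e9

# mmWave-style 2 GHz channel vs. a 50 GHz THz-band channel, both at 10 dB SNR.
print(f"2 GHz channel:  {capacity_gbps(2e9, 10):.1f} Gbps")
print(f"50 GHz channel: {capacity_gbps(50e9, 10):.1f} Gbps")
```

Even at modest SNR, tens of gigahertz of contiguous spectrum puts 100+ Gbps within theoretical reach, which narrower bands cannot match without impractically high SNR.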

In February 2017, researchers from Panasonic Corporation, the National Institute of Information and Communications Technology, and Hiroshima University demonstrated a THz transmitter sending data at a staggering rate of 100 Gbps over a single channel in the 300 GHz band. At this data rate, you can transfer a 0.1-terabit file before you can say the word ‘it!’

The terahertz spectrum could possibly form the basis for the next “5G” network for cellphones. Where cellphones on a current “4G” network can download data at 10 to 15 megabits per second, terahertz technology could potentially send data back and forth at terabits per second (millions of megabits per second). Terabit-per-second rates would enable super-high-speed links to communication satellites, faster content downloads from servers to mobile terminals, improved use in applications requiring real-time, high-quality communication (such as in orbit), and quick exchange of high-definition 3D videos.

In the past, the frequency spectrum ranging from 0.3 to 3 THz (or 300 to 3000 GHz) was known as the infamous “Terahertz Gap”: it lies between the traditional microwave and infrared domains but remained “untouchable” via either electronic or photonic means. Conventional “transit-time-limited” electronic devices can hardly operate even at its lowest frequency, while “band-gap-limited” photonic devices can only operate beyond its highest frequency. However, continuous progress is being made on terahertz components and devices to overcome these electronic/photonic barriers and realize highly integrated terahertz systems.

“Imaging, radar, spectroscopy, and communications systems that operate in the millimeter-wave (MMW) and sub-MMW bands of the electromagnetic spectrum have been difficult to develop because of technical challenges associated with generating, detecting, processing and radiating the high-frequency signals associated with these wavelengths. To control and manipulate radiation in this especially challenging portion of the RF spectrum, new electronic devices must be developed that can operate at frequencies above one Terahertz (THz), or one trillion cycles per second,” says DARPA.

Researchers from the Tokyo Institute of Technology have already demonstrated 3 Gb/s transmission at 542 GHz. At the heart of the team’s 1 mm-square device is what is known as a resonant tunnelling diode, or RTD. Even earlier, during the 2008 Olympic Games in Beijing, scientists from Osaka University and NTT Corp. demonstrated a 120 GHz data link across a distance of 1 km.


Terahertz Applications

Information showers: The inherently small communication range of THz cells (a few meters radius at most) and their extremely high rates (up to Tbps) make it attractive to deploy THz access points (APs) in areas with high human flow (e.g. metro station gates, public building entrances, shopping mall halls, etc.). With such a deployment strategy, each passing user can receive bulk data (up to several GBs) just while walking past the AP. Such information showers can be used to seamlessly deliver software updates as well as other types of heavy traffic, such as high-quality video (e.g. a movie to watch on a train).
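The "several GBs while passing" figure is simply the product of link rate and dwell time; a sketch with assumed values:

```python
# Data delivered by an information shower: link rate x time spent in the cell.
rate_gbps = 100.0     # THz link rate in Gbit/s (illustrative)
dwell_s = 2.0         # seconds a walking user spends inside the small cell

data_gb = rate_gbps * dwell_s / 8.0   # gigabytes (8 bits per byte)
print(f"delivered while passing: {data_gb:.0f} GB")
```

A couple of seconds under a 100 Gbps cell is enough for a full-length HD movie, which is the premise of the deployment strategy above.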

Mobile access: The applicability of THz communications to typical usage scenarios (e.g. indoor WLAN access) is limited by considerable propagation losses. This can be addressed by trading the capacity of THz access points for coverage, primarily by reducing the utilized bandwidth and moving communications from above 1 THz to the so-called “lower terahertz” carriers around 300 GHz. As a result, it is possible to create reliable wireless links over tens of meters while retaining capacities of tens of gigabits per second, which makes Wi-Fi-like THz access points (or even femtocells for cellular access) feasible.

Fiber-equivalent wireless links: The strategy for next-generation wireless networks (5G and beyond) envisions numerous high-rate small cells operating in the mmWave spectrum. The feasibility of multi-gigabit-per-second wireless links in the lower THz band over distances of up to 1 km has recently been experimentally validated.

Connectivity with miniature devices: The possibility of creating micro-scale transceivers operating in the THz band allows the networking of micro- and nano-scale robots capable of assisting society in many different areas, from environmental sensing to medicine.


Terahertz for Future Military and Space Communications

Terahertz wireless sensor networks could also enable gigabit secure battlefield wireless sensor networks and provide multi-sensor fusion of a wide range of imaging and non-imaging sensors. The ability to create highly directional beams with miniature antenna arrays, in conjunction with the high theoretical capacity of THz links, yields a number of benefits for security-sensitive usage, especially in military applications. A THz ad hoc network can be formed on the battlefield to connect soldiers, armoured personnel carriers, tanks, etc. The limited transmission range and highly directional antennas make eavesdropping extremely difficult.

In outer space, THz transmission is lossless, so long-range, secure, gigabit aircraft-to-satellite communication can be achieved with very little power. A single THz satellite communication link will support broadband data transfer rates far beyond (>20X) the limits of current microwave technology.

With easier pointing due to their wider beam width, THz links are suitable for GEO-GEO or LEO-GEO inter-satellite links, which can support high-throughput (gigabit) communication with high security as well as the ability to defeat interference.

The increased bandwidth of terahertz would also enable UWB Code Division Multiple Access (CDMA) communications schemes, which provide high immunity to fading, large processing gain for combating jamming, and low probability of detection and interception.
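The anti-jamming benefit of such spread-spectrum schemes is typically quantified as processing gain, the ratio of spread bandwidth to information rate expressed in dB. A sketch with illustrative numbers:

```python
import math

# CDMA processing gain: G_p = 10 * log10(spread bandwidth / data rate).
def processing_gain_db(spread_bw_hz, data_rate_hz):
    return 10.0 * math.log10(spread_bw_hz / data_rate_hz)

# Spreading a 1 Gbps stream over a 100 GHz THz channel (illustrative values):
print(f"processing gain: {processing_gain_db(100e9, 1e9):.0f} dB")
```

The same data rate spread over a conventional 100 MHz channel would yield no gain at all, which is why the huge THz bandwidth is attractive for jam-resistant links.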


ISSCC: Panasonic develops ‘A 105Gb/s 300GHz CMOS Transmitter’

Hiroshima University, the National Institute of Information and Communications Technology, and Panasonic Corporation announced the development of a terahertz (THz) transmitter capable of transmitting digital data at a rate exceeding 100 gigabits (= 0.1 terabit) per second over a single channel in the 300-GHz band. The research group has developed a transmitter that achieves a communication speed of 105 gigabits per second using the frequency range from 290 GHz to 315 GHz. At this data rate, the contents of an entire DVD can be transferred in a fraction of a second.
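Two quick checks on the reported figures (the DVD capacity below assumes a 4.7 GB single-layer disc): the link's spectral efficiency over its 25 GHz channel, and the time to transfer one DVD at 105 Gbps.

```python
# Spectral efficiency and DVD transfer time for the reported 105 Gbps link.
rate_gbps = 105.0
bandwidth_ghz = 315.0 - 290.0              # 25 GHz channel (290-315 GHz)

spectral_eff = rate_gbps / bandwidth_ghz   # bits/s per Hz
dvd_bits = 4.7e9 * 8                       # 4.7 GB single-layer DVD (assumed)
transfer_s = dvd_bits / (rate_gbps * 1e9)

print(f"spectral efficiency: {spectral_eff:.1f} bit/s/Hz")
print(f"DVD transfer time:   {transfer_s * 1000:.0f} ms")
```

A spectral efficiency above 4 bit/s/Hz implies a higher-order modulation scheme rather than simple on-off keying, consistent with an advanced CMOS transmitter design.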

“This year, we developed a transmitter with 10 times higher transmission power than the previous version’s,” said Hiroshima Professor Minoru Fujishima. “This made the per-channel data rate above 100Gbit/s at 300GHz possible. Terahertz could offer ultrahigh-speed links to satellites, and that could, in turn, significantly boost in-flight network connection speeds, for example.” Other possible applications include fast download from contents servers to mobile devices and ultrafast wireless links between base stations, he added.

“This year, they showed a six times higher per-channel data rate, exceeding 100Gbit/s for the first time as an integrated-circuit-based transmitter,” noted Panasonic, which worked with Hiroshima University and the Japanese National Institute of Information and Communications Technology to develop the transmitter. Fujishima pointed out that such links could beat optical fibres, which are made from glass in which light travels more slowly than in air or space, increasing data latency and barring them from systems that require ultra-fast responses. “Today, you must make a choice between high data rate fibre optics and minimum-latency microwave links. You can’t have them both,” said Fujishima. “But with terahertz wireless, we could have light-speed minimum-latency links supporting fibre-optic data rates.”

Panasonic points out that the frequencies used are currently unallocated, falling within the 275-450 GHz range whose usage is to be discussed at the World Radiocommunication Conference (WRC) 2019 under the International Telecommunication Union Radiocommunication Sector (ITU-R).

FUJITSU and NTT develop compact terahertz band receivers

Fujitsu has developed the world’s first compact 300-GHz receiver (operating in a part of the terahertz waveband in which atmospheric attenuation is low) capable of wireless communications at tens of gigabits per second. The integrated module combines the receiver-amplifier chip and a terahertz-band antenna with a low-loss connection in a volume of about 0.75 cubic centimeters, and can be installed in mobile devices.

The use of this Fujitsu-developed technology will enable small devices to receive 4K or 8K HD video instantly, such as from a download kiosk with a multi-gigabit connection. It will also be possible to expand into such applications as split-second data transfers between mobile devices and split-second backup between mobile devices and servers.

The printed-circuit substrate commonly used to connect the antenna to the receiver-amplifier chip is ceramic, quartz, or Teflon. Fujitsu replaced this material with a low-loss polyimide that can be micro-fabricated into printed circuit boards.

Although polyimide has about 10 percent higher material loss than quartz, its processing accuracy is more than four times better, so through-hole vias can be placed within several tens of microns of each other, halving the loss compared with a connecting circuit on a quartz printed circuit board. The resulting high sensitivity compensates for the strong attenuation of terahertz waves propagating through the atmosphere.

NTT has also developed a 300-GHz-band IC for ultrahigh-speed, short-distance wireless communication systems.

In the IC, a modulator and a power amplifier (both required components of a wireless transmit chain) are monolithically integrated. High output power and low-loss wiring were achieved by parallelizing the amplifier, and a high data rate was achieved with a travelling-wave modulator. Moreover, using a low-loss, wideband waveguide-to-IC transition designed by NTT, the degradation caused by packaging was negligible, and high-speed operation at 20 Gbit/s was confirmed with the packaged module.



References and Resources also include:




V. Petrov, A. Pyattaev, D. Moltchanov, Y. Koucheryavy, “Terahertz Band Communications: Applications, Research Challenges, and Standardization Activities,” Department of Electronics and Communications Engineering, Tampere University of Technology, Tampere, Finland

Metamaterial based Antennas for wireless and space communications, GPS, satellites, airplanes and missile seekers

Researchers are always looking for new materials with novel properties. A metamaterial is a kind of artificial synthetic composite material with a specific structure, which exhibits properties not found in natural materials. Metamaterials have received increasing attention due to their unique electromagnetic properties.

One of the most important applications of metamaterials is antenna design. Due to the unusual properties of metamaterials, we can achieve antennas with novel characteristics which cannot be realized with traditional materials.

Various types of metamaterial have been proposed with different characteristics, e.g. negative permittivity or permeability, zero refractive index, and strong chirality. These unusual properties play an important role in modern antenna design, which can provide better performance, more functions, and more flexibility.

These novel antennas aid applications such as portable interaction with satellites, wide angle beam steering, emergency communications devices, micro-sensors and portable ground-penetrating radars to search for geophysical features.

Some applications for metamaterial antennas are wireless communication, space communications, GPS, satellites, space vehicle navigation and airplanes.

Metamaterials are a basis for further miniaturization of microwave antennas, with efficient power and acceptable bandwidth. Antennas employing metamaterials offer the possibility of overcoming restrictive efficiency-bandwidth limitations for conventionally constructed, miniature antennas.
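The "restrictive efficiency-bandwidth limitations" mentioned above are usually quantified with the Chu lower bound on radiation Q. A minimal sketch of that bound; the 1 cm, 1 GHz example is an illustrative assumption:

```python
import math

def chu_q_min(freq_hz: float, radius_m: float) -> float:
    """Chu lower bound on the radiation Q of an antenna fitting inside a
    sphere of radius a: Q >= 1/(ka)^3 + 1/(ka), with k = 2*pi*f/c.
    Usable bandwidth scales roughly as 1/Q, so small ka means narrow band."""
    ka = 2 * math.pi * freq_hz / 3e8 * radius_m
    return 1 / ka**3 + 1 / ka

# An antenna confined to a 1 cm radius at 1 GHz (ka ~ 0.21):
q_min = chu_q_min(1e9, 0.01)   # >~113, i.e. a fractional bandwidth well under 1%
```

Metamaterial loading cannot beat this bound, but it helps practical designs approach it more closely than conventionally constructed miniature antennas do.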

Conventional antennas that are very small compared to the wavelength reflect most of the signal back to the source. A metamaterial antenna behaves as if it were much larger than its actual size, because its novel structure stores and re-radiates energy. Established lithography techniques can be used to print metamaterial elements on a PC board.

Metamaterials permit smaller antenna elements that cover a wider frequency range, making better use of the available space in space-constrained cases. In such cases, miniature antennas with high gain are especially relevant because the radiating elements are combined into large antenna arrays. Furthermore, a metamaterial’s negative refractive index allows electromagnetic radiation to be focused by a flat lens rather than dispersed.

For broadband satellite communications applications where the platform is mobile, where the satellite is non geostationary, or both, a scanning antenna is required. Metamaterials surface antenna technology (M-SAT) is an invention that uses metamaterials to direct and maintain a consistent broadband radio frequency beam locked on to a satellite whether the platform is in motion or stationary. Gimbals and motors are replaced by arrays of metamaterials in a planar configuration.

Some of the advantages of Metamaterial Antennas are:

  • High gain, electrically configurable beam forming maximizes channel efficiency
  • Ultra-fast reconfiguration allows SDAs to realign on a frame-by-frame basis
  • Self-alignment eliminates the need for expensive technician installations or mechanical steering gimbals, as well as self-recovery from displacement
  • Active dynamic null generation allows mitigation of interfering signals when used in cluttered spectrum
  • Lightweight, compact and capable of being ruggedized for size-sensitive applications in harsh environments
  • Conformal form factor enables geometry-flexible antennas to be placed where conventional antennas could not be located
  • Support for a wide spectrum of frequencies across the RF, microwave, and millimeter wave spectrums


Researchers have proposed two classifications of metamaterial-based antennas. The first category is based on the concept of a transmission line composed of a periodic repetition of a unit cell comprising a series capacitance and a shunt inductance. This category is a direct application of leaky-wave metamaterial antennas, which consist of a cascaded series of unit cells lying on a matched microstrip line; this type is preferred for beam-scanning applications.

The second category comprises the resonant antennas, which, in contrast to the first category, are obtained by terminating the structure to free space with a short or open circuit. Metamaterial-based resonant antenna structures allow dual-band and multiband behaviour and can be miniaturized, but they do not increase the bandwidth of the antenna.
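The beam-scanning behaviour of the first (leaky-wave) category follows from the unit cell's dispersion: the beam leaves at sin(θ) = β/k0, and a composite right/left-handed (CRLH) cell lets β pass through zero. A toy homogenized model, in which all component values and the cell period are illustrative assumptions:

```python
import math

C0 = 3e8  # speed of light, m/s

def crlh_beta(f, L_R, C_R, L_L, C_L, p):
    """Phase constant (rad/m) of a balanced CRLH unit cell of period p:
    beta*p = w*sqrt(L_R*C_R) - 1/(w*sqrt(L_L*C_L))  (simple homogenized model)."""
    w = 2 * math.pi * f
    return (w * math.sqrt(L_R * C_R) - 1 / (w * math.sqrt(L_L * C_L))) / p

def scan_angle_deg(f, beta):
    """Leaky-wave radiation angle from broadside: sin(theta) = beta/k0."""
    k0 = 2 * math.pi * f / C0
    return math.degrees(math.asin(max(-1.0, min(1.0, beta / k0))))

# Hypothetical cell: L_R = L_L = 2.5 nH, C_R = C_L = 1 pF, period 1 cm.
cell = (2.5e-9, 1e-12, 2.5e-9, 1e-12, 0.01)
backward = scan_angle_deg(2.5e9, crlh_beta(2.5e9, *cell))    # beam behind broadside
broadside = scan_angle_deg(3.183e9, crlh_beta(3.183e9, *cell))  # ~0 deg at transition
forward = scan_angle_deg(4.0e9, crlh_beta(4.0e9, *cell))     # beam ahead of broadside
```

As frequency rises through the transition point the beam sweeps continuously from backward through broadside to forward, something a purely right-handed leaky-wave line cannot do.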


A Broadband Left-Handed Metamaterial Microstrip Antenna with Double-Fractal Layers

Antennas are essential for wireless communication systems. The size of a conventional antenna is dictated mainly by its operating frequency. With the advent of ultra-wideband systems (UWB), the size of antennas has become a critical issue in the design of portable wireless devices. Consequently, research and development of suitably small and highly compact antennas are challenging and have become an area of great interest among researchers and radio frequency (RF) design engineers

In commercial wireless communication systems, the antenna remains a key element of the communication chain. The efficiency of a radio broadcasting system is directly related to the characteristics of its antennas. In addition, future communication systems using cognitive radio or flexible radio will need smaller wideband antennas.

One of the most common antenna designs is the microstrip patch antenna. This design has many advantages: it can be easily fabricated using lithographic techniques, it has a low profile, its production cost is low, and its structure is fairly simple. However, these advantages are offset by the narrow bandwidth of the antenna. To date, several approaches have been proposed to address this deficiency. In most cases, the proposed solution was to increase the thickness of the substrate while decreasing its dielectric constant. However, these attempts did not produce significant bandwidth enhancements in the redesigned antennas.
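For reference, the conventional patch that these metamaterial designs improve upon is sized with the standard transmission-line-model design equations. A sketch, with an illustrative 2.4 GHz FR-4 example:

```python
import math

C = 3e8  # speed of light, m/s

def patch_dimensions(f, er, h):
    """Standard microstrip patch design equations (transmission-line model):
    returns (width, length) in metres for resonance at frequency f on a
    substrate of relative permittivity er and thickness h."""
    W = C / (2 * f) * math.sqrt(2 / (er + 1))
    e_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / W) ** -0.5
    # Fringing-field length extension at each radiating edge:
    dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / ((e_eff - 0.258) * (W / h + 0.8))
    L = C / (2 * f * math.sqrt(e_eff)) - 2 * dL
    return W, L

# Example: 2.4 GHz patch on 1.6 mm FR-4 (er ~ 4.4) -> roughly 38 mm x 29 mm.
W, L = patch_dimensions(2.4e9, 4.4, 1.6e-3)
```

The resulting impedance bandwidth of such a patch is typically only a few percent, which is the deficiency the left-handed designs below aim to overcome.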

With the development of new materials called left-handed materials (LHM), or left-handed metamaterials, it is possible to achieve a significantly wider frequency range. As a result, many antennas with LHM structures offering better performance than conventional microstrip patch antennas have been proposed.

Planar left-handed metamaterial structures were proposed a few years ago. The discussed structures consist of 2D periodic arrays of unit cells. This concept was applied to LHM antennas, resulting in broadband and high-gain designs. Periodic patterns showing left-handed characteristics were applied to conventional rectangular microstrip patch antennas. These configurations yielded a frequency range several times wider than that of the same patch antenna without the metamaterial pattern.

Researchers Roman Kubacki and colleagues from the Military University of Technology, Warsaw, Poland, have proposed a microstrip patch antenna based on the left-handed metamaterial concept, using planar periodic geometry, which results in improved characteristics. This periodic geometry is derived from fractal shapes, which have been widely used in antenna engineering. The metamaterial property was obtained from the double-fractal structure on both the upper and lower sides of the antenna: the upper side follows the shape of crossbar fractals, with Minkowski fractals on the lower layer. The self-similarity and easy repeatability of the geometry make these designs attractive for creating a periodic structure.

The final structure has been optimized to enhance bandwidth, gain, and radiation characteristics of the microstrip antenna.

This combination significantly improved antenna performance: the design supports an ultrawide bandwidth ranging from 4.1 to 19.4 GHz, demonstrates higher gain with an average value of 6 dBi over the frequency range and a peak of 10.9 dBi, and radiates in the horizontal plane of the antenna.
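The quoted 4.1-19.4 GHz range corresponds to a fractional bandwidth of roughly 130 percent, far beyond the few-percent bandwidth of a conventional patch:

```python
# Fractional bandwidth: FBW = 2*(f_h - f_l)/(f_h + f_l).
f_l, f_h = 4.1e9, 19.4e9
fbw = 2 * (f_h - f_l) / (f_h + f_l)   # ~1.30, i.e. ~130% (a ~4.7:1 band)
```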


Fractal Firm Confirms Breakthrough Metamaterial Antenna Technology

Fractal Antenna Systems has confirmed that it has developed a new proprietary antenna technology with broad applications, particularly in point to point access with directional antennas. The new technology is enabled by the firm’s fractal metamaterial discoveries and inventions. Fractal metamaterial devices are populated by closely packed ‘self similar’ shaped electromagnetic structures. Developed by the firm, the use of fractal metamaterials has already resulted in a broad range of critical attributes. Now magnification ability publicly joins the list of essential practical advantages.

The new antenna technology, referred to as “FM/R”, has the advantages of smaller size, wider bandwidths, and high efficiency, at high magnification, or “gain”. In addition, it has a unique characteristic of being nearly agnostic to its form factor shape. This means that most conventional, prescribed ‘fishbone’, ‘arrowhead’, and ‘bubble’ shapes for directional antennas are obsolete, or severely limited in comparisons of their footprint, supporting electronics, and cost. In addition, the FM/R antennas may replace several directional antennas at once, diminishing coveted tower and building real estate needs for antennas.

Notes CEO and inventor Nathan Cohen: “Others have oversold the case of metamaterials for lens-like applications, and ended up with ‘me-too’ technology of limited practical value. We’ve delivered on the promise.” Cohen attributes previous impediments to a failure to recognize the potential afforded by: “Greater sampling of the nearfield, through fractals. The physics was sound, but the assumptions about how to apply it were stuck in an age-old rut.”


Metamaterials surface antenna technology

For broadband satellite communications applications where the platform is mobile, where the satellite is non geostationary, or both, a scanning antenna is required. The satellite communications industry, however, is dominated by dish antennas mounted on motorized gimbals for these applications. These solutions are too large, heavy, and power-consuming to offer solutions for consumer mobile applications such as the connected car or a personal satellite terminal. Another alternative is phased array technology, but this technology is typically available only to government and military customers because of its expense and power consumption.

Kymeta has addressed these obstacles by developing an electronically-scanned antenna technology, based on a diffractive metamaterials concept, called Metamaterial Surface Antenna Technology (MSAT). Electronic scanning is achieved through the use of high-birefringence liquid crystals. The use of liquid crystals (LC) as a tunable dielectric at microwave frequencies permits large-angle (> 60°) beam scanning with power consumption of < 10 Watts and antenna thickness ~ 5.0 cm, with no moving parts. Kymeta’s engineering approach, through the use of LC and optimization of the materials and design for compatibility with liquid crystal display (LCD) manufacturing processes, positions the technology for mass production by leveraging the capital infrastructure of the LCD industry.
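Whatever the tuning mechanism (liquid crystal here, varactors or switches in other designs), electronic scanning reduces to imposing a phase gradient across the aperture. A toy 1-D sketch of that principle; the element count and spacing are illustrative assumptions, and this is not Kymeta's actual diffractive design:

```python
import math, cmath

def steered_af(theta_deg, phases, spacing_wl=0.5):
    """Array-factor magnitude of N radiators with per-element phase shifts
    (radians). In an LC-tuned aperture the phases come from voltage-tuned
    permittivity rather than discrete phase shifters; the math is the same."""
    k_d = 2 * math.pi * spacing_wl
    return abs(sum(cmath.exp(1j * (n * k_d * math.sin(math.radians(theta_deg)) + p))
                   for n, p in enumerate(phases)))

# Steer an 8-element aperture to 30 degrees: phase_n = -n*k*d*sin(30 deg).
n_el, target = 8, 30.0
phases = [-n * 2 * math.pi * 0.5 * math.sin(math.radians(target)) for n in range(n_el)]
peak = steered_af(target, phases)       # coherent sum: all 8 elements add in phase
broadside = steered_af(0.0, phases)     # much smaller: the beam has moved off broadside
```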


Metamaterials Electronically Scanned Array (MESA)

PARC’s MESA is a low-cost, high-performance RF beam-steering module that can be adapted for a broad range of applications, including collision-avoidance systems for self-driving cars or drones, broadband satellite internet/radio, hyperthermia treatment, and wireless communications. Its key performance feature is the capability to maintain a high signal-to-noise ratio and high resolution simultaneously.


Metamaterial-Based Radar Lets Drones Fly beyond Visual Line Of Sight

Echodyne Corporation, a developer of metamaterials-based radar systems, says it has completed testing on its airborne Metamaterial Electronically Scanning Array (MESA)-Detect and Avoid (DAA) radar on a small Unmanned Aerial Vehicle (sUAV).

“Echodyne’s airborne detect-and-avoid radar is made especially for small to medium UAS and enables safe beyond-visual-line-of-sight operations – in all environments and conditions,” said Jerry Hendrix, Executive Director for the LSUASC. “Before the MESA-DAA became commercially available, there were no options for long-range radar on small to medium commercial drones.”

“Radar is an ideal sensor technology for all sorts of scanning and imaging applications, especially when environmental conditions are less than ideal,” explained Thomas Driscoll, Chief Technology Officer for Echodyne. “Our radar thrives over other sensors in unpredictable weather conditions, can rapidly scan a broad field of view, can track Cessna-sized targets at distances greater than two kilometers, and dramatically increases situational awareness for UAS operators.”

Echodyne’s radar array is made of multiple layers of carefully patterned copper wiring, and beam control results from heating specific areas of the wiring, according to IEEE Spectrum. The smaller design is less powerful and has shorter range, but it is also more affordable to build and no less effective for most commercial applications.


METamaterials for Active ELEctronically Scanned Arrays (METALESA)

An AESA, a core component of modern military radar systems, is a type of phased array whose transmitter and receiver functions are distributed across numerous small transmit/receive modules.

The objective of the METALESA project was to employ metamaterials (MTMs) to increase the efficiency and reliability of operating radar systems. Electromagnetic MTMs are artificial materials with unusual macroscopic electromagnetic-wave propagation properties, normally generated by microscopic periodic metallo-dielectric structures.

Four main topics were identified as critical in the AESA design process, where MTM concepts could be applied, and have been analysed, prototyped and tested methodically in the project:

  • Expensive RF feeding networks with considerable space requirements.
  • Coupling between the radiating elements of the antenna array, the principal source of scanning-angle limitations and the cause of undesired angular blind spots.
  • Parasitic back-lobe and side-lobe radiation caused by the antenna’s finite dimensions, which can disturb other systems or the system itself.
  • A proposed MTM-based radome to reject out-of-band interference and, simultaneously, in-band back-lobe and side-lobe radiation.

Smart Metamaterial Antennas

Highly reconfigurable metamaterial antennas are a natural evolution of the MESA architecture. They are tailored for 4G LTE/5G base stations and for satellite communications.


RF Energy Harvesting Platform

An RF energy harvesting platform converts Wi-Fi and other RF bands to electricity to power IoT sensors. It consists of a metamaterial-inspired antenna and a custom rectifying circuit. Two classes of prototypes have been demonstrated: hybrid (a printed antenna with integrated silicon chips) and all-printed devices. The performance and bandwidth of the RF energy harvesters exceed the state of the art by at least an order of magnitude.


US DOD has issued an SBIR solicitation to investigate low-cost alternatives to steerable antennas for the munitions application. The performance enhancements afforded by electronically steerable antennas are of high interest to the radar seeker community. Traditionally, phased-array antennas require beam-forming networks with distributed phase shifters or time-delay mechanisms, plus additional control circuits, to perform beam steering; this leads to expensive and complicated circuitry that is not economically feasible for small missile radar seekers.

Recent breakthroughs in engineered electronic and electromagnetic materials and continuous transverse stub arrays have made agile, reconfigurable apertures possible, where the beam-forming function is integrated into the aperture. These technologies are opening avenues to new levels of real-time control of the aperture and its performance, as well as affordability, says the SBIR solicitation.

Metamaterials in Antenna Design


  1. Electrically small antennas based on zeroth resonant mode

In mobile communication systems, electrically small antennas (ESA) are desired. Modern integrated circuit technology can miniaturize circuits to a very small size. However, in a traditional design the performance of an antenna is tied to its size: the antenna usually has dimensions on the order of the operating wavelength, which sets a bound on the size of the whole system.

A zero-index metamaterial (ZIM), whose refractive index is near zero, exhibits an operating wavelength that is effectively infinite at an arbitrarily designed frequency. This phenomenon is named the zeroth resonant mode. Since the wave number in such an antenna is zero, in theory the physical size of the antenna can be made independent of its working frequency. Because the operating wavelength is infinite, the field distribution and the radiation pattern differ from the usual ones.
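A standard illustration of this is the open-ended CRLH (composite right/left-handed) resonator, whose n = 0 resonance depends only on the unit-cell loading, not on how many cells (and hence how much physical length) the antenna has. A sketch with assumed component values:

```python
import math

def zor_frequency(L_L, C_R):
    """Zeroth-order resonance of an open-ended CRLH resonator: the shunt-tank
    frequency f = 1/(2*pi*sqrt(L_L*C_R)). The number of unit cells does not
    appear, so the antenna's size is decoupled from its working frequency."""
    return 1 / (2 * math.pi * math.sqrt(L_L * C_R))

# Hypothetical loading: 2.5 nH shunt inductance, 1 pF shunt capacitance.
f0 = zor_frequency(2.5e-9, 1e-12)   # ~3.2 GHz whether the antenna has 2 cells or 10
```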

  2. Dual-band and multi-band antennas

Normal dual-band antennas are realized with different resonant structures, or different resonant modes in one structure. The main disadvantage of this technique is that the field distributions in these structures can hardly be the same in both bands. This means that the radiation patterns in the operating bands are different. Since metamaterials can support a negative refractive index, the resonant modes can be selected as a symmetric pair, i.e. so-called negative and positive modes. The field distributions of these two modes can be very similar, and thus also the radiation patterns.

Negative and positive modes can be designed together with a zeroth-order mode. This yields a multi-band antenna with a specific pattern for each mode. An extra advantage of a metamaterial-loaded multi-band antenna is the fact that its size is usually smaller than in a traditional design, where the size is decided by the lowest operating frequency.

  3. Low profile planar reflectors

For an electric dipole antenna positioned parallel to and above a PEC (perfect electric conductor) plane, the distance between the dipole and the reflector should be approximately a quarter wavelength. Since the reflection phase at the PEC plane is 180°, the radiation of the image of the electric dipole starts to cancel the radiation of the dipole itself if the dipole is located closer to the reflector.

However, if the reflector is a PMC (perfect magnetic conductor) plane, the reflection phase is zero, and the image of the electric dipole enhances the radiation when the dipole is located near the PMC plane. This enables low-profile reflectors for electric dipole antennas.

Conversely, magnetic dipoles, in practice realized by slots or apertures in a ground plate, are also not suitable for placement near any PEC plane because of the generation of parallel plate modes between the two metal planes, which considerably distorts the characteristics. An AMC plane can help to suppress any parallel plate modes. Also in this case, low profile structures become feasible.
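The quarter-wave argument above can be reproduced with one line of image theory: the broadside field of a dipole at height d over an infinite reflector is proportional to |1 + Γe^(-j2kd)|, with reflection coefficient Γ = -1 for a PEC and Γ = +1 for a PMC/AMC. A minimal sketch:

```python
import math

def reflector_field(d_over_lambda: float, gamma: float) -> float:
    """Relative broadside field of a dipole at height d over a reflector,
    by image theory: |1 + gamma*exp(-j*2*k*d)|, k = 2*pi/lambda.
    gamma = -1 models a PEC plane, gamma = +1 a PMC/AMC plane."""
    phase = 2 * (2 * math.pi) * d_over_lambda
    return math.hypot(1 + gamma * math.cos(phase), -gamma * math.sin(phase))

pec_close = reflector_field(0.01, -1.0)    # hugging a PEC: field nearly cancels
pec_quarter = reflector_field(0.25, -1.0)  # quarter-wave spacing: full factor of 2
pmc_close = reflector_field(0.01, +1.0)    # hugging a PMC: still nearly a factor of 2
```

The PMC case keeps almost the full factor-of-two image enhancement even at one-hundredth of a wavelength, which is exactly why AMC ground planes enable low-profile antennas.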

  4. Antenna lenses and polarizers

Dielectric lenses can be used to improve the directivity and gain of an antenna. However, fabricating a 3D lens is costly, and the location of the lens must be chosen carefully in relation to the phase centre of the antenna. A metamaterial lens can be formed as a flat 2D structure at much lower manufacturing cost, and can even be integrated with the planar antenna structure to reduce the profile and size of the antenna system.

A polarizer can be based on a chiral medium which has the capability to transform a linearly polarized wave into a circularly polarized wave. This opens a way to design circularly polarized antennas based on existing linearly polarized antennas.




Militaries racing to deploy Railgun on Navy Warships to shoot down stealth aircraft and missiles including against hypersonic threats

The Electromagnetic Railgun (EMRG) is a cannon that uses electricity rather than chemical propellants (i.e., gunpowder charges) to launch projectiles to distances of over 100 nautical miles, at speeds exceeding Mach 5. In an EMRG, “magnetic fields created by high electrical currents accelerate a sliding metal conductor, or armature, between two rails to launch projectiles at [speeds of] 4,500 mph to 5,600 mph,” or roughly Mach 5.9 to Mach 7.4 at sea level.


Railguns promise revolutionary military capabilities. They provide long-range artillery (in excess of 200 km) with increased penetration owing to the high impact speed, and simultaneous impacts via rate-of-fire and velocity control. Railgun-equipped warships can fire hypersonic projectiles to shoot down stealth aircraft and ballistic missiles, or bombard enemy ships and land targets from hundreds of miles away. They can be employed for anti-surface (naval), anti-air and anti-missile defense (including against hypersonic threats).


The U.S., China, Russia, Japan and France are reportedly developing their own versions of the railgun. According to navyrecognition.com, pictures released on January 31st show the People’s Liberation Army Navy (PLAN, or Chinese Navy) Type 072 III landing ship Haiyangshan (hull number 936) fitted with the suspected railgun at its bow and several ISO containers amidships. If this turns out to be an actual EM railgun, China would become the very first country to test such a system at sea. The Chinese Navy’s experimental railgun is mounted on the landing ship as a test platform and is speculated to enter service with the next Type 055 DDG variant. Earlier, Rear Admiral Ma Weiming told Chinese experts in electromagnetic research that the country has made breakthroughs in key areas of electromagnetic applications, such as railguns and electromagnetic-assisted launch system (EMALS) catapults.


The U.S. Navy, along with the Office of Naval Research (ONR) and BAE Systems, has been working on the technology for several years and is ready to deploy its futuristic Electromagnetic Railgun (EMRG) for field tests. ONR has demonstrated the ability to conduct a “multi-shot salvo” (two projectiles fired in a 12-second span, or about 5 rounds per minute) at the Naval Surface Warfare Center Dahlgren Division, a land-based facility. Rear Admiral Matthew Klunder, head of US Naval Research, said the futuristic electromagnetic railgun – so called because it fires from two parallel rails – had already undergone extensive testing on land. “Energetic weapons, such as EM railguns, are the future of naval combat,” said Klunder.


While initially conceived and developed for the Navy’s emerging railgun weapon, the Pentagon and Army are now firing the Hyper Velocity Projectile from an Army howitzer in an effort to fast-track increasingly lethal and effective weapons to warzones and key strategic locations, Pentagon officials said. The Army is looking to target buildings, force concentrations, weapons systems, drones, aircraft, vehicles, bunkers and even incoming enemy missiles and artillery rounds. “We can defend against an incoming salvo with a bullet. That is very much a focus getting ready for the future,” Dr. William Roper, Director of the Pentagon’s once-secret Strategic Capabilities Office, told Scout Warrior among a small group of reporters.


A team of Russian scientists has successfully tested the country’s first railgun, which relies on electromagnetic forces rather than explosives or propellant. According to experts at the Institute of High Temperatures’ branch in Shatura, just outside Moscow, the railgun can fire shells at a speed of 3 kilometers per second, fast enough to cut through any type of armor existing today. During the latest test, a 15-gram plastic cylinder fired by the railgun went through an aluminum plate several centimeters thick. “The newspaper’s report was no surprise. Similar developments are also actively under way in Russia,” Franz Klintsevich told RIA Novosti.


While many countries are already betting on the railgun as a future weapon, Russia is also considering other, more peaceful applications, such as ferrying cargo to the International Space Station. “The railgun is a big boost to our study of high energy physics as we are now ready to build apparatuses working at speeds exceeding 4.5 kilometers a second,” the Shatura institute’s director Alexei Shurpov told Zvezda TV.

US Navy’s railgun programme

The US Navy’s super-powerful electromagnetic railgun is targeted to fire rounds at speeds up to Mach 7.5, which at 9,100 kilometers per hour is more than seven times the speed of sound, over a distance of about 400 kilometers. The weapons are not only devastating in their speed, but at $25,000 per round are much cheaper than explosive counterparts such as the Tomahawk or Harpoon, which can cost up to $1 million each. ‘The railgun is a true warfighter game changer,’ the Navy says. ‘Wide-area coverage, exceptionally quick response and very deep magazines will extend the reach and lethality of ships armed with this technology.’



The electromagnetic rail gun uses electrical energy generated by its host ship and stored over several seconds in a pulsed power system to create a magnetic field that propels the kinetic energy projectile well over 100 miles toward a wide range of targets, such as enemy vehicles, or cruise and ballistic missiles. The weapon can release up to 5 million amps, or 1,200 volts within 10 milliseconds, according to Military.com. That’s enough to speed up a 45-pound projectile from zero to 5,000 mph in one one-hundredth of a second, the site said.
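The quoted figures can be sanity-checked with the standard railgun force law F = ½L′I², where L′ is the inductance gradient of the rail pair. The L′ value, projectile launch mass, and barrel length below are illustrative assumptions, not program figures:

```python
import math

def railgun_force_n(l_prime_h_per_m: float, current_a: float) -> float:
    """Lorentz force on the armature: F = 0.5 * L' * I^2, where L' is the
    rail inductance gradient (typically ~0.4-0.6 uH/m for railguns)."""
    return 0.5 * l_prime_h_per_m * current_a**2

force_n = railgun_force_n(0.5e-6, 5e6)   # L' ~ 0.5 uH/m at the quoted 5 MA: ~6.25 MN
mass_kg = 20.4                           # the 45-pound projectile quoted above
accel = force_n / mass_kg                # ~3e5 m/s^2, i.e. tens of thousands of g
muzzle_v = math.sqrt(2 * accel * 10.0)   # assumed 10 m barrel: ~2500 m/s (~Mach 7)
```

That back-of-envelope muzzle velocity lands within the 4,500-5,600 mph band quoted earlier, and the tens-of-thousands-of-g acceleration is why the projectile's guidance electronics must be so heavily hardened.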


However, existing capacitors were ‘not suitable for integration aboard a ship’ and were too big to fit the latest Zumwalt-class destroyers, Thomas Beutner, head of ONR’s Naval Air Warfare and Weapons Department, said during a July event in Washington, according to Defence One. To get around the issue, ONR researchers developed their own capacitors, which are far smaller but can supply 20 megajoules per shot, with a goal of 32 megajoules by next year. According to ONR, ‘you can think of a megajoule as about the same, energy-wise, as a one-ton vehicle moving at 160 mph.’ These new capacitors ‘represent a new generation of pulse power, with an energy density of over a megajoule per cubic meter,’ said Beutner. The current version is now capable of firing multiple shots in succession, and the group is also aiming to ramp the firing rate up to 10 shots per minute by 2018, the report said.


The US Navy has been working on the gun with BAE Systems since 2005. Phase I focused on developing pulsed power technology; Phase II, which started in 2012, further develops the pulsed power system and the launcher system.


The Navy funded the development of  two industry-built EMRG prototype demonstrators, one by BAE Systems and the other by General Atomics. The two industry-built prototypes are designed to fire projectiles at energy levels of 20 to 32 megajoules, which is enough to propel a projectile 50 to 100 nautical miles. (Such ranges might refer to using the EMRG for NSFS missions. Intercepts of ASCMs and ASBMs might take place at much shorter ranges.) The Navy began evaluating the two industry-built prototypes in 2012.


The Navy originally began developing EMRG as a naval surface fire support (NSFS) weapon for supporting U.S. Marines operating ashore, but subsequently determined that the weapon also has potential for defending against ASCMs and ASBMs. The weapon would also eliminate the hazards of high explosives aboard ship and unexploded ordnance on the battlefield, Navy officials say.


Deputy defense secretary Bob Work described his vision for a future Navy fleet that would rely on railguns and lasers for fleet defense. “If the Navy can develop working railguns and lasers that are practical enough for a warship, it would not only solve the magazine depth issue, but would also free up missile tubes for the FSC’s offensive sea-control and land attack missions.”


While the weapon is currently configured to guide the projectile against fixed or static targets using GPS technology, it is possible that in the future the railgun could be configured to destroy moving targets as well, said Capt. Mike Ziv, Program Manager for Directed Energy and Electric Weapon Systems.


The Navy is evaluating whether to mount its new Electromagnetic Rail Gun weapon aboard the high-tech DDG 1002 destroyer by the mid-2020s, service officials said. The DDG 1002’s Integrated Power System provides a large amount of onboard electricity, sufficient to accommodate the weapon, said Capt. Mike Ziv, Program Manager for Directed Energy and Electric Weapon Systems. The US Navy has also revealed plans to test a prototype electromagnetic railgun aboard a joint high-speed vessel (JHSV) this year.


In January 2015, it was reported that the US Navy is projecting that EMRG could become operational on a Navy ship between 2020 and 2025. In April 2015, it was reported that the Navy is considering installing an EMRG on a Zumwalt (DDG-1000) class destroyer by the mid-2020s.

Challenges of Railguns

Crucially, the weapon currently requires a dedicated 25-megawatt power plant to fire at all. So far, that vast power consumption can be supported by only the three US destroyers now being built. Another problem is that the longer the distance to the target, the weaker the impact of the shot, as air resistance slows the projectile as it travels. A further major challenge is the guidance system, which is to be based on GPS: the sensitive electronics must be hardened to withstand the enormous launch accelerations without breaking apart.
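A rough calculation shows why a power plant of roughly this size is needed for sustained fire. The 32 MJ muzzle energy and the ~20% launcher efficiency below are illustrative assumptions drawn from publicly reported estimates, not official Navy figures:

```python
def average_power_mw(muzzle_energy_mj, efficiency, shots_per_minute):
    """Average electrical power (MW) needed to sustain a firing rate.

    Each shot draws muzzle_energy / efficiency from the bus; averaging
    that energy over the firing interval gives power (MJ/s == MW).
    """
    energy_per_shot_mj = muzzle_energy_mj / efficiency
    return energy_per_shot_mj * shots_per_minute / 60.0

# 10 shots/min at 32 MJ with an assumed 20% launch efficiency:
print(round(average_power_mw(32, 0.20, 10), 1))  # ~26.7 MW
```

The result lands close to the 25 MW figure cited above, which suggests the power-plant requirement is driven largely by the target firing rate.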


One of the challenges is that the high muzzle velocity quickly wears out the “barrels” (which are actually two conductive metal rails along which the projectile is driven), requiring frequent replacement.


The Office of Naval Research recently identified several key “research opportunities” to make the railgun a success, including better thermal management for the gun’s launch rails; extending the service life of the equipment; developing high-strength dielectric structural materials; and reducing the size of associated power systems and control electronics. Experts say that the limited durability of a railgun’s rails under the stress of repeated firing is an especially serious challenge for the technology – one that rapid-fire testing may help address.

Russia downplays railgun as being too expensive and still technologically immature

Klintsevich, first deputy chairman of Russia’s Senate committee for defense and security, has accused Washington of trying to impose a new Cold War-style arms race while saying the “supergun,” dubbed a potential game-changer by the Pentagon, is not yet an effective technological breakthrough.


“There is a huge distance between a first test and mass production; moreover, at present, the main problem of creating a supergun – its expensiveness – isn’t solved,” the senator told RIA Novosti on Sunday.



Even if the US Navy manages to make a breakthrough in its ambitious and costly project, it will not succeed in dragging Russia into another weapons race, Klintsevich stressed, suggesting that Russia may respond asymmetrically with existing capabilities. “To not allow the change of balance of power in the world, we have lots of other possibilities. Which will be used, if necessary. In short, the situation is under control.”



Raytheon delivers pulse power containers for US Navy’s railgun programme

Raytheon has started delivering pulse power containers (PPCs) to support the US Navy’s railgun programme. In January 2012, the US Naval Sea Systems Command awarded an initial $10m contract to Raytheon for the preliminary design of a large power system, Pulse Forming Network (PFN).


‘Pulse Power Containers’ (PPC) consist of huge banks of capacitors or rechargeable batteries packed inside standard ISO containers. Developed by Raytheon, each container stores enough energy to supply a single shot. To enable the railgun to fire ten such shots per minute, the PPC must recharge from the host ship in seconds and be able to store and discharge the energy in a very short time while managing the thermal load generated by the process.


The PFN will provide the electromagnetic energy for the railgun projectile to travel without the use of an explosive charge or rocket motor. The containers will be included in the Navy’s railgun test range for additional development and testing. According to Raytheon, these PPCs, when combined, produce enough power to trigger an electromagnetic launch of a railgun’s high-velocity projectile at speeds of more than Mach 6.
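As a sanity check on the quoted speeds, muzzle velocity follows from the kinetic-energy relation E = ½mv². The 10 kg projectile mass below is an assumption for illustration only; reported projectile masses vary by source:

```python
import math

def muzzle_velocity(kinetic_energy_j, projectile_mass_kg):
    """v = sqrt(2E/m), solved from the kinetic-energy relation E = 1/2 m v^2."""
    return math.sqrt(2 * kinetic_energy_j / projectile_mass_kg)

SPEED_OF_SOUND = 343.0  # m/s, sea-level reference value

# A 32 MJ shot into an assumed 10 kg projectile:
v = muzzle_velocity(32e6, 10.0)
print(round(v), round(v / SPEED_OF_SOUND, 1))  # ~2530 m/s, ~Mach 7.4
```

Under these assumptions the velocity comes out a little above the Mach 6 cited, consistent with some of the launch energy being lost in the gun itself.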


Raytheon Integrated Defense Systems Advanced Technology vice-president Colin Whelan said: “Directed energy has the potential to redefine military technology beyond missiles, and our pulse power modules and containers will provide the tremendous amount of energy required to power applications like the navy railgun.” The US Navy’s railgun uses an electromagnetic force, known as the Lorentz force, to fire a projectile at six or seven times the speed of sound.


The Navy, in addition to developing the railgun itself, is working on a hypervelocity projectile (HVP) that will support both the railgun and conventional 5-inch guns. The GPS-guided round will fly at hypersonic speeds, but the Navy is still working with the Pentagon’s Strategic Capabilities Office to close the fire control loop between the gun and the projectile.

Typical Railgun

The speed of projectiles in both conventional guns and light-gas guns is limited by the acceleration of the expanding gas that drives them. Because EMRGs convert electrical energy into kinetic energy effectively instantaneously, they are not bound by a maximum acceleration. Though EMRGs themselves tend to be only about 2% efficient, in theory there is no limit to how much energy can be input to the system, and thus no maximum velocity the system can attain. EMRGs are made up of a few subsystems: an electrical subsystem, an injector, a pair of supported conductive rails, and a projectile.


1. Electrical Subsystem
The EMRG electrical subsystem is composed of three parts that together output a pulse of current: a power source, a storage system, and a delivery system. The power source charges the storage system. When the EMRG is fully charged and ready to fire, the storage system sends the electricity as quickly as possible through the delivery system (a network of electrical cabling) to the conductive rails.
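In practice the storage system is typically a large capacitor bank, whose stored energy follows E = ½CV². The 10 kV charge voltage in this sketch is purely illustrative, not a figure from the program:

```python
def capacitor_bank_energy_mj(capacitance_f, voltage_v):
    """Stored energy E = 1/2 C V^2, returned in megajoules."""
    return 0.5 * capacitance_f * voltage_v**2 / 1e6

def capacitance_needed_f(energy_mj, voltage_v):
    """Total capacitance (farads) required to hold a given energy at a given voltage."""
    return 2 * energy_mj * 1e6 / voltage_v**2

# A 32 MJ bank charged to an assumed 10 kV needs about 0.64 F total:
print(round(capacitance_needed_f(32, 10_000), 2))  # 0.64
```

A farad-scale bank at kilovolt potentials is enormous by everyday electronics standards, which is why the pulse power hardware fills shipping-container-sized enclosures.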


2. Injector
The injector subsystem accelerates the projectile before it reaches the electric rails. If the projectile enters the rails with little or no initial velocity, it will weld to them; the injector prevents this by imparting an initial velocity. Any velocity the injector supplies is also energy the rails do not have to impart, so ideally the injector should provide as much velocity as possible.


3. Supported Conductive Rails
The conductive rails are the most important subsystem in the EMRG: they convert electrical energy into kinetic energy via the Lorentz force. The size of the rails and the distance between them govern the rate of conversion between electrical and kinetic energy. This conversion creates a large force on the projectile, but also on the rails themselves; to ensure the rails do not fail under this induced force, they are supported by a rigid structure.
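The rail force is commonly modeled as F = ½L′I², where L′ is the inductance gradient of the rail pair and I the drive current. The values below are illustrative assumptions typical of railgun literature, not figures from the Navy program:

```python
def rail_force_mn(inductance_gradient_uh_per_m, current_ma):
    """Lorentz force on the armature, F = 1/2 * L' * I^2, in meganewtons."""
    l_prime = inductance_gradient_uh_per_m * 1e-6  # convert uH/m -> H/m
    current = current_ma * 1e6                     # convert MA -> A
    return 0.5 * l_prime * current**2 / 1e6        # N -> MN

# Assumed L' = 0.5 uH/m and a 5 MA current pulse:
print(round(rail_force_mn(0.5, 5.0), 2))  # 6.25 MN
```

The quadratic dependence on current is why the same structures that accelerate the projectile are stressed so severely, and why rail life is a key durability challenge.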


4. Projectile
The projectile itself must be conductive, so that current can pass through it and the electrical energy can be converted into force on the projectile. Projectile materials with high melting points maintain their shape better under firing conditions, but also tend to do more damage to the barrel when they fragment.


US DOD’s JUMP creating next generation microelectronics for dominance in future Battlefield Internet of Things

The Joint University Microelectronics Program (JUMP) is a collaborative effort between the Department of Defense, U.S. universities, and industry participants, with the goal of substantially increasing the performance, efficiency, and capabilities of broad classes of electronic systems for both commercial and military applications.

The collaborative, multidisciplinary, multi-university consortium will support long-term research focused on high performance, energy efficient microelectronics for end-to-end sensing and actuation, signal and information processing, communication, computing, and storage solutions that are cost-effective and secure.

These research and development efforts should  provide the Department of Defense with an unmatched technological edge in advanced radar, communications, and weapons systems, and provide the U.S. economy with unique information technology and processing capabilities critical to commercial competitiveness and future economic growth.

The Consortium seeks to address existing and emerging challenges in electronics and systems technologies by concentrating resources on high-risk, high-payoff, long-range innovative research to accelerate the productivity growth and performance enhancement of electronic technologies and circuits, sub-systems, and multi-scale systems. To this end, JUMP is focused on exploratory research on an 8-12 year time horizon that is anticipated to lead to defense and commercial opportunities in the 2025-2030 timeframe.

As of January 1, 2018 six JUMP research centers comprised of academic researchers from over 30 U.S. universities began exploratory research initiatives that JUMP organizers hope will impact defense and commercial opportunities in the coming decades. Research will continue for five years with funding support coming from industry and government partners.

The consortium, for which SRC serves as the administrative hub, conducted a search for university research proposals throughout 2017 with the goal of uncovering innovative approaches to solving tough development challenges around microelectronics. Four of the successful proposals to participate in the JUMP program fall under the category “vertical” application-focused centers and two fall under the category “horizontal” disciplinary-focused centers.

“The point of JUMP and its six thematic centers is to drive a new wave of fundamental research with the potential to deliver the disruptive microelectronics-based technologies required by the Department of Defense and national security in the 2025-2030 timeframe,” said Linton Salmon, DARPA’s program manager for JUMP. “Through these university teams, we’re seeking innovative solutions to tough technical challenges so that we can overcome today’s limitations in the performance and scalability of electronic systems. This in turn will open the way to technologies that dramatically boost the warfighter’s abilities to sense the environment, process information, and communicate.”

Funding for the five-year effort is expected to total approximately $200 million, with DARPA providing about 40 percent and consortium partners collectively contributing about 60 percent. Contributors include DARPA (Defense Advanced Research Projects Agency, www.darpa.mil), IBM Corporation, Northrop Grumman Corporation, Micron Technology, Inc., Intel Corporation, EMD Performance Materials (a Merck KGaA affiliate), Analog Devices Inc., Raytheon Company, Taiwan Semiconductor Manufacturing Company Ltd., and Lockheed Martin Corporation.

Current planning supports six research themes across six JUMP centers and uses vertical and horizontal centers to capture the intersections of ideas. While the vertical research centers emphasize breakthrough technologies and products, horizontal research centers will drive foundational developments in specific disciplines and create disruptive breakthroughs in areas of interest.


“Vertical” Application-Focused Centers

Within the JUMP context, the challenges of the “vertical” research centers focus on accomplishing application-oriented goals and spurring the development of complex systems with capabilities well beyond those available today. The focus is on key issues facing the industry by addressing the full span of multi-disciplined science and engineering required to achieve breakthrough technologies and products.

Diving deep into cognitive computing, intelligent memory and storage, distributed computing and networking, and RF to THz sensor and communications systems, among other areas, these research centers will strive to develop systems that will be transferable to military and industry in a five-year timeframe and fieldable in roughly ten years.

Technology areas of interest for the JUMP “vertical” Centers include:

Center for Converged TeraHertz Communications and Sensing (ComSecTer)

This theme seeks research in two general, synergistic application areas – RF Sensors and RF Communications Systems – that operate at microwave, millimeter wave or THz frequencies in support of consumer, military, industrial, scientific and medical applications. System examples may include radar, communication, reconnaissance and/or mmwave / THz imaging.

As an example, it is envisioned that future RF sensor systems will require novel, energy-efficient devices, circuits, algorithms, and architectures for adaptively sensing the environment, extracting/manipulating/processing information, and autonomously reacting/responding to the information.

Another example is cognitive communication systems – systems which will operate in complicated radio environments with interference, jamming and rapidly changing network topology, will obtain (sense) information about their environment (aware of their environment and the available resources ) and will dynamically adjust their operation (e.g., efficient spectrum use, interference mitigation, spectrum prioritization) to provide required services to end users.

These future systems should also be agile: reconfigurable, adaptive, multi-function, multi-mode, self-calibrating sensors with increased degrees of freedom for efficient use of the EM spectrum (including spectrum agility, instantaneous bandwidth/waveform agility, (very) wide bandwidth, and high dynamic range). Autonomous operation and decision making are also desirable (e.g., embedded real-time learning, the ability to recognize threat scenarios, and the ability to do local processing before transmitting the data/information).

Also sought are super-linear communication links (enabling high modulation formats) and integrated communications components for IoT and distributed sensor systems that enable ultra-low-power, high-data-rate, long-range sensor communications with high linearity in up/down conversion.

To address these applications, centers focusing on this vertically integrated application must drive breakthrough research in materials, devices, components, circuits, integration and packaging, connectivity, architectures (e.g., subsystems/arrays), and algorithms that are aimed at efficiently generating, modulating, manipulating, processing (mainly in or very closely coupled to the RF/mm wave /THz domain), communicating (transmitting) and sensing/detecting radiated signals.

Researchers from 10 universities led by Mark Rodwell of the University of California, Santa Barbara will work within ComSecTer toward the collective goal of developing technologies for a future cellular infrastructure designed to support the autonomous vehicle revolution and the emergence of intelligent highways.

The envisioned cellular infrastructure will be capable of handling the data demands required to support technologies like cm-precision localization, unparalleled high-resolution imaging, and lightweight “whisper radio” technology, which researchers would apply to solving some of the communication, safety, and navigation challenges associated with autonomous driving today.


Computing On Network Infrastructure for Pervasive Perception, Cognition, and Action (CONIX)

Importantly, new application requirements coupled with physics-based implementation constraints on latency and energy call for novel architectural solutions to computing-at-scale, requiring innovations in interconnect and networking at all levels, from on-chip to between datacenters.

The purpose of this theme is to explore the challenges of extremely large-scale distributed architectures. Novel, multi-tier, wired and wirelessly-connected heterogeneous systems are expected; tiers may be sensor/actuator, aggregation, cloud/datacenter, or combinations thereof. All tiers are expected to be highly scalable, and heterogeneity is expected both within and across the tiers.

Dramatic advances over today’s systems (cloud, mobile, etc.) and capabilities are required. Proposers are expected to define and tackle a grand challenge in the distributed computing and networking space; the grand challenge should focus attention on research issues that would benefit a broad range of civilian and defense applications (e.g. society-scale digital currencies, battlefield command-and-control in denied environments, smart grid optimization, and disaster management in digital cities).

It also calls for the development of new distributed computing systems for new applications beyond IoT and big data: novel computing architectures that reduce the energy and time used to process and transport data, locally and remotely, for hyperspectral sensing, data fusion, decision making, and safe effector actuation in a distributed computing environment. Proposers should provide cooperative and coordinated distributed-system concepts that are scalable and function in communications-challenged environments (where neither wired nor wireless links are guaranteed to be available, reliable, or safe), address approaches that allow proper operation in isolation, and intelligently synchronize when communications are restored, including only partial restoration.

This theme will primarily focus on digital computing.

Under CONIX, Anthony Rowe of Carnegie Mellon University will lead researchers from seven universities to develop an architecture for networked computing that lies between edge devices and the cloud. The Internet of Things (IoT) relies on the symbiotic relationship of the cloud, edge devices, and the network; however, the growing amount of IoT-generated data is straining existing networks as it moves to the cloud for processing. By building intelligence into the network, CONIX aims to rethink the current system, moving processing and decision-making out of the cloud and creating more adaptability for current and future IoT applications.

Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC)

Led by Kaushik Roy of Purdue University, C-BRIC aims to deliver major advances in cognitive computing, with the goal of enabling a new generation of autonomous intelligent systems. The next wave of AI holds the promise of creating autonomous intelligent systems like self-flying drones and personal robotic assistants but will require a new type of semiconductor technology to meet the energy and computing demands required to advance beyond current machine learning applications. Researchers from nine universities will explore neuro-inspired algorithms, theories, hardware fabrics, and application drivers to achieve the center’s mission and pave the way for the AI hardware of the future.

The Cognitive Computing theme aims to create cognitive computing systems that can learn at scale, perform reasoning and decision making with purpose, and interact with humans naturally and in real-time. This theme seeks to explore multiple approaches for building machine intelligent systems with both cognitive and autonomous characteristics. Such systems can be solely non-traditional, solely von-Neumann or a combination of both elements.  Realizing these novel systems may heavily leverage non-traditional computing methods, such as analog computing, stochastic computing, Shannon inspired computing, approximate computing, and bio/brain-inspired models including neuromorphic computing for a broad application space.

A key goal is creating systems that, without explicit objectives, operate in the natural world on their own by forming and extending models of the world they perceive around them, and by interacting with local human decision makers and with global distributed intelligent networks in performing actions to achieve useful yet complex goals.

A full-system approach is required to achieve the goals of this theme. In addition, the proposed research should address the technology advances that are needed for fundamental improvements in performance, capabilities, and energy efficiency through improvements in programming paradigms, algorithms, architectures, circuits, and device technologies.


Center for Research on Intelligent Storage and Processing-in-memory (CRISP)

Advances in information technology have pushed data generation rates and quantities to a point where memory and storage are the focal point of optimization of computer systems. Transfer energy, latency, and bandwidth are critical to the performance and energy efficiency of these systems. The solutions to many modern computing problems involve many-to-many relationships that can benefit from high cross-sectional bandwidth of the distributed computing platform. As an example, large-scale graph analytics involves cross-data-set evaluation of numerous neighbor relationships, ultimately demanding the highest possible cross-sectional bandwidth of the system.

This research vector seeks a holistic, vertically-integrated, approach to high-performance Intelligent Storage systems encompassing the operating system, programming models, memory management technologies, and a prototype system architecture. A primary focus area for this center will be in establishing an operating system framework allowing run-time optimization of the system based on system configuration preferences, programmer preferences, and the current state of the system.

Goals include new architecture and programming paradigms; self-optimizing systems allowing for appropriate programmer control; a 10X more power-efficient computing platform, scalable from high-performance application processors to less-demanding processors for IoT and sensors, with cost awareness; and small, probably low-cost compute+memory+sensor nodes capable of making basic decisions and observations and reporting to a larger system.

The technology can span across material, devices, packaging, circuits/systems techniques, computer architecture including but not limited to heterogeneous computing, memory technology (including NVM) and high-speed interface (on-chip and off-chip), etc.

Led by Kevin Skadron at the University of Virginia, researchers from nine universities will work to topple the “memory wall”–a 70-year-old technical bottleneck in computer systems that is hindering the use of big data for technical discovery. Research efforts will focus on removing the separation between memory and storage that is hampering users’ ability to access data. To accomplish this mission, CRISP researchers seek to build computer processing capabilities into memory storage at the chip level and pair processors with memory chips in 3D stacks. Once addressed, users would be able to perform previously unattainable computations on massive amounts of information, ultimately enabling rapid advances in national security, medical discovery, and beyond.


“Horizontal” Disciplinary-Focused Centers

“Horizontal” research centers will drive foundational developments in a specific discipline, or set of like-minded disciplines, will build expertise in and around key disciplinary building blocks, and create disruptive breakthroughs in areas of interest to JUMP sponsors, including advanced architectures and algorithms, and advanced devices, packaging, and materials.

These centers have a mission to identify and accelerate progress for new technologies that look beyond traditional CMOS. Proposers are expected to define a set of key metrics that their center will use to benchmark and drive efforts in their research space.


Technology areas of interest for our JUMP “horizontal” Centers include:

Applications Driving Architectures (ADA) Center:

Today’s system architectures, including distributed clusters, symmetric multiprocessors (SMPs), and communications systems, are generally comprised of homogeneous hardware components that are difficult to modify once deployed. Heterogeneous architectures and elements, such as accelerators, will increasingly be needed to enable scaling of performance, energy efficiency, and cost.

This theme must lay the foundations for new paradigms in scalable, heterogeneous architectures, co-designed with algorithms and vice versa. A major goal of this theme is to address the design and integration challenges of a broad variety of accelerators, both on-chip and off-chip, along with the algorithmic and system software innovations needed to readily incorporate them into both existing and future systems (e.g., information processing, communications, sensing/imaging, etc.).

Centers should address the design and integration challenges of: systems composed of on-chip and off-chip accelerators, computation in and/or near data, and non-traditional computing. Employing novel co-design to bridge the gap between architectures and algorithms for optimization, combinatorics, computational geometry, distributed systems, learning theory, online algorithms, cryptography, etc. are within scope. Benchmarking of the novel architectures is expected. Modeling and software innovations should be used to remove barriers to hardware implementation or mass adoption.

Led by Valeria Bertacco of the University of Michigan, the ADA Center aims to significantly reduce the cost, complexity, and energy required to develop advanced computing systems by democratizing the design and manufacturing process. Researchers from nine universities will work together to create a modular approach to system hardware and software design, requiring a complete rethink of the way design is done today. The expected “plug-and-play” ecosystem created by the ADA Center would help reduce the skills barrier required to develop new systems, expanding the talent pool and fostering idea generation to help propel the creation and advancement of new computing frontiers.


Applications and Systems driven Center for Energy-Efficient Integrated Nanotechnologies (ASCENT):

This theme will address advanced active and passive devices, interconnect, and packaging concepts, based on physics of new materials and unconventional syntheses.

This technology is needed to enable the next breakthrough paradigms in computation (including analog) and information sensing, processing, and storage that will provide further scaling and energy efficiencies. These new materials and devices will provide new functionalities and properties that can augment and/or surpass conventional semiconductor technologies, and will potentially enable novel 3D options. Material development, device demonstration and viable process integration are all within scope. Experimental demonstrations as well as ab-initio material and process modeling are expected.

Energy harvesting and energy storage devices are also in scope: novel materials for high-efficiency energy harvesting, supercapacitors, integrated batteries, and power delivery.

ASCENT seeks to tackle the data-transfer bottlenecks and energy efficiency challenges associated with current electronic devices. Suman Datta of the University of Notre Dame will lead researchers from 13 universities in efforts to transcend the anticipated limits of current CMOS technology in order to increase the performance, efficiency, and capabilities of future computing systems. To achieve its goal, the center will explore four main areas of research that span novel integration schemes, innovative device technologies, and the application of hardware accelerators.


JUMP and its efforts to build-up a foundational research base in fields underlying microelectronics technologies are part of DARPA’s Electronics Resurgence Initiative (ERI). Over the next four years, the ERI will commit hundreds of millions of dollars to ensure far-reaching improvements in electronics performance well beyond the limits of traditional scaling. Central to the ERI are new forward-looking collaborations among the commercial electronics community, defense industrial base, university researchers, and the DoD. The partnerships created across industry, academia, and the defense community through JUMP are one of several critical components advancing ERI and its efforts to foster the environment needed for the next wave of U.S. semiconductor technology innovations.



Over-The-Horizon radars being integrated into air defence networks to detect and track stealth aircraft and aircraft carriers

Conventional microwave radars, such as those commonly seen at airports, propagate in a straight line and cannot detect objects beyond their line of sight, i.e. beyond the visual horizon. Over-The-Horizon Radar (OTHR) uses the ionosphere to reflect the radiated signal over the line-of-sight horizon. Such radars can detect stealth aircraft and ships at extremely long ranges, from 700 to 4,000 km, and are also employed for border protection, disaster relief, and search and rescue operations.

OTHR and HFSWR radars have become an important element of the air defence networks of many countries, including China, Australia, Iran, the US, and Russia. They have the capability to defeat the stealth of aircraft like the Northrop Grumman B-2 Spirit, F-35, or F-22 by detecting and tracking them from hundreds of kilometers away.

China has reportedly set up a high-tech radar system in Inner Mongolia with a detection range of up to 3,000 kilometers, a move to spy on South Korean and Japanese military maneuvers, according to Chinese media. The installation comes amid a spat with South Korea over the deployment of a US Terminal High-Altitude Area Defense (THAAD) battery, a missile defense system that Beijing and Moscow fear could be used to spy on their military activities. OTH radars could also detect stealth aircraft and locate inter-continental ballistic missiles and other types of missiles fired by other countries. The radar could allegedly confirm an enemy target within a minute of launch and issue an early warning three minutes later.

This is the second OTH radar installed by China; its first is set up in the Hubei-Henan-Anhui triangle. The two radars can monitor the entire western Pacific when used together with spy satellites.

Russia plans to stand up its fourth Sunflower (Podsolnukh-E) radar system, which, according to Russian experts, is capable of detecting US stealth aircraft, such as the B-2 Spirit, flying over the ocean at a height of 500 kilometers, the China Topix news website reported. Citing sources in the Russian Defense Ministry, the website said the new Sunflower will be stationed in the Novaya Zemlya archipelago in the Arctic Circle. China Topix noted that the archipelago is notorious as a site of the largest-scale nuclear weapons tests: in the days of the former Soviet Union, 224 nuclear explosions were conducted there before 1990. According to the media, Russia intends to build six over-the-horizon radar systems in the Arctic. Russia has been carrying out rapid Arctic militarization, building new airbases, icebreakers, ground forces, and missiles, and conducting military exercises there.

Over The Horizon Radar (OTHR)

There are two types of OTH radar. The first is skywave OTHR, which utilises the refractive properties of the ionosphere to bend transmitted HF electromagnetic waves back to Earth. When these refracted HF waves hit a radar-reflective (metal) surface of sufficient size — either airborne or maritime — some of the energy is reflected back along the transmission path to the OTHR receiver. Sophisticated computer systems then process the received energy to discern objects within the radar’s footprint. Skywave OTHRs are able to detect aircraft and ships at very long ranges (between 500 km and 3,000 km, ignoring any double bounces).

These radars typically estimate a target’s position (latitude and longitude) along with its speed and heading, and can track multiple targets simultaneously. They offer extremely long detection ranges (from 700 to 4,000 km) but also very low resolution (from a few hundred meters up to 20 km).

The other type of OTH radar is the surface wave system, in particular high frequency surface wave radar (HFSWR), which operates from coastal installations so that the radar energy can couple into the salt water. HFSWR takes advantage of the diffraction of electromagnetic waves over the conducting ocean surface. The transmitted signal follows the curved ocean surface, and a system can detect aircraft and ships beyond the visible horizon, at ranges out to roughly 300 km. HFSWR exploits a phenomenon known as Norton wave propagation, whereby a vertically polarised electromagnetic signal propagates efficiently as a surface wave along a conducting surface.
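
To see why "beyond the visible horizon" matters, a rough back-of-the-envelope sketch (not from the source) compares a conventional line-of-sight radar's 4/3-Earth radio horizon with HFSWR's surface-wave range; the antenna and target heights below are invented for illustration:

```python
import math

def radar_horizon_km(h_antenna_m: float, h_target_m: float) -> float:
    """Approximate 4/3-Earth radio horizon for a line-of-sight radar:
    d ~= 4.12 * (sqrt(h_antenna) + sqrt(h_target)), heights in m, result in km."""
    return 4.12 * (math.sqrt(h_antenna_m) + math.sqrt(h_target_m))

# A 30 m coastal mast watching a ship with a 10 m superstructure:
d = radar_horizon_km(30, 10)
print(f"Microwave radar horizon: {d:.0f} km")   # roughly 36 km
print("HFSWR surface-wave range: ~300 km, nearly an order of magnitude farther")
```

The comparison shows why coastal HFSWR is attractive: a microwave set on the same mast is geometry-limited to a few tens of kilometers, while the surface wave hugs the conducting ocean out to the ~300 km figure quoted above.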

The successful detection of a target by a surface wave radar system traditionally involves compromises between a number of factors, including propagation losses, target radar cross-section, ambient noise, man-made interference, and signal-related clutter. In detecting a target at roughly 150 kilometers using HFSWR, large error tolerances are experienced in both range (±1 to 2 km) and azimuth (±1 degree) due to limited bandwidth availability and physical antenna size constraints.

Some notable OTH radars are JORN of Australia, ROTHR of Raytheon, USA, NOSTRADAMUS of ONERA, France, and STEEL YARD of NIIDAR, Russia. All of them operate from approximately 5–30 MHz. The most powerful is Russia’s Steel Yard, which transmits 1,500 kW.

OTH radars, being low-frequency radars, possess anti-stealth capabilities, offering a considerable ability to detect targets such as stealth planes. Stealth techniques such as shaping are designed to reflect most of the radar energy away from an expected radar antenna rather than back to it. However, techniques such as shaping and coating with Radar-Absorbent Materials (RAM) are most effective at microwave frequencies, mainly in the X and Ku bands, and are less effective at longer wavelengths such as those used by VHF or High Frequency (HF) radars like OTHR. When the wavelength of the incident electromagnetic (EM) wave is comparable to the physical dimensions of the object, the result is an enhancement of RCS and large amplitude oscillations in the RCS. This is due to the resonance effect between the direct reflection from the target and scattered waves which “creep” around it.

How does it work?

OTHR systems operate on the Doppler principle: an object can be detected if its motion toward or away from the radar differs from the movement of its surroundings. OTHRs are typically made up of very large fixed transmitter and receiver antennas (called ‘arrays’). The location and orientation of these arrays determine the lateral limits, or arc, of the radar’s coverage. The extent of OTHR coverage in range within this arc is variable and principally dependent on the state of the ionosphere.
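
The Doppler principle above can be sketched with the standard two-way monostatic shift formula; the aircraft speed and carrier frequency below are illustrative assumptions, not parameters of any particular OTHR:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(radial_speed_ms: float, carrier_hz: float) -> float:
    """Two-way Doppler shift for a monostatic radar: f_d = 2 * v_r * f / c."""
    return 2.0 * radial_speed_ms * carrier_hz / C

# An airliner closing at 250 m/s, seen by a 15 MHz skywave OTHR:
print(f"Doppler shift: {doppler_shift_hz(250, 15e6):.1f} Hz")  # ~25 Hz
```

A ~25 Hz shift against near-zero-Doppler ground and sea clutter is what lets the long coherent dwells described below separate a moving aircraft from the enormous static return of the Earth's surface.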

OTHRs do not continually ‘sweep’ an area like conventional radars but rather ‘dwell’ by focusing the radar’s energy on a particular area – referred to as a ‘tile’ – within the total area of coverage. The transmitted HF energy can be electronically steered to illuminate other ‘tiles’ within the OTHR’s coverage as required to satisfy operational tasking or in response to intelligence cuing.

Under certain atmospheric conditions, only specific radio frequencies will get reflected back towards the ground. The “correct” frequency to use depends on the current conditions of the atmosphere. So systems using ionospheric reflection need real-time monitoring of the reception of backscattered signals to continuously adjust the frequency of the transmitted signal.

Australia’s Jindalee Operational Radar Network

The Jindalee Operational Radar Network (JORN) is an over-the-horizon radar (OTHR) network that can monitor air and sea movements across 37,000 km2. It has a normal operating range of 1,000 km to 3,000 km. It is used in the defence of Australia, and can also monitor maritime operations, wave heights and wind directions.

The JORN defence system is a network of three remote over-the-horizon radars in Queensland, Western Australia and the Northern Territory. These radars are dispersed across Australia — at Longreach in Queensland, Laverton in Western Australia and Alice Springs in the Northern Territory — to provide surveillance coverage of Australia’s northern approaches.  It provides wide-area surveillance to support the Australian Defence Force’s air and maritime operations, border surveillance, disaster relief, and search and rescue operations.

The JORN radars have an operating range of 1,000–3,000 km, as measured from the radar array. Of note, the Alice Springs and Longreach radars cover an arc of 90 degrees each, whereas the Laverton OTHR coverage area extends through 180 degrees.

JORN does not operate on a 24 hour basis except during military contingencies. Defence’s peacetime use of JORN focuses on those objects that the system has been designed to detect, thus ensuring efficient use of resources.

Operation and uses

The JORN network is operated by No. 1 Radar Surveillance Unit RAAF (1RSU). Data from the JORN sites is fed to the JORN Coordination Centre at RAAF Base Edinburgh where it is passed on to other agencies and military units. Officially the system allows the Australian Defence Force to observe air and sea activity north of Australia to distances up to 4000 km.

This encompasses all of Java, Irian Jaya, Papua New Guinea and the Solomon Islands, and may include Singapore. However, in 1997, the prototype was able to detect missile launches by China over 5,500 kilometres (3,400 mi) away.

The “backscatter” signal is extremely small due to reflection losses. The very long wavelengths used by such low-frequency radars make it very difficult to pick out the relatively small target presented by an aircraft against the very large target presented by the earth. It takes a huge amount of data processing to pick targets out of earth clutter.

For an aircraft or maritime vessel to be detected, it must possess a radar reflective (metal) surface of sufficient size so that sufficient HF radar energy is reflected back along the transmission path to the JORN receiver.

“JORN is expected to detect air objects equivalent in size to a BAe Hawk-127 aircraft or larger and maritime objects equivalent in size and construction to an Armidale-class patrol boat or larger,” according to the Royal Australian Air Force.


BAE Systems to bid for Australia’s JORN Phase 6 upgrade

BAE Systems is to compete for the major Phase 6 upgrade of the Australian Department of Defence’s (DoD) Jindalee Operational Radar Network (JORN). The upgrade to the over-the-horizon radar (OTHR) network is designed to ‘open’ the system’s architecture, enabling the insertion of next-generation technologies and extending the operational life of JORN beyond 2042. The Phase 6 upgrade is expected to take place in 2018.

Russian Podsolnukh (Sunflower) radar

Russia will station additional Podsolnukh (Sunflower) radars, capable of detecting cutting-edge stealth aircraft including Lockheed Martin’s F-35 Lightning II and F-22 Raptor, to protect the country’s exclusive economic zones in the extreme North, the Baltic Sea and Crimea in 2017, Rossiyskaya Gazeta reported on 10 August 2016.


Russia’s Black Sea Fleet will be reinforced by the deployment in Crimea of the Podsolnukh short-range over-the-horizon surface-wave radar, which has a 450 km target acquisition capacity, a source in the Russian Defense Ministry told TASS on 17 December 2014. “The sea-based Podsolnukh radar will be deployed in Crimea and will be ‘looking’ toward the Bosporus,” the source said.

According to the article, the fourth radar system “can become operational in 10 days and needs a team of just three people to remain operational.” The systems must be placed at a distance of 370 kilometers from each other in order to ensure full coverage. Currently, Russia has three Podsolnukh-E radar systems, operating in the Sea of Okhotsk, the Sea of Japan and the Caspian Sea. However, these stationary systems can be easily detected due to their massive radar towers.

“The Podsolnukh-E is a coastal over-the-horizon shortwave short-range radar system that is capable of detecting both air and sea targets approaching it from the sea. It can simultaneously detect, track and classify 100 aerial targets and 300 maritime targets in an automatic mode,” the article reads. “A distinctive feature of the Podsolnukh is its mammoth antenna array, up to five kilometers long and five meters tall, that can identify aerial targets 500 kilometers away and sea targets up to 400 kilometers away.”

The system is able to determine their position and to transfer the coordinates of a target to various weapon systems, such as fighter jets, vessels and antiaircraft missile batteries. The Sunflower can detect stealth aircraft, such as the expensive modern American F-35 multirole fighter, “as clearly as aircraft of the WWII era,” the author of the article writes, citing Russian sources.


Chinese OTH radars

China is reported to have developed its first OTH-B radar back in 1967; since the 1980s, two further installations have possibly been added to the inventory, with at least one system looking out into the China Sea area, reportedly to target (US Navy) aircraft carriers.

Backscatter systems function at the upper end of the High Frequency (HF) band, typically between 12 and 28 MHz. OTH-B radars are bistatic systems, in which the transmitter and receiver use different antennas at widely separated locations to achieve detection results.

China’s OTH-B is said to use Frequency Modulated Continuous Wave (FMCW) transmissions to enable Doppler measurements, the suppression of static objects and the display of moving targets.
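
As a hedged illustration of how a linear FMCW sweep turns target range into a measurable beat frequency (the sweep bandwidth, sweep time and range below are invented for illustration, not the Chinese system's parameters):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_beat_hz(range_m: float, sweep_bw_hz: float, sweep_time_s: float) -> float:
    """Beat frequency of a linear FMCW sweep: f_b = 2 * R * B / (c * T)."""
    return 2.0 * range_m * sweep_bw_hz / (C * sweep_time_s)

def fmcw_range_m(beat_hz: float, sweep_bw_hz: float, sweep_time_s: float) -> float:
    """Invert the beat frequency back to range."""
    return beat_hz * C * sweep_time_s / (2.0 * sweep_bw_hz)

# Illustrative numbers: a 20 kHz sweep repeated every 0.1 s, target at 1,500 km.
fb = fmcw_beat_hz(1_500_000, 20e3, 0.1)
print(f"beat frequency: {fb:.0f} Hz")  # ~2 kHz
```

Because the receiver only has to measure an audio-rate beat tone, static objects produce fixed beat lines that are easy to suppress, while moving targets add the Doppler offset on top, which is the behaviour the paragraph above describes.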

In 2008, Asian military sources told Richard Fisher that China had placed a new long-range Over-the-Horizon (OTH) radar station on Hainan Island. Then, at the February 2009 IDEX show in Abu Dhabi, a Russian source confirmed to Fisher the sale to China of the 300 km-range Podsolnukh-E surface-wave OTH radar.


China deploying Anti-Stealth OTH Radar in the South China Sea

In 2015 Victor Robert Lee of The Diplomat reported, “Fiery Cross Reef, Subi Reef and Mischief Reef are China’s largest military installations in the Spratlys, but they are still under construction and do not exhibit the more sophisticated defensive capabilities now present at China’s smaller bases on four other reefs in the Spratlys: Cuarteron, Gaven, Hughes, and Johnson South.”

“These facilities are being equipped with state-of-the-art sensor towers, weapons tracking and firing platforms and tracking/firing guidance radars, as well as an array of electronic sensors and satellite communications infrastructure. For example, a satellite image taken August 23 shows that Cuarteron has a new antenna farm that Rogers considers reminiscent of Australia’s Jindalee over-the-horizon radar network, which has a range of up to 3,000 kilometers.”

“China appears to be building an anti-stealth radar system on an artificial island in the middle of the South China Sea, where a military-grade system would be useful in detecting stealth aircraft in the contentious and contested area,” Kyle Mizokami reports in Popular Mechanics.


China’s Anti-ship ballistic missile system can target US aircraft carriers through OTH radar and satellites

A constellation of satellites and at least one over-the-horizon radar give its Anti-Ship Ballistic Missile (ASBM) system the capability to work out the position of U.S. aircraft carriers at sea, according to assessments published by researchers at the National Institute of Advanced Studies in Bangalore.

Land-based ballistic missiles, carrying manoeuvrable warheads with conventional munitions, could then, if needed, target the aircraft carriers at a distance of about 2,000 km.

Although the land-based ballistic missiles can target aircraft carriers using just the Yaogan constellation, the number of targeting opportunities becomes smaller if cloud cover obscures the view of satellites with optical sensors, observed Prof. Chandrashekar.

China’s constellation of Yaogan military satellites includes those for electronic intelligence (ELINT) gathering that detect radio signals and other electronic emissions from an aircraft carrier and its associated warships. China currently has three clusters of ELINT satellites that provide global surveillance.

By incorporating an over-the-horizon radar that can continually track aircraft carriers up to a distance of about 3,000 km, the Chinese gain the flexibility to launch the ballistic missiles whenever they choose, he pointed out.


Ghadir, Iran’s over-the-horizon radar

Ghadir is an Iranian over-the-horizon radar. It is a 360°, 3D radar with a ceiling of 300 km and a maximum range of 1,100 km. Unlike other OTHRs, Ghadir does not use FMCW modulation. Instead, it uses a shaped pulsed system that makes the edges of the signal hard to define. Because of this, the bandwidth of the signal can vary greatly, ranging from around 60 kHz to splattering over 1 MHz, depending on the power of the received signal.

A senior Iranian Army general spoke about the Islamic Republic’s plans to unveil a variety of over-the-horizon radar systems covering a distance of 3,000 kilometers. Brigadier General Farzad Esmaili, the commander of Iran’s Khatam al-Anbiya Air Defense Base, said that the radars can help the base “detect and monitor aircraft flying beyond [Iran’s] borders.”






US, Israel and India face tunnel threats enabling drug and weapon smuggling, human trafficking, and cross-border terrorism, requiring detection technologies

The tunnel threat is a serious and growing concern for the U.S. and Mexico, as tunnels enable human trafficking and the smuggling of drugs and weapons across the border. Tunnel warfare is becoming a global concern, as it is also common in other parts of the world such as Iraq, Afghanistan and Syria, where rebels use tunnels in combating Assad’s military forces.

In the recent Israel-Palestine conflict, Israel carried out a massive ground offensive to wipe out a vast network of tunnels built by Hamas. Israel sees these tunnels as being built to infiltrate its territory and to smuggle large amounts of firearms and other sabotage materials into the Gaza Strip. Many bemoan the fact that such a large number of tunnels dug by Hamas from Gaza into Israel went undetected for so long.

The Israeli military has destroyed three cross-border Hamas terror tunnels in recent months using “new and groundbreaking technology,” Lt. Col. Jonathan Conricus — the head of the IDF Spokesperson’s Unit’s international media branch — said in a video.

Prime Minister Benjamin Netanyahu and Defense Minister Avigdor Liberman in October 2017 hailed the IDF for destroying an attack tunnel from the Gaza Strip discovered near a kibbutz inside Israeli territory, with the two leaders attributing its discovery to Israel’s new “breakthrough technology.” “I told you many times before that we are developing breakthrough technology to deal with the tunnel threat,” said Netanyahu at the start of the meeting. “We are implementing it. Today, we located a tunnel and we destroyed it.”

Earlier, the military said the tunnel had been under surveillance for an extended period of time and was under active construction at the time of the demolition. “The tunnel was detonated from within Israel, adjacent to the security fence,” the military said in a statement. IDF spokesperson Lt. Col. Jonathan Conricus said the tunnel was at least two kilometers away from the Israeli town and did not pose a threat to its residents. Liberman also said no Israelis were endangered by the tunnel.

The Israeli government has been developing such a system for at least the past five years. Codenamed project “Hourglass,” the effort has already cost Israel the U.S. dollar equivalent of more than $60 million, involving help from more than 100 technology, defense, and engineering companies. Remote-controlled robots help agents explore tunnels that are too risky for humans to enter.

Recently a drug smuggling tunnel was discovered along the California-Mexico border that set the record for the longest cross-border tunnel ever found in Southern California. Around 170 tunnels have been discovered since 1990, sixty percent of them in just the last three years. According to the Department of Justice’s accounting, the tunnel was estimated to span 800 yards, and was likely a lot longer due to its “zig-zagging” route, as Assistant U.S. Attorney Timothy Salel put it. “It is equipped with rail and ventilation systems, lights and a sophisticated large elevator leading from the tunnel into a closet inside the Tijuana residence,” he added. “It is one of the narrowest tunnels found to date, with a diameter of just three feet for most of the length of the passageway.”

Many defense companies, including Lockheed Martin and Raytheon, are developing technologies for detecting tunnels. The U.S. government is earmarking $120 million over the next three years and partnering with Israel to help develop a new tunnel detector. The goal, U.S. Defense Department spokesman Christopher Sherwood told Foreign Policy, “is to establish anti-tunnel capabilities to detect, map, and neutralize underground tunnels that threaten the U.S. or Israel…”

Between 2001 and 2016, India discovered at least eight tunnels originating from across the border with Pakistan, an average of one every two years. Only one of these is suspected to have been dug for drug running; the others are linked to possible or successful infiltrations.

In March 2016, the Indian BSF floated a Request for Proposal for a pilot project of the CIBMS in two five-km patches along the border in Jammu. Tata Power SED and Dat Con have won a pilot project of the Ministry of Home Affairs to install an integrated border-guarding system to test technology for preventing infiltration, especially by detecting cross-border tunnels as well as possible entries through aerial and underwater routes. Called the Comprehensive Integrated Border Management System (CIBMS), it is a major counter-infiltration measure to prevent cross-border terror attacks and detect tunnels.

Challenge of detecting terror tunnels

Part of the problem in detecting tunnels, say experts like Paul Bauman, a Canadian geophysicist, is the ground itself. Finding what is under the surface is not as simple as shooting radar or electromagnetic waves into the ground, he said. With underground cracks, water tables, tree roots and caves, it is hard to tell what is and is not a tunnel, he said. Mr. Bauman, who has worked with the Israel Defense Forces in their efforts to find tunnels, said most of the devices used for tunnel detection were developed for industries to find oil or mineral deposits, not drug tunnels.

Carey M. Rappaport, a professor of electrical and computer engineering at Northeastern University in Boston, said the depth of many tunnels also posed a technological challenge. Some can be as deep as 90 feet, beyond the reach of most ground radar devices and sensors. “Soil is very good at keeping secrets,” said Mr. Rappaport, who has also worked with the United States and Israeli governments on  tunnel detection methods.

Recently, the Science and Technology Directorate of the Department of Homeland Security concluded that none of the current methods used to detect underground tunnels were “necessarily suited to Border Patrol agents’ operational needs.”

Tunnel detection technologies

Most existing tunnel-detecting capabilities are modifications of equipment originally used to detect land mines or to find natural gas and oil deposits. More sensitive, sophisticated techniques are needed to find tunnels, which exist between those two extremes of size and depth.

An Israeli company, Magna, has proposed digging a 70-km tunnel along the Israel-Gaza border, equipped with a sensitive alert system. The system would be able to localize an attack tunnel, estimate how many people are in it, and monitor the progress of digging. Now, Israel Hayom reports, Israel has built its own network of defense tunnels along the Gaza border with the cooperation of the United States.

Some of the technology solutions that have been found useful for tunnel detection are:

The effectiveness of tunnel detection devices is directly related to the geophysical characteristics of local soil. DHS S&T is in the process of collecting and compiling a database of existing, derived and new geological and geophysical survey data along the border where tunneling is most probable.


Ground penetrating Radar

Ground penetrating radar is a special radar, often mounted on a vehicle, that uses pulses of appropriate frequency and ultra-wideband waveforms to form an underground image. It is a promising technology widely used in quality-testing roads, finding unmarked graves, locating utility lines, tracing subsurface geology, sweeping for mines and searching archaeological sites.

However, one limitation of this method is that it does not work well in many media, such as clay; it rarely penetrates deeper than 40 ft and produces many false alarms even at shallow depths, wasting time and money. Developers are therefore concentrating their efforts on much lower frequencies that can penetrate the ground deeper, and on sophisticated new imaging technology that can display clear pictures of deep tunnels.
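
A crude first-order sketch of why clay defeats GPR uses the good-conductor skin depth. This approximation is rough at GPR frequencies (in resistive soils displacement currents dominate and penetration is better than it predicts), and the conductivities below are assumed order-of-magnitude values, but it captures the conductivity trend:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth_m(freq_hz: float, conductivity_sm: float) -> float:
    """Good-conductor skin depth: delta = sqrt(2 / (omega * mu0 * sigma)).
    Used here only as a rough proxy for GPR penetration in lossy soils."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU0 * conductivity_sm))

# Assumed, illustrative conductivities (S/m):
for soil, sigma in [("dry sand", 1e-4), ("wet clay", 1e-1)]:
    print(f"{soil:8s} @ 100 MHz: {skin_depth_m(100e6, sigma):.2f} m")
```

The three-orders-of-magnitude difference in conductivity collapses penetration from meters to centimeters, which is why the same GPR that images well in dry sand can be nearly blind over conductive clay, and why developers push toward lower frequencies.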

The R2TD system developed by the U.S. Army Engineer Research and Development Center is a ground-penetrating radar capable of detecting tunnels deep within the ground. It employs sensors to detect acoustic and seismic energy. The R2TD system can be mounted in a vehicle or carried by a soldier to an area of interest, and is capable of transmitting data to a remote post for data analysis.

Surprise attacks by enemy troops hiding in tunnels are difficult to predict, although radar technology can help by finding the tunnels. The Rapid Reaction Tunnel Detection (R2TD) system can detect the underground void created by a tunnel, as well as electrical cables or devices within the tunnels, using ground-penetrating-radar (GPR) technology.

Because adversaries are continually adapting—using different tunnel depths and more complex maze configurations—the analysis software for the R2TD system must be continually refined, with increased transmit power for greater ground penetration.

The National Centre for Excellence in Technology for Internal Security (NCETIS) at IIT-B, which also has people working with other IITs, has developed a Ground Penetrating Radar (GPR) at 920 MHz which can detect not only tunnels but also landmines buried in soil. “Right now, we are testing the equipment for ruggedness. We have a mandate that it needs to work in all terrains and conditions, and once the ruggedness test is complete, we will begin the field trials in February,” Seema Periwal, project manager, NCETIS, told TOI from Mumbai.


Seismic Sensor Network

Underground activity such as digging, drilling, scraping or jack-hammering creates ground disturbances, or vibrations, that travel through the ground in the form of seismic waves and can be detected by seismic sensors such as geophones buried underground.

Signal processing is the critical technology for extracting data and intelligence from the signals generated by seismic sensors: identifying the type of activity (digging, walking, vehicles, etc.) and localizing it. Intelligent algorithms can also filter out non-threatening vibrations from construction equipment, traffic on nearby roads and underground subways, in order to minimize false alarms.
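
The detection side of such processing can be sketched as a toy energy-threshold detector over a geophone trace. This is a stand-in for illustration only, not any fielded system's algorithm; real systems add spectral classification and multi-sensor localization:

```python
import math
import random

def window_rms(samples, start, size):
    """Root-mean-square amplitude of one analysis window."""
    seg = samples[start:start + size]
    return math.sqrt(sum(x * x for x in seg) / len(seg))

def detect_events(samples, win=100, factor=4.0):
    """Flag windows whose RMS exceeds `factor` times the median window RMS,
    i.e. transient bursts standing well above the ambient noise floor."""
    rms = [window_rms(samples, i, win) for i in range(0, len(samples) - win, win)]
    baseline = sorted(rms)[len(rms) // 2]  # median as the noise estimate
    return [i for i, r in enumerate(rms) if r > factor * baseline]

# Synthetic trace: quiet background noise with a burst of 'digging' impacts.
random.seed(0)
trace = [random.gauss(0, 0.1) for _ in range(2000)]
for n in range(900, 1100):                  # inject a strong transient
    trace[n] += 2.0 * math.sin(0.3 * n)
print("windows flagged:", detect_events(trace))  # the two burst windows
```

Using the median rather than the mean as the baseline is what gives the filtering behaviour described above: a short strong burst cannot drag the noise estimate up, so steady low-level vibration is ignored while impulsive digging energy trips the threshold.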


Other technologies

A combination of airborne SAR (Synthetic Aperture Radar) and GPR has also been proposed for underground tunnel detection. Some of the other proposals include measuring electrical resistivity through metal electrodes, microgravity sensors, and detecting muons carried underground by cosmic rays hitting the earth.


Robots in anti-tunnel campaign

The IDF’s military robot Talon 4 has been used in dangerous tunnels along the Gaza Strip border instead of soldiers, to reduce risks to troops.

Another is the lightweight, portable carrier robot, which would be carried by soldiers on their backs. It is capable of scanning areas underground for many hours, mapping entire buildings and terror tunnels, and is equipped with cameras, sensors, and a communications system capable of transmitting signals from underground. The groundbreaking technology will allow soldiers to understand the exact layout of any structure, helping them avoid the dangers of underground or urban combat, as explained by Major Lior Trablisi, the head of the IDF’s robot and technology unit.

The “robotic-laptop soldier” will assist soldiers from the Combat Engineering Corps and infantry soldiers in underground combat. The idea of this small-scope robot is to take on dangerous missions, including patrolling and collecting information for the fighters on the ground. This will solve many of the problems soldiers face when operating underground, such as collapsing walls and lack of oxygen and lighting.

Underground Iron Dome Against Hamas’ Terror Tunnels

Western sources reported on 11 March 2016 that the new weapon, dubbed the “Underground Iron Dome,” can detect a tunnel and then send in a moving missile to blow it up. The new weapon is not only a countermeasure against threats from Gaza and Lebanon but against Iran’s nuclear facilities too.

US intelligence sources disclosed only that the new weapon is equipped with seismic sensors to detect underground vibrations and map their location before destroying the tunnels. Western experts have been talking for years about a secret Israeli weapon capable of destroying Iran’s Fordo nuclear facility, which is buried deep inside a mountain not far from the Shiite shrine city of Qom.

They suggested that this hypothetical weapon could be slipped through the Fordo facility’s vents, thread its way through the underground chambers and take down the illicit enrichment facility. It was discussed again three years ago, when the Israeli Air Force on 23 August 2013 blew up the Popular Palestinian Front-General Command underground facility at Al-Naama on the South Lebanese coast, 15 km south of Beirut.


Tata on Indo-Pak border: Using tech to detect tunnels, check infiltration

The 3,323-km India-Pakistan border consists of the international border guarded by the BSF and the Line of Control guarded by the Indian Army. The border is porous which makes infiltration by terrorists possible. In the 1990s, the government had erected a fence along the entire length of the India-Pakistan border. But infiltration was still taking place. Over the years, the BSF has found several tunnels starting from Pakistan reaching into India.

The CIBMS will integrate sensors, communication, infrastructure, response, and command and control. It will be a force multiplier for the BSF. “Manpower along the border is irreplaceable, but human endurance has its limitations. With the CIBMS we can detect threats in advance and ensure a counter attack. This would lead to reduction in casualties,” said an official.

An important component of the CIBMS is satellite imagery. The BSF is already using satellite imagery. It helps the security forces in learning about the terrain and military fortifications across the border. It also helps in better planning of operations and border defences on the Indian side. However, not being real-time, they are not always useful.

The BSF has also planned to use UAVs as part of the CIBMS to launch them when required to gain real-time data.
Sensors such as those placed underground will also form part of the CIBMS. These sensors sound an alarm when a person steps near them, alerting the troops. “The firms will also be setting up equipment to detect cross-border tunnels and possible infiltration through aerial and underwater means. The pilot project will be the first to test such technology,” said an official. The RFP had stated the requirement of tracking low-level flying threats from 500 m up to 1 km. Sonars will also be used to track underwater movement.

In a statement issued yesterday, Tata Power said, “CIBMS will establish a seamless multi-tier security ring at the border using a variety of sensors, to identify any infiltration attempts and will be operational 24x7x365. Sensors (viz. Thermal Imager, Radar, Aerostat with EO Payload, Optical Fibre Intrusion Detection System, Unattended Ground Sensor and Underwater Sensor) can detect threats not just on the surface but also underground and underwater.”




On-skin health monitoring electronics is the next revolution in medicine, from diagnosing diseases to monitoring soldiers’ health and stress levels in combat

Printed and flexible electronics have started to revolutionize the medical field, beginning with medical test strips with diagnostic electrodes. Engineers at the University of California San Diego have developed a flexible wearable sensor that can accurately measure a person’s blood alcohol level from sweat and transmit the data wirelessly to a laptop, smartphone or other mobile device.

“Fitness trackers that monitor heart rate and step count are very popular, but wearable, non-invasive biosensors would be extremely beneficial for managing diseases,” said Prasad, the Cecil H. and Ida Green Professor in Systems Biology Science.

Wearable biosensors are being developed that measure EEG, ECG, and EMG (electroencephalograms, electrocardiograms, and electromyography — tests which monitor brain, heart, and muscle activity). Next-generation wearable sensors employ lightweight, highly elastic materials attached directly onto the skin for more sensitive, precise measurements.

At the Seoul National University in Korea researchers have created a highly flexible electronic patch capable of doing basic ECG monitoring while amplifying and storing the data locally within novel nanocrystal floating gates. The patch is made of a flexible and stretchable silicon membrane on top of which gold nanoparticles are placed so as to draw the conductive components. This eliminates conductive films that have their unique limitations while increasing the memory capacity of the device.

A soft, flexible skin patch that monitors biomarkers in sweat can determine whether the wearer is dehydrated, measure the person’s blood sugar level and even detect disease. The invention is part of an emerging field of wearable diagnostics. Human sweat contains many of the same biomarkers as blood; however, analyzing sweat using a skin patch doesn’t hurt like a needle stick, and the results can be obtained more quickly.

“Cosmetics companies are interested in using these devices in their research labs to evaluate their antiperspirants and deodorants and so on,” Rogers said. “So sweat loss and sweat chemistry is interesting in that domain, as well. And then we have contracts with the military that are interested sort of in continuous monitoring of health status of war fighters.”

Skin Patch Uses Sweat to Monitor Health

The skin patch, described in the journal Science Translational Medicine, is made of flexible material, and is about the size and thickness of a U.S. quarter. The so-called microfluidic device sticks to the forearm or back like an adhesive bandage, collecting and analyzing sweat.

The first-of-its-kind patch is aimed primarily at athletes, but the flexible electronics device will in all likelihood find a place in medicine and even the cosmetics industry.

“We’ve been interested in the development of skin-like technologies that can mount directly on the body, to capture important information that relates to physiological health,” said John Rogers, a materials scientist and bioengineer at Northwestern University in Illinois, and one of a number of developers of the skin patch. “And what we’ve demonstrated here is a technology that allows for the precise collection, capture and chemical analysis of biomarkers in sweat and perspiration.”

The sweat is routed through microscopic tubules to four different reservoirs that measure pH and chloride, important indicators of hydration levels, lactate — which reveals exercise tolerance — and glucose. It can also track the perspiration rate.

The skin patch could potentially be used to diagnose the lung disease cystic fibrosis by analyzing the chloride content in sweat. Wireless electronics transmit the color-coded results to a smartphone app, which analyzes them.
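As a rough illustration of how an app might turn such color-coded reservoir readings into concentrations, the sketch below interpolates a measured color intensity against a calibration curve. The calibration points and the chloride values are invented for illustration; they are not taken from the device described above.

```python
# Hypothetical sketch: mapping a colorimetric reservoir reading to an
# analyte concentration via a piecewise-linear calibration curve.
# All numbers below are invented for illustration.

def interpolate(calibration, reading):
    """Piecewise-linear interpolation over sorted (signal, concentration) pairs."""
    pts = sorted(calibration)
    if reading <= pts[0][0]:
        return pts[0][1]
    if reading >= pts[-1][0]:
        return pts[-1][1]
    for (s0, c0), (s1, c1) in zip(pts, pts[1:]):
        if s0 <= reading <= s1:
            frac = (reading - s0) / (s1 - s0)
            return c0 + frac * (c1 - c0)

# Invented chloride calibration: normalized color intensity -> mmol/L
chloride_cal = [(0.10, 10.0), (0.35, 25.0), (0.60, 50.0), (0.85, 100.0)]

print(interpolate(chloride_cal, 0.475))  # midway between the 25 and 50 mmol/L points
```

A real app would calibrate per reagent batch and correct for lighting before reading the color channel, but the lookup step reduces to this kind of interpolation.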

Bioengineers create sweat-based sensor to monitor glucose

Researchers at The University of Texas at Dallas have developed a wearable device that can monitor an individual’s glucose level via perspiration on the skin. In a study recently published online in the journal Sensors and Actuators B: Chemical, Dr. Shalini Prasad, professor of bioengineering in the Erik Jonsson School of Engineering and Computer Science, and her co-authors demonstrated the capabilities of a biosensor they designed to reliably detect and quantify glucose in human sweat.

“Fitness trackers that monitor heart rate and step count are very popular, but wearable, non-invasive biosensors would be extremely beneficial for managing diseases,” said Prasad, the Cecil H. and Ida Green Professor in Systems Biology Science. For diabetics and those at risk of diabetes, self-monitoring of blood glucose, or blood sugar, is an important part of managing their conditions.

Typical home-use blood glucose monitors require a user to obtain a small blood sample, usually through the prick of a finger and often several times a day. However, the UT Dallas textile-based sensor detects glucose in the small amount of ambient sweat on a person’s skin. The team has previously demonstrated that their technology can detect cortisol in perspiration.

“In our sensor mechanism, we use the same chemistry and enzymatic reaction that are incorporated into blood glucose testing strips,” Prasad said. “But in our design, we had to account for the low volume of ambient sweat that would be present in areas such as under a watch or wrist device, or under a patch that lies next to the skin.”
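The test-strip chemistry Prasad refers to (a glucose-oxidase enzymatic reaction producing a measurable electrical response) is, over the working range, roughly linear in glucose concentration. The sketch below inverts that linear model; the baseline and sensitivity constants are assumptions for illustration, not parameters of the UT Dallas sensor.

```python
# Hypothetical sketch of converting an enzymatic sensor's electrical
# response to a glucose estimate. Over the working range the response is
# approximately:  current = baseline + sensitivity * concentration.
# Both constants below are invented for illustration.

BASELINE_NA = 2.0       # background current in nanoamps (assumed)
SENSITIVITY_NA = 0.5    # nanoamps per mg/dL of glucose (assumed)

def glucose_from_current(current_na):
    """Invert the assumed linear model to estimate glucose in mg/dL."""
    return max(0.0, (current_na - BASELINE_NA) / SENSITIVITY_NA)

print(glucose_from_current(52.0))  # -> 100.0 mg/dL under these assumed constants
```

A sweat-based device would additionally need to compensate for the low ambient sweat volume the quote mentions, typically by normalizing against a measured or assumed sample volume.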

For now, the skin patch is intended for use by sweaty athletes to measure biomarkers of performance, and Rogers sees the patch being sold with sports drinks; but, he said, a number of industries have expressed an interest in the sweat-based technology.


Nanomesh technology results in inflammation-free, on-skin health monitoring electronics

Minimal invasiveness is highly desirable when applying wearable electronics directly onto human skin. However, manufacturing such on-skin electronics on planar substrates results in limited gas permeability. The lack of breathability is deemed unsafe for long-term use: dermatological tests show the fine, stretchable materials prevent sweating and block airflow around the skin, causing irritation and inflammation, which ultimately could lead to lasting physiological and psychological effects.

According to a new study in Nature Nanotechnology, a new approach to this technology using a nanomesh structure could have positive implications for long-term health monitoring.

The new sensors are inflammation-free, are very gas permeable, and they’re thin and lightweight, without the use of any pesky substrates that can contribute to skin discomfort. That means they can be directly laminated onto human skin for longer periods of time.

The sensors’ mesh structure is made of biocompatible polyvinyl alcohol, which enables that gas permeability without blocking sweat glands, and it’s stretchable without causing any additional discomfort, even if it’s affixed for a considerable amount of time.

A one-week skin patch test revealed that the risk of inflammation caused by on-skin sensors can be significantly suppressed by using the nanomesh sensors. Furthermore, a wireless system that can detect touch, temperature and pressure was successfully demonstrated using a nanomesh with excellent mechanical durability. In addition, electromyogram recordings were taken with minimal discomfort to the user.

They’re also versatile. The mesh conductors can attach to irregular skin surfaces — say, the tip of a person’s finger — and maintain their functionality even when a person’s natural body movements fold and elongate the skin. Nanofibers with a diameter of 300 to 500 nm were prepared by electrospinning a PVA solution and were intertwined to form a mesh-like sheet. When the nanomesh conductors were placed on the skin and sprayed with water, the PVA nanofibers easily dissolved and the nanomesh conductor attached to the skin.

According to the study, the approach has opened up a new possibility for the integration of electronic devices with skin for continuous, long-term health monitoring. “We learned that devices that can be worn for a week or longer for continuous monitoring were needed for practical use in medical and sports applications,” says Professor Takao Someya at the University of Tokyo’s Graduate School of Engineering whose research group had previously developed an on-skin patch that measured oxygen in blood.

Furthermore, the scientists proved the device’s mechanical durability through repeated bending and stretching, exceeding 10,000 times, of a conductor attached on the forefinger; they also established its reliability as an electrode for electromyogram recordings when its readings of the electrical activity of muscles were comparable to those obtained through conventional gel electrodes.
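Agreement between two electrode types, as in the gel-electrode comparison above, is commonly quantified by correlating their simultaneous recordings. The sketch below computes a Pearson correlation on synthetic traces; the signals are stand-ins, not data from the study.

```python
# Sketch: quantifying agreement between two EMG traces with the Pearson
# correlation coefficient. The traces below are synthetic stand-ins for
# nanomesh and conventional gel-electrode recordings.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# The second trace is the first plus small measurement noise.
nanomesh = [0.0, 0.8, 1.5, 0.9, 0.1, -0.7, -1.4, -0.8]
noise = [0.05, -0.03, 0.02, 0.04, -0.05, 0.01, -0.02, 0.03]
gel = [v + e for v, e in zip(nanomesh, noise)]

print(round(pearson(nanomesh, gel), 3))  # close to 1.0 for well-matched electrodes
```

A coefficient near 1.0 indicates the two electrodes track the same muscle activity, which is the kind of comparability the researchers established against gel electrodes.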

“It will become possible to monitor patients’ vital signs without causing any stress or discomfort,” says Someya about the future implications of the team’s research. In addition to nursing care and medical applications, the new device promises to enable continuous, precise monitoring of athletes’ physiological signals and bodily motion without impeding their training or performance.


Military requirements

Many militaries, including those of the US and China, have expressed the desire to cut manpower amid stagnant or shrinking military budgets. On the other hand, rising threat levels and the employment of militaries in increasingly diverse and complex missions have led to a manifold increase in the number of missions. Technological advances, such as night vision devices, have lengthened missions; militaries now operate around the clock during times of conflict. Some missions can keep soldiers away for weeks in difficult terrain such as deserts and mountains, which requires maintaining an incredibly high level of physical fitness.

Krueger (1991) reported that the efficiency of combatants in sustained operations can be significantly compromised by inadequate sleep. Vigilance and attention suffer, reaction time is impaired, mood declines, and some personnel begin to experience perceptual disturbances. Naitoh and Kelly (1992) warned that poor sleep management in extended operations quickly leads to motivational decrements, impaired attention, short-term memory loss, carelessness, reduced physical endurance, degraded verbal communication skills, and impaired judgment. Angus and Heslegrave (1985) noted that cognitive abilities suffer 30 percent reductions after only 1 night without sleep, and 60 percent reductions after a second night.

Around the world, armies are recognizing the importance of maximizing the effectiveness of soldiers physically, perceptually, and cognitively. Militaries are therefore studying the effects of frustration, mental workload, stress, fear and fatigue on both cognitive and physical performance.

In November, the Office of Naval Research awarded a $150,000 grant to Titus and the tech firm Sentience Science to develop tools that could monitor an individual’s stress levels in combat and automatically generate alerts when they reach dangerous levels.
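A minimal sketch of the kind of threshold alerting such a tool might perform is shown below. The stress metric, the danger threshold, and the sustained-duration requirement are all assumptions for illustration; none are details of the funded project.

```python
# Hypothetical sketch: raise an alert when a normalized stress score stays
# above a danger threshold for several consecutive samples. The threshold
# and window size are invented values.

DANGER_THRESHOLD = 0.8   # normalized stress score (assumed)
SUSTAINED_SAMPLES = 3    # consecutive over-threshold readings required (assumed)

def alert_indices(scores):
    """Return the index at which each sustained over-threshold run is first detected."""
    alerts, run = [], 0
    for i, s in enumerate(scores):
        run = run + 1 if s > DANGER_THRESHOLD else 0
        if run == SUSTAINED_SAMPLES:
            alerts.append(i)
    return alerts

readings = [0.4, 0.85, 0.9, 0.82, 0.95, 0.5, 0.81, 0.83, 0.7]
print(alert_indices(readings))  # -> [3]
```

Requiring a sustained run rather than a single spike is a common way to avoid false alarms from momentary sensor noise.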



