
Militaries develop AI and machine learning enabled cognitive communications, intelligence gathering and electronic warfare systems

Electronic warfare (EW) is one of the crucial aspects of modern warfare. EW receivers are passive systems that receive emissions from the various platforms operating in their relative vicinity. The received signals are typically analyzed to obtain valuable information about the characteristics and intentions of the various elements present on the battlefield. A significant example in modern military warfare is radar radiation source classification and recognition (RRSCR), one of the tasks associated with electronic support measures and electronic signal intelligence (ESM/ELINT) systems.

 

The former (ESM) focuses on classifying different radar types, such as military or civil radar, surveillance or fire-control radar, whereas the latter further concerns the identification of individual radar emitters within the same class, also called specific emitter identification (SEI). Such operations are based on radio frequency distinct native attribute (RF-DNA) fingerprint feature analysis methods, such as pulse repetition interval (PRI) modulation analysis and intra-pulse analysis.
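
As a rough illustration of PRI analysis, the sketch below derives pulse repetition intervals from pulse times of arrival and applies simple rules to label the PRI behavior. The function name, thresholds and labels are assumptions made for this example, not an operational algorithm.

```python
import numpy as np

def classify_pri_modulation(toa, jitter_threshold=0.01):
    """Label the PRI behavior of a deinterleaved pulse train (toy example).

    toa: 1-D array of pulse times of arrival (seconds) from one emitter.
    Returns a coarse label: 'constant', 'staggered/agile', or 'jittered'.
    Thresholds and labels are illustrative, not operational values.
    """
    pri = np.diff(toa)                         # pulse repetition intervals
    spread = np.std(pri) / np.mean(pri)        # normalized PRI spread
    levels = len(np.unique(np.round(pri, 6)))  # distinct PRI values (to 1 us)

    if spread < jitter_threshold:
        return "constant"
    if levels <= 8:                            # a few discrete PRI values
        return "staggered/agile"
    return "jittered"

# Example: a constant-PRI emitter with a 1 ms PRI (1 kHz PRF)
toa = np.cumsum(np.full(100, 1e-3))
print(classify_pri_modulation(toa))            # -> constant
```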

 

RRSCR mainly concerns the following four aspects:
i) denoising and deinterleaving (or separating) collected pulse streams (a toy deinterleaving sketch follows this list);
ii) improving recognition accuracy in low-SNR scenarios, under missing and spurious data, and in real time;
iii) boosting the robustness and generalization of algorithms;
iv) identifying unknown radiation sources.
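
A minimal deinterleaving sketch, assuming pulses are described by just two normalized pulse-descriptor-word parameters and that a generic density-based clustering step (DBSCAN here) stands in for a real deinterleaver:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def deinterleave(pdws, eps=0.05):
    """Group pulse descriptor words (PDWs) by presumed emitter (toy example).

    pdws: array of shape (n_pulses, 2) holding normalized RF and pulse width
    for each intercepted pulse. Real deinterleavers use more parameters
    (TOA, DOA, amplitude); this sketch only illustrates the clustering idea.
    """
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(pdws)
    return labels  # -1 marks pulses treated as noise/spurious

# Two simulated emitters plus a handful of spurious pulses
rng = np.random.default_rng(0)
emitter_a = rng.normal([0.2, 0.3], 0.01, size=(200, 2))
emitter_b = rng.normal([0.7, 0.6], 0.01, size=(200, 2))
spurious = rng.uniform(0.0, 1.0, size=(20, 2))
labels = deinterleave(np.vstack([emitter_a, emitter_b, spurious]))
print(np.unique(labels))  # e.g. [-1  0  1]
```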

 

In recent years there has been a fundamental shift to radar systems that are digital and reprogrammable in nature, and that can thus adopt different frequencies, signal characteristics and waveforms to avoid being jammed. For instance, an adaptive radar can sense the environment and alter its transmission characteristics accordingly, providing a new waveform for each transmission or adjusting its pulse processing. This flexibility allows it, for example, to enhance its target resolution. Many adversary systems require only a simple software change to alter their waveforms, which adds to the unpredictability of waveform appearance and behavior. Military forces struggle to isolate adaptive radar pulses from other signals, friend or foe. However, merely adaptive solutions cannot rapidly grasp and respond to a new scenario in an original manner.

 

Similarly, communication systems are also evolving from software-defined radios to cognitive radios. Software-defined radios allow waveforms to be reprogrammed, from traditional waveforms to new ones that can enable voice, video and data communications. Cognitive radios are aware of their internal state and environment and can use computer intelligence to automatically and invisibly adapt themselves to user needs and band conditions.
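
As a toy illustration of the sensing step a cognitive radio performs, the sketch below measures per-channel energy in a wideband capture and picks the quietest sub-channel; the channelization and simple energy metric are illustrative assumptions.

```python
import numpy as np

def pick_clearest_channel(iq, n_channels):
    """Pick the quietest sub-channel from a wideband capture (toy example).

    iq: complex baseband samples spanning the band of interest. The band is
    split into n_channels equal-width sub-channels and ranked by energy.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(iq))) ** 2
    energy = [seg.mean() for seg in np.array_split(spectrum, n_channels)]
    return int(np.argmin(energy))

# Example: a band containing one strong narrowband occupant plus light noise
fs = 1e6
t = np.arange(4096) / fs
occupant = np.exp(2j * np.pi * (-0.15 * fs) * t)   # tone in the lower half-band
noise = 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
print(pick_clearest_channel(occupant + noise, n_channels=8))
```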

 

The methods of RRSCR fall mainly into three classes: knowledge-based, statistical-modeling-based, and ML-based. Knowledge-based methods depend on prior radar knowledge, summarized from the collected raw data by radar experts, to achieve RRSCR-related tasks. Concerning traditional statistical modeling methods, autocorrelation spectrum analysis was applied in [156] for modulation recognition of multiple-input multiple-output (MIMO) radar signals.
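
The snippet below sketches the general idea of an autocorrelation-spectrum feature on simulated chirps; it is a generic illustration, not the specific MIMO-radar method of [156].

```python
import numpy as np

def autocorr_spectrum_feature(x):
    """Magnitude spectrum of the signal's autocorrelation (generic sketch)."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")   # autocorrelation sequence
    return np.abs(np.fft.rfft(r))         # its spectrum, used as a feature vector

# Example: two chirps with different sweep rates yield distinct feature vectors
fs, n = 1e6, 2048
t = np.arange(n) / fs
chirp_a = np.cos(2 * np.pi * (1e4 * t + 0.5 * 2e7 * t ** 2))
chirp_b = np.cos(2 * np.pi * (1e4 * t + 0.5 * 8e7 * t ** 2))
fa = autocorr_spectrum_feature(chirp_a)
fb = autocorr_spectrum_feature(chirp_b)
print(fa.shape, int(np.argmax(fa)), int(np.argmax(fb)))
```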

 

The growing complexity of the electromagnetic environment presents severe challenges for RRSCR: increasingly intense electronic confrontation and the emergence of new types of radar signals generally degrade the recognition performance of statistical modeling techniques, especially in low signal-to-noise ratio (SNR) scenarios. In recent years, because of the high efficiency of ML algorithms and the rapid development of novel radar signal processing (RSP) technology, ML-based methods have been successfully applied to RRSCR to address some of these critical challenges. Traditional ML-based RRSCR usually involves feature selection, classifier design, classifier training and evaluation. A two-phase pattern of feature extraction followed by classification with a common machine learning algorithm is typical in RRSCR. Many classifier models have been applied to RRSCR, such as supervised learning methods: artificial neural networks (ANN), support vector machines (SVM), decision trees (DT) and random forests (RF). Designing such hand-crafted features and classifiers generally incurs a lot of time cost. Nowadays, with the advantage of deep, automatic feature extraction, radar experts apply DL to RRSCR to improve classification performance based on DNN models.
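
A minimal sketch of this two-phase pattern on simulated pulses: hand-crafted features are extracted first, then an off-the-shelf supervised classifier (a random forest here) is trained and evaluated. The signal model, feature set and parameters are illustrative assumptions, not those of any cited work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def handcrafted_features(pulse):
    """A few simple features of one intercepted pulse (complex samples)."""
    mag = np.abs(pulse)
    inst_freq = np.diff(np.unwrap(np.angle(pulse)))
    return [mag.mean(), mag.std(), inst_freq.mean(), inst_freq.std()]

# Phase 1: simulate labeled pulses (constant-frequency vs. chirped) and extract features
rng = np.random.default_rng(1)
length = 256
t = np.arange(length)
X, y = [], []
for i in range(400):
    cls = i % 2
    inst_f = np.full(length, 0.05) if cls == 0 else 0.05 + 0.0004 * t  # tone vs. chirp
    sig = np.exp(2j * np.pi * np.cumsum(inst_f))
    sig += 0.3 * (rng.standard_normal(length) + 1j * rng.standard_normal(length))
    X.append(handcrafted_features(sig))
    y.append(cls)

# Phase 2: train and evaluate a classifier on the extracted features
X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```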

 

However, electronic warfare techniques have remained static: studying enemy systems for vulnerabilities, figuring out ways of disrupting them, and then building a “playbook” filled with different EW tactics, says DARPA. The EW domain is only just beginning to implement machine learning, and eventually AI. The advantage of AI is that the algorithms can adapt to changing environments and scenarios. AI can also replace human operators in systems where human involvement is required for target recognition.

 

Now the US DOD is planning to employ AI and machine learning methods to develop adaptive and cognitive EW technology that would be able to take countermeasures against these dynamic threats. The term “AI” has been used for decades and broadly encompasses problem-solving in which a machine makes decisions to find a solution. Machine learning (ML) refers to a type of AI where a machine is trained with data to solve a specific problem. DL is a class of ML capable of “feature learning,” a process whereby the machine determines which aspects of the data to use in decision making, as opposed to a human designer specifying the salient characteristics.

 

Deep neural networks (DNNs), large virtual networks of simple information-processing units loosely modeled on the anatomy of the human brain, have been responsible for many exciting advances in artificial intelligence in recent years. Deep learning networks typically use many layers, sometimes more than 100, and often a large number of units at each layer, to enable the recognition of extremely complex, precise patterns in data. Over the past decade, DNNs have become the state-of-the-art machine learning algorithms in speech recognition, computer vision, natural language processing and many other tasks.
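
For concreteness, a minimal PyTorch sketch of such a stacked network of simple units follows; the layer sizes and depth are arbitrary, illustrative choices.

```python
import torch
import torch.nn as nn

# Minimal sketch of a deep, fully connected network: many simple units
# arranged in stacked layers. All sizes below are illustrative only.
class SimpleDNN(nn.Module):
    def __init__(self, n_inputs=128, n_classes=5, width=256, depth=6):
        super().__init__()
        layers, size = [], n_inputs
        for _ in range(depth):                      # stack 'depth' hidden layers
            layers += [nn.Linear(size, width), nn.ReLU()]
            size = width
        layers.append(nn.Linear(size, n_classes))   # class scores (logits)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = SimpleDNN()
logits = model(torch.randn(32, 128))                # a batch of 32 feature vectors
print(logits.shape)                                  # torch.Size([32, 5])
```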

 

“In general, the demands from military customers are interconnected sensors and communications that are fast, robust, and hard to detect, and jammers that can be adaptive to the unknown threat,” says Peter Thompson, Director, Business Development – Technology, at Abaco Systems (Boston, Massachusetts).

 

Instead of humans analyzing the data, the idea is to move to intelligent, artificial means of analyzing that data. Neural networks, or the presently used term “deep learning,” essentially mean having a smart computer that can make decisions and think more like humans. With AI, intelligent machines work and respond much like humans, and can therefore perform smarter tasks using capabilities like signal recognition. Machine learning takes AI one step further, allowing machines to continuously learn from data and adapt as a result. These computers learn over time at a very rapid rate. Threats using machine learning continue to learn from every conflict, determining ways to be more effective so that they prevail against future countermeasures.

 

This technology has military applications including ship recognition and electronic-warfare specific emitter identification. The RF spectrum has emerged as a new fighting domain. Artificial intelligence can make sense of the vast number of signals being seen and collected, providing true situational awareness and recognizing threats. RF signal classification and spectrum-sensing algorithms can also benefit hugely from DL methods. Whereas previous automatic modulation classification (AMC) and spectrum-monitoring approaches required labor-intensive efforts to hand-engineer feature extraction (often taking teams of engineers months to design and deploy), a DL-based system can train for new signal types in hours.
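
A minimal sketch of a DL-based modulation classifier that learns features directly from raw I/Q samples rather than hand-engineered features; the architecture, input length and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy automatic modulation classifier operating on raw I/Q data (2 x N input).
class IQModulationClassifier(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(64 * 8, n_classes)

    def forward(self, iq):                  # iq: (batch, 2, n_samples)
        z = self.features(iq).flatten(1)    # learned features, no hand-crafting
        return self.classifier(z)

model = IQModulationClassifier()
print(model(torch.randn(16, 2, 1024)).shape)   # torch.Size([16, 8])
```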

 

DL also permits end-to-end learning, whereby a model jointly learns an encoder and decoder for a complete transmit-and-receive system. Instead of attempting to optimize the system piecemeal, individually tuning each component (such as digital-to-analog converters [DACs], analog-to-digital converters [ADCs], RF converters, the wireless channel, and the receiver network) and stitching them together, the model treats the system as an end-to-end function and learns to optimize it holistically.
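
A minimal sketch of this end-to-end idea, in the style of a channel autoencoder: an encoder (transmitter), a differentiable noise model standing in for the channel, and a decoder (receiver) are trained jointly. Message size, block length and noise level are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAutoencoder(nn.Module):
    def __init__(self, n_messages=16, n_channel_uses=8, noise_std=0.3):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(            # learned transmitter
            nn.Linear(n_messages, 64), nn.ReLU(), nn.Linear(64, 2 * n_channel_uses))
        self.decoder = nn.Sequential(            # learned receiver
            nn.Linear(2 * n_channel_uses, 64), nn.ReLU(), nn.Linear(64, n_messages))

    def forward(self, one_hot_msgs):
        x = self.encoder(one_hot_msgs)
        x = x / x.norm(dim=1, keepdim=True)           # normalize transmit energy per block
        y = x + self.noise_std * torch.randn_like(x)  # simple AWGN channel model
        return self.decoder(y)

model = ChannelAutoencoder()
msgs = torch.eye(16)[torch.randint(0, 16, (128,))]    # random one-hot messages
loss = nn.CrossEntropyLoss()(model(msgs), msgs.argmax(dim=1))
loss.backward()                                       # gradients flow through the whole link
print(float(loss))
```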

 

DARPA’s cognitive electronic warfare efforts, including the Adaptive Radar Countermeasures (ARC) and Behavioral Learning for Adaptive Electronic Warfare (BLADE) programs, are investing in the technologies needed to rapidly react to dynamic electromagnetic spectrum signals from adversary radar and communications systems. “These programs are applying machine learning—computer algorithms that can learn from and make predictions from data—to react in real time and jam signals, including new signals that have not yet been cataloged. DARPA is working with the Services to transition technologies derived from the field of cognitive electronic warfare into the F-18, F-35, Army Multi-Function EW program, and Next Generation Jammer.”

 

Wireless signal recognition (WSR)

With increasing innovation in wireless communication systems, numerous wireless terminals and pieces of equipment are constantly emerging, which has brought profound changes to our daily life. Unfortunately, the limited spectrum resource can hardly meet the ever-changing demands of the coming 5G and Internet of Things (IoT) networks, which poses a significant challenge to spectrum utilization and management.

 

Generally, communication signal recognition takes advantage of some signal parameters to classify or identify the types of signals. These techniques may include frequency and bandwidth estimation, symbol rate evaluation, modulation type classification, and wireless technology identification, which could be collectively referred to as wireless signal recognition.
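
For example, a toy frequency and bandwidth estimator over raw I/Q samples might look like the following; the periodogram method and 99%-power occupied-bandwidth rule are illustrative assumptions.

```python
import numpy as np

def estimate_freq_and_bandwidth(iq, fs, power_fraction=0.99):
    """Toy estimate of center frequency and occupied bandwidth from I/Q data."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(iq))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(iq.size, d=1.0 / fs))
    center = np.sum(freqs * spectrum) / np.sum(spectrum)     # power-weighted mean
    cum = np.cumsum(spectrum) / np.sum(spectrum)
    lo = freqs[np.searchsorted(cum, (1 - power_fraction) / 2)]
    hi = freqs[np.searchsorted(cum, 1 - (1 - power_fraction) / 2)]
    return center, hi - lo

# Example: a 100 kHz tone in light noise, sampled at 1 MS/s
fs = 1e6
t = np.arange(65536) / fs
iq = np.exp(2j * np.pi * 1e5 * t) + 0.05 * (np.random.randn(t.size)
                                            + 1j * np.random.randn(t.size))
center_hz, bw_hz = estimate_freq_and_bandwidth(iq, fs)
print(round(center_hz), round(bw_hz))
```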

 

Wireless signal recognition (WSR) holds great promise for military and civilian applications, which may include signal reconnaissance and interception, anti-jamming, and device identification. Generally, WSR mainly includes modulation recognition (MR) and wireless technology recognition (WTR). MR, also known as automatic modulation classification (AMC), was first widely used in the military field and later extended to the civilian field. MR classifies radio signals by identifying their modulation modes, which helps to evaluate wireless transmission schemes and device types. What is more, MR is capable of extracting digital baseband information even under conditions of limited prior information.

 

Traditional MR algorithms can mainly be separated into two groups: likelihood-based (LB) and feature-based (FB) approaches. LB approaches are based on hypothesis-testing theory; their decision-theoretic performance is optimal, but they suffer from high computational complexity. Therefore, feature-based approaches were developed as suboptimal classifiers for practical applications. In particular, FB approaches usually extract features in a preprocessing step and then employ classifiers to realize modulation classification. Conventional FB approaches rely heavily on expert knowledge; they may perform well in specialized solutions but poorly in generality, and they can be complex and time-consuming to design. To tackle these problems, machine learning (ML) classifiers such as the support vector machine (SVM) have been adopted and have shown great advantages.
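
A minimal feature-based sketch: a few higher-order statistics are computed from simulated BPSK/QPSK symbols and fed to an SVM. The feature set, signal model and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def cumulant_features(iq):
    """A few higher-order statistics of normalized I/Q samples (illustrative set)."""
    x = iq / np.sqrt(np.mean(np.abs(iq) ** 2))
    c20 = np.mean(x ** 2)
    c40 = np.mean(x ** 4) - 3 * c20 ** 2
    c42 = np.mean(np.abs(x) ** 4) - np.abs(c20) ** 2 - 2
    return [np.abs(c20), np.abs(c40), np.abs(c42)]

def make_symbols(kind, n, snr_db=10):
    """Generate noisy BPSK or QPSK symbols for this toy example."""
    rng = np.random.default_rng()
    if kind == "bpsk":
        s = rng.choice([-1, 1], n).astype(complex)
    else:  # qpsk
        s = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
    noise_std = 10 ** (-snr_db / 20)
    return s + noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

X = [cumulant_features(make_symbols(k, 1024)) for k in ["bpsk", "qpsk"] for _ in range(200)]
y = [0] * 200 + [1] * 200
X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), test_size=0.3)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```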

 

AI in Deployed Systems

Performing signal detection and classification using a trained deep neural network takes a few milliseconds. Compared to iterative and algorithmic signal search, detection, and classification using traditional methodologies, this can represent several orders of magnitude in performance improvement. These gains also translate to reduced power consumption and computational requirements, and the trained models typically provide at least twice the sensitivity of existing approaches.

 

DeepSig, a US-based startup focused on signal processing and radio systems, has commercialized DL-based RF sensing technology in its OmniSIG Sensor software product, which is compatible with NI and Ettus Research USRPs. Using DL’s automated feature learning, the OmniSIG sensor recognizes new signal types after being trained on just a few seconds’ worth of signal capture.

 

 

Deep learning to aid in Army intelligence gathering

Scientists at the U.S. Army’s corporate research laboratory are developing a new algorithm that could improve image and audio identification for intelligence gathering on the battlefield. U.S. Army Combat Capabilities Development Command Army Research Laboratory scientist Dr. Michael S. Lee and co-workers are developing a deep-learning algorithm called a shortcut autoencoder that can restore single audio clips and images corrupted by various types of random noise.

 

What sets their work apart from previous studies is that they have improved applicability to 1-D signals (e.g., human speech), and are testing against stronger noise sources than usually considered, i.e., noise/signal ratios beyond 1.0. “Deep learning is well known for being able to accurately detect objects in images, but it is also capable of synthesizing realistic-looking data, such as observed in the recently popular FaceApp,” Lee said. “In our work, we use deep learning to reconstruct an image based on limited input information, for example, with only one percent of the pixel channels retained.”
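
A generic sketch in the spirit of a denoising autoencoder with a shortcut (skip) connection follows; the actual ARL architecture is not described here, so the layout, layer sizes and training signal are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Generic 1-D denoiser with a shortcut connection: the network learns a
# correction that is added back to the noisy input. Not the ARL design.
class ShortcutDenoiser(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv1d(channels, 32, 9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, 9, padding=4), nn.ReLU())
        self.decode = nn.Sequential(
            nn.Conv1d(64, 32, 9, padding=4), nn.ReLU(),
            nn.Conv1d(32, channels, 9, padding=4))

    def forward(self, noisy):
        return noisy + self.decode(self.encode(noisy))   # shortcut: predict a correction

model = ShortcutDenoiser()
clean = torch.sin(torch.linspace(0, 20, 2048)).view(1, 1, -1)
noisy = clean + 1.2 * torch.randn_like(clean)            # noise stronger than the signal
loss = nn.MSELoss()(model(noisy), clean)
loss.backward()
print(float(loss))
```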

 

Lee said his team’s model is trained with a large amount of data showing what real pictures look like, and a variant of their image model can be used to reconstruct human speech from noisy audio signals even when the noise is much louder than the signal.

 

According to Lee, target Army applications are numerous, including eavesdropping, demodulating communications in the presence of strong jammers and perception of objects in image/video that are obscured intentionally, by darkness (low-light) or by weather events such as fog and rain.

 

“In the short run, this technology could provide a ‘Zoom/Enhance’ function for intelligence analysts,” Lee said. “In the long run, this type of technology may be seamlessly integrated into a camera’s hardware for improved image quality under various scenarios such as low-light and fog.” In addition to Army applications, Lee noted that the commercial sector could benefit from this technology as well.

 

“In low-bandwidth environments, such as areas far away from cell towers, algorithms like ours could provide clearer phone calls,” Lee said. “Self-driving cars may benefit from this technology in extreme weather scenarios like rain and fog to infer what objects are ahead. Commercial video cameras will be able to operate in lower light conditions with higher frame rates and/or lower exposure times.” This work addresses challenges within the Network Command, Control, Communication and Intelligence Cross-Functional Team.

 

“Part of CCDC ARL’s mission is to explore the realm of what is possible,” Lee said. “Here, we show that beyond detection and classification, machine learning can be used for the elucidation of weak and/or noisy signals and images.” Moving into the future, Lee and his colleagues would like to explore how this method will work on data types beyond human speech and optical images, such as physical environment sensor data and wireless communication.

 

Although ML methods have the advantages of classification efficiency and performance, the feature engineering still depends to some extent on expert experience, which can degrade the accuracy rate. Therefore, a self-learning ability is very important when confronting unknown environments.

 

Adaptive and Cognitive Electronic Warfare

Current airborne electronic warfare (EW) systems must first identify threat radar to determine the appropriate preprogrammed electronic countermeasure (ECM) technique. This approach loses effectiveness as radars evolve from fixed analog systems to programmable digital variants with unknown behaviors and agile waveforms. Future radars will likely present an even greater challenge as they will be capable of sensing the environment and adapting transmissions and signal processing to maximize performance and mitigate interference effects.

 

Militaries are now looking to add “cognitive” capabilities that leverage artificial intelligence (AI) and machine learning into electronic warfare systems. The main goal is to increasingly automate and otherwise speed up critical processes, from analyzing electronic intelligence to developing new electronic warfare measures and countermeasures, potentially in real time and across large swathes of networked platforms.

 

A computer system, especially one with an ever-growing library of electronic signature data collected from a wide array of sources, could parse through that information much faster than a human, or even a team of humans depending on the volume of available intelligence, rapidly identifying items of interest for further analysis and exploitation. It may even be able to start doing some of that follow-on work by itself after isolating the important data.

 

In recent years, however, there has been a “fundamental shift” to systems that are digital and reprogrammable in nature, and that can thus adopt different frequencies, signal characteristics and waveforms to avoid being jammed. “We need to have the ability to respond to new threats, new waveforms that those systems are using that we haven’t anticipated,” Eisenberg said. “If things are changing quickly, then we need systems that can respond in similar timeframes to enable us to protect our aircraft.” “People do a lot of low-stakes applications of machine learning and artificial intelligence, but that is very different from our world where lives are on the line,” says Tranquilli, technical director for signals and communications processing at BAE. “That’s one of the big things we have to work through: bringing new capability in without bringing risks based on the ability to adapt and be cognitive.”

 

True cognitive EW systems should be able to enter an environment knowing nothing about adversarial systems, understand them, and even devise countermeasures rapidly. The goal of DARPA’s Adaptive Radar Countermeasures (ARC) program is to enable U.S. airborne EW systems to automatically generate effective countermeasures against new, unknown and adaptive radars in real time in the field. ARC technology will: isolate unknown radar signals in the presence of other hostile, friendly and neutral signals; deduce the threat posed by that radar; synthesize and transmit countermeasure signals to achieve a desired effect on the threat radar; and assess the effectiveness of countermeasures based on over-the-air observable threat behaviors.

 

Cognitive electronic warfare system market

The global cognitive electronic warfare system market is gaining widespread importance owing to the rising need for artificial intelligence enabled warfare systems for combatting dynamic threats coupled with growth in territorial conflicts and geopolitical instabilities.

 

The key market players in the global cognitive electronic warfare system market include BAE Systems, Cobham Advanced Electronics Solutions, Elbit Systems, General Dynamics Corporation, Israel Aerospace Industries, L3 Harris Technologies Inc., Leonardo S.p.A., Northrop Grumman Corporation, Raytheon Technologies Corporation, SAAB AB, Textron Inc., Thales Group, Teledyne Technologies, and Ultra Electronic Group.

 

 

 

References and resources also include:

https://new.hindawi.com/journals/wcmc/2019/5629572/

https://www.ni.com/en-in/innovations/white-papers/19/artificial-intelligence-in-software-defined-sigint-systems.html

 

 

 
