Radar offers distinct advantages over other types of sensors, including all-day, all-weather operation, long detection range and, depending on the frequency used, penetration. Moreover, radar can be carried by a wide range of platforms, spanning from classic naval and airborne platforms to more recent space-borne systems, UAVs such as drones, and high-altitude platforms (HAPs). This combination of characteristics can be exploited in military scenarios, such as target detection, tracking and recognition, and in civil scenarios, such as land use and classification, disaster assessment, and urban and non-urban monitoring, making radar an ideal sensor for dual-use applications.
The availability of frequency spectrum for multifunction radar systems has been severely compromised, and the available frequency bands are steadily shrinking. In the near future, radar systems will have to share their bandwidth with communications systems. For example, a total of 115 MHz of additional spectrum (the 1695-1710 MHz and 3550-3650 MHz bands) has been identified for wireless broadband systems. Moreover, high-UHF radar systems overlap with GSM communication systems, and S-band radars already partially overlap with Long Term Evolution (LTE) and WiMAX systems. It is very likely that radar and communications will also share the same platforms and the same antennas in dual-function radar-communications systems. This spectrum crowding clearly cannot be addressed by traditional modes of operation alone, such as spatial signal processing and beamforming. Future systems require the ability to anticipate the behavior of emitters in the operational environment and to adapt their transmissions in a cognitive fashion based upon spectrum availability.
Radar signal processing (RSP) is one of the key aspects characterizing the radar field: its development allows radar performance to be maximized and enables several capabilities, including the ability to operate in spectrally congested and contested scenarios and in complex, dynamically changing environments. Artificial intelligence (AI) has pushed research and development in many fields, including, among others, speech signal processing (SSP), computer vision (CV) and natural language processing (NLP). Given the great success of machine learning (ML) in many domains, the radar community has started applying ML-based algorithms to classic and new radar research areas, tackling traditional and new challenges from a novel perspective.
A classical adaptive radar extracts information from target and disturbance signals through appropriate signal processing algorithms and uses that information at the receive stage to improve its performance. Cognitive radars go further: they interact intelligently with their environment, adapting both transmit and receive functions based on contextual awareness and expert reasoning so as to maximize output signal-to-interference-plus-noise ratio (SINR) and signal-to-clutter ratio (SCR) for optimal target identification.
For military applications, cognitive and adaptive radars have the capability to identify potential radio and sensor jamming threats and then transmit without affecting friendly signals. They must be capable of sensing the environment and adapting transmissions and signal processing to maximize performance and mitigate interference effects in an increasingly cluttered EM environment. They must also adapt to multiple targets of interest, as well as to other radar, communication, and electronic systems that must operate without interfering with each other.
Challenges for Adaptive and Cognitive Radar
In 2017, Dr. Chris Baker and Dr. Hugh Griffiths spearheaded efforts to include a formal definition of cognitive radar in the IEEE Standard Radar Definitions: “A radar system that in some sense displays intelligence, adapting its operation and its processing in response to a changing environment and target scene. In comparison to adaptive radar, cognitive radar learns to adapt operating parameters as well as processing parameters and may do so over extended time periods.”
Radar sensors are the first stage in sensor/processor systems performing detection, localization, tracking, and classification. These functions can be improved by adapting the sensor waveform and radar system parameters using feedback from the output of the end processor. Adaptive radars can change the processing of received data as a function of time, while fully adaptive radars can additionally adapt on transmit. To further improve the efficiency of spectrum utilization, modern systems should be able to change the transmitted waveform on the fly (adaptive radar illumination, or waveform diversity). Here again, the radar should apply its cognition to extract useful information from past observed radar returns in order to select the waveform for the next transmission.
Two key challenges to the research and development (R&D) of cognitive radars are the development of assessment and evaluation tools, as well as experimental testing methodologies. Common terminology for describing and comparing the characteristics of cognitive radar
is needed. Furthermore, evaluation of cognitive radar algorithm performance requires quantitative metrics. This is not just vital for analyzing radar performance offline; it is also the basis for forming the cost or reward functions on which online optimization is based. Although system performance will still be measured in terms of standard performance metrics – such as probability of target detection and false alarm, mean square error in tracking systems, and probability of correct classification in automatic target recognition systems – cognitive systems require additional metrics that quantify the gain in performance achieved at the cost of using system resources.
The challenge of adaptive sensing is that a sensing system must contend with multiple types of topography (such as urban, rural, suburban and littoral, or combinations within a single sortie), many different types of targets (ground movers, airborne and space platforms, for example) and the shrinking RF spectrum (i.e., contested/congested) limiting the available frequencies for clean transmission. These concerns together necessitate optimal use of available resources to maximize system performance.
“One of the greatest challenges will be in the exponential increase in sensor data,” says Tammy Carter, senior product manager for OpenHPEC products for Curtiss-Wright Defense Solutions [Ashburn, Virginia]. “This will drive the need for even faster backplanes, memory/data management, and the associated challenges of reliability. This in turn will drive the need for faster and larger data recorders with more focus on security. The development cycle of these new systems will require more analysts and data scientists to design and verify the new algorithms.”
The design of the transmitted waveform has an important effect on the performance and efficiency of a radar system. Adaptive waveform design for target detection and recognition has been developed over the past decade, and recently many studies have been devoted to radar waveform optimization, write Vahid Karimi and colleagues from the Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran. In these approaches, one method is based on maximizing the signal-to-noise ratio (SNR) under a particular model of the system, interference, clutter and targets. Another approach is based on mutual information, first proposed by Bell. Bell showed that, to estimate the parameters of a target drawn from a given ensemble, the radar waveform should be designed to maximize the mutual information between the received signal and the target ensemble.
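Bell's mutual-information criterion leads to a water-filling energy allocation across frequency: transmit energy is poured into the frequency bins where the target's spectral response is strong relative to the noise. The sketch below illustrates the idea only; it omits the duration and scaling constants of the full derivation, and the target/noise values are invented.

```python
import numpy as np

def waterfill_waveform(target_var, noise_psd, total_energy, iters=60):
    """Allocate transmit energy across frequency bins to (approximately)
    maximize mutual information between the received signal and a
    Gaussian target ensemble -- a simplified water-filling sketch.

    target_var   : per-bin variance of the target's frequency response
    noise_psd    : per-bin noise power spectral density
    total_energy : transmit energy budget
    """
    # Energy is poured above a per-bin "floor" of noise_psd / target_var.
    floor = noise_psd / target_var
    lo, hi = 0.0, floor.max() + total_energy
    for _ in range(iters):                  # bisect on the water level A
        A = 0.5 * (lo + hi)
        if np.maximum(0.0, A - floor).sum() > total_energy:
            hi = A
        else:
            lo = A
    return np.maximum(0.0, 0.5 * (lo + hi) - floor)

# Example: two strong target-response bins, two weak ones
sigma2 = np.array([4.0, 2.0, 0.5, 0.1])   # illustrative target variances
noise = np.ones(4)                        # flat noise PSD
E = waterfill_waveform(sigma2, noise, total_energy=3.0)
# Energy concentrates in the bins where the target response is strongest.
```

With these numbers the two weak bins receive no energy at all, which is exactly the behavior the water-filling argument predicts.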
Staying ahead of adversaries and getting past technical challenges also means upgrading and taking into account obsolescence issues, Noah Donaldson, chief technology officer at Annapolis Micro Systems, explains. “How do customers upgrade to the latest technology quickly? Traditionally, it has taken five to 10 years to upgrade a platform, but that is too long. That’s where commonality and open standards come into play. We work closely with SOSA [Sensor Open Systems Architecture consortium] and VITA [standards organization] to develop modular open architecture products that have standardized hardware profiles. As an example, our WILDSTAR 6XB2 6U Board was developed in alignment with SOSA and VITA standards.”
Cognitive radars are systems based on the perception-action cycle of cognition that sense the environment, learn from it relevant information about the target and the background, then adapt the radar sensor to optimally satisfy the needs of their mission according to a desired goal. The new feature of a cognitive radar that differentiates it from a classical radar is the active feedback between receiver and transmitter.
The cognitive radar system mimics the perception action cycle of cognition. It senses the environment and learns from it important information about the target and the background (perception), then adapts the transmitted waveform to optimally suit the needs of its mission (surveillance, tracking, etc.) according to a desired goal (action). Cognitive radar system is capable of optimizing performance using (1) intelligent signal processing that learns from the environment; (2) receiver-to-transmitter feedback; and (3) preservation of information (i.e., memory).
In this “decision-action” phase, there are two main approaches that can be applied: (i) the Bayesian approach, which builds on prior distributions and knowledge-aided models of the environment obtained from past measurements in the same or similar environments, and (ii) the machine learning approach, which determines the next action based only on the measured data and knowledge of actions commonly taken in the same or similar environments.
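The machine learning branch of this decision-action phase can be caricatured as a multi-armed bandit: each candidate waveform is an arm, the measured post-processing quality (e.g. SINR) is the reward, and an epsilon-greedy rule balances exploiting the best-known waveform against exploring alternatives. This is a toy sketch, not a fielded algorithm; the waveform names and reward values are invented.

```python
import random

class WaveformBandit:
    """Epsilon-greedy selection among a library of candidate waveforms."""

    def __init__(self, waveforms, epsilon=0.1):
        self.waveforms = list(waveforms)
        self.epsilon = epsilon
        self.counts = {w: 0 for w in self.waveforms}
        self.value = {w: 0.0 for w in self.waveforms}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.waveforms)        # explore
        return max(self.waveforms, key=self.value.get)  # exploit

    def update(self, waveform, reward):
        self.counts[waveform] += 1
        n = self.counts[waveform]
        # incremental running mean of observed rewards
        self.value[waveform] += (reward - self.value[waveform]) / n

# Toy environment: 'nlfm' yields the best SINR-like reward in this scene
random.seed(0)
truth = {"lfm": 0.6, "nlfm": 0.9, "barker": 0.4}
bandit = WaveformBandit(truth)
for _ in range(2000):
    w = bandit.choose()
    bandit.update(w, truth[w] + random.gauss(0, 0.05))
# After many dwells, the bandit has converged on the 'nlfm' waveform.
```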
Requirements for power consumption, efficient data management, and limited space are playing a role in designing these systems. Essentially, some U.S. Department of Defense (DoD) requirements include the addition of “higher-density solutions to support multielement antenna arrays, where each element requires a separate signal processing channel for phased array beam steering,” Hosking says. “Each element may require both receive and transmit functions to support radar and EW countermeasures applications. While size and weight of the electronics for each channel are important, so are power requirements and cost per channel.”
In addition, users want “to install these functions as close to the antenna as possible to minimize cabling that imposes performance penalties due to signal loss and interference,” Hosking continues. “By incorporating RF circuitry, data conversion, and DSP [digital signal processing] functions in a housing near the antenna, wideband digitized signals can be delivered through optical cables, maintaining optimum signal fidelity.”
Current research on cognitive radar aims not only at developing the adaptive hardware and analytical techniques necessary to enable two-way interaction of the radar with its environment for performance optimization, but also at leveraging advances in fields such as stochastic control, optimization, machine learning, and artificial intelligence (AI) to develop engineering analogues to a wide range of cognitive processes. Cognitive radar builds on many research disciplines, including adaptive radar, knowledge-based processing, waveform optimization and adaptation, machine learning and pattern recognition, and adaptive spectrum sensing.
Recent advances in hardware capable of generating arbitrary (phase and amplitude) waveforms, high-performance computing resources such as FPGAs, gigasample-per-second A/D and D/A converters, and machine learning algorithms are further drivers of cognitive and adaptive radar.
Adaptive Signal Processing
Earlier solutions for increased adaptivity were implemented through adaptive revisit time scheduling, adaptive selection of detection thresholds (e.g. constant false alarm rate (CFAR) detectors), and adaptive clutter suppression with space-time adaptive processing (STAP) for improved target detection.
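The cell-averaging CFAR detector, the simplest of the adaptive-threshold family mentioned above, estimates the local noise level around each cell under test and scales it to hold the false-alarm rate constant. A minimal sketch (parameters and the test signal are illustrative):

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, pfa=1e-4):
    """Cell-averaging CFAR: for each cell under test, average the
    surrounding training cells (skipping guard cells) to estimate the
    noise level, and detect if the cell exceeds a scaled threshold."""
    n = 2 * num_train                       # total training cells
    alpha = n * (pfa ** (-1.0 / n) - 1.0)   # scaling for the desired Pfa
    detections = np.zeros_like(power, dtype=bool)
    half = num_train + num_guard
    for i in range(half, len(power) - half):
        lead = power[i - half : i - num_guard]        # leading window
        lag = power[i + num_guard + 1 : i + half + 1]  # lagging window
        noise = (lead.sum() + lag.sum()) / n
        detections[i] = power[i] > alpha * noise
    return detections

# Exponential (square-law detected) noise with one strong target at bin 60
rng = np.random.default_rng(1)
x = rng.exponential(1.0, 128)
x[60] += 40.0
hits = ca_cfar(x)
```

The threshold scaling `alpha` follows from the exponential noise model; other clutter statistics would call for OS-CFAR or similar variants.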
Adaptive tracking techniques have varied the measurement times as well as signals used for track updates, based on measurements acquired by a tracker. This feedback loop is used to control the radar such that frequent measurements are made during unpredictable, or rapid dynamic maneuvers, while infrequent measurements are made during predictable periods or steady dynamics.
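That feedback loop can be sketched as a simple scheduling rule: shrink the revisit interval as predicted track uncertainty grows, stretch it when the track is steady. The thresholds and the linear mapping below are purely illustrative.

```python
def revisit_interval(position_sigma, max_sigma=50.0,
                     t_min=0.1, t_max=5.0):
    """Pick the next track-update interval (seconds) from the predicted
    position uncertainty (meters): revisit sooner when uncertainty grows
    (maneuver), later when the track is predictable."""
    # Fraction of the allowed uncertainty budget already consumed
    load = min(position_sigma / max_sigma, 1.0)
    return max(t_min, t_max * (1.0 - load))

steady = revisit_interval(position_sigma=5.0)     # predictable track
maneuver = revisit_interval(position_sigma=45.0)  # rapid maneuver
# The steady track is revisited far less often than the maneuvering one.
```

In a real tracker the uncertainty would come from the predicted covariance of a Kalman or IMM filter rather than a single scalar.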
Experimental radar systems, such as the U.K. Multi-function Electronically Scanned Adaptive Radar (MESAR), the U.S. Advanced Multifunction RF System (AMRFS), and the Royal Canadian Navy (RCN) Active Phased Array Radar (APAR), among others, have been used as platforms for demonstrating the new ideas being developed.
Simultaneously, the concept of altering the intra-pulse waveform modulation based on the measurements provided by the tracker was also beginning to be explored. This led to the development of methods for optimal waveform selection and adaptive extensions thereof. The term waveform diversity, first introduced in 2002 by Dr. Michael Wicks, has become a focal point for research into cognitive radar and is defined in the IEEE Standard 686 as the “optimization (possibly in a dynamically adaptive manner) of the radar waveform to maximize performance
according to particular scenarios and tasks” including exploitation of multiple domains, such as “the antenna radiation pattern (both on transmit and receive), time domain, frequency domain, coding domain and polarization domain.” Examples include waveform selection from among multiple waveform classes, e.g. linear or non-linear frequency modulation (LFM/NLFM), phase or frequency coding, and ultrawideband waveforms.
In 2002, the Defense Advanced Research Projects Agency (DARPA) initiated the Knowledge-Aided Sensor Signal Processing and Expert Reasoning (KASSPER) Program to more broadly address the challenge of minimizing sensor deficiencies through exploitation of prior knowledge. Since then, this concept has been applied to numerous other radar problems, such as 2D autofocus for spotlight SAR, tracking, ground moving target indication (GMTI), and radar identification.
The crystallization of cognitive radar as a formal concept for next-generation radar reflects a conscious evolution in design that incorporates more and more features of human cognitive capabilities into the radar architecture to achieve increased autonomy and performance optimization in dynamically changing environments. It thus provides a vision for building upon the designs of existing radar systems, some of which may now in retrospect be recognized as having some cognitive characteristics.
The term “cognitive radar” itself was first coined by Dr. Simon Haykin, whose formulation drew heavily on ideas developed by Fuster in cognitive neuroscience. Haykin’s work built upon earlier work in cybernetics, artificial neural networks, self-organized learning, and Bayesian decision theory to propose engineering analogues for four of the main cognitive features identified by Fuster: the perception-action cycle (PAC), memory, attention and intelligence.
Tangible implementations of transmit adaptivity and controlled illumination were enabled by advances in electronics, embedded computing, adaptable RF components (amplifiers, filters), small, low-cost, low-power RF transceivers, and software-defined radio platforms.
In the early 2000s, two programs initiated by the U.S. Air Force Office of Scientific Research (AFOSR) and DARPA stimulated research that would serve as an important precursor to cognitive radar: namely, the Multi-disciplinary University Research Initiative (MURI) “Waveform Diversity for Full Spectral Dominance” Program and the “Waveform Agile Sensing and Processing” Program. The aim of these programs was to devise methods for optimizing radar performance under time-varying environmental conditions, including a capability to respond to unknown dynamic target parameters through waveform agility. Together, the two programs advanced the requisite mathematical foundations, incorporating the resulting theories into a systems design perspective.
AFOSR also initiated the Dynamic Data Driven Application Systems (DDDAS) program, defining the DDDAS concept as “the ability to dynamically incorporate additional data into an executing application, and in reverse, the ability of an application to dynamically steer the measurement (instrumentation and control) components of the application system.” Efforts are focused on four specific science and technology frontiers: 1) applications modeling, 2) advances in mathematical and statistical algorithms, 3) application measurement systems and methods, and 4) software infrastructures and other systems software.
This is complemented by the U.S. Air Force Research Laboratory’s program in Fully-Adaptive Radar, led by Dr. Muralidhar Rangaswamy, which aims to close the loop on the radar operation at multiple levels in an attempt to bring to bear the sense-learn-adapt (SLA) paradigm to maximize system performance by making adaptive and optimal use of all available degrees of freedom.
Advanced Noncooperative Target Recognition Algorithms
Today’s radar technology is being asked not only to detect targets, but also to characterize them as large or small, friendly or hostile, and sometimes even to provide radar images of the target. One of the modes of many modern radars is Non-Cooperative Target Recognition (NCTR), in which the radar must broadly distinguish classes such as fighters, transport aircraft, missiles, and helicopters. Advanced algorithms based on spectral signatures for identification, as well as on imaging methods, are required to classify the targets.
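One of the simplest signature-based NCTR schemes matches an observed high-resolution range profile against stored class templates by normalized correlation. The sketch below is purely illustrative; the class names and template values are invented, and operational classifiers use far richer features and models.

```python
import numpy as np

def classify_range_profile(profile, templates):
    """Nearest-template NCTR sketch: normalize the observed range
    profile, correlate it against each stored class template, and
    return the class label with the highest correlation."""
    p = profile / np.linalg.norm(profile)
    best, score = None, -np.inf
    for label, t in templates.items():
        t = t / np.linalg.norm(t)
        c = float(np.dot(p, t))      # normalized correlation
        if c > score:
            best, score = label, c
    return best

# Invented 5-bin templates standing in for high-resolution range profiles
templates = {
    "fighter":    np.array([1.0, 3.0, 1.0, 0.2, 0.1]),
    "transport":  np.array([2.0, 2.0, 2.0, 2.0, 1.5]),
    "helicopter": np.array([0.5, 4.0, 0.5, 0.1, 0.1]),
}
obs = np.array([0.9, 3.2, 1.1, 0.3, 0.1])  # noisy fighter-like return
label = classify_range_profile(obs, templates)
```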
“Algorithms and logic do the tuning and updating of the digital signal processing,” says Jon Friedman, aerospace and defense marketing manager at The MathWorks. “Any intelligent system is only as intelligent as what you train it on, so it’s critical to have the environment to do that.”
AI & Machine Learning
Artificial intelligence and machine learning techniques are helping to automate tasks to ease the end user’s workload. “The increase in available memory on the processor, communication bandwidth for passing data, and overall throughput in embedded signal processing systems enable artificial intelligence for radar and EW applications,” Carter says. “A deep learning reference optimizer and runtime, NVIDIA’s TensorRT, can help radar and EW applications achieve low latency and higher throughput.”
Ultrafast Smart FPGAs
Future adaptive and cognitive radars will likely rely on FPGAs for radar signal and data processing, not only because of the FPGA’s ability to process quickly changing information, but also because of its ability to be reprogrammed on the fly to adapt to changing operating conditions. “FPGAs are the only type of programmable computing device with very low latency and predictable response,” says Dan Veenstra, product manager of sensor processing platforms at GE Intelligent Platforms in Ottawa.
Systems designers also are looking at new generations of smart FPGAs with embedded microprocessors not only to increase performance, but also to shrink electronic components in the interests of saving size, weight, and power (SWAP), Veenstra says.
FPGA technology also continues to improve. Modern FPGAs contain significantly more logic, provide more computational power per watt, and support high-speed data streaming at up to 150 Gb/s with dedicated IP blocks. The increased computational capability of today’s FPGAs opens the door for innovative techniques that simply weren’t possible five years ago.
One area of innovation that is enabled by new FPGA technology is the application of machine learning techniques within cognitive radar. These techniques make radars more responsive to their environments so they provide more actionable insight. Instead of operating modes that are pre-programmed (searching mode, tracking mode, etc.), machine learning allows radars to automatically adapt to the best operating parameters, including operating frequency and waveform types. Machine learning also unlocks capabilities such as automatic target recognition (ATR) and facilitates knowledge-aided operation.
Gigasample-per-Second A/D and D/A Converters
The performance of digital receivers used in modern radar, communication, and surveillance systems is often limited by the performance of the analog-to-digital converter (ADC) used to digitize the received signal. Ultrafast ADCs are critical in military applications such as military software-defined radio, radar, and electronic counter-warfare (ECW) that require high sampling rates and large bandwidths.
Adaptive and cognitive radars are expected to be based on a software-defined radio (SDR) architecture that connects a high-performance analog-to-digital converter (ADC) to the antenna and moves many of the typical RF functions, such as filtering, demodulation, and other processing, into the digital realm.
Converter technology continues to evolve every year. Today’s analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) from major semiconductor companies sample at rates orders of magnitude faster than that of their predecessors five years ago at comparable resolutions. The increased resolution of these high-speed ADCs provides radars with higher dynamic range and wide instantaneous bandwidth.
Dynamic range is a critical factor for maximum operating range; for example, it enables fifth-generation fighters such as the F-35 to identify targets much farther out. More instantaneous bandwidth provides several advantages, including increased spatial resolution through pulse compression and the ability to implement advanced techniques such as low probability of intercept (LPI) radar.
Another trend enabled by wider bandwidth is sensor fusion, during which you can allocate a single signal chain to multiple functions. For example, you can use a wideband sensor as both a communications system and a radar simultaneously by splitting multiple waveform types across multiple frequency bands.
A Gsps-capable ADC makes it possible to combine multiple narrowband and wideband channels into one ultra-wideband channel. This moves formerly analog channelization onto the FPGA, where frequencies and bandwidths can be dynamically controlled with software to maximize system flexibility and reconfigurability.
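Software channelization amounts to a digital downconverter per channel: mix the band of interest to baseband, low-pass filter, then decimate. The numpy sketch below is a crude stand-in for an FPGA implementation, which would use an NCO and a proper FIR filter rather than a boxcar average; all frequencies are illustrative.

```python
import numpy as np

def extract_channel(wideband, fs, f_center, decim):
    """Digital downconversion sketch: mix a wideband capture so that
    f_center lands at DC, low-pass with a moving average, and decimate."""
    n = np.arange(len(wideband))
    baseband = wideband * np.exp(-2j * np.pi * f_center / fs * n)
    # crude low-pass: boxcar average over the decimation factor
    kernel = np.ones(decim) / decim
    filtered = np.convolve(baseband, kernel, mode="same")
    return filtered[::decim]

fs = 1.0e9                                 # 1 GS/s wideband capture
n = np.arange(4096)
tone = np.cos(2 * np.pi * 200e6 / fs * n)  # a channel sitting at 200 MHz
chan = extract_channel(tone, fs, f_center=200e6, decim=16)
# After mixing, the 200 MHz tone sits near DC in the extracted channel.
```

Because the mixing frequency and decimation factor are just software parameters, retuning a channel is a register write rather than an analog hardware change, which is the flexibility argument made above.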
D/A converters, the devices that output signals from digital signal processing to the transmit antennas, are equally critical. The TDAC-25 from Tektronix Component Solutions in Beaverton, Ore., is a 10-bit D/A converter packaged as an application-specific integrated circuit (ASIC) that offers performance of 25 gigasamples per second.
Additionally, many semiconductor companies are releasing ADCs and DACs called “direct RF sampling converters,” such as the TI ADC12DJ3200, which acquires data at rates up to 6.4 GS/s. With 12-bit resolution at these sample rates, RF sampling converters can directly convert RF input signals up to C band without upconversion or downconversion. As converters continue to evolve, future radars will benefit from direct RF sampling in both the C and X bands.
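Direct RF sampling works by deliberately placing the carrier in a known Nyquist zone and letting the fold-over do the downconversion. A small sketch of the zone and fold-frequency arithmetic (the 5.4 GHz carrier and 6.4 GS/s rate are illustrative values, not a specific converter's datasheet configuration):

```python
def nyquist_zone(f_signal, fs):
    """Which Nyquist zone an RF input falls into at sample rate fs.
    Zone 1 spans 0..fs/2, zone 2 spans fs/2..fs, and so on."""
    return int(f_signal // (fs / 2.0)) + 1

def alias_frequency(f_signal, fs):
    """Apparent (folded) frequency after sampling at fs."""
    f = f_signal % fs
    return f if f <= fs / 2.0 else fs - f

# A 5.4 GHz (C-band) carrier sampled at 6.4 GS/s lands in the second
# Nyquist zone and folds down to a 1.0 GHz digital IF.
zone = nyquist_zone(5.4e9, 6.4e9)
f_if = alias_frequency(5.4e9, 6.4e9)
```

In practice the usable input frequency is bounded by the converter's analog input bandwidth, not just the sample rate.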
Direct RF sampling architectures will revolutionize AESA radars. In a fully active array, each antenna element requires its own ADC and DAC. That means if the ADCs and DACs cannot directly sample at the radar’s operating frequency, each transmit-receive module (TRM) also requires its own upconversion/downconversion stage. This leads to increased design costs, size, and variation in performance. But you can reduce costs, size, and complexity by using a direct RF sampling architecture to simplify the RF front-end architecture by eliminating the mixer and local oscillator (LO). With such a large array of transmitters and receivers, direct RF sampling architectures can significantly increase channel density and reduce the cost per channel.
High-Bandwidth Data Buses for Sensor Fusion
Another key trend is the increasing reliance on higher-bandwidth data buses such as PCI Express Gen 3, 40/100 GbE, Fibre Channel, and Xilinx Aurora to move high-bandwidth sensor data back to centralized processors for computation. For example, the F-35’s integrated core processor aggregates data from multiple ISR sensors to enable processing on an aggregate set of data. This provides pilots with better situational awareness.
At the heart of this trend is the evolution of high-speed serial transceiver technology (also referred to as multigigabit transceivers or MGTs). This technology has progressed rapidly in recent years, with current line rates topping out at 32 Gbps per lane; 56 Gbps with PAM4 is on the horizon. FPGAs are mainly thought of as processing resources, but they also contain some of the most sophisticated MGTs, which makes them ideal targets for sensor development.
Gallium Nitride for Front-End Components
Gallium Nitride (GaN), considered by some to be the biggest semiconductor innovation since silicon, is a material that is capable of operating at a much higher voltage than conventional semiconductor material. Higher voltage means better efficiency, so RF power amplifiers and attenuators using GaN consume less power and produce less heat. As more GaN-based RF component suppliers enter the market with production-ready, reliable products, the use of GaN-based amplifiers has increased.
This technology is important for the evolution of active electronically scanned array (AESA) radar systems. An AESA is a fully active array with hundreds or thousands of antennas, each with its own phase and gain control. Using a phased array of transmitters and receivers, these radar systems steer beams electronically without physically moving the antenna. These types of radar systems are growing in popularity because of their increased power on target, spatial resolution, and improved robustness compared with other conventional radars. For example, if one element in the array fails, the radar continues to operate. The increased use of GaN amplifiers in AESA radars should offer better performance, achieving equivalent output power with smaller form factors and requiring less cooling.