Remote sensing is the science of acquiring information about the Earth’s surface without actually being in contact with it. This is done by sensing and recording reflected or emitted energy and processing, analyzing, and applying that information. Unlike optical satellites that capture reflected sunlight to produce detailed photos of Earth, synthetic aperture radar (SAR) satellites bounce radar signals off the ground and record the reflections to create images. This allows radar satellites to collect high-resolution imagery day or night, regardless of cloud cover. This unique imaging capability makes SAR particularly useful for time-critical applications including change detection after natural disasters and identifying illegal fishing operations. Investments by the commercial and government sectors are leading to rapid growth in earth observation satellites and remote-sensing data.
Since before Russia’s invasion of Ukraine, imaging, remote-sensing, and communications satellites have been informing the public and helping keep Ukrainian forces and civilians connected. An unprecedented release of commercial satellite imagery of the invasion, and the rapid sharing of that intelligence, was facilitated by U.S. intelligence agencies that were already familiar with the private sector’s capabilities and how they could be applied, a U.S. intelligence official said April 6.
“We partner with over 100 companies, we’re currently using imagery from at least 200 commercial satellites and we have about 20 or so different analytic services in our pipeline,” David Gauthier, director of commercial and business operations at the National Geospatial-Intelligence Agency (NGA), said during a panel discussion at the 37th Space Symposium.
The bad weather and heavy cloud cover over Ukraine became a problem for optical imaging satellites, which use visible, near-infrared, and short-wave infrared sensors to produce photographic images. Those satellites can’t see through clouds, so NGA turned to commercial operators of synthetic aperture radar (SAR) satellites, which can penetrate cloud cover and shoot pictures at night.
“We took commercial SAR, which was in our testing and evaluation pipeline, and we brought it directly to operations,” said Gauthier. “And we increased our purchasing power fivefold and started buying SAR capabilities all over the battlefield because of weather, quite honestly.”
Constellations of small satellites equipped with synthetic aperture radar (SAR) payloads can deliver observations at short revisit intervals, independent of daylight and weather conditions; the technology is still in the early stages of development.
SAR is a powerful tool that enables remote monitoring of the health and security of our oceans and coastal regions. SAR’s applications are many, ranging from geology to crop monitoring, from sea-ice measurement to disaster monitoring and vessel traffic surveillance, not to mention military applications (many civilian SAR satellites are, in fact, dual-use systems). Rapidly evolving natural disasters, such as floods, wildfires, and volcanic eruptions, require frequent, ongoing monitoring, and this is an inherent strength of SAR technology.
Recently, the use of machine learning, and specifically convolutional neural networks, has demonstrated significant advances in maritime object detection in SAR images. Once objects are detected, machine learning methods have shown some success at maritime object classification, but they remain brittle.
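For context, classical (pre-CNN) maritime object detection in SAR commonly uses constant false alarm rate (CFAR) detection, which flags pixels much brighter than their local clutter background. A minimal cell-averaging CFAR sketch in NumPy follows; the window sizes, threshold factor, and toy scene are illustrative assumptions, not drawn from any particular system:

```python
import numpy as np

def ca_cfar(img, guard=2, train=4, factor=5.0):
    """Cell-averaging CFAR: flag pixels much brighter than the mean of a
    surrounding training ring (guard cells and cell under test excluded)."""
    h, w = img.shape
    r = guard + train
    det = np.zeros_like(img, dtype=bool)
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1].copy()
            # zero out the guard region and the cell under test
            window[train:-train, train:-train] = 0.0
            n_train = (2 * r + 1) ** 2 - (2 * guard + 1) ** 2
            background = window.sum() / n_train
            det[i, j] = img[i, j] > factor * background
    return det

# Toy scene: exponential (speckle-like) sea clutter with one bright point target.
rng = np.random.default_rng(0)
scene = rng.exponential(scale=1.0, size=(64, 64))
scene[32, 32] = 100.0  # bright "vessel" return
hits = ca_cfar(scene)
print(hits[32, 32], hits.sum())
```

The constant-false-alarm property comes from thresholding against the locally estimated background rather than a fixed level, which is why CFAR remains a common first stage ahead of any learned classifier.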
Machine learning methods rely on large numbers of example images of an object to learn (train) to
identify that object in future images. Object classification methods typically fail when asked to classify an
object that is in a state that was not previously seen in the training data. For stationary objects,
such as many terrestrial objects of interest, large training sets can be acquired over time to cover
many of the possible imaging variations. Maritime environments are much more challenging
because most objects of interest and the background scene are always in motion. This limits
today’s classification techniques to broad classes of objects. In addition, many interesting
applications of terrestrial SAR, such as change detection and optical-SAR registration, are not yet
possible in maritime environments.
DARPA launched the Fiddler program in March 2022 with the objective of improving automatic object recognition in SAR images. Object recognition often requires many example images to train machine learning (ML) classification algorithms, and obtaining training data can be time-consuming, expensive, and, in dynamic conditions, even impossible.
DARPA is soliciting innovative research proposals to develop novel algorithms that can rapidly create synthetic imagery of targets for training Automatic Target Recognition (ATR) algorithms that operate on Synthetic Aperture Radar (SAR) imagery. The effort will develop software that can train on a sparse set of SAR imagery of an object, such as a commercial ship, and use the results of that training to create imagery of the same object under different viewing conditions.
The use of machine learning and computer vision methods to generate training data for dynamic
maritime environments is of particular interest to this program. Performer methods will learn
from real SAR images to generate or render synthetic SAR images at new imaging geometries
and configurations. Performers will then demonstrate the generation of diverse training data
from a few real examples, to rapidly train robust SAR object detection methods.
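Performers will presumably use learned generative or rendering methods for this. As the crudest possible stand-in, the idea of expanding a few real chips into a diverse training set can be sketched with geometric transforms plus multiplicative speckle, the characteristic noise model for SAR; every parameter here is an illustrative assumption:

```python
import numpy as np

def synthesize_chips(chip, n=8, seed=0):
    """Expand one real SAR chip into n synthetic training examples using
    90-degree rotations, flips, and multiplicative speckle. A crude
    stand-in for the learned image generation Fiddler targets."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n):
        x = np.rot90(chip, k=rng.integers(4))  # coarse aspect-angle change
        if rng.random() < 0.5:
            x = np.fliplr(x)
        # SAR speckle is multiplicative; gamma-distributed for multilook data
        looks = 4
        x = x * rng.gamma(shape=looks, scale=1.0 / looks, size=x.shape)
        out.append(x)
    return np.stack(out)

chip = np.zeros((16, 16)); chip[6:10, 2:14] = 1.0  # toy vessel return
augmented = synthesize_chips(chip, n=8)
print(augmented.shape)
```

Real performer methods must go far beyond this, since new grazing angles, resolutions, and frequencies change the scattering itself rather than just the image geometry, but the pattern of one sparse input fanning out into a diverse labeled set is the same.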
Software for Synthetic Image Generation
Performers will develop software and algorithms to produce diverse SAR-image training examples
of an object from a sparse set of real image examples. Separating training-data generation from
the subsequent machine-learning application is important for scaling this capability across new
sensors and applications.
The internal structure of the software module will be defined by the performer. It is envisioned
to contain the following portions:
- An object-learning portion that learns relevant features of a new object from real SAR image
  examples. Approaches could include statistical feature extraction, autoencoders, volumetric
  methods, and inverse imaging.
- An object model that stores the relevant information learned about the object. Examples could
  include a latent feature space, a physical feature space, a 3D point cloud, coordinate-based
  neural representations, or surface models.
- A new-image generation portion that generates or renders new SAR images at requested imaging
  parameters. Examples could include generative networks, electromagnetic simulation, and
  neural rendering.
A common interface to the software module will be required to allow for testing and evaluation.
The interface will support initializing, loading examples, training, requesting new images, and
storing the current state. The specific details of the interface will be established early in the
program’s execution. All complex SAR imagery used or created by this software will be in the
Sensor Independent Complex Data (SICD) format or a similar common format.
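The interface requirements above leave the module internals entirely to performers. A minimal sketch of what such an interface and a trivial placeholder implementation might look like in Python follows; every method name, signature, and the mean-chip "model" are illustrative assumptions, not program definitions:

```python
import numpy as np
from abc import ABC, abstractmethod

class SyntheticImageModule(ABC):
    """Sketch of the envisioned common interface: load examples, train,
    request new images, store state. Names are assumptions."""

    @abstractmethod
    def load_examples(self, chips):             # real SAR chips (SICD-derived arrays)
        ...

    @abstractmethod
    def train(self):                            # build the internal object model
        ...

    @abstractmethod
    def generate(self, aspect_deg, graze_deg):  # render at a requested geometry
        ...

    @abstractmethod
    def save_state(self):                       # serialize the learned model
        ...

class MeanChipModule(SyntheticImageModule):
    """Trivial placeholder for exercising a test harness: the 'object
    model' is just the mean of the loaded chips."""
    def __init__(self):
        self.chips, self.model = [], None
    def load_examples(self, chips):
        self.chips.extend(chips)
    def train(self):
        self.model = np.mean(self.chips, axis=0)
    def generate(self, aspect_deg, graze_deg):
        # coarse 90-degree rotation as a stand-in for real geometry handling
        return np.rot90(self.model, k=int(aspect_deg // 90) % 4)
    def save_state(self):
        return {"model": self.model.tolist()}

mod = MeanChipModule()
mod.load_examples([np.ones((4, 4)), 3 * np.ones((4, 4))])
mod.train()
print(mod.generate(aspect_deg=90, graze_deg=30).shape)
```

Defining the boundary this way is what lets the government swap test harnesses, sensors, and downstream ATRs without touching performer internals.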
Synthetic Images and Synthetic Training Sets
Image generation already has utility in many applications, such as data visualization and
emulation. For Fiddler, however, it could also generate SAR scenes at specified look angles and
resolutions, derived from other SAR images taken at fixed look angles, resolutions, and
frequencies. This would enable a more fluid interaction with the data instead of confining
analysis to batches of individual static images. Under this program, performers will also develop
an ATR algorithm to demonstrate the utility of synthetic image generation. The goal is to
increase the probability of correct identification.
The primary measure of performance will be how well the synthetic image generation module
can recreate a diverse data set from a sparse sampling of that data set. While qualitative and
statistical metrics can be useful for measuring image quality and differences, they are not good
metrics for measuring the utility of the generated data. To enable a quantitative evaluation, an
ATR will be used to classify images from both the original dataset and the synthetic dataset.
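The evaluation idea above, scoring synthetic data by how well an ATR performs on it rather than by image-quality metrics, can be sketched as a probability-of-correct-identification comparison. The stand-in ATR and data below are invented purely for illustration:

```python
import numpy as np

def prob_correct_id(atr, chips, labels):
    """Probability of correct identification: the fraction of chips
    the ATR labels correctly."""
    preds = [atr(c) for c in chips]
    return float(np.mean([p == t for p, t in zip(preds, labels)]))

# Stand-in ATR: calls any chip with a bright extended return a "ship".
atr = lambda chip: "ship" if chip.sum() > 8 else "clutter"

real      = [np.ones((4, 4)), np.zeros((4, 4))]           # original dataset
labels    = ["ship", "clutter"]
synthetic = [0.9 * np.ones((4, 4)), 0.1 * np.ones((4, 4))]  # generated variants

pid_real  = prob_correct_id(atr, real, labels)
pid_synth = prob_correct_id(atr, synthetic, labels)
print(pid_real, pid_synth)
```

A small gap between the two scores suggests the generated set preserved the features the ATR relies on; a large gap flags synthetic data that looks plausible but has lost discriminative content.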
In addition to normalized performance, the performer’s software will also be evaluated on the
time it takes to learn a new object from examples and to produce new images. Fast generation of
diverse training images will enable fast ATR development and deployment.
Maritime environments are the primary focus of this program. Objects-of-interest include ships
and other objects found in and around littoral waters such as breakwaters, wind farms, piers, ice,
islands, and reefs. Terrestrial and coastal objects may also be explored as data sets become available.
Data will be post-detection in the form of image chips (X by Y), where X and Y can each range
from 128 to 512 pixels. A variety of scenarios and objectives will be evaluated with the test
harness. These can include object type identification, specific object identification, new object
identification, etc. Testing scenarios will be developed with performer input once the program is underway.