
Wide-area Airborne Motion Imagery (WAMI) offers the military persistent, real-time surveillance for enhanced situational awareness through an intelligent, airborne sensor system

The DoD has become increasingly reliant on intelligence, surveillance, and reconnaissance (ISR) applications. With the advent of expanded ISR capabilities, there is a pressing need to dramatically expand the real-time processing of wide-area, high-resolution video imagery, especially for target recognition and for tracking large numbers of objects. Not only is the volume of sensor data increasing exponentially; the complexity of analysis, reflected in the number of operations per pixel per second, is also rising dramatically. These expanding processing requirements for ISR missions, as well as other DoD sensor applications, are quickly outpacing the capabilities of existing and projected computing platforms.
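To make that scaling concrete, here is a back-of-envelope estimate of the compute such a sensor implies. The frame rate and per-pixel workload below are illustrative assumptions, not figures from any specific program:

```python
# Back-of-envelope compute estimate (all numbers are illustrative assumptions).
pixels = 1.8e9        # a 1.8-gigapixel wide-area sensor
fps = 12              # assumed frame rate (frames per second)
ops_per_pixel = 1000  # assumed analysis workload per pixel per frame

throughput = pixels * fps * ops_per_pixel  # total operations per second
print(f"Required compute: {throughput:.2e} ops/s")  # ~2.16e13 ops/s (~22 TOPS)
```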


Traditional aerial spycraft systems, whether on drones, spy planes, or helicopters, operate a bit like telescopes: they’re really good at staring at one individual target in very high fidelity. What this new technology proposes is precisely the opposite. The idea is to watch everything at once, to view the full picture, says ‘Eyes in the Sky’ author Arthur Holland Michel. When you do that, you gain all sorts of new powers. You can watch multiple targets simultaneously. You can see the relationships between targets, to judge whether they’re part of the same adversary group. You can even see what happened when you weren’t paying attention.


Wide-area motion imagery (WAMI) sensor payloads and processing solutions provide real-time, activity-based intelligence in both tactical and strategic environments. They help analysts observe vehicle tracks and traffic, study patterns of life, identify normal behavior and nodes of activity, and track trends to anticipate future behavior.


WAMI is also useful to law enforcement agencies for tracking organized crime and following suspects in shootings, even down to [looking for] illegal dumping.


DARPA developed the advanced drone surveillance system ARGUS-IS (Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System) through a 30-month, $18.5 million contract awarded to BAE Systems. The mission of the ARGUS-IS program is to provide users a flexible and responsive capability to find, track, and monitor events and activities of interest on a continuous basis in areas of interest. The system uses 368 cell-phone camera sensors of 5 megapixels each, focused on the ground through four telescopic lenses. Together, these cameras make up a staggering 1.8-gigapixel resolution, allowing observation of targets as small as 6 inches: good enough to resolve humans, animals, and even birds from an altitude of 6,000 meters.
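The headline numbers can be checked with a couple of lines of arithmetic, using only the figures quoted above:

```python
sensors = 368        # cell-phone camera chips behind the four lenses
mp_per_sensor = 5e6  # 5 megapixels each
total_pixels = sensors * mp_per_sensor
print(f"Composite resolution: {total_pixels / 1e9:.2f} gigapixels")  # ~1.84 GP

# Angular size of a 6-inch (~0.15 m) target viewed from 6,000 m altitude,
# i.e. roughly the angular resolution each pixel must achieve:
angle_urad = 0.15 / 6000 * 1e6
print(f"Target subtends about {angle_urad:.0f} microradians")  # 25 µrad
```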


What are the technologies that enable WAMI?

Originally, the idea was literally to bolt a number of separate digital cameras to an aircraft, then feed their outputs into a processing system that stitched them together in software. This soon hit a wall. The more cameras you bolt together, the larger the whole system becomes—and you’re talking about aircraft with limited carrying capacity.


The next generation got around that problem by ingeniously taking the tiny cameras that you can find in any smartphone and making a mosaic of hundreds of them on a single wafer. That wafer is then placed behind a lens, with the system acting like a composite of hundreds of individual cameras, working simultaneously.


But the more pixels you generate, the harder it is to stitch all the images together. Engineers turned to graphics processing units (GPUs), more commonly found in gaming consoles such as the Xbox, which proved ideal for quickly stitching vast numbers of pixels into composite images.
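As a toy illustration of why this parallelizes so well, the sketch below places a grid of camera tiles into one composite frame with NumPy. Every output pixel is independent of the others, which is exactly the structure GPUs exploit; a real pipeline also registers, warps, and blends the tiles, all of which this sketch skips:

```python
import numpy as np

def stitch_tiles(tiles, grid_rows, grid_cols):
    """Place a grid of equally sized camera tiles into one composite frame.

    A real WAMI pipeline also registers, warps, and blends the tiles;
    GPUs help because every output pixel can be computed independently.
    """
    tile_h, tile_w = tiles[0].shape
    mosaic = np.zeros((grid_rows * tile_h, grid_cols * tile_w), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, grid_cols)
        mosaic[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = tile
    return mosaic

# Toy usage: four fake 480x640 "camera" tiles stitched into a 2x2 mosaic.
tiles = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(4)]
frame = stitch_tiles(tiles, grid_rows=2, grid_cols=2)
print(frame.shape)  # (960, 1280)
```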


A lot of CCTV cameras, which also have the “soda straw” [narrow field of view] problem, are now coming equipped with wide-area capabilities. This means that instead of having to select one specific area to point at, or having multiple cameras dotted around, you can have a single camera that provides a very high resolution, in some cases 360-degree, view.


In a city where the CCTV coverage is very dense, that essentially creates a WAMI system where an entire city is watched continuously and unblinkingly.

DARPA’s Unconventional Processing of Signals for Intelligent Data Exploitation (UPSIDE) program

The Unconventional Processing of Signals for Intelligent Data Exploitation (UPSIDE) program seeks to break the status quo of digital processing with methods of video and imagery analysis based on the physics of nanoscale devices. UPSIDE processing will be non-digital and fundamentally different from current digital processors and the power and speed limitations associated with them.


Instead of traditional complementary metal–oxide–semiconductor (CMOS)-based electronics, UPSIDE envisions arrays of physics-based devices (nanoscale oscillators are one example) performing the processing. These arrays self-organize and adapt to inputs, meaning that they do not need to be programmed in the same way digital processors are. Unlike traditional digital processors that operate by executing specific instructions to compute, UPSIDE arrays will rely on a higher-level computational element based on probabilistic inference embedded within a digital system.
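DARPA has not published the device designs here, but the flavor of "computation by physics" can be illustrated with a classic toy model of coupled oscillators (the Kuramoto model): the array is not programmed step by step, it simply settles into a synchronized state, and that settled state is the answer. This is an illustration of the principle only, not UPSIDE's actual architecture:

```python
import numpy as np

# Toy Kuramoto model: n coupled oscillators with random natural frequencies.
# Coupling pulls their phases together over time; the array "computes" a
# consensus by relaxing into a physical steady state rather than by
# executing instructions.
rng = np.random.default_rng(0)
n, coupling, dt, steps = 50, 2.0, 0.01, 2000
freq = rng.normal(0.0, 0.5, n)        # natural frequencies
phase = rng.uniform(0, 2 * np.pi, n)  # random initial phases

for _ in range(steps):
    # each oscillator is nudged toward the mean field of all the others
    mean_field = np.mean(np.sin(phase[None, :] - phase[:, None]), axis=1)
    phase += dt * (freq + coupling * mean_field)

# Order parameter r: 0 = incoherent, 1 = fully synchronized.
r = abs(np.mean(np.exp(1j * phase)))
print(f"Synchronization order parameter: {r:.2f}")  # close to 1 after settling
```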


The UPSIDE program takes an interdisciplinary approach, with three mandatory tasks performed over two phases. Task 1 forms the foundation of the program and involves the development of the computational model and the image-processing application that will be used for demonstration and benchmarking. Tasks 2 and 3 build on the results of Task 1 to demonstrate the inference module implemented in mixed-signal CMOS (Task 2) and in non-CMOS emerging nanoscale devices (Task 3).


The UPSIDE program launched in June 2013, when participants from five corporate labs, thirteen universities, and three government labs met to share their approaches and exchange ideas for this highly collaborative effort. UPSIDE image-processing applications include object detection and tracking for video, wide-area motion imagery (WAMI), and robotics. Emerging devices such as memristors and spin-torque oscillators will be integrated into the processing chain of these systems to perform a variety of functions, including feature extraction.


Harris’ wide-area motion imagery (WAMI) sensor payloads and processing solutions

Our WAMI solutions deliver actionable situational awareness that allows analysts not only to see what is happening in real time but also to augment this data with existing sensor payloads. Our capabilities expand on the information from traditional intelligence, surveillance, and reconnaissance methods to provide tools that enable confident, rapid decision-making.


Automated tracking and multi-INT cross-cue

Harris WAMI airborne sensor systems deliver real-time detection and tracking of high-value targets and integrate multiple sources, including signals intelligence, full-motion video, hyperspectral imaging, and social media indicators, to provide real-time information on what is happening on the ground. Our WAMI systems (a minimal track-association sketch follows this list):

  • Enable real-time multi-INT surveillance
  • Deliver a complete picture of events on the ground in real time
  • Automate tracking and perform advanced analytics to provide continuous situational awareness in tactical environments
  • Tip and cue other sensor systems (including signals intelligence, full-motion video, and hyperspectral sensing)
  • Provide context of events across integrated sensors to deliver detection and tracking of high-value targets
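To ground the tracking claim, here is a minimal, hypothetical sketch of the simplest track-association step: greedily matching new detections to existing tracks by distance. Operational trackers use motion models (e.g., Kalman filters) and global assignment instead; the names and the 50 m gate below are invented for illustration:

```python
import math

def associate(tracks, detections, gate=50.0):
    """Greedy nearest-neighbor association of new detections to tracks.

    tracks: {track_id: (x, y)} last known ground positions in meters.
    detections: list of (x, y) positions from the current frame.
    gate: maximum distance (m) at which a detection may update a track.
    """
    assignments = {}
    unclaimed = set(range(len(detections)))
    for tid, pos in tracks.items():
        best, best_d = None, gate
        for j in unclaimed:
            d = math.dist(pos, detections[j])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            assignments[tid] = best
            unclaimed.discard(best)
    return assignments, unclaimed  # unclaimed detections seed new tracks

tracks = {"veh-1": (100.0, 200.0), "veh-2": (400.0, 120.0)}
detections = [(405.0, 118.0), (103.0, 207.0), (900.0, 900.0)]
print(associate(tracks, detections))
# ({'veh-1': 1, 'veh-2': 0}, {2}) -> detection 2 becomes a new track
```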
Advanced tracking analytics


The wide-area coverage and persistence of Harris’ motion analytics allow analysts to see events that are happening concurrently and to establish interconnected patterns of life, including social interactions, destinations, and origins of travel. Incorporating archived imagery during a surveillance mission provides pattern-of-life history that can be used to anticipate future behavior and plan appropriate responses. Our analytics (a watch-box sketch follows this list):

  • Monitor virtual trip-wire lines and watch-box areas
  • Detect breaches and trigger real-time tracking across multiple watch boxes
  • Detect specific track types (right or left turns, stops/starts, vehicle U-turns), abnormal speeds, vehicles avoiding checkpoints, multi-vehicle meet-ups, or convoys
  • Capture and store activity across wide areas for pattern-of-life analysis to understand what is happening and plan operations
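A minimal sketch of the watch-box idea in the first two bullets, assuming tracks are sequences of (x, y) ground positions in meters; all names and coordinates are illustrative:

```python
def in_box(point, box):
    """Return True if an (x, y) track point lies inside a watch box.

    box: (xmin, ymin, xmax, ymax) in the same ground coordinates as point.
    """
    x, y = point
    xmin, ymin, xmax, ymax = box
    return xmin <= x <= xmax and ymin <= y <= ymax

def breaches(track, box):
    """Flag the track indices where the watch-box boundary is crossed."""
    return [i for i in range(1, len(track))
            if in_box(track[i], box) != in_box(track[i - 1], box)]

# A vehicle track that enters and later leaves a 100 m x 100 m watch box.
box = (0.0, 0.0, 100.0, 100.0)
track = [(-20.0, 50.0), (10.0, 55.0), (60.0, 60.0), (130.0, 65.0)]
print(breaches(track, box))  # [1, 3]: entry at step 1, exit at step 3
```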


References and Resources also include:

https://www.harris.com/solution/corvuseye-1500

https://www.darpa.mil/program/unconventional-processing-of-signals-for-intelligent-data-exploitation



