
DARPA ECOLE developing AI agents for human-machine collaborative analysis of image, video, and multimedia documents

In defense and intelligence, milliseconds can mean everything: real-time analytics is the tactical edge for mission success. This is why information dominance is so critical to military operational readiness, and why the Department of Defense (DoD) and intelligence communities collect and analyze so much big data, from logistics and IoT sensor feeds to social media and signals metadata. Yet existing military geospatial technology and GEOINT software cannot keep up with the proliferation of this data.

 

Multimedia spans text, audio, still imagery, video, and their accompanying metadata. Applied to the military geography domain, multimedia opens new dimensions for information transfer through visualization, exploiting the human brain's strengths in pattern recognition and spatial identification.

 

Multimedia contextual analytics that harvest forensic social and digital media can boost the agility of military operations in both the physical and information spaces by providing a deep understanding of adversary perspectives, intent, and threats.

 

DARPA offered use cases such as anticipating actions on the ground during the Russian invasion of Ukraine. The envisioned system would detect and classify shapes to provide indications and warnings, answering questions such as: what can these objects do, and where can they go?
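
To ground the "where can they go?" part of that question, the sketch below is a purely illustrative toy, not anything from the ECOLE BAA: it assumes a detected object has been assigned a mobility class and floods a small hypothetical terrain grid to list the cells reachable within a movement budget. The terrain codes, cost table, and budget values are all invented for the example.

```python
# Illustrative toy for "where can they go?": reachable cells on a terrain grid.
# Terrain codes, mobility classes, and costs below are hypothetical, not from DARPA.
from collections import deque

# terrain code -> movement cost per mobility class (None = impassable); assumed values
COST = {
    "road":  {"wheeled": 1, "tracked": 1},
    "field": {"wheeled": 3, "tracked": 2},
    "marsh": {"wheeled": None, "tracked": 4},
    "river": {"wheeled": None, "tracked": None},
}

def reachable(grid, start, mobility, budget):
    """Queue-based flood with cost relaxation: cells reachable within the budget."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0}                      # cheapest known cost to each cell
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = COST[grid[nr][nc]].get(mobility)
                if step is None:
                    continue               # impassable for this mobility class
                total = best[(r, c)] + step
                if total <= budget and total < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = total
                    queue.append((nr, nc))
    return set(best)

if __name__ == "__main__":
    grid = [["road", "field", "marsh"],
            ["road", "river", "field"]]
    print(sorted(reachable(grid, (0, 0), "tracked", budget=6)))  # marsh cell is reachable
    print(sorted(reachable(grid, (0, 0), "wheeled", budget=6)))  # marsh blocks wheeled movement
```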

Another use case is activity recognition: what events are happening, what materials are involved, and what is the likely function of this aggregate? If the answer is "probable military resupply base," the system should then report:
– Activities the base is capable of supporting
– Current actions in progress
– Constraints on those actions given the terrain

This kind of activity recognition involves detecting and classifying shapes and aggregate counts, and detecting and classifying discrete actions; a minimal sketch of the aggregation step follows below.
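
As a hedged sketch of that aggregation step, the toy code below counts detected object classes in a scene and applies a single hand-written rule to label the site. The object labels, thresholds, and the rule itself are invented for illustration; they are not ECOLE's detectors or classifiers.

```python
# Illustrative only: turning per-object detections into aggregate counts and a
# coarse site-function hypothesis. Labels, thresholds, and rules are assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "truck", "fuel_tank", "crate_stack"
    confidence: float   # detector score in [0, 1]

def aggregate_counts(detections, min_conf=0.5):
    """Count detected object classes that clear a confidence threshold."""
    return Counter(d.label for d in detections if d.confidence >= min_conf)

def classify_site(counts):
    """Toy rule: several trucks plus stored materiel suggests a resupply function."""
    if counts["truck"] >= 5 and counts["fuel_tank"] + counts["crate_stack"] >= 3:
        return "probable military resupply base"
    return "function undetermined"

if __name__ == "__main__":
    scene = ([Detection("truck", 0.9)] * 6
             + [Detection("fuel_tank", 0.8)] * 2
             + [Detection("crate_stack", 0.7)] * 2)
    counts = aggregate_counts(scene)
    print(counts, "->", classify_site(counts))  # -> probable military resupply base
```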

 

DARPA launched the Environment-driven Conceptual Learning (ECOLE) program in September 2022 to create AI agents capable of continually learning from language and vision, enabling human-machine collaborative analysis of image, video, and multimedia documents during time-sensitive, mission-critical DoD analytic tasks where reliability and robustness are essential.

 

The Defense Advanced Research Projects Agency (DARPA) issued a Broad Agency Announcement (BAA) in September 2022 seeking proposals for the program, which aims to radically improve computational systems that analyze large amounts of multimedia by creating artificial intelligence (AI) agents capable of continually learning from linguistic and visual input.

 

According to the agency's announcement of the ECOLE launch, the goal of the program is to enable collaborative human-machine analysis of image, video, and multimedia documents during time-sensitive, mission-critical DoD analytic tasks that must be reliable and robust. By utilizing the analytic process as a form of interactive curriculum learning, ECOLE improves its conceptual model of the visible world while aiding analysis.
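
That "analytic process as interactive curriculum learning" can be pictured as a loop in which the machine proposes an interpretation, the analyst accepts or corrects it, and the correction becomes new training signal. The sketch below is a deliberately tiny stand-in under that assumption; the keyword-overlap "model" and the feedback format are invented, not ECOLE's architecture.

```python
# Tiny stand-in for analysis-as-curriculum: analyst corrections update a concept memory.
# The keyword-overlap model and data format are illustrative assumptions only.
from __future__ import annotations

class ConceptMemory:
    def __init__(self):
        self.evidence: dict[str, set[str]] = {}   # concept label -> terms seen so far

    def predict(self, tokens: set[str]) -> str | None:
        """Return the stored concept with the largest keyword overlap, if any."""
        label, terms = max(self.evidence.items(),
                           key=lambda kv: len(kv[1] & tokens),
                           default=(None, set()))
        return label if terms & tokens else None

    def update(self, tokens: set[str], label: str) -> None:
        """Fold the analyst's accepted or corrected label back into memory."""
        self.evidence.setdefault(label, set()).update(tokens)

def session(memory: ConceptMemory, documents: list[set[str]], corrections: dict[int, str]):
    """Each document is analyzed; the analyst's correction (if any) overrides the guess."""
    for i, tokens in enumerate(documents):
        guess = memory.predict(tokens)
        label = corrections.get(i, guess)
        if label:
            memory.update(tokens, label)
    return memory

if __name__ == "__main__":
    docs = [{"truck", "fuel", "convoy"}, {"truck", "crates", "depot"}]
    mem = session(ConceptMemory(), docs, corrections={0: "resupply activity"})
    print(mem.predict({"truck", "depot"}))   # -> resupply activity
```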

 

Dr. William Corvey, ECOLE program manager in DARPA’s Information Innovation Office, says that today’s multimedia analysis systems lack introspection. “Furthermore, symbolic representations as they’ve been constructed in the past simply do not scale. The core innovation in ECOLE will be teaching the AI to learn representations that are faceted and conceptual in nature — such as representations that can be iterated on with a human partner; representations that can be reasoned over; and representations that can be readily generalized.”
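
As one way to picture what "faceted, conceptual" representations might look like, the hedged sketch below keeps separate facets per concept that a human partner can edit, that simple rules can inspect, and that can be generalized by intersecting the facets of two sibling concepts. The facet names, merge rule, and example concepts are assumptions for illustration, not the program's actual representation.

```python
# Hedged illustration of a faceted concept: editable facets, simple reasoning, generalization.
# Facet names, the merge rule, and the example concepts are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parts: set = field(default_factory=set)         # observable components
    capabilities: set = field(default_factory=set)  # what the object can do
    contexts: set = field(default_factory=set)      # where it is typically found

    def revise(self, facet: str, additions: set) -> None:
        """Human-in-the-loop edit: iterate on a single facet of the representation."""
        getattr(self, facet).update(additions)

def generalize(a: Concept, b: Concept, name: str) -> Concept:
    """Form a broader concept from the facets two concepts share."""
    return Concept(name,
                   parts=a.parts & b.parts,
                   capabilities=a.capabilities & b.capabilities,
                   contexts=a.contexts & b.contexts)

if __name__ == "__main__":
    tank = Concept("tank", {"tracks", "turret"}, {"move off-road", "direct fire"}, {"front line"})
    apc = Concept("apc", {"tracks", "hull"}, {"move off-road", "carry troops"}, {"front line"})
    vehicle = generalize(tank, apc, "tracked combat vehicle")
    print(vehicle)                                          # shared facets only
    vehicle.revise("capabilities", {"ford shallow water"})  # analyst adds a missing entry
    print(vehicle.capabilities)
```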

 

 

The results from the ECOLE program should be broadly applicable across technology sectors, including the semantic web community; commercial companies that reason over information on the internet; the robotics industry; public-safety organizations that must process images or video for object and activity recognition; and any industry requiring robust, automatic reasoning over image and video data, such as autonomous vehicles.

 

The ECOLE effort will run for four years, divided into three phases. The BAA can be viewed on the official SAM.gov site.

References and Resources also include:

https://militaryembedded.com/ai/big-data/ai-program-from-darpa-aims-to-transform-multimedia-analysis
