
DARPA ANSR employing hybrid AI methods for assurability and trustworthiness of Military ISR Missions

Artificial intelligence (AI) technologies aim to develop computers or robots that match or exceed the abilities of human intelligence in tasks such as learning and adaptation, reasoning and planning, decision-making and autonomy, creativity, and extracting knowledge and making predictions from data. Within AI is a large subfield called machine learning (ML). Machine learning gives computers the capability to learn from data, so that a new program need not be written for every task. Machine learning algorithms extract information from training data to discover patterns, which are then used to make predictions on new data.
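To make the pattern-extraction idea concrete, here is a minimal, purely illustrative sketch in Python: a one-nearest-neighbour classifier that "learns" from labeled training examples and predicts a label for unseen data. The toy data, labels, and distance rule are invented for this example.

```python
# Minimal sketch of the core ML loop: extract patterns from labeled
# training data, then predict on new, unseen data. Illustrative only.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, new_point):
    """Label a new point with the label of its nearest training example."""
    nearest = min(train, key=lambda example: distance(example[0], new_point))
    return nearest[1]

# Toy training set: (feature vector, label) pairs.
train = [((1.0, 1.0), "friendly"), ((8.0, 9.0), "hostile"), ((2.0, 1.5), "friendly")]

print(predict(train, (7.5, 8.0)))  # -> "hostile": the learned pattern generalizes
```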

The last decade witnessed tremendous progress in applications of data-driven ML, fueled by growth in compute power and data, in areas that span a wide spectrum ranging from board games to protein folding, language translation to medical image analysis. In several of these applications, ML and related techniques have demonstrated performance that rivals, and occasionally surpasses, human capability with respect to a set of narrowly curated metrics.

AI is enabling many military capabilities and operations, such as intelligence, surveillance, and reconnaissance; target identification; accelerated weapon development and optimization; command and control; logistics; and war gaming. Adversaries could also use AI to carry out information operations or psychological warfare.

AI systems can accurately analyze the huge amounts of data generated during peace and conflict, quickly interpreting information in ways that could lead to better decision-making. They can fuse data from different sensors into a coherent common operating picture of the battlefield. Because AI systems can react significantly faster than systems that rely on human input, AI is accelerating the complete “kill chain” from detection to destruction. This allows militaries to better defend against high-speed threats such as hypersonic weapons, which travel at 5 to 10 times the speed of sound.

AI also enhances the autonomy of unmanned air, ground, and underwater vehicles. It is enabling concepts such as vehicle swarms, in which multiple unmanned vehicles autonomously collaborate to achieve a task. For example, drone swarms could overwhelm or saturate adversary air defense systems.

Autonomy and highly autonomous systems are a desired capability for many Department of Defense (DoD) missions, including Intelligence, Surveillance and Reconnaissance (ISR), logistics, planning, and command and control. The purported benefits are many, including: (1) improved operational tempo and mission speed; (2) reduced cognitive demands on warfighters operating and supervising autonomous systems; and (3) increased standoff for improved warfighter safety.

DARPA describes AI in terms of successive waves. The first wave consisted of rule-based AI systems in which machines followed rules defined by humans. The second, ongoing wave encompasses machine learning techniques, in which machines derive their own rules through clustering and classification and use the resulting models to predict and make decisions.

But the problem with deep learning is that it is a black box: we do not know the reasoning behind the decisions it makes. This makes such systems hard for people to trust, and it makes humans working closely with robots risky. DARPA is now developing “third wave” AI theory and applications that make it possible for machines to explain their decisions and adapt to changing situations. Instead of merely learning from data, intelligent machines will perceive the world on their own and learn to understand it by reasoning. Artificial intelligence systems will then become trustworthy and collaborative partners to soldiers on the battlefield.


A crucial desideratum associated with autonomy is the need for trustworthiness and trust, as emphasized by the 2016 Defense Science Board (DSB) Report on Autonomy. Informally, trust is an expression of confidence in an autonomous system’s ability to perform an underspecified task. Assuring that autonomous systems will operate safely and perform as intended is integral to trust, which is key to DoD’s success in adopting autonomy. Since the publication of the DSB report on autonomy, significant improvements have been made in the machine learning (ML) algorithms that are central to achieving autonomy. Simultaneously, innovations in assurance technologies have delivered mechanisms to assess the correctness and safety of systems at design time and to make them resilient at operation time.


However, despite these apparent successes, there are a number of concerns associated with state-of-the-art (SOTA) ML algorithms. It is well known, for example, that SOTA ML algorithms do not generalize well, lack transparency and interpretability, and are not robust to environmental and adversarial perturbations. Some of these limitations, such as the lack of robustness to adversarial examples, have been theoretically determined to be fundamental in nature.


The prevailing trend in industrial ML research is towards scaling up to giga- and tera-scale models (hundreds of billions of parameters) as a means to improve accuracy and performance. These trends are not sustainable because of the extremely high computational and data needs of training such models, as well as scaling laws. They are also not responsive to the needs of DoD applications, which are typically data- and compute-starved, with limited access to cloud-scale compute resources. Furthermore, DoD applications are safety- and mission-critical, need to operate in unseen environments, need to be auditable, and need to be trustable by human operators. In sum, the prevailing trends in ML research are not conducive to the assurability and trustworthiness needs of DoD applications.


Assured Neuro Symbolic Learning and Reasoning (ANSR)

Despite recent improvements to machine learning (ML) algorithms and assurance technologies, high levels of autonomy remain elusive, says DARPA. The reasons are twofold. First, data-driven ML lacks transparency, interpretability, and robustness, and it has unsustainable computational and data needs. Second, traditional approaches to building intelligent applications and autonomous systems that rely on knowledge representations and symbolic reasoning can be assured, but they are not robust to the uncertainties encountered in the real world.

The traditional approaches to building intelligent applications and autonomous systems rely heavily on knowledge representations and symbolic reasoning. For example, complex decision-making in these approaches is often implemented with programmed condition-based rules, stateful logic encoded in finite state machines, and physics-based dynamics of environments and objects represented using ordinary differential equations. There are numerous advantages to these classical techniques (a brief illustrative sketch follows the list):
- they use rich abstractions that are grounded in domain theories and associated formalisms and that are supported by advanced tools and methods (Statecharts, Stateflow, Simulink, etc.);
- they can be modular and composable in ways supported by software engineering practices that promote reuse, precision, and automated analyses; and
- they can be analyzable and assurable in ways supported by formal specification and verification technologies that have been demonstrated in hardening mission- and safety-critical systems against cyber attacks.
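As a concrete illustration of this classical style, the sketch below encodes stateful decision logic as an explicit finite state machine. The states, events, and transitions are hypothetical, invented for this example; the point is that the entire behavior lives in a small, enumerable table that tools can analyze exhaustively, which is the assurability advantage described above.

```python
# Hypothetical decision logic for a notional patrol vehicle, encoded as a
# finite state machine: an explicit, finite transition table.

TRANSITIONS = {
    ("PATROL", "contact_detected"): "TRACK",
    ("TRACK", "contact_lost"): "PATROL",
    ("TRACK", "threat_confirmed"): "REPORT",
    ("REPORT", "ack_received"): "PATROL",
}

def step(state, event):
    """Apply one transition; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "PATROL"
for event in ["contact_detected", "threat_confirmed", "ack_received"]:
    state = step(state, event)
    print(event, "->", state)
```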

However, these approaches also have limitations when used in real-world autonomy applications. They fare poorly when dealing with real-world uncertainty and high-dimensional sensory data, which are integral to perception and situation-understanding applications. The rulesets and stateful logic in these decision-making applications are often incomplete and insufficient when exposed to unanticipated situations. Further, it is well understood that common-sense knowledge is intractable to codify: the Cyc knowledge base, for example, includes millions of concepts and tens of millions of rules, and yet is inadequate for many real-world tasks.

The challenge of assuring cyber-physical systems (CPS) with ML components has been an active area of research supported by DARPA’s ongoing Assured Autonomy program as well as other research initiatives. Specifically, the assurance approach developed under Assured Autonomy has resulted in: (1) formal and simulation-based verification tools that can comprehensively explore the behavior of a CPS; (2) monitoring tools that can detect deviations of ML components from expected inputs and behavior, together with resilience and recovery strategies to avoid worst-case safety consequences; and (3) an assurance case framework that enables structured argumentation, backed by evidence, in support of the claim that major safety hazards have been identified and their root causes adequately mitigated.
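A hedged sketch of what the second ingredient, a runtime monitor, might look like in miniature: it flags inputs that fall far from the training distribution of an ML component. The training points, distance rule, and threshold below are illustrative assumptions, not artifacts of the Assured Autonomy program.

```python
import math

# Toy "in-distribution" feature vectors an ML component was trained on.
TRAINING_FEATURES = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18)]
THRESHOLD = 0.5  # hypothetical deviation tolerance

def deviates(x):
    """Flag x as out-of-distribution if it is far from every training example."""
    nearest = min(math.dist(x, t) for t in TRAINING_FEATURES)
    return nearest > THRESHOLD

for sample in [(0.13, 0.21), (0.90, 0.95)]:
    status = "deviates from expected inputs" if deviates(sample) else "in distribution"
    print(sample, "->", status)
```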

Advances in assurance technologies, including formal and simulation-based approaches, have helped accelerate the identification of failure modes and defects in ML algorithms. Unfortunately, the ability to repair defects in SOTA ML remains limited to retraining, which is not guaranteed to eliminate defects or to improve the generalizability of ML algorithms. Further, while the runtime assurance architecture, including monitoring and recovery, ensures operational safety, frequent invocations of fallback recovery, triggered by the brittleness and limited generalizability of ML, compromise the ability to accomplish the mission.
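The runtime assurance pattern just described can be sketched as a simplex-style switch: a high-performance learned controller runs by default, and a monitor hands control to a simple, verifiable fallback when a safety predicate fails. Every function, observation field, and threshold below is hypothetical; the sketch only illustrates why frequent fallback invocations preserve safety at the cost of mission performance.

```python
def ml_controller(obs):
    # Stand-in for a learned policy: aggressive, high-performance.
    return {"speed": 10.0, "heading": obs["bearing_to_goal"]}

def fallback_controller(obs):
    # Simple, verifiable recovery behavior: stop and hold heading.
    return {"speed": 0.0, "heading": obs["heading"]}

def safe(obs):
    # Hypothetical safety predicate, e.g. minimum obstacle clearance in meters.
    return obs["obstacle_distance"] > 5.0

def control(obs):
    """Use the ML policy when the monitor deems the state safe, else recover.

    Safety is preserved either way, but every fallback invocation sacrifices
    mission progress -- the trade-off noted in the text above.
    """
    return ml_controller(obs) if safe(obs) else fallback_controller(obs)

print(control({"bearing_to_goal": 90.0, "heading": 45.0, "obstacle_distance": 12.0}))
print(control({"bearing_to_goal": 90.0, "heading": 45.0, "obstacle_distance": 2.0}))
```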

DARPA launched its newest artificial intelligence (AI) program, Assured Neuro Symbolic Learning and Reasoning (ANSR), in June 2022. The program seeks to motivate new thinking and approaches that will take ML beyond data-driven pattern recognition and augment it with knowledge-driven reasoning that incorporates context, physics, and other background information. “Motivating new thinking and approaches in this space will help assure that autonomous systems will operate safely and perform as intended,” said Dr. Sandeep Neema, DARPA ANSR program manager. “This will be integral to trust, which is key to the Department of Defense’s successful adoption of autonomy.”

The ANSR program seeks breakthrough innovations in the form of new hybrid AI algorithms that deeply integrate symbolic reasoning with data-driven learning to create robust, assured, and therefore trustworthy systems. We define a system as trustworthy if it is: (a) robust to domain-informed and adversarial perturbations; (b) supported by an assurance framework that creates and analyzes heterogeneous evidence towards safety and risk assessments; and (c) predictable with respect to some specification and models of “fitness.”

ANSR will explore diverse, hybrid architectures that can be seeded with prior knowledge, acquire both statistical and symbolic knowledge through learning, and adapt learned representations. The program includes demonstrations to evaluate hybrid AI techniques through relevant military use cases where assurance and autonomy are mission-critical. Specifically, selected teams will develop a common operating picture of a dynamic, dense urban environment using a fully autonomous system equipped with ANSR technologies. The AI would deliver insights to the warfighter that could help characterize friendly, adversarial and neutral entities, the operating environment, and threat and safety corridors.


We hypothesize that several of the limitations in ML today are a consequence of (1) the inability to incorporate contextual and background knowledge; and (2) treating each data set as an independent, uncorrelated input. In the real world, observations are often correlated and the product of an underlying causal mechanism, which can be modeled and understood. We posit that hybrid AI algorithms capable of acquiring and integrating symbolic knowledge and performing symbolic reasoning at scale will deliver robust inference, generalize to new situations, and provide evidence for assurance and trust.

We envision modifying both the training and inference procedures to interleave symbolic and neural representations for iterative inference and mutual adaptation of the representations, exploiting the benefits and reducing the limitations of each. The modified training procedure will result in representations that are grounded in domain-specific symbols, essentially a symbolic equivalent of the Neural Network’s (NN) implicit data representation. The modified inference procedure iteratively converges to a response that is conformant to both the symbolic and neural representations. The symbolic representation can explicitly include prior knowledge and domain-specific rules and constraints, and it enables verification against specifications and the construction of assurance arguments.
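A minimal sketch of one round of such interleaved inference, under invented assumptions: a stand-in "neural" scorer proposes class probabilities, a symbolic rule encoding prior knowledge vetoes inconsistent labels, and the surviving scores are renormalized so the answer conforms to both representations. The labels, scores, and rule are illustrative only, not ANSR artifacts.

```python
def neural_scores(observation):
    # Stand-in for a neural network's class probabilities.
    return {"car": 0.5, "tank": 0.3, "truck": 0.2}

def symbolic_consistent(label, context):
    # Prior knowledge as an explicit rule: civilian vehicles do not
    # appear inside the cordon (a hypothetical domain constraint).
    return not (context["inside_cordon"] and label == "car")

def hybrid_infer(observation, context):
    """Discard labels the symbolic layer rejects, renormalize the rest,
    and return the highest-scoring label consistent with both views."""
    scores = neural_scores(observation)
    for label in list(scores):
        if not symbolic_consistent(label, context):
            del scores[label]  # neural proposal vetoed by symbolic knowledge
    total = sum(scores.values())
    scores = {k: v / total for k, v in scores.items()}
    return max(scores, key=scores.get)

print(hybrid_infer(observation=None, context={"inside_cordon": True}))  # -> "tank"
```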

Some recent results for specific applications provide a basis for confidence. For example, a recent study prototyped a hybrid reinforcement learning (RL) architecture that acquires a set of symbolic policies through data-driven learning. The symbolic policies take the form of small programs that are interpretable and verifiable. The approach demonstrably inherits the best of both worlds: it learns policies that are highly performant in a known environment, and it generalizes well by remaining safe (crash-free) in an unknown environment. Another recent approach uses symbolic reasoning to fix errors made by a NN in estimating object poses in a scene, achieving substantially higher accuracy (30-40% above baseline) in several cases.
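To illustrate why such symbolic policies are inspectable, here is a hypothetical example of what a learned policy-as-program might look like; the cited study's actual domain and policies are not reproduced here. A few readable rules for a notional descent controller make the safety property ("always brake hard when low and descending fast") directly checkable against the code.

```python
def symbolic_policy(altitude, vertical_speed):
    """Tiny interpretable controller (invented for illustration).

    Unlike a weight matrix, each rule can be read, audited, and verified,
    e.g. one can prove the braking rule always fires below 50 m.
    """
    if altitude < 50.0 and vertical_speed < -5.0:
        return "full_thrust"   # safety rule: brake hard when low and fast
    if vertical_speed < -2.0:
        return "half_thrust"   # moderate descent: gentle braking
    return "coast"             # otherwise conserve fuel

print(symbolic_policy(altitude=40.0, vertical_speed=-6.0))  # -> "full_thrust"
```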

The hybrid AI techniques developed by the program will enable new mission capabilities. The program intends to demonstrate assured execution of an unaided ISR mission to develop a Common Operating Picture (COP) of a highly dynamic, dense urban environment. The autonomous system performing the ISR mission will carry an effects payload to reduce sensor-to-effects delivery time. While the delivery of effects is gated by a human on the loop, an effects-carrying system is quintessentially a safety- and mission-critical system and therefore requires strong guarantees of collision avoidance and mission performance. The capabilities required of the autonomous system in terms of deep situational understanding and decision-making are not achievable by SOTA machine learning or standalone symbolic reasoning systems. The training data is sparse, further motivating the use of hybrid AI methods.


Development in the program will be orchestrated in four technical areas (TAs), summarized below:
TA1. Algorithms and Architecture – The goal of TA1 is to develop and model new AI algorithms and architectures that deeply integrate symbolic reasoning with data-driven machine learning. TA1 will explore and evaluate a range of possible algorithms and architecture patterns that are suitable for different tasks.
TA2. Specification and Assurance – The goal of TA2 is to develop an assurance framework and methods for deriving and integrating evidence of correctness and quantifying mission-specific risks. TA2 will establish a pipeline that abstracts the hybrid neuro-symbolic representations into formally analyzable representations and analyzes them with respect to a set of mission-dependent specifications. TA2 will also explore techniques to estimate and quantify mission-specific risks.
TA3. Platforms and Capability Demonstration – The goal of TA3 is to develop use-cases and an architecture for engineering mission-relevant applications of hybrid AI algorithms suitable for the demonstration and evaluation of robust and assured performance. Specifically, the program intends to pursue demonstration through assured execution of an unaided ISR mission to develop a Common Operating Picture (COP) of a highly dynamic dense urban environment.
TA4. Assurance Assays and Evaluation – The goal of TA4 is to (1) develop an assurance test harness with adversarial AI, and (2) evaluate the technologies in individual technical areas and their compositions in systems. TA4 will act as a red team that probes the validity of assurance claims through adversarial evaluations. TA4 will also refine the proposed program metrics and define measures to characterize the trustworthiness of the system. TA4 will assess robustness, generalizability, and assurance claims through adversarial evaluations that employ confounding perturbations, and will quantify the resulting loss of system performance (a minimal sketch of this style of measurement follows below).
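A minimal sketch of the kind of measurement TA4 describes, quantifying performance loss under confounding perturbations. The stand-in model, synthetic data, and noise level are assumptions made for illustration; they are not program artifacts or metrics.

```python
import random

def model(x):
    # Stand-in classifier: thresholds a single scalar feature.
    return 1 if x > 0.5 else 0

def accuracy(samples):
    """Fraction of (input, label) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in samples) / len(samples)

random.seed(0)
# Synthetic clean evaluation set, labeled by the same decision boundary.
clean = [(x, 1 if x > 0.5 else 0) for x in [random.random() for _ in range(1000)]]
# Confounding perturbation: additive Gaussian noise on the inputs.
perturbed = [(x + random.gauss(0, 0.2), y) for x, y in clean]

loss = accuracy(clean) - accuracy(perturbed)
print(f"accuracy loss under perturbation: {loss:.1%}")
```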
