
DARPA plans revolutionary new architectures with Physics of AI (PAI) to accelerate AI in Defense Applications

In the past, DARPA funded research and development (R&D) that facilitated the advancement and application of “First Wave” (rule-based) and “Second Wave” (statistical learning-based) AI technologies. Today, DARPA continues to lead innovation in AI research, helping to shape a future in which machines may serve as trusted and collaborative partners in solving problems of importance to national security.

It is anticipated that AI will play an ever larger role in future Department of Defense (DoD) activities, ranging from scientific discovery, to human-machine collaboration, to real-time sensor processing, to the control and coordination of a variety of distributed, intelligent and autonomous composable systems.


Although artificial intelligence is making its way into private- and public-sector enterprise systems, it has not gained as much traction in the Defense Department. Between DOD’s security and performance requirements, the immaturity of the technology in dealing with unstructured and incomplete data, and the complex problems that come with modeling dynamic systems, the integration of AI into defense applications has been slow.

DARPA believes this future will be realized upon the development and application of “Third Wave” AI technologies, where systems are capable of acquiring new knowledge through generative contextual and explanatory models. The Physics of AI (PAI) basic research Disruption Opportunity supports this vision.


To speed the adoption of AI, the Defense Advanced Research Projects Agency is issuing a Disruption Opportunity — a call for innovative basic research concepts exploring new architectures and approaches to improve AI’s ability to generalize beyond training data and work with sub-optimal data.


PAI aims to develop novel AI architectures, algorithms and approaches that “bake in” the physics, mathematics and prior knowledge relevant to an application domain in order to address the technical challenges in applying AI to scientific discovery, human-AI collaboration, and a variety of defense applications.

Physics of AI (PAI) program

DARPA says that despite the rapid and accelerating progress of AI in the commercial sector – particularly in the subfield of machine learning – AI has not yet been successfully integrated into the most transformative DoD applications, for reasons that include:
• The demanding levels of trust, safety and performance guarantee required of AI systems in defense applications;
• The lack of success of deep learning constructs in causal, predictive modeling of complex nonlinear dynamic systems;
• The acknowledged difficulties of machine learning architectures and training protocols in dealing with incomplete, sparse and noisy data;
• The lack of robustness, which makes AI image recognition systems potentially subject to a variety of adversarial spoofing (a minimal illustration of such spoofing follows below);
• The inherent challenges faced by AI approaches in dealing with “Open World problems”, e.g., in unstructured environments with unknown and hidden states, as compared to relatively well-structured application domains (e.g. games) where the system state is fully observable and interaction rules are known; and
• The difficulty of obtaining useful performance guarantees and limits, or even of knowing what questions can be asked of an AI system and whether the answers make sense.

As a consequence, the integration of AI in DoD systems has been slow relative to the private sector.
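
To make the adversarial-spoofing concern above concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial perturbations, written in PyTorch. The tiny random classifier, the synthetic 28x28 “image” and the perturbation size are placeholders chosen for illustration only; they are not part of DARPA’s announcement.

```python
# Minimal FGSM sketch (illustrative): perturb an input along the sign of the
# loss gradient to probe how easily a classifier's prediction can be flipped.
# The tiny random "classifier" and the random image below are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x using FGSM."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One step in the gradient-sign direction, clipped to a valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder classifier and input, standing in for a real image model and dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.rand(1, 1, 28, 28)   # synthetic 28x28 grayscale "image"
y = torch.tensor([3])          # arbitrary label
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # predictions may differ
```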


The Physics of AI (PAI) program hypothesizes that challenges associated with today’s machine learning and AI systems can be overcome, especially in many defense applications, by “baking in” physics – relevant scientific and mathematical knowledge – from the outset.
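
To give a rough sense of what “baking in” relevant physics can look like in practice, the sketch below trains a small network with a physics-residual loss, in the spirit of physics-informed neural networks. The specific equation (the simple decay ODE du/dt = -u with u(0) = 1), the network size and the training settings are illustrative assumptions, not details drawn from the PAI announcement.

```python
# Sketch of a physics-informed loss: a small network is trained to satisfy the
# ODE du/dt = -u with u(0) = 1, so the governing equation itself acts as prior
# knowledge in place of labeled data. All sizes and settings are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(64, 1, requires_grad=True)               # collocation points in [0, 1]
    u = net(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    physics_loss = ((du_dt + u) ** 2).mean()                 # residual of du/dt = -u
    ic_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # initial condition u(0) = 1
    loss = physics_loss + ic_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained network should roughly approximate u(t) = exp(-t).
print(net(torch.tensor([[0.5]])).item(), torch.exp(torch.tensor(-0.5)).item())
```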


Data-driven machine learning techniques have proven successful in leveraging massive training data to answer questions narrowly framed around the initial training set and questions. Deep artificial neural networks (DNNs) are extremely expressive in approximating arbitrary nonlinear functions, extracting features from data, and producing useful reduced-dimensional representations for classification purposes.
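
For readers who want to see that terminology in code, a minimal sketch of such a network follows: a small multilayer perceptron that compresses raw inputs into a low-dimensional representation and classifies from it. All layer sizes are arbitrary, illustrative choices.

```python
# Minimal illustration of a DNN as a nonlinear feature extractor: raw inputs are
# compressed into a reduced-dimensional representation, then classified.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=100, feat_dim=8, n_classes=5):
        super().__init__()
        # Nonlinear encoder: learns the reduced-dimensional representation.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, n_classes)  # classification from the features

    def forward(self, x):
        return self.head(self.encoder(x))

model = SmallClassifier()
x = torch.randn(16, 100)             # a batch of 16 synthetic inputs
features = model.encoder(x)          # 8-dimensional learned representation
logits = model(x)
print(features.shape, logits.shape)  # torch.Size([16, 8]) torch.Size([16, 5])
```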


Advanced computational platforms now enable the training, via backpropagation, of networks hundreds of layers deep that encompass hundreds of thousands to millions of parameters (the weights of the DNNs), as long as sufficient training data exist.
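
Those parameter counts are easy to make concrete. The short sketch below builds a stack of fully connected layers (depth and width are arbitrary values chosen only for illustration) and counts the trainable weights in PyTorch.

```python
# Quick illustration of how parameter counts grow with network depth and width.
import torch.nn as nn

def count_parameters(depth, width=256):
    model = nn.Sequential(*[nn.Linear(width, width) for _ in range(depth)])
    return sum(p.numel() for p in model.parameters())

print(count_parameters(depth=10))    # ~0.66 million weights and biases
print(count_parameters(depth=100))   # ~6.6 million
```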


However, despite some successes in transfer learning and one-shot learning, it has proven difficult for DNNs to generalize beyond their initial set of training questions. In general, DNNs are not generative, although generative models such as variational autoencoders (VAEs), generative adversarial networks (GANs) and hybrid models exist and have been employed in specialized domains.
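
As a concrete point of reference for the generative models mentioned above, the sketch below outlines a minimal variational autoencoder in PyTorch. The dimensions, the single hidden layer and the unweighted sum of reconstruction and KL terms are standard textbook simplifications, not anything specific to PAI.

```python
# Minimal variational autoencoder (VAE) sketch: an encoder maps data to a latent
# Gaussian, and a decoder reconstructs from sampled latents. All sizes are illustrative.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence to N(0, I)
    return recon + kl

vae = TinyVAE()
x = torch.rand(32, 784)              # synthetic stand-in for flattened image data
x_hat, mu, logvar = vae(x)
print(vae_loss(x, x_hat, mu, logvar).item())
```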


PAI is seeking innovative approaches that address the challenges above and can substantially improve upon current machine learning approaches in bringing “deep insight” into physics-centric application domains.

AI architectures, algorithms and approaches that make use of DNNs as one of several components are welcome, but conventional learning algorithms using DNNs (including convolutional and recurrent neural networks) by themselves are not considered likely to meet the broad goals of the program.

The program encourages hybrid architectures that embed hierarchical physical models into generative cores; incorporate manifold learning techniques; incorporate operator-theoretic spectral methods; and/or bake topological knowledge, group symmetries, projection knowledge, or gauge invariances into the network architecture. Generative approaches that can reproduce the multiscale structures of observed data; distinguish between semantic and stylistic differences; are resilient to noise, data dropouts, data biases, and adversarial spoofing; and can learn with minimal labeled data are also encouraged.
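
To make one of these suggestions concrete, the sketch below implements the simplest form of dynamic mode decomposition (DMD), a widely used operator-theoretic spectral method: it fits a best-fit linear operator between successive snapshots of a system and reads growth rates and oscillation frequencies off its eigenvalues. The toy damped-rotation data is an illustrative assumption, and the plain pseudoinverse fit omits the rank truncation used in full SVD-based DMD.

```python
# Sketch of dynamic mode decomposition (DMD), one example of an operator-theoretic
# spectral method: estimate a linear operator A_hat with x_{k+1} ~= A_hat x_k from
# snapshot data, then inspect its spectrum. The toy data below is illustrative.
import numpy as np

# Generate snapshots of a 2-D damped rotation (stand-in for measured dynamics).
theta, decay = 0.1, 0.99
A_true = decay * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])
snapshots = [x]
for _ in range(200):
    x = A_true @ x
    snapshots.append(x)
X = np.array(snapshots).T            # snapshot matrix, shape (2, 201)

# DMD: least-squares fit of X2 ~= A_hat X1 via the pseudoinverse.
X1, X2 = X[:, :-1], X[:, 1:]
A_hat = X2 @ np.linalg.pinv(X1)
eigvals = np.linalg.eigvals(A_hat)

# Eigenvalue magnitude ~ growth/decay rate, angle ~ oscillation frequency.
print(np.abs(eigvals), np.angle(eigvals))   # approx. [0.99, 0.99] and [+0.1, -0.1]
```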

The PAI program has three objectives:
• Develop an AI prototype that uses observational, experimental and simulated data along with prior knowledge, such as scientific, mathematical/topological information or statistical models, to overcome the limitations of sparse, noisy or incomplete data.
• Demonstrate an AI prototype that uses simulated and/or real data in a representative DoD-relevant application such as satellite or radar image processing or human-machine collaboration.
• Address computation requirements and fundamental performance limits of AI systems in terms of their accuracy, their ability to effectively predict behaviors beyond the training data, and their robustness in the face of noise, sparse data and adversarial spoofing (a toy version of such an evaluation is sketched below).
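
As referenced in the third objective, a toy version of such an evaluation is sketched below: a simple surrogate model is fit on a limited training range and then scored inside that range, beyond it, and under input noise. The sine “ground truth”, the cubic-polynomial surrogate and the noise level are arbitrary illustrative choices, not part of the program description.

```python
# Toy evaluation of two of the metrics named in objective 3: accuracy beyond the
# training range (extrapolation) and degradation under input noise.
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin                                     # stand-in for the true behavior

# Fit a simple surrogate on x in [0, pi] only.
x_train = rng.uniform(0.0, np.pi, 200)
surrogate = np.poly1d(np.polyfit(x_train, truth(x_train), deg=3))

def rmse(x_eval, x_true=None):
    x_true = x_eval if x_true is None else x_true
    return np.sqrt(np.mean((surrogate(x_eval) - truth(x_true)) ** 2))

x_in = np.linspace(0.0, np.pi, 100)                # inside the training range
x_out = np.linspace(np.pi, 2 * np.pi, 100)         # beyond the training range
x_noisy = x_in + rng.normal(0.0, 0.1, x_in.shape)  # noise-corrupted inputs

print(f"in-range error      : {rmse(x_in):.3f}")
print(f"extrapolation error : {rmse(x_out):.3f}")         # typically much larger
print(f"noisy-input error   : {rmse(x_noisy, x_in):.3f}")
```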


A total of $1 million will be available for the 18-month, two-phase program.


References and Resources also include:

https://gcn.com/blogs/pulse/2018/07/darpa-pai.aspx
