
DARPA’s thrust in Artificial Intelligence for developing adaptive and intelligent military systems to implement the US’s Third Offset Strategy

Machine Learning (ML) is a subfield of Artificial Intelligence that attempts to endow computers with the capacity to learn from data, so that explicit programming is not necessary to perform a task. ML algorithms allow computers to extract information and infer patterns from recorded data, learning from previous examples to make good predictions about new ones. ML methods have demonstrated outstanding recent progress, and as a result artificial intelligence (AI) systems can now be found in myriad applications, including autonomous vehicles, industrial applications, search engines, computer gaming, health record automation, and big data analysis.
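
To make the "learning from examples rather than explicit programming" idea concrete, here is a minimal sketch; the library (scikit-learn) and the toy data are illustrative choices of ours, not anything the article or DARPA specifies:

```python
# Minimal sketch: the model learns a labeling rule from examples
# instead of having that rule explicitly programmed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: points labeled by which side of x + y = 1 they fall on.
X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)  # learn from examples

# The fitted model generalizes to inputs it has never seen.
print(model.predict(np.array([[0.05, 0.1], [0.95, 0.9]])))  # -> [0 1]
```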

However, current artificial intelligence (AI) systems can only compute with what they have been programmed or trained for in advance; they have no ability to learn from data input during execution time and cannot adapt online to changes they encounter in real environments.

DARPA is soliciting highly innovative research proposals for the development of fundamentally new machine learning approaches that enable systems to learn continually as they operate and apply previous knowledge to novel situations. The goal of DARPA’s Lifelong Learning Machines (L2M) program is to develop machine learning mechanisms that enable systems to learn continuously during execution and apply previously learned information to novel situations the way biological systems do, treating the environment itself, in effect, as the training set. Such a system would be safer, more functional, and increasingly relevant to DoD applications: adapting quickly to unforeseen circumstances, accommodating changes in the mission, and improving performance through a system’s fielded lifetime of experience.
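
DARPA has not published L2M’s mechanisms, but the contrast with conventional batch training can be loosely illustrated with an online learner that keeps updating from data it encounters after "deployment". The streaming data and the scikit-learn model below are illustrative assumptions, not DARPA’s method:

```python
# Loose analogy for learning during execution: an online model that keeps
# updating from each batch of data it encounters while "deployed".
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

# Simulate a stream of observations arriving after deployment.
for step in range(100):
    X = rng.normal(size=(8, 3))
    y = (X.sum(axis=1) > 0).astype(int)       # environment supplies labels
    model.partial_fit(X, y, classes=classes)  # update without going offline

print("weights after streaming updates:", model.coef_)
```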

Another DARPA effort, the Explainable AI (XAI) program, is developing machine-learning systems that will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The project involves 100 researchers at more than 30 universities and private institutions. The goal is to produce a group of machine learning tools and user interfaces that government or commercial groups can use to explain how their own AI products reach their conclusions. “If it’s finding patients that need special attention in the hospital, or wanting to know why your car stopped in the middle of the road, or why your drone turned around and didn’t do its mission … then you really need an explanation,” said David Gunning, the DARPA program manager for XAI.

The thrust in machine learning and artificial intelligence is part of the Third Offset Strategy, through which the US is leveraging new technologies such as artificial intelligence, autonomous systems, and human-machine networks to counter advances made by the nation’s adversaries in recent years. “There’s this powerful new wave that’s happening today in AI,” DARPA director Arati Prabhakar said, and the Pentagon needs to exploit it. DARPA already has some programs tackling this problem, she said, but “you’ll see more, I think, in that area as we start developing this next foundation for AI.”

DARPA’s Lifelong Learning Machines (L2M) program

When current ML systems encounter circumstances outside their programming or training, they err and must be taken offline to be reprogrammed or retrained. Taking a system offline and retraining it is expensive and time-consuming, and encountering a programming or training oversight during execution can be disruptive to a mission.

Current ML systems are also plagued by another significant problem known as catastrophic forgetting: they ‘forget’ previously incorporated data when trained with new data. Unless programmed or trained for every eventuality, systems operating in real-world environments are bound to fail at some point, which restricts ML to specific situations with narrowly predefined rule sets.
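
Catastrophic forgetting is easy to reproduce in miniature. In the hypothetical PyTorch sketch below, a small network trained on task A and then on a contradictory task B loses nearly all of its task-A accuracy; the tasks and architecture are invented purely for illustration:

```python
# Sketch of catastrophic forgetting: a network trained on task A, then on
# a contradictory task B, loses most of its task-A accuracy.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(weights):
    """Binary labels from a task-specific linear rule on random inputs."""
    X = torch.randn(512, 10)
    y = (X @ weights > 0).long()
    return X, y

w_a = torch.randn(10)
task_a = make_task(w_a)
task_b = make_task(-w_a)  # task B contradicts task A's labeling rule

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(X, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(X), y).backward()
        opt.step()

def accuracy(X, y):
    with torch.no_grad():
        return (net(X).argmax(dim=1) == y).float().mean().item()

train(*task_a)
print("task A accuracy after training on A:", accuracy(*task_a))  # high
train(*task_b)  # sequential training on the new task...
print("task A accuracy after training on B:", accuracy(*task_a))  # collapses
```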

At the same time, current ML systems are not intelligent in the biological sense. They have no ability to adapt their methods beyond what they were prepared for in advance and are completely incapable of recognizing or reacting to any element, situation or circumstance they have not been specifically programmed or trained for.

This issue presents severe limitations in system capability, creates potential safety issues, and is clearly limiting in Department of Defense (DoD) applications, e.g., supply chain, logistics, and visual recognition, where complete details are often unknown in advance and the ability to react quickly and adapt to dynamic circumstances is of primary importance.

The goal of the Lifelong Learning Machines (L2M) program is to develop substantially more capable systems that are continually improving and updating from experience. Proposed research should investigate innovative approaches that support key lifelong learning machines technologies and enable revolutionary advances in the science of adaptive and intelligent systems.

The L2M program treats inspiration from biological adaptive mechanisms as a supporting pillar of the project. Biological systems exhibit an impressive capacity to learn and adapt their structure and function throughout their lifespans while retaining the stability of core functions. Adaptive mechanisms honed by billions of years of evolution into highly robust, tissue-mediated computation should provide unique insights for building L2M solutions.

While it is very easy to code agent behavior to perform a particular task, doing so precludes the agent learning the task, which in turn precludes the possibility of adapting the behavior to another task or situation. This is the heart of the problem to be solved in the creation of a lifelong learning machine. The purpose of the L2M program is to develop a system that figures out how to accomplish a task and can subsequently figure out another task more easily based on previous learning.

A possible realization of an L2M system is a plastic nodal network (PNN), as opposed to a fixed, homogeneous neural network. While plastic, the PNN must incorporate hard rules governing its operation to maintain equilibrium: if the rules hold the PNN too strongly, it will not be plastic enough to learn, yet without some structure the PNN will not be able to operate at all.
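
The solicitation does not define a PNN concretely. One loose reading, borrowed from published Hebbian-plasticity research rather than from DARPA, is a layer whose effective weights combine a fixed component with an experience-dependent trace, with a decay term acting as the stabilizing rule:

```python
# Loose sketch of a "plastic" layer: effective weights are a fixed part
# plus a Hebbian trace that changes with experience. The decay term is
# the stabilizing rule that keeps plasticity from running away.
import numpy as np

class PlasticLayer:
    def __init__(self, n_in, n_out, eta=0.05, decay=0.9):
        rng = np.random.default_rng(0)
        self.w = rng.normal(scale=0.1, size=(n_in, n_out))      # fixed weights
        self.alpha = rng.normal(scale=0.1, size=(n_in, n_out))  # plasticity gains
        self.hebb = np.zeros((n_in, n_out))  # experience-dependent trace
        self.eta, self.decay = eta, decay

    def forward(self, x):
        y = np.tanh(x @ (self.w + self.alpha * self.hebb))
        # Hebbian update: co-active input/output pairs strengthen the trace;
        # decay < 1 bounds it, keeping the network in equilibrium.
        self.hebb = self.decay * self.hebb + self.eta * np.outer(x, y)
        return y

layer = PlasticLayer(4, 3)
for _ in range(5):
    print(layer.forward(np.ones(4)))  # outputs drift as the trace adapts
```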

Eight computer science professors in Oregon State University’s College of Engineering have received a $6.5 million grant from the Defense Advanced Research Projects Agency to make artificial-intelligence-based systems like autonomous vehicles and robots more trustworthy.

 

DARPA’s Explainable Artificial Intelligence (XAI)

Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machines’ current inability to explain their decisions and actions to human users.

The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.

 

The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
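
As a toy illustration of one end of that trade space (not a technique XAI prescribes), an inherently interpretable model such as a shallow decision tree can present its entire decision logic to the user:

```python
# One end of the explainability trade space: a shallow decision tree
# whose decision logic can be shown to the user verbatim.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

# The "explanation" is the model itself, rendered as readable rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```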

 

New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user. DARPA’s strategy is to pursue a variety of techniques in order to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.

 

The XAI program will focus the development of multiple systems on addressing challenge problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system performing a variety of simulated missions. These two challenge problem areas were chosen to represent the intersection of two important machine learning approaches (classification and reinforcement learning) and two important operational problem areas for the Department of Defense (intelligence analysis and autonomous systems).

 

Research aims to make artificial intelligence explain itself

The success of the deep neural networks branch of artificial intelligence has enabled significant advances in autonomous systems that can perceive, learn, decide and act on their own. The problem is that the neural networks function as a black box. Instead of humans explicitly coding system behavior using traditional programming, in deep learning the computer program learns on its own from many examples. Potential dangers arise from depending on a system that not even the system developers fully understand.

The four-year grant from DARPA will support the development of a paradigm to look inside that black box, by getting the program to explain to humans how decisions were reached.
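
The Oregon State paradigm itself is not published as code, but one generic building block for looking inside a black box, gradient-based saliency, can be sketched briefly: the gradient of the chosen output with respect to the input indicates which input features drove the decision. The network and input below are placeholders:

```python
# Generic building block for peeking into a black box: gradient saliency.
# The gradient of the predicted score w.r.t. the input shows which input
# features most influenced the decision.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))

x = torch.randn(1, 8, requires_grad=True)
score = net(x)[0].max()        # score of the predicted class
score.backward()               # backpropagate to the input

saliency = x.grad.abs().squeeze()
print("feature influence:", saliency)
print("most influential input feature:", int(saliency.argmax()))
```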

“Ultimately, we want these explanations to be very natural – translating these deep network decisions into sentences and visualizations,” said Alan Fern, principal investigator for the grant and associate director of the College of Engineering’s recently established Collaborative Robotics and Intelligent Systems Institute.

Developing such a system that communicates well with humans requires expertise in a number of research fields. In addition to having researchers in artificial intelligence and machine learning, the team includes experts in computer vision, human-computer interaction, natural language processing, and programming languages.

To begin developing the system, the researchers will use real-time strategy games, like StarCraft, to train artificial-intelligence “players” that explain to humans the reasoning behind their in-game choices. StarCraft is a staple of competitive electronic gaming, and Google’s DeepMind has also chosen it as a training environment for AI. Later stages of the project will move on to applications provided by DARPA that may include robotics and unmanned aerial vehicles.

Fern said the research is crucial to the advancement of autonomous and semi-autonomous intelligent systems. “Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust,” he said.

The researchers from Oregon State were selected by DARPA for funding under the highly competitive Explainable Artificial Intelligence program. Other major universities chosen include Carnegie Mellon, Georgia Tech, the Massachusetts Institute of Technology, Stanford, the University of Texas, and the University of California, Berkeley.

 

DARPA and the Third Offset Strategy

“Fundamentally, what’s behind the push of the Third Offset Strategy is this idea that the department needs to reinvigorate our ability to develop these advanced technologies,” Prabhakar said. “If we do that at the same old pace in the same old way, there’s a strong recognition that we’re just not going to get there.” Instead of custom-tailored, tightly integrated systems, you want a modular and open architecture where you can easily replace a component (hardware or software) without disrupting the rest of the system.

Instead of a relatively small number of pricey manned platforms, you want a “heterogeneous” mix of manned and unmanned vehicles of all kinds, from 130-foot robotic ships to disposable handheld drones. Instead of architectures designed for a specific kind and size of force, you want systems that can scale up and down as the force changes.

And instead of brittle networks dependent on a few means of transmission and a few central nodes, you want a highly distributed network that stays up despite physical attack, jamming, and hacking. A project called HACMS (High Assurance Cyber Military Systems) applies a class of mathematics called “formal methods” to finding and closing cyber vulnerabilities. DARPA is also applying new methods to the old problem of electronic warfare: to keep up with ever-mutating signals, “cognitive electronic warfare” aims to use artificial intelligence to detect, catalog, and counter transmissions in real time.
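
Operational cognitive EW systems are not public, so the detect-and-catalog loop can only be caricatured. In the sketch below, the sample rate, the simulated emitters, and the FFT-peak detector are all illustrative assumptions, not a description of any fielded system:

```python
# Caricature of the detect-and-catalog loop: estimate the dominant
# frequency of each incoming signal window and log emitters not seen
# before. No fielded system works this simply.
import numpy as np

fs = 1_000.0                     # assumed sample rate, Hz
rng = np.random.default_rng(1)
catalog = set()

def dominant_frequency(window):
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs[spectrum.argmax()]

t = np.arange(256) / fs
for f_emit in (50.0, 120.0, 50.0, 333.0):        # simulated emitters
    window = np.sin(2 * np.pi * f_emit * t) + 0.1 * rng.normal(size=t.size)
    f_hat = round(dominant_frequency(window))    # nearest-Hz signature
    if f_hat not in catalog:
        catalog.add(f_hat)
        print(f"cataloged new emitter near {f_hat} Hz")
```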

 

 

References and resources also include:

https://www.fbo.gov/spg/ODA/DARPA/CMO/HR001117S0016/listing.html

http://www.darpa.mil/program/explainable-artificial-intelligence

http://oregonstate.edu/ua/ncs/archives/2017/jun/research-aims-make-artificial-intelligence-explain-itself

https://futurism.com/darpa-working-make-ai-more-trustworthy/
