
DARPA’s SAIL-ON is developing military AI systems that can effectively react to an adversary’s surprise actions

Current artificial intelligence (AI) systems excel at tasks defined by rigid rules – such as mastering the board games Go and chess with proficiency surpassing world-class human players. But those same systems falter when the rules shift. “Imagine if the rules for chess were changed mid-game,” said Ted Senator, program manager in DARPA’s Defense Sciences Office. “How would an AI system know if the board had become larger, or if the object of the game was no longer to checkmate your opponent’s king but to capture all his pawns? Or what if rooks could now move like bishops? Would the AI be able to figure out what had changed and be able to adapt to it?” Existing AI systems become ineffective and are unable to adapt when something significant and unexpected occurs. Unlike people, who recognize new experiences and adjust their behavior accordingly, machines continue to apply outmoded techniques until they are retrained.

 

For example, AI systems know to stop at red stop signs because they are trained with data that includes what to do when approaching those signs. But they might not know how to respond to something outside those parameters, like a blue stop sign. “If a self-driving car came upon a blue stop sign, it might ignore it, or it might register the sign as something new and stop,” said Dr. Eric Kildebeck, a research professor at the University of Texas at Dallas who works on the program. “The ability to reason and rationally adapt to things you’ve never seen before is the whole point of this program.”
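As a rough sketch of the underlying issue, a classifier trained only on familiar signs can at least be made to flag inputs it is not confident about instead of silently misreading them. The class names, scores, and threshold below are hypothetical and only illustrate the idea of registering “something new”; they are not part of any SAIL-ON system.

```python
import numpy as np

# Hypothetical classifier output: class scores for an incoming road sign.
# In a real system these would come from a trained vision model.
KNOWN_CLASSES = ["red_stop_sign", "speed_limit", "yield"]

def softmax(scores):
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def classify_or_flag(scores, threshold=0.7):
    """Return a known label, or flag the input as novel when the
    model's confidence falls below the threshold."""
    probs = softmax(np.asarray(scores, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "NOVEL_INPUT"      # e.g. a blue stop sign -> stop and reassess
    return KNOWN_CLASSES[best]

# A red stop sign like those in the training data: confident prediction.
print(classify_or_flag([9.0, 1.5, 0.5]))   # -> red_stop_sign
# A blue stop sign: scores are spread out, so the input is flagged as novel.
print(classify_or_flag([2.1, 1.9, 1.8]))   # -> NOVEL_INPUT
```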

 

Given enough data, machines can do statistical reasoning well, such as classifying images for face recognition, Senator said. Another example is DARPA’s AI push in self-driving cars in the early 2000s, which led to the current revolution in autonomous vehicles. Thanks to massive amounts of data that include rare-event experiences collected from tens of millions of autonomous miles, self-driving technology is coming into its own. But the available data is specific to generally well-defined environments with known rules of the road.

 

AI systems therefore aren’t very good at adapting to the constantly changing conditions commonly faced by troops in the real world – from reacting to an adversary’s surprise actions, to fluctuating weather, to operating in unfamiliar terrain. “It wouldn’t be practical to try to generate a similar data set of millions of self-driving miles for military ground systems that travel off-road, in hostile environments and constantly face novel conditions with high stakes, let alone for autonomous military systems operating in the air and at sea,” Senator said. For AI systems to effectively partner with humans across a spectrum of military applications, intelligent machines need to graduate from closed-world problem solving within confined boundaries to open-world challenges characterized by fluid and novel situations.

 

DARPA launched the Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program in 2019 to research and develop the underlying scientific principles, general engineering techniques, and algorithms needed to create AI systems that act appropriately and effectively in novel situations that occur in open worlds. The program’s goals are to develop scientific principles to quantify and characterize novelty in open-world domains, create AI systems that react to novelty in those domains, and demonstrate and evaluate these systems in a selected DoD domain.

 

The Defense Sciences Office (DSO) of the Defense Advanced Research Projects Agency (DARPA) is soliciting innovative research proposals for new artificial intelligence (AI) methodologies and techniques that support: (1) the principled characterization and generation of novelty in open worlds; and (2) the creation of AI systems capable of operating appropriately and effectively in open worlds. Proposed research should investigate innovative approaches that enable revolutionary advances in science, devices, or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.

 

The Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program will research and develop the underlying scientific principles and general engineering techniques and algorithms needed to create AI systems that act appropriately and effectively in novel situations that occur in open worlds, which is a key characteristic needed for potential military applications of AI. The focus is on novelty that arises from violations of implicit or explicit assumptions in an agent’s model of the external world, including other agents, the environment, and their interactions.
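One way to make this framing concrete is to write the agent’s model of the external world as a set of explicit, named assumptions and treat any violated assumption as detected novelty. The assumption names and observation fields in this sketch are invented for illustration; they are not drawn from the program itself.

```python
# Minimal sketch: an agent's world model as explicit, named assumptions.
# Any violated assumption is a candidate "novelty" to characterize.
WORLD_MODEL_ASSUMPTIONS = {
    "stop_signs_are_red": lambda obs: obs.get("sign_color", "red") == "red",
    "terrain_is_mapped":  lambda obs: obs.get("terrain") in {"road", "trail"},
    "weather_is_clear":   lambda obs: obs.get("visibility_m", 1000) > 200,
}

def detect_novelty(observation):
    """Return the names of the assumptions the observation violates."""
    return [name for name, holds in WORLD_MODEL_ASSUMPTIONS.items()
            if not holds(observation)]

obs = {"sign_color": "blue", "terrain": "marsh", "visibility_m": 800}
print(detect_novelty(obs))   # -> ['stop_signs_are_red', 'terrain_is_mapped']
```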

 

“The first thing an AI system has to do is recognize the world has changed. The second thing it needs to do is characterize how the world changed. The third thing it needs to do is adapt its response appropriately,” Senator said. “The fourth thing, once it learns to adapt, is for it to update its model of the world.”

Specifically, the program will: (1) develop scientific principles to quantify and characterize novelty in open-world domains; (2) create AI systems that act appropriately and effectively in open-world domains; and (3) demonstrate and evaluate these systems in multiple domains, including a selected DoD domain.
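Senator’s four steps map naturally onto a control loop. The sketch below uses placeholder functions purely to make the ordering concrete; a real system would back each step with learned models, anomaly detectors, and planners.

```python
# Sketch of the four-step cycle Senator describes, with placeholder logic.

def world_has_changed(model, observation):
    # Step 1: recognize that the world no longer matches the model.
    return observation not in model["expected_observations"]

def characterize_change(model, observation):
    # Step 2: describe how the world changed (here: just name the surprise).
    return {"unexpected_observation": observation}

def adapt_response(model, change):
    # Step 3: choose a cautious action appropriate to the change.
    return "slow_down_and_reassess"

def update_world_model(model, change):
    # Step 4: fold the new experience back into the model.
    model["expected_observations"].add(change["unexpected_observation"])
    return model

model = {"expected_observations": {"red_stop_sign", "clear_road"}}
for observation in ["clear_road", "blue_stop_sign", "blue_stop_sign"]:
    if world_has_changed(model, observation):
        change = characterize_change(model, observation)
        action = adapt_response(model, change)
        model = update_world_model(model, change)
        print(observation, "->", action)
    else:
        print(observation, "-> proceed as planned")
# The second blue stop sign no longer surprises the agent, because the
# world model was updated after the first encounter.
```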

 

SAIL-ON will require performers and teams to characterize and quantify types and degrees of novelty in open worlds, to construct software that generates novel situations at distinct levels of a novelty hierarchy in selected domains, and to develop algorithms and systems that are capable of identifying and responding to novelty in multiple open-world domains.
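The novelty hierarchy can be pictured as a generator that perturbs a baseline environment at increasing depths, from surface appearance up to the rules of the environment itself. The levels and perturbations below are hypothetical stand-ins; the program’s actual hierarchies are defined by the performer teams.

```python
import random

# Hypothetical levels of a novelty hierarchy, from shallow (new object
# appearance) to deep (changed rules of the environment).
NOVELTY_LEVELS = {
    1: lambda env: {**env, "sign_color": random.choice(["blue", "green"])},
    2: lambda env: {**env, "terrain": "unmapped_marsh"},
    3: lambda env: {**env, "traffic_rule": "stop_signs_mean_yield"},
}

def generate_novel_scenario(base_env, level):
    """Apply a level-specific perturbation to a baseline environment."""
    return NOVELTY_LEVELS[level](dict(base_env))

base = {"sign_color": "red", "terrain": "road",
        "traffic_rule": "stop_at_stop_signs"}
for level in (1, 2, 3):
    print(level, generate_novel_scenario(base, level))
```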

 

SAIL-ON seeks expertise in multiple subfields of AI, including machine learning, plan recognition, knowledge representation, anomaly detection, fault diagnosis and recovery, probabilistic programming, and others.

 

If successful, SAIL-ON would teach an AI system how to learn and react appropriately without needing to be retrained on a large data set. The program seeks to lay the technical foundation that would empower machines, regardless of the domain, to go through the military OODA loop process themselves – observe the situation, orient to what they observe, decide the best course of action, and then act.
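A bare-bones version of that OODA cycle, applied to a ground vehicle that may encounter unfamiliar terrain or degraded visibility, might look like the sketch below. All sensor fields, thresholds, and actions are assumed for illustration; only the observe, orient, decide, act ordering comes from the article.

```python
from dataclasses import dataclass, field

@dataclass
class GroundVehicleAgent:
    known_terrain: set = field(default_factory=lambda: {"road", "trail"})

    def observe(self, sensors):
        return {"terrain": sensors["terrain"],
                "visibility_m": sensors["visibility_m"]}

    def orient(self, obs):
        # Compare what was observed against what the agent already knows.
        obs["novel_terrain"] = obs["terrain"] not in self.known_terrain
        obs["low_visibility"] = obs["visibility_m"] < 100
        return obs

    def decide(self, assessment):
        if assessment["novel_terrain"] or assessment["low_visibility"]:
            return "slow_to_crawl_and_map_surroundings"
        return "continue_mission"

    def act(self, decision, assessment):
        if assessment["novel_terrain"]:
            self.known_terrain.add(assessment["terrain"])  # adapt, no retraining
        return decision

agent = GroundVehicleAgent()
for sensors in [{"terrain": "road", "visibility_m": 500},
                {"terrain": "marsh", "visibility_m": 500},
                {"terrain": "marsh", "visibility_m": 500}]:
    assessment = agent.orient(agent.observe(sensors))
    print(sensors["terrain"], "->", agent.act(agent.decide(assessment), assessment))
```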

 

Shrivastava Receives $2.5M from DARPA to Teach AI Systems Adaptation in April 2020

A University of Maryland expert in computer vision and artificial intelligence (AI) has been awarded $2.5 million from the Defense Advanced Research Projects Agency (DARPA) to teach AI systems how to adapt to evolving situations in the real world. Abhinav Shrivastava, an assistant professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is principal investigator of the project, working with professors Carl Vondrick at Columbia University and Abhinav Gupta at Carnegie Mellon University.

 

As AI increasingly becomes ubiquitous in various aspects of military operations—particularly aerial or ground-based autonomous vehicles—it will be essential for military AI applications to be aware of dynamic environments, and act effectively when confronted by evolving situations. Current AI systems are unable to adapt to evolving situations the same way as people, who can recognize new experiences and adjust their behavior accordingly. For example, humans will naturally respond to an adversary’s surprise actions, new vehicles, a change in weather, or unfamiliar terrain.

 

The goal of Shrivastava’s research is to teach an AI system to recognize these dynamic environments and react appropriately, without needing to be retrained on a large data set. Funding for the project comes from DARPA’s Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program, which aims to quantify and characterize change in open-world domains, create AI systems that can react to change in those domains, and then evaluate those systems.

 

Polycraft Team Wins DARPA Grant To Lay Groundwork for Smarter AI in July 2020

Polycraft World, a modification of the video game Minecraft, was developed by University of Texas at Dallas researchers to teach chemistry and engineering. Now the game that allows players to build virtual worlds is serving as the foundation for federal research to develop smarter artificial intelligence (AI) technology.

 

UT Dallas researchers received a grant from the Defense Advanced Research Projects Agency (DARPA) to use Polycraft World to simulate dynamic and unexpected events that can be used to train AI systems — computer systems that emulate human cognition — to adapt to the unpredictable. The simulated scenarios could include changing weather or unfamiliar terrain. In response to the COVID-19 pandemic, researchers have added the threat of an infectious disease outbreak.

 

The $1.68 million project is funded through DARPA’s Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program, which was formed in 2019 to support research on scientific principles and engineering techniques and algorithms needed to create advanced AI systems.

 

“The project is part of DARPA’s suite of AI programs that are trying to figure out what the next generation of AI is going to look like,” said principal investigator Dr. Eric Kildebeck BS’05, a research professor in UT Dallas’ Center for Engineering Innovation (CEI). “Our role centers around the concept of novelty. It’s all about creating artificial intelligence agents that — when they encounter things they’ve never seen before and they’ve never been trained to deal with — respond appropriately.”

 

The UT Dallas researchers’ work focuses on the first of three phases of DARPA’s project — building simulated scenarios in Polycraft World. Next, researchers at other institutions will develop algorithms to enable AI systems to respond to those challenges. The UT Dallas researchers are not building military scenarios. Instead, in the third and final phase, the Department of Defense will adapt the researchers’ work to reflect what military troops might face. The UT Dallas project began in December and will continue through mid-2021. Kildebeck said Polycraft World provides an ideal platform for DARPA’s project because it incorporates multiple fields of science, including polymer chemistry, biology and medicine, to enable simulation of real-world scenarios.

 

Dr. Walter Voit BS’05, MS’06, who led the team that developed Polycraft World, is a co-principal investigator for the DARPA project. “DARPA seeks to advance the state of the art in how artificial intelligence operates in open worlds where prior training has been limited,” Voit said. “We are excited at UT Dallas to be able to provide a comprehensive test environment based on Polycraft World to provide novel situations for some of the nation’s most promising algorithms to see how they react.”

 

The researchers are working to capture novel scenarios from the real world, including recording video that they can digitize to incorporate into the game. In the self-driving car example, the virtual car would need to learn to stop at different variations of a stop sign to prevent accidents. “We’re building a track that may have green stop signs, red stop signs and blue stop signs all along the pathway,” said Steininger, a member of the UT Dallas team, giving an example of a possible scenario. “Other researchers will incorporate their algorithms into a self-driving car. As the car navigates through a virtual city, if it reaches its destination without accidents, it’s a success.”
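Read literally, that evaluation reduces to a simple pass/fail check over a track of mixed-color signs. The toy sketch below assumes a track layout and two contrived agent policies to show how the stated success criterion (“reaches its destination without accidents”) would be scored.

```python
# Toy version of the evaluation scenario Steininger describes: a virtual
# track lined with stop signs of several colors; a run succeeds only if
# the car reaches the end without an accident. The track layout and agent
# policies are invented for illustration.
TRACK = ["red", "green", "blue", "red", "blue"]   # sign colors along the route

def naive_agent(sign_color):
    # Trained only on red signs: every other color gets ignored.
    return "stop" if sign_color == "red" else "ignore"

def novelty_aware_agent(sign_color):
    # Treats any stop-sign-shaped object, familiar or not, as a reason to stop.
    return "stop"

def run_track(agent):
    for sign in TRACK:
        if agent(sign) != "stop":
            return False   # ran a stop sign -> accident -> failed run
    return True            # reached the destination without accidents

print("naive agent succeeds:", run_track(naive_agent))                   # False
print("novelty-aware agent succeeds:", run_track(novelty_aware_agent))   # True
```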

 
