
DARPA EDGE: Enabling Human Control of Military Autonomous and AI-Based Systems

Interactions with technologically sophisticated artificial intelligence (AI) agents are now commonplace. We increasingly rely on intelligent systems to extend our capabilities, from chatbots that provide technical support to virtual assistants like Siri and Alexa. Highly automated human-machine systems also operate in safety-critical domains such as air traffic control, aircraft cockpits, chemical processing, and the power industry. One current focus is the development of such systems for large robotic teams of ten or more robots.

 

A major benefit of increasingly advanced automation and artificial intelligence technology is decreased workload and greater safety for humans – whether it’s driving a vehicle, piloting an airplane, or patrolling a dangerous street in a deployed location with the aid of autonomous ground and airborne squadmates. But when there’s a technology glitch and machines don’t function as designed, human partners in human-machine teams may quickly become overwhelmed trying to understand their environment at a critical moment – especially when they’ve become accustomed to and reliant on the machine’s capabilities.

 

Machines can possess sensory (infrared, FLIR, sonar, etc.) and motor (flying, fast land travel, precision surgery, etc.) capabilities that humans do not. Without situational awareness of the system and its environment, the human team member may be unable to adapt, reducing safety and threatening mission success. This reality played out in recent crashes of modern jetliners that killed hundreds of people, when advanced automated systems failed in flight and pilots were unable to assess the situation and respond appropriately in time. Such examples underscore the need to design human-machine interfaces (HMIs) that allow humans to maintain situational awareness of highly automated and autonomous systems so that they can adapt in the face of unforeseen circumstances.

 

In May 2021, DARPA announced its Enhancing Design for Graceful Extensibility (EDGE) program, which aims to create a suite of HMI design tools to be integrated into systems design processes. By orienting these tools toward quantifying, supporting, and testing situational awareness – rather than toward reducing cognitive load at the expense of situational awareness – EDGE will help create HMI systems that allow operators not just to monitor autonomous systems but also to adapt their use to meet the needs of unanticipated situations.

 

“As highly-automated machines and AI-enabled systems have become more and more complicated, the trend in HMI development has been to reduce the cognitive workload on humans as much as possible. Unfortunately, the easiest way to do this is by limiting information transfer,” said Bart Russell, EDGE program manager in DARPA’s Defense Sciences Office. “Reducing workload is important, because an overloaded person cannot make good decisions. But limiting information erodes situational awareness, making it difficult for human operators to know how to adapt when the AI doesn’t function as designed. Current AI systems tend to be brittle – they don’t handle unexpected situations well – and warfare is defined by the unexpected.”

 

The EDGE design tools will focus on supporting the ability of operators of autonomous systems, who are not necessarily data scientists or AI experts, to understand enough about the abstract functioning of a system that they can adapt with it when they encounter off-nominal situations. Designers will be able to leverage EDGE design tools to create HMIs that help operators understand an AI system’s processes, or how it works; the system’s status against its performance envelope (i.e., if it’s in its “comfort zone,” or near the edges of its speed, range, etc.); and the environmental context, which is often where the most unanticipated elements come in.
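
One way to picture the "status against performance envelope" idea is as a simple margin computation that an HMI could surface to the operator. The sketch below is purely illustrative and not part of DARPA EDGE; the EnvelopeAxis type, the telemetry names, and the normalization are assumptions made for the example.

```python
# Illustrative sketch only (not DARPA EDGE tooling): one plausible way an HMI
# could compute a system's status against its performance envelope and flag
# when the platform is leaving its "comfort zone".
from dataclasses import dataclass

@dataclass
class EnvelopeAxis:
    name: str      # e.g. "airspeed_mps" (hypothetical telemetry key)
    lower: float   # minimum the platform can sustain on this axis
    upper: float   # maximum the platform can sustain on this axis

def envelope_margins(state: dict, axes: list) -> dict:
    """Return each axis's remaining margin as a fraction of its range (0.0 = at the edge)."""
    margins = {}
    for axis in axes:
        value = state[axis.name]
        span = axis.upper - axis.lower
        # Distance to the nearest envelope boundary, normalized to the axis range.
        margins[axis.name] = min(value - axis.lower, axis.upper - value) / span
    return margins

# Example: a small UAV near the top of its speed range shows a thin airspeed
# margin, a cue the operator can use before behavior becomes off-nominal.
axes = [EnvelopeAxis("airspeed_mps", 12.0, 30.0),
        EnvelopeAxis("altitude_m", 50.0, 1500.0)]
print(envelope_margins({"airspeed_mps": 28.5, "altitude_m": 400.0}, axes))
```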

 

“We need HMIs that do a better job of exchanging information between the system and the human,” Russell said. “There’s a lot of work right now focused on designing machines to understand human intentions, called AI Theory of Mind. I’m interested in helping humans better understand the complex systems they’re teamed with. EDGE is specifically focused on the Observe, Orient, Decide and less on the Act in the OODA loop. It’s not about how fast you press a button, or the ergonomics of your cockpit, it’s about how well you perceive the information that’s coming to you and does that help you develop sufficient understanding of systems processes, status against the machine’s performance envelope, and the context in which it’s operating to still complete a mission despite off-nominal conditions.”

 

The suite of EDGE HMI design tools will include models that quantify situational awareness demands to enable detailed co-design between software engineers and HMI designers; composable design methods to speed and mature design implementation; and an HMI breadboard for realistic test and verification early in the design process.
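
As an illustration of what "quantifying situational awareness demands" might look like in practice, the sketch below tallies the information elements a task requires at each level of Endsley's three-level SA model (perception, comprehension, projection). The task names, element tags, and the use of Endsley's model are assumptions made for the example; DARPA has not published the EDGE models themselves.

```python
# Illustrative sketch only (the actual EDGE models are not public): tallying the
# situational-awareness demands a task places on an operator, using Endsley's
# three SA levels (perception, comprehension, projection) as an assumed framework.
from collections import Counter

# Hypothetical tasks, each listing the information elements the operator must
# track, tagged by the SA level at which that element must be understood.
TASK_DEMANDS = {
    "monitor_route": [
        ("uav_position", "perception"),
        ("deviation_cause", "comprehension"),
    ],
    "handle_sensor_dropout": [
        ("sensor_health", "perception"),
        ("coverage_gap", "comprehension"),
        ("time_to_reacquire", "projection"),
    ],
}

def sa_demand_profile(task: str) -> Counter:
    """Count how many information elements a task requires at each SA level."""
    return Counter(level for _, level in TASK_DEMANDS[task])

# A designer could compare these profiles against what the HMI actually displays
# to spot tasks whose comprehension- or projection-level needs go unsupported.
for task in TASK_DEMANDS:
    print(task, dict(sa_demand_profile(task)))
```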
