
DARPA’s In the Moment (ITM): Revolutionizing Military Operations with Algorithm-Driven Decision Making

Introduction:

The realm of military operations is a complex and dynamic landscape where quick and informed decision-making can be the difference between success and failure. Recognizing the need for advanced decision-making capabilities, the Defense Advanced Research Projects Agency (DARPA) has embarked on an ambitious initiative called “In the Moment (ITM).” This groundbreaking endeavor aims to revolutionize military operations by harnessing the power of algorithm-driven decision-making. In this article, we delve into the innovative ITM initiative and explore how it has the potential to transform the way military personnel operate on the battlefield.


Military operations – from combat to medical triage to disaster relief – require complex and rapid decision-making in dynamic situations where there is often no single right answer. Two seasoned military leaders facing the same battlefield scenario, for example, may make different tactical decisions when confronted with the same difficult options.


Traditionally, the military has relied on human judgment to make these decisions. Human judgment is fallible, however, and accurate decisions are hard to reach in complex, rapidly changing environments. To address this challenge, the US Defense Advanced Research Projects Agency (DARPA) launched a new program in March 2022 that seeks to develop algorithms that can make decisions as quickly and accurately as humans, and that can assume decision-making responsibilities in difficult circumstances.

Algorithmic Insights: Navigating a Data-Driven World and Revolutionizing Military Decision-Making

The Rise of Algorithm-Driven Decision-Making:

An algorithm is a step-by-step procedure or set of rules designed to solve a specific problem or accomplish a particular task. In the context of decision making, an algorithm is a systematic approach that guides the process of selecting the best course of action among several possible alternatives. It involves breaking down a complex decision problem into smaller, manageable steps and using predefined rules or calculations to analyze available data, evaluate options, and ultimately arrive at a decision.

In algorithmic decision making, various factors and criteria are considered, and mathematical or logical operations are performed to determine the optimal choice or course of action. Algorithms can incorporate a wide range of techniques, such as statistical analysis, machine learning, optimization methods, and heuristic approaches, to process and interpret data and make informed decisions. In a military context, this approach offers several advantages:

i. Speed and Efficiency: Algorithms can rapidly analyze vast volumes of data, enabling real-time decision making in dynamic environments.

ii. Accuracy and Precision: By leveraging advanced algorithms, military personnel can make decisions based on data-driven insights, reducing the chances of errors or biases.

iii. Adaptability and Flexibility: Algorithms can continuously learn from new data, enabling the adaptation of strategies and responses to evolving situations.
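To make the definitions above concrete, here is a minimal sketch of a weighted-criteria decision procedure in Python. The options, criteria, and weights are invented for illustration and are not drawn from any ITM system.

```python
# Minimal weighted-criteria decision sketch. The options, criteria, and
# weights below are invented for illustration; they are not ITM specifics.

def score(option, weights):
    """Combine an option's criterion values into one weighted score."""
    return sum(weights[c] * v for c, v in option["criteria"].items())

def decide(options, weights):
    """Return the option with the highest weighted score."""
    return max(options, key=lambda o: score(o, weights))

options = [
    {"name": "evacuate by air",    "criteria": {"speed": 0.9, "safety": 0.4, "economy": 0.2}},
    {"name": "evacuate by ground", "criteria": {"speed": 0.5, "safety": 0.7, "economy": 0.8}},
    {"name": "treat in place",     "criteria": {"speed": 0.8, "safety": 0.5, "economy": 0.9}},
]
weights = {"speed": 0.5, "safety": 0.3, "economy": 0.2}  # higher = matters more

print(decide(options, weights)["name"])  # -> "treat in place" (score 0.73)
```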


In the Moment (ITM) program

DARPA announced the In the Moment (ITM) program, which seeks to quantify the alignment of algorithms with trusted human decision-makers in difficult domains where there is no agreed-upon right answer. ITM aims to evaluate and build trusted algorithmic decision-makers for mission-critical Department of Defense (DoD) operations.


The program will focus on developing algorithms that can learn from experience and adapt to changing conditions. ITM is designed to address a critical challenge facing the military: the need to make decisions quickly and accurately in situations where there is no time for human deliberation. This is particularly true in combat, where soldiers may need to make life-or-death decisions in the blink of an eye.


“ITM is different from typical AI development approaches that require human agreement on the right outcomes,” said Matt Turek, ITM program manager. “The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly requires human agreement to create ground-truth data.”


To illustrate, self-driving car algorithms can be trained against ground truth for right and wrong driving responses, derived from traffic signs and rules of the road that don’t change. One feasible approach in those scenarios is hard-coding risk values into the simulation environment used to train self-driving car algorithms.


“Baking in one-size-fits-all risk values won’t work from a DoD perspective because combat situations evolve rapidly, and commander’s intent changes from scenario to scenario,” Turek said. “The DoD needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable. Difficult decisions are those where trusted decision-makers disagree, no right answer exists, and uncertainty, time-pressure, and conflicting values create significant decision-making challenges.”


ITM is taking inspiration from the medical imaging analysis field, where techniques have been developed for evaluating systems even when skilled experts may disagree on ground truth. For example, the boundaries of organs or pathologies can be unclear or disputed among radiologists. To overcome the lack of a true boundary, an algorithmically drawn boundary is compared to the distribution of boundaries drawn by human experts. If the algorithm’s boundary lies within the distribution of boundaries drawn by human experts over many trials, the algorithm is said to be comparable to human performance.
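As a toy version of that comparison, the sketch below reduces each expert annotation to a single scalar (say, a segmented area) and asks whether the algorithm’s output falls inside the central spread of the expert distribution. All numbers are fabricated for illustration; ITM’s actual framework is the subject of the research.

```python
import numpy as np

# Toy version of the medical-imaging evaluation idea: with no single ground
# truth, compare the algorithm's output to the spread of expert outputs.
# Each annotation is reduced here to one scalar (e.g., segmented area in
# mm^2); all numbers are fabricated.

expert_areas = np.array([412.0, 398.5, 430.2, 405.1, 421.7, 415.3, 401.8, 409.9])
algorithm_area = 418.0

# "Comparable to human performance" in this simplified sense: the output
# lies within the central 95% of the expert distribution.
lo, hi = np.percentile(expert_areas, [2.5, 97.5])
print(f"expert interval: [{lo:.1f}, {hi:.1f}]")
print("within expert distribution:", lo <= algorithm_area <= hi)
```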


“Building on the medical imaging insight, ITM will develop a quantitative framework to evaluate decision-making by algorithms in very difficult domains,” Turek said. “We will create realistic, challenging decision-making scenarios that elicit responses from trusted humans to capture a distribution of key decision-maker attributes. Then we’ll subject a decision-making algorithm to the same challenging scenarios and map its responses into the reference distribution to compare it to the trusted human decision-makers.”


The program has four technical areas:

i. Decision-maker characterization: developing techniques that identify and quantify key decision-maker attributes in difficult domains.

ii. Alignment scoring: creating a quantitative alignment score between a human decision-maker and an algorithm in ways that are predictive of end-user trust.

iii. Evaluation: designing and executing the program evaluation.

iv. Policy and practice integration: providing legal, moral, and ethical expertise to the program; supporting the development of future DoD policy and concepts of operations (CONOPS); overseeing development of an ethical operations process (DevEthOps); and conducting outreach events to the broader policy community.
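An alignment score of the kind described in the second technical area could take many forms. The sketch below is one hypothetical version: each decision-maker is represented as a profile of attributes (the names echo attributes mentioned later in this article, such as risk tolerance and process focus), and alignment is scored as the cosine similarity between profiles. Nothing here reflects ITM’s actual scoring design.

```python
import math

# Hypothetical alignment score: each decision-maker is a profile of
# decision-making attributes; alignment is the cosine similarity between
# profiles. The attribute names and values are illustrative assumptions.

def alignment(human: dict, algo: dict) -> float:
    keys = sorted(human)
    h = [human[k] for k in keys]
    a = [algo.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(h, a))
    norm = math.sqrt(sum(x * x for x in h)) * math.sqrt(sum(y * y for y in a))
    return dot / norm if norm else 0.0

medic          = {"risk_tolerance": 0.30, "process_focus": 0.80, "plan_flexibility": 0.60}
aligned_agent  = {"risk_tolerance": 0.35, "process_focus": 0.75, "plan_flexibility": 0.55}
baseline_agent = {"risk_tolerance": 0.90, "process_focus": 0.20, "plan_flexibility": 0.90}

print(f"aligned agent:  {alignment(medic, aligned_agent):.3f}")   # close to 1.0
print(f"baseline agent: {alignment(medic, baseline_agent):.3f}")  # noticeably lower
```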


ITM is a 3.5-year program encompassing two phases, with the potential for a third phase devoted to maturing the technology with a transition partner. The first phase is 24 months long and focuses on small-unit triage as the decision-making scenario. Phase 2 is 18 months long and increases decision-making complexity by focusing on mass-casualty events.


To evaluate the whole ITM process, multiple human and algorithmic decision-makers will be presented with scenarios from the medical triage (Phase 1) or mass-casualty (Phase 2) domains. Algorithmic decision-makers will include an aligned algorithmic decision-maker with knowledge of key human decision-making attributes and a baseline algorithmic decision-maker with no knowledge of those attributes. A human triage professional will also be included as an experimental control.


“We’re going to collect the decisions, the responses from each of those decision-makers, and present those in a blinded fashion to multiple triage professionals,” Turek said. “Those triage professionals won’t know whether the response comes from an aligned algorithm or a baseline algorithm or from a human. And the question that we might pose to those triage professionals is which decision-maker would they delegate to, providing us a measure of their willingness to trust those particular decision-makers.”
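A toy simulation of this blinded protocol might look like the sketch below, with fabricated decision records and a stand-in preference rule; the point is only the shape of the experiment: blinded records go in, delegation votes per hidden source come out.

```python
import random
from collections import Counter

# Toy simulation of the blinded delegation experiment. The decision records
# and the evaluators' preference rule are fabricated for illustration.

records = [
    ("aligned algorithm",  "control hemorrhage first, then manage airway"),
    ("baseline algorithm", "treat casualties in order of arrival"),
    ("human control",      "control hemorrhage first, then manage airway"),
]

def run_review(records, n_evaluators=30, seed=0):
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_evaluators):
        blinded = records[:]
        rng.shuffle(blinded)  # evaluators never see which source is which
        # Stand-in rule: evaluators prefer hemorrhage-first decisions and
        # pick randomly among records that follow that rule.
        preferred = [src for src, decision in blinded if "hemorrhage first" in decision]
        votes[rng.choice(preferred)] += 1
    return votes  # delegation votes per (hidden) source

print(run_review(records))
```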


DARPA has selected performers for the ‘In the Moment’ AI program, which is intended to make difficult medical triage decisions using algorithms.

Raytheon, Kitware, Parallax, CACI International, and the University of Maryland have been selected as performers for DARPA’s ‘In the Moment’ (ITM) program, which is developing artificial intelligence (AI) to make difficult medical triage decisions in austere environments and in mass-casualty events.

The first phase of the ITM program will look at medical triage for small military units in austere environments, using AI to make decisions about treatment, while the second phase will consider approaches to using the technology in mass-casualty events.

Raytheon BBN Technologies and Soar Technology, Inc. will collaborate to create decision-maker characterization approaches that identify and quantify critical human decision-maker traits in challenging domains.

Kitware, Inc. and Parallax Inc. will create algorithmic decision-makers that match essential characteristics of trusted human decision-makers.

“Key attributes might include how an algorithm evaluates a situation, how it relies on domain knowledge, how it responds to time pressures, and what principles or values it uses to prioritize care,” said Turek.

CACI International Inc. will design and implement the program evaluation, focusing on how essential human characteristics might contribute to trustworthy delegation of decision-making.

The University of Maryland Applied Research Laboratory for Intelligence and Security and the Institute for Defense Analyses will be in charge of policy/practice integration and outreach, with ethical, legal, and societal implications (ELSI) specialists advising throughout the research process.

Turek predicts that ITM advances will eventually enable fully automated and semi-automated decision-making, with humans retaining the option to veto the algorithm.

In March, researchers at Edge Hill University launched a new AI-powered drone project for battlefield triage. Project ATRACT, which stands for A Trustworthy Robotic Autonomous system to support Casualty Triage, aims to create an aerial drone that can aid and speed up triage in the crucial post-trauma minutes that shape survival chances on the battlefield.

Parallax Advanced Research wins DARPA In the Moment Award

Parallax Advanced Research, a nonprofit research institute, has been awarded a $4.067 million grant under DARPA’s In the Moment (ITM) program. The ITM initiative aims to explore how humans can create trustworthy artificial intelligence (AI) systems for challenging decision-making scenarios with no definitive right answer. Parallax’s research team is working specifically on ITM Technical Area 2 (TA2), focusing on developing human-aligned algorithmic decision-makers capable of adapting to different decision-makers and demonstrating key trust-supporting decision-making attributes. The team is collaborating with Drexel University and Knexus Research Corporation on the ITM project.

The primary focus of Parallax’s research under ITM is on human-aligned decision-making in situations like small-unit triage in austere environments and mass casualty care. These situations require quick decisions with limited resources, where there is often no universally correct answer, leading to disagreements even among experts. Parallax is combining various complex decision-making technologies based on AI and machine learning to make decisions in medical triage scenarios when trained medical personnel are not available.

One notable development is the Trustworthy Algorithmic Delegate (TAD), led by Dr. Matt Molineaux, director of AI and Autonomy at Parallax. TAD employs an innovative Explainable Case-Based Reasoning (ECBR) approach to difficult decision-making, emulating human decision-making processes to make trustworthy decisions. The goal is to build trust in the AI decision-makers among human operators, aligning with the consensus of various experts.

TAD’s application extends to assisting less experienced medics in situations like mass casualty care or small-unit triage, potentially saving lives in challenging scenarios with limited medical personnel.

Parallax’s ITM research is divided into two phases: first, ensuring alignment with the decision-making variability of trusted human decision-makers, and second, aligning the AI system with a specific trusted human decision-maker.

The involvement of the Naval Medical Research Unit – Dayton (NAMRU-D) in the project provides crucial subject-matter expertise in battlefield medicine, particularly in the context of medical triage efforts. If successful, TAD could be transitioned to the armed services, significantly impacting the Department of Defense’s medical care capabilities.


However, it’s important to acknowledge the potential risks associated with ITM:

Loss of human control: If ITM algorithms become too powerful or are given too much autonomy, there is a risk that humans could lose control over decision-making. This raises concerns about accountability, as decisions made solely by algorithms may lack the nuanced judgment and ethical considerations that human operators can provide. There is a need to strike a balance between the capabilities of ITM algorithms and human oversight to ensure responsible and accountable decision-making.

Bias: ITM algorithms could be biased, which could lead to unfair or discriminatory decisions. This is a particular concern if the algorithms are trained on data that is itself biased, reflecting societal prejudices or systemic inequalities. Careful attention must be paid to data collection, algorithm development, and ongoing monitoring to mitigate the risk of bias and ensure that decision-making processes are fair and equitable.

Cybersecurity: ITM algorithms could be vulnerable to cyberattacks, which could compromise their integrity or functionality. The potential consequences of a cybersecurity breach in military operations are significant, as it could lead to compromised data, unauthorized access, or even manipulation of decision-making processes. Robust cybersecurity measures and constant vigilance are essential to protect the integrity and reliability of ITM algorithms and maintain the trust in their capabilities.

As the development and implementation of ITM algorithms progress, it is crucial to address these potential risks and challenges through rigorous testing, oversight, and ongoing refinement of the technology. By embracing a thoughtful and responsible approach, we can harness the transformative potential of ITM while ensuring that it aligns with ethical principles and enhances the safety and effectiveness of military operations.


Awards

Raytheon BBN, part of RTX (Raytheon Technologies Corporation), has been awarded a contract by DARPA (Defense Advanced Research Projects Agency) to support the “In The Moment” (ITM) program. ITM aims to develop the foundations for trustworthy algorithms capable of making independent decisions in dynamic, uncontrolled environments, such as mass casualty triage and disaster relief situations. The project seeks to understand how human experts, such as medical professionals and first responders, make complex decisions and assess trust in the decisions of others.

DARPA is bringing together multiple teams to collaborate on this program. Other teams will focus on the development of prototype AI decision-makers that start with baseline knowledge and can then be tuned to match a set of target attributes. The research products from this program will be integrated and evaluated to determine how well the algorithmic agents were able to make decisions consistent with the target human attributes when faced with difficult scenarios. The program will also test whether human experts trust these aligned agents over the baseline agents or other actual humans. In these program evaluations of trust, the human experts will be shown a record of decisions in difficult scenarios without knowing whether the decision-maker was an AI or a human.

Raytheon BBN, in collaboration with Kairos Research, MacroCognition, and Valkyries Austere Medical Solutions, will use cognitive interviewing techniques to design scenario-based experiments, allowing AI systems to adapt to user-specific attributes and domains.

“Because the way we make decisions varies from person to person, it’s unlikely that a one-size-fits-all trusted AI model exists,” said Leung. “Instead, in theory, we should be able to create AI systems that adapt to the user and domain. Decisions are difficult because of uncertainty and trade-offs between competing goals. We want to be able to tune an AI’s attributes such as risk tolerance, process focus, or willingness to change plans to better match a user or a group of users.”
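As one hedged reading of what “tuning an AI’s attributes” might mean in practice, the sketch below exposes a single risk-tolerance knob that changes which option a toy agent picks. The options, numbers, and utility rule are invented for this example.

```python
# Toy "tunable attribute" sketch: a risk-tolerance knob shifts the agent's
# choice. Options, values, and the utility rule are invented for illustration.

def choose(options, risk_tolerance):
    """risk_tolerance in [0, 1]: 0 = fully risk-averse, 1 = risk-seeking."""
    def utility(o):
        return o["benefit"] - (1.0 - risk_tolerance) * o["risk"]
    return max(options, key=utility)

options = [
    {"name": "rapid evacuation",  "benefit": 0.9, "risk": 0.7},
    {"name": "stabilize on site", "benefit": 0.6, "risk": 0.2},
]

print(choose(options, risk_tolerance=0.2)["name"])  # cautious -> "stabilize on site"
print(choose(options, risk_tolerance=0.9)["name"])  # bolder   -> "rapid evacuation"
```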

Researchers from Drexel University are collaborating with Parallax Advanced Research Corporation, Knexus Research Corporation, and the U.S. Naval Medical Research Unit – Dayton to train and test these AI algorithms. As part of the Parallax team, Drexel’s researchers will apply their expertise in explainable artificial intelligence and case-based reasoning to extract, train, augment, and test the AI system, which Parallax calls the Trustworthy Algorithmic Delegate and which combines various AI components.

The Drexel team is led by Rosina Weber, PhD, a professor in the College of Computing & Informatics and an expert in case-based reasoning and explainable AI. Combined, these techniques enable the program to produce justifications for its decisions, which are made on the basis of previous similar experiences. “A case-based reasoning approach is ideal for a technological challenge like this, because there is a fairly clear justification for the decisions the program is making,” Weber said. “This makes it easier for the human decision-maker to understand its logic — which is expected to help make the human decision maker willing to delegate to the algorithm.”

Case-based reasoning differs from other artificial intelligence methods, such as neural networks, which “train” on massive amounts of input data and draw on the patterns extracted from that data to produce an output, though the precise reasoning behind that output may be difficult to define. By contrast, case-based reasoning resembles the process of referencing legal precedent when making a legal argument. The program is trained on previous situations, or cases, which are essential units of knowledge representation; these can be augmented with contextual scenarios and domain knowledge while still benefiting from other machine learning techniques. From these previous cases the program can generalize to produce a decision for a new situation using only the local information that is available.
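A minimal case-based reasoning sketch, assuming a tiny hand-built case base and simple Euclidean similarity over numeric features, illustrates why the approach lends itself to explanation: the retrieved precedent doubles as the justification. This is illustrative only, not Drexel’s or Parallax’s implementation.

```python
import math

# Minimal case-based reasoning sketch: retrieve the most similar prior case
# and reuse its decision, citing the precedent as the justification. The
# case base, features, and triage categories are invented for illustration.

case_base = [
    {"features": {"pulse": 130, "breathing": 1, "bleeding": 1}, "decision": "immediate"},
    {"features": {"pulse": 85,  "breathing": 1, "bleeding": 0}, "decision": "delayed"},
    {"features": {"pulse": 0,   "breathing": 0, "bleeding": 0}, "decision": "expectant"},
]

def distance(a, b):
    """Euclidean distance over shared numeric features."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def decide(new_case):
    nearest = min(case_base, key=lambda c: distance(c["features"], new_case))
    return nearest["decision"], f"most similar prior case: {nearest['features']}"

decision, justification = decide({"pulse": 125, "breathing": 1, "bleeding": 1})
print(decision)       # -> "immediate"
print(justification)  # the retrieved precedent is the explanation
```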

The team plans to test the Trustworthy Algorithmic Delegate in two phases. The first will focus on aligning the program’s process with a group of trusted human decision-makers. The second, more complex phase will look at how the program can align with one specific trusted human decision-maker.

“The idea is that, where necessary, the human operator will trust that the AI decision-maker will make decisions that align with what varied experts think should happen,” said Viktoria Greanya, PhD, chief scientist at Parallax. “The assumption is that everybody makes their decisions in a different manner, and the AI should be able to align with the specific person who’s delegating to the algorithmic decision-maker.”

If it’s successful, the program is also intended to produce a framework for creating other algorithms that can express key attributes that are aligned with trusted humans, according to DARPA.

This project aims to test the AI in complex situations, including triage for small military units and mass-casualty events.


Conclusion:

DARPA’s In the Moment (ITM) initiative represents a significant leap forward in the realm of military decision making. By leveraging the power of algorithms, ITM has the potential to enhance situational awareness, improve command and control systems, and optimize mission planning. While challenges and ethical considerations remain, the future of algorithm-driven decision making in military operations is promising. As DARPA continues to push the boundaries of innovation, ITM stands as a beacon of hope for revolutionizing military operations and ultimately ensuring the safety and success of military personnel on the battlefield.


References and Resources also include:

https://drexel.edu/news/archive/2023/October/DARPA-ITM-CCI-decision-making
