The social sciences can play important roles in assisting military planners and decision-makers who are trying to understand complex human social behaviors and systems, potentially facilitating a wide range of missions including humanitarian, stability, and counter-insurgency operations.
Current social science approaches to studying behavior rely on a variety of modeling methods—both qualitative and quantitative—which seek to make inferences about the causes of social phenomena on the basis of observations in the real world. Yet little is known about how accurate these methods and models really are, let alone whether the connections they observe and predict are truly matters of cause and effect or mere correlations.
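The correlation-versus-causation trap the program targets is easy to reproduce. The sketch below (illustrative only, not from the article) generates two variables that never influence each other yet correlate strongly because a hidden common cause drives both:

```python
import random

random.seed(0)

def pearson(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Hidden common cause z drives both x and y; x never affects y.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

print(round(pearson(x, y), 2))  # strong correlation, zero causation
```

A model that mistakes this correlation for a causal link would predict that intervening on x changes y, which in this constructed world it never would; only a testbed with known ground truth can expose that error.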
“We use these models because our ability to think through the behavior of complex systems, and the consequences of various assumptions about individual behavior, is kind of limited,” explains Adam Russell, program manager in DARPA’s Defense Sciences Office. “And so these models can help us crank through what would actually happen given basic assumptions.”
To improve knowledge of social science modeling’s capabilities and limitations, DARPA announced its Ground Truth program in 2017. The program aims to use artificial, yet plausible, computer-based social-system simulations with built-in “ground truth” causal rules as testbeds to validate the accuracy of various social science modeling methods.
“The real world operates according to dynamic, interactive, non-linear, and sometimes adaptive and changing rules that we don’t understand very well, all of which limit our efforts to determine causality in social systems,” said Russell. “We want to develop computationally simulated worlds where we create and therefore understand all the causal processes and rules. Then we can test a variety of social science modeling methods to see how well they identify the known causal processes built into the simulation.”
DARPA’s Ground Truth program will create a sort of war game for testing the validity of social science models.
Crucially, the teams creating the simulations know the built-in causal rules, but the teams modeling them do not. A further goal of the program is to use a series of Ground Truth challenges to explore new multi-disciplinary teaming approaches for enabling rapid, solution-oriented social science modeling capabilities.
“Ground Truth will solicit one group of researchers to create social simulations with associated ground truth rules, known only to them, while challenging another group of researchers to create innovative teaming approaches to ‘discover’ the rules in those simulations,” the agency said. “DARPA and its independent test and evaluation team will score the modeling teams’ abilities to identify and predict causal ground truth in different simulations with different degrees of social complexity.”
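A toy analogue of that setup is sketched below. It is hypothetical, not DARPA's actual code: the "simulation team" builds an agent world governed by a hidden adoption threshold, and only the observable time series would be handed to the "modeling team" charged with discovering the rule:

```python
import random

random.seed(1)

HIDDEN_THRESHOLD = 0.3  # the ground-truth rule, known only to the sim team

def step(adopted, n):
    """Each non-adopter adopts when the population adoption share
    exceeds the hidden threshold, with idiosyncratic noise."""
    share = sum(adopted) / n
    return [a or (share > HIDDEN_THRESHOLD and random.random() < 0.5)
            for a in adopted]

def run(n=100, seeds=35, ticks=10):
    adopted = [i < seeds for i in range(n)]
    history = [sum(adopted)]
    for _ in range(ticks):
        adopted = step(adopted, n)
        history.append(sum(adopted))
    return history  # the observable data shared with modelers

print(run())
```

The modeling team sees only the adoption counts per tick; scoring then asks whether its inferred rule matches the threshold the simulation team wrote in.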
“If we can take a principled approach to using the simulations – building increasingly complex simulations – that might give us an idea when facing a problem in the real world. If we can assess the complexity of the problem based on things we learn in ground truth, we have a much better sense of the kind of approach we’ll need to take to understand and potentially predict that complex social system.”
“What would happen if we grew our own world?” asks Russell. In Ground Truth, simulations serve as testbeds: artificial worlds that can be reverse-engineered by researchers who, in turn, work like detectives to unpack how and why a simulated agent acted the way it did. The conditions that kick those behaviors into action are the titular ground truth.
Then teams will test their methods of prediction. One group will build a social simulation based on rules known only to them; another team will be challenged to come up with an approach for discovering those rules. Russell imagines the first team announcing an impending shock to its simulation by, say, adding an influx of new agents into the world or removing a key resource. The opposing team will then model the impact of this disruption and compare its predictions with the true outcome; DARPA and an independent test and evaluation team will score the results.
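The article does not specify the scoring metric, but the comparison it describes could be as simple as measuring forecast error against the simulation's true post-shock trajectory. A minimal sketch, with illustrative numbers:

```python
def mean_abs_error(predicted, actual):
    """Average absolute gap between a team's forecast and the truth;
    lower scores suggest the team captured more of the hidden rules."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# True post-shock population counts vs. two hypothetical teams' forecasts
truth  = [100, 80, 65, 55, 50]
team_a = [100, 85, 70, 60, 52]       # closely tracked the hidden rules
team_b = [100, 100, 100, 100, 100]   # assumed the shock had no effect

print(mean_abs_error(team_a, truth))  # 3.4
print(mean_abs_error(team_b, truth))  # 30.0
```

Running the same comparison across simulations of increasing social complexity is what lets the evaluators map where each modeling method breaks down.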
If successful, Ground Truth will demonstrate a principled approach for testing the power and limitations of various social science modeling methods; explore new modeling approaches for describing and predicting different kinds of complex social systems; and inform future modeling investments for research and operations.