A rapidly increasing percentage of the world’s population is connected to the global information environment. At the same time, the information environment is enabling social interactions that are radically changing how, and at what rate, information spreads. Both nation-states and non-state actors have increasingly drawn upon this global information environment to promote their beliefs and further related goals, and social media has become an important medium for conducting psychological warfare for actors ranging from terrorist groups to nation-states.
In 2016, Russian influence operations on social media were reported to have altered the course of events in the U.S. by manipulating public opinion. Russia was accused of using thousands of covert human agents and automated bot accounts to spread disinformation referencing Hillary Clinton’s stolen campaign emails, amplifying their effect. The release of the emails and the use of fake Facebook and Twitter accounts were designed to undermine trust in institutions through manipulation, distortion, and disruption.
According to a 2016 study by the Pew Research Center, a majority of U.S. adults now get at least some of their news from social media. This means that when natural or man-made disasters strike, the quality of information shared online, and how it spreads, is critical; the ability of official evacuation orders to break through the noise of inaccurate or intentionally misleading posts can literally be a matter of life and death.
The COVID-19 pandemic showed that Russia and China have adapted their age-old influence and disinformation tactics to the modern era, national security experts and military leaders said. Those countries leverage U.S. laws, social media platforms, and divisions within society to their larger strategic advantage and as a way to weaken the United States. At a recent hearing before the House Armed Services Committee’s cyber and information systems subcommittee, Nina Jankowicz, a disinformation fellow at the Wilson Center, told lawmakers that adversaries are engaged in perpetual information competition. “Adversaries understand information competition is the new normal and they are constantly probing for societal fissures to exploit,” Jankowicz said.
DARPA launched the SocialSim program in 2017 to improve understanding of how online information spreads and evolves, both to develop adequate countermeasures against psychological warfare and to enable the DOD to conduct its own information operations effectively.
Computational Simulation of Online Social Behavior (SocialSim)
A simulation of the spread and evolution of online information, if accurate and at-scale, could enable a deeper and more quantitative understanding of adversaries’ use of the global information environment than is currently possible using existing approaches.
At present, the U.S. Government employs small teams of experts to speculate about how information may spread online. While these activities provide some insight, they take considerable time to orchestrate and execute, the accuracy with which they represent real-world online behavior is unknown, and their scale (in terms of the size and granularity with which populations are represented) is such that they can represent only a fraction of the real world. High-fidelity (i.e., accurate, at-scale) computational simulation of the spread and evolution of online information would support efforts to analyze adversaries’ strategic disinformation campaigns, help deliver critical information to local populations during disaster relief operations, and potentially contribute to other critical missions in the online information domain.
The goal of Computational Simulation of Online Social Behavior (SocialSim) is to develop innovative technologies for high-fidelity computational simulation of online social behavior. SocialSim will focus specifically on information spread and evolution. Current computational approaches to social and behavioral simulation are limited in this regard.
Top-down simulation approaches focus on the dynamics of a population as a whole, and model behavioral phenomena by assuming uniform or mostly-uniform behavior across that population. Such methods can easily scale to simulate massive populations, but can be inaccurate if there are specific, distinct variations in the characteristics of the population.
In contrast, bottom-up simulation approaches treat population dynamics as an emergent property of the activities and interactions taking place within a diverse population. While such approaches can enable more accurate simulation of information spread, they do not readily scale to represent large populations. SocialSim aims to develop novel approaches to address these challenges.
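To make the contrast concrete, the following minimal Python sketch (illustrative only; the rates, network size, and update rules are our assumptions, not SocialSim models) places a top-down mean-field update next to a bottom-up agent-based one. The first scales trivially but assumes everyone behaves alike; the second represents each individual and tie explicitly, capturing variation at far greater computational cost.

```python
import random

# --- Top-down: the population is one aggregate quantity. ---
def top_down(spread_rate=0.3, steps=20):
    """Mean-field model: a single equation updates the aware fraction each step."""
    aware = 1e-6  # tiny initial "seed" fraction of the population
    for _ in range(steps):
        aware += spread_rate * aware * (1 - aware)  # uniform-mixing assumption
    return aware

# --- Bottom-up: every agent and every social tie is represented explicitly. ---
def bottom_up(n_agents=10_000, ties_per_agent=8, share_prob=0.05, steps=20):
    """Agent-based model: population-level spread emerges from individual shares."""
    friends = {i: random.sample(range(n_agents), ties_per_agent) for i in range(n_agents)}
    aware = {0}  # agent 0 posts the message
    for _ in range(steps):
        newly_aware = set()
        for agent in aware:
            for friend in friends[agent]:
                if friend not in aware and random.random() < share_prob:
                    newly_aware.add(friend)
        aware |= newly_aware
    return len(aware) / n_agents

print(f"top-down aware fraction:  {top_down():.3f}")
print(f"bottom-up aware fraction: {bottom_up():.3f}")
```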
Sentiment analysis – the process of identifying positive, negative, or neutral emotion – across online communications has become a growing focus for both commercial and defense communities. Understanding the sentiment of online conversations can help businesses process customer feedback and gather insights to improve their marketing efforts. From a defense perspective, sentiment can be an important signal for online information operations to identify topics of concern or the possible actions of bad actors.
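To illustrate the basic task, here is a minimal lexicon-based sentiment scorer in Python; the tiny word lists are invented for illustration, whereas production systems use large curated lexicons or trained models:

```python
# Minimal lexicon-based sentiment scoring: count polarity-bearing words.
# The word lists below are illustrative toys, not a real sentiment lexicon.
POSITIVE = {"great", "love", "excellent", "helpful", "good"}
NEGATIVE = {"terrible", "hate", "awful", "useless", "bad"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product and the support was excellent"))  # positive
print(sentiment("oh great my flight is delayed again"))  # "positive" -- sarcasm fools it
```

As the second example shows, a naive scorer happily misreads a sarcastic complaint as praise, which is exactly the failure mode discussed next.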
The presence of sarcasm – a linguistic expression often used to communicate the opposite of what is said with an intention to insult or ridicule – in online text is a significant hindrance to the performance of sentiment analysis. Detecting sarcasm is very difficult owing largely to the inherent ambiguity found in sarcastic expressions.
“Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions, and gestures that cannot be represented in text,” said Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O). “Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available.”
SocialSim Awards
The Defense Advanced Research Projects Agency (DARPA) has allocated more than $6.7 million to a team of researchers, including three from Carnegie Mellon University, to fund research into improving the understanding of how social information travels and transforms online. The grant is one of the largest federally funded projects of its kind.
Christian Lebiere, a research psychologist in CMU’s Dietrich College of Humanities and Social Sciences, is the principal investigator for the CMU project. Additionally, Carnegie Mellon’s Cleotilde (Coty) Gonzalez, research professor of social and decision sciences, and David Plaut, professor of psychology, will work with experts in computer science, cognitive science, economics and sociology from Virginia Tech, Stanford, Claremont, Duke, Wisconsin, USC and the Institute for Human and Machine Cognition on “Homo SocioNeticus,” a key component of DARPA’s new SocialSim Program.
By developing high-fidelity computational simulations of the spread and evolution of online information, we will enable a deeper understanding of complex diffusion phenomena.
Our principal research objective is to evaluate deep learning methodologies for predicting dynamic processes at scale in various social environments (e.g., Twitter, GitHub, Reddit, YouTube). To this end, researchers will develop social simulator frameworks capable of capturing the microscopic dynamics in multiple messaging platforms. These frameworks will be tested and compared against several baselines and relevant performance metrics to reveal the accuracy, meaningfulness, and usefulness of our simulations. Finally, this project will support efforts to analyze complex real-world online scenarios such as cross-platform information cascades, strategic disinformation campaigns, pump-and-dump scenarios on digital currencies, and other critical missions in the online information domain.
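A minimal sketch of what such an evaluation harness might look like, assuming simulated and observed daily activity counts for one platform; the metric choices (RMSE and Jensen-Shannon distance) are common for comparing simulated to real behavior but are our illustration, not the program’s prescribed metrics:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def rmse(simulated: np.ndarray, observed: np.ndarray) -> float:
    """Root-mean-square error between simulated and observed daily event counts."""
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

def js_distance(simulated: np.ndarray, observed: np.ndarray) -> float:
    """Jensen-Shannon distance between the normalized activity distributions."""
    return float(jensenshannon(simulated / simulated.sum(), observed / observed.sum()))

# Example: 14 days of simulated vs. observed post counts (made-up numbers).
sim = np.array([120, 135, 150, 170, 160, 180, 210, 220, 230, 200, 190, 185, 170, 160])
obs = np.array([110, 140, 155, 165, 175, 190, 205, 215, 225, 210, 195, 180, 165, 150])
print(f"RMSE: {rmse(sim, obs):.1f} events/day")
print(f"JS distance: {js_distance(sim, obs):.3f}")
```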
“Our research is in large-scale distributed systems and social computing/computational sociology. Our research cycle involves measuring characteristics of real systems (such as online social communities, physics collaborations, or peer-to-peer systems), designing algorithms and building systems to solve problems in large distributed systems, and experimentally evaluating our solutions. Recently we’ve been using tools from distributed systems to understand human behavior patterns in online communities that are harder to detect in real life, such as unethical behaviors.”
“Being able to accurately and reliably predict the spread of information online under a wide range of conditions requires a principled account of the decision-making of large groups and its impact on social network dynamics. CMU brings a long tradition of computational modeling of human cognition using approaches ranging from neural networks to cognitive architectures,” Lebiere said.
Virginia Tech team leads federal effort to forecast the flow of information online
The project, titled Homo SocioNeticus, is a key component of DARPA’s new SocialSim Program, which will support fundamental research to develop technologies for high-fidelity simulation of online behavior at scale, and will guide the establishment of a new research community centered on pushing the limits of rigorous evaluation of human social simulations.
“For decades, cognitive science, psychology, and neuroscience have been developing models to understand decision-making on an individual level or, rarely, in small groups,” said Mark Orr, a research associate professor at the Biocomplexity Institute of Virginia Tech’s Network Dynamics and Simulation Science Laboratory and principal investigator of Homo SocioNeticus. “This team will be the first to test how today’s cutting-edge cognitive science, informed by social science, translates to networks the size of Facebook or Twitter, orders of magnitude larger than anything attempted before.”
In addition to boosting critical communications during disaster relief operations, one major potential benefit of the initiative is its capacity to help de-escalate conflicts without the use of armed force. Where adversaries seek to increase military tensions through online misinformation campaigns, this research may present a means of swinging public opinion back in favor of a peaceful resolution.
“Any time you can save lives without having to be aggressive, that’s important,” said DARPA’s SocialSim program manager, Jonathan Pfautz. “At present, the government employs small teams of experts to speculate how information may spread online, but a system that can automate that process and provide greater accuracy could focus our efforts even further.”
The program also aims to lead the way in establishing guidelines for ethically sourcing data in large-scale studies of social media usage. With government agencies and private-sector marketers scaling up their efforts to forecast how large groups of people spread information online, clear, consistent guidelines about how to obtain consent and ensure anonymity are more critical than ever.
“DARPA wants to set the example and build a vibrant research community around this big, audacious problem,” said Orr. “The ability to accurately diagnose how large populations share data may still be several years off, but in the process we’ll be establishing a firm precedent for how this type of work should be done.” More broadly, researchers say this project could provide a new framework for understanding the online “echo chambers” that have come to define today’s media landscape.
“The standard site of information consumption has shifted from centralized, largely uniform media news sources to decentralized, self-selected ‘information pockets,’ which has dramatic implications for how people understand events in the world,” said James Moody, professor of sociology at Duke University and member of the project team. “If we can effectively model how information moves across such landscapes, we may be able to help people see the blind spots in their own information sources as well as distinguish systematic distortions.”
The Homo SocioNeticus initiative will apply new methods developed at the Biocomplexity Institute of Virginia Tech to computationally model the behavior of large populations and simulate the interactions that drive information diffusion and evolution.
“Our researchers have developed ‘synthetic populations’ capable of simulating the way real-world communities interact on a massive scale,” said Chris Barrett, director of the Biocomplexity Institute and professor of computer science in Virginia Tech’s College of Engineering. “Granular representations of interacting individuals at the scale of regional and national populations, in combination with artificial intelligence approaches to cognitive processes associated with those individuals at that sort of scale, is a really big deal. When we achieve that kind of capability, it will allow study of social aspects of cognitive processes at population scale and in unprecedented detail.”
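A minimal sketch of the synthetic-population idea: agents are sampled to match published marginal statistics and then wired into a contact network with a homophily bias. The attribute categories, proportions, and parameters below are invented for illustration, not the Biocomplexity Institute’s actual methodology:

```python
import random

# Toy synthetic population: sample agents to match assumed age-group marginals.
AGE_GROUPS = ["18-29", "30-49", "50-64", "65+"]
AGE_WEIGHTS = [0.21, 0.34, 0.26, 0.19]  # assumed marginal distribution

def sample_agents(n):
    return [{"id": i, "age": random.choices(AGE_GROUPS, AGE_WEIGHTS)[0]} for i in range(n)]

def wire_network(agents, ties_per_agent=5, homophily=0.7):
    """Connect agents, preferring ties within the same age group (homophily)."""
    by_age = {}
    for a in agents:
        by_age.setdefault(a["age"], []).append(a["id"])
    all_ids = [a["id"] for a in agents]
    edges = set()
    for a in agents:
        for _ in range(ties_per_agent):
            pool = by_age[a["age"]] if random.random() < homophily else all_ids
            other = random.choice(pool)
            if other != a["id"]:
                edges.add(tuple(sorted((a["id"], other))))
    return edges

population = sample_agents(10_000)
network = wire_network(population)
print(f"{len(population)} agents, {len(network)} ties")
```

Diffusion and cognition models can then run on top of such a population, which is what allows population-scale questions to be asked of individual-level mechanisms.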
The Biocomplexity Institute regularly conducts research through federal, state, and industry grants and contracts. Notably, this award is part of the institute’s portfolio of research programs that has received more than $103 million in new awards in the first half of FY 2018. For more information on the SocialSim initiative, visit the official DARPA project page.
SocialSim researchers demonstrate deep learning model capable of accurately classifying sarcasm in textual communications, addressing online sentiment analysis roadblock
Researchers from the University of Central Florida working on DARPA’s Computational Simulation of Online Social Behavior (SocialSim) program are developing a solution to this challenge in the form of an AI-enabled “sarcasm detector.” The researchers have demonstrated an interpretable deep learning model that identifies words from input data – such as Tweets or online messages – that exhibit crucial cues for sarcasm, including sarcastic connotations or negative emotions. Using recurrent neural networks and attention mechanisms, the model tracks dependencies between the cue-words and then generates a classification score, indicating whether or not sarcasm is present.
“Essentially, the researchers’ approach is focused on discovering patterns in the text that indicate sarcasm. It identifies cue-words and their relationship to other words that are representative of sarcastic expressions or statements,” noted Kettler.
The researchers’ approach is also highly interpretable, making it easier to understand what is happening under the hood of the model. Many deep learning models are regarded as “black boxes,” offering few clues to explain their outputs or predictions. Explainability is key to building trust in AI-enabled systems and enabling their use across an array of applications. Existing deep learning network architectures often require additional visualization techniques to provide a certain level of interpretability.
To avoid this, the SocialSim researchers employed inherently interpretable self-attention, which allows the elements of the input data that are crucial for a given task to be easily identified. The researchers’ capability is also language-agnostic, so it can work with any language model that produces word embeddings. The team demonstrated the effectiveness of their approach by achieving state-of-the-art results on multiple datasets from social networking platforms and online media: the model achieved a nearly perfect sarcasm detection score on a major Twitter benchmark dataset as well as state-of-the-art results on four other significant datasets. The team leveraged publicly available datasets for this demonstration, including the Sarcasm Corpus V2 Dialogues dataset, which is part of the Internet Argument Corpus, as well as a news headline dataset drawn from The Onion and HuffPost.
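The following PyTorch sketch illustrates the general architecture the article describes: a recurrent encoder with a self-attention layer whose weights expose the cue-words behind each prediction. It is a minimal reconstruction under assumed hyperparameters (vocabulary, embedding, and hidden sizes; a single attention head), not the UCF team’s published model:

```python
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    """Illustrative BiLSTM + self-attention sarcasm classifier (assumed sizes)."""
    def __init__(self, vocab_size=30_000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # any word embeddings work here
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_dim, 1)     # one relevance score per token
        self.classify = nn.Linear(2 * hidden_dim, 1)       # sarcastic vs. not

    def forward(self, token_ids):                          # token_ids: (batch, seq_len)
        states, _ = self.encoder(self.embed(token_ids))    # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn_score(states), dim=1)  # attention over tokens
        context = (weights * states).sum(dim=1)            # weighted sentence vector
        prob = torch.sigmoid(self.classify(context)).squeeze(-1)
        # Returning the attention weights is what makes the model interpretable:
        # high-weight tokens are the cue-words that drove the classification.
        return prob, weights.squeeze(-1)

model = SarcasmClassifier()
tokens = torch.randint(0, 30_000, (1, 12))  # one fake 12-token message
prob, attention = model(tokens)
print(f"P(sarcasm) = {prob.item():.2f}")
print("per-token attention:", attention.detach().squeeze(0).tolist())
```

Because the attention weights sum to one across the tokens, inspecting them directly shows which words pulled the prediction toward sarcasm, which is the interpretability property described above.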
DARPA’s SocialSim program is focused on developing innovative technologies for high-fidelity computational simulation of online social behavior. A simulation of the spread and evolution of online information could enable a deeper and more quantitative understanding of adversaries’ use of the global information environment. It could also aid in efforts to deliver critical information to local populations during disaster relief operations, or contribute to other critical missions in the online information domain.
Accurately detecting sarcasm in text is only a small part of developing these simulation capabilities due to the extremely complex and varied linguistic techniques used in human communication. However, knowing when sarcasm is being used is valuable for teaching models what human communication looks like, and subsequently simulating the future course of online content.
References and Resources also include:
https://www.darpa.mil/program/computational-simulation-of-online-social-behavior
https://www.cmu.edu/dietrich/news/news-stories/2017/december/darpa-grant-online-information.html
https://vtnews.vt.edu/articles/2017/11/bi-forecasting-info-flow-online0.html