The conditions under which our Armed Forces conduct operations are rapidly changing with the spread of blogs, social networking sites, and media‐sharing technology (such as YouTube), and further accelerated by the proliferation of mobile technology. Changes to the nature of conflict resulting from the use of social media are likely to be as profound as those resulting from previous communications revolutions. DARPA has an interest in addressing this new dynamic and understanding how social network communication affects events on the ground as part of its mission of preventing strategic surprise.
"Events of strategic as well as tactical importance to our Armed Forces are increasingly taking place in social media space. We must, therefore, be aware of these events as they are happening and be in a position to defend ourselves within that space against adverse outcomes," said Rand Waltzman, DARPA program manager.
For example, in one case rumors about the location of a certain individual began to spread in social media space, and calls for storming the rumored location reached a fever pitch. By chance, responsible authorities were monitoring the social media, detected the crisis building, sent out effective messaging to dispel the rumors and averted a physical attack on the rumored location. This was one of the first incidents where a crisis was (1) formed, (2) observed and understood in a timely fashion, and (3) defused by timely action, entirely within the social media space.
DARPA launched its SMISC program in 2011 to examine ways social networks could be used for propaganda under Military Information Support Operations (MISO), formerly known as psychological operations. The effective use of social media has the potential to help the Armed Forces better understand the environment in which they operate and to allow more agile use of information in support of operations.
The general goal of the Social Media in Strategic Communication (SMISC) program is to develop a new science of social networks built on an emerging technology base. Through the program, DARPA seeks to develop tools to help identify misinformation or deception campaigns and counter them with truthful information, reducing adversaries’ ability to manipulate events.
To accomplish this, SMISC focused research on linguistic cues, patterns of information flow and detection of sentiment or opinion in information generated and spread through social media. Researchers also attempted to track ideas and concepts to analyze patterns and cultural narratives. If successful, they should be able to model emergent communities and analyze narratives and their participants, as well as characterize the generation of automated content, such as by bots, in social media and crowdsourcing.
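Sentiment detection from linguistic cues can be illustrated, in a drastically simplified form, by a lexicon-based scorer. The word lists and scoring rule below are illustrative assumptions for demonstration, not resources or methods from any SMISC project:

```python
# Minimal lexicon-based sentiment scorer -- a toy sketch of linguistic-cue
# analysis. The lexicons here are tiny illustrative samples.
POSITIVE = {"good", "great", "support", "love", "safe"}
NEGATIVE = {"bad", "attack", "riot", "hate", "danger"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: > 0 leans positive, < 0 leans negative."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Great news, the area is safe!"))   # positive (1.0)
print(sentiment_score("Calls to attack spread hate."))    # negative (-1.0)
```

Real systems replace the hand-built lexicon with trained classifiers over far richer features, but the input/output shape is the same: text in, polarity score out.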
SMISC researchers will create a closed and controlled environment where large amounts of data are collected, with experiments performed in support of development and testing. One example of such an environment might be a closed social media network of 2,000 to 5,000 people who have agreed to conduct social media-based activities in this network and agree to participate in required data collection and experiments. This network might be formed within a single organization, or span several. Another example might be a role-player game where use of social media is central to that game and where players have again agreed to participate in data collection and experiments.
Some of the research projects funded by the SMISC program included studies that analyzed the Twitter followings of Lady Gaga and Justin Bieber among others; investigations into the spread of Internet memes; a study by the Georgia Tech Research Institute into automatically identifying deceptive content in social media with linguistic cues; and “Modeling User Attitude toward Controversial Topics in Online Social Media”—an IBM Research study that tapped into Twitter feeds to track responses to topics like “fracking” for natural gas.
Several studies addressed the automatic assessment of how well different people in social networks knew one another, by analyzing the frequency, tone and type of interaction between users. Such research could have applications in the automated analysis of bulk surveillance metadata, including the controversial collection of US citizens' phone metadata revealed by Snowden.
A similarly titled project out of the University of Southern California, "The Role of Social Media in the Discussion of Controversial Topics", studied the behavior of Twitter users posting about a 2012 vote in California on measures such as raising taxes, genetically modified organisms and the death penalty. "Our findings suggest Twitter is primarily used for spreading information to like-minded people rather than debating issues," the authors wrote in their paper on the project.
Several of the DoD-funded projects went further than simple observation, instead engaging directly with social media users and analyzing their responses.
One of multiple studies looking into how to spread messages on the networks, titled “Who Will Retweet This? Automatically Identifying and Engaging Strangers on Twitter to Spread Information” did just this. The researchers explained: “Since everyone is potentially an influencer on social media and is capable of spreading information, our work aims to identify and engage the right people at the right time on social media to help propagate information when needed.”
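Identifying "the right people" could, in a much-simplified form, be framed as scoring candidate users on a few activity features. The features, weights and data below are illustrative assumptions; the actual study used trained models over much richer signals:

```python
# Sketch of ranking strangers by likelihood of retweeting, in the spirit
# of "Who Will Retweet This?". Features and weights are assumptions.
def retweet_score(user):
    """user: dict of simple activity features, each roughly in [0, 1]."""
    score = 0.0
    score += 0.5 * user.get("retweet_ratio", 0.0)   # past retweeting habit
    score += 0.3 * user.get("topic_overlap", 0.0)   # interest match
    score += 0.2 * min(user.get("tweets_per_day", 0) / 10.0, 1.0)  # activity
    return score

candidates = [
    {"name": "a", "retweet_ratio": 0.8, "topic_overlap": 0.9, "tweets_per_day": 20},
    {"name": "b", "retweet_ratio": 0.1, "topic_overlap": 0.2, "tweets_per_day": 2},
]
best = max(candidates, key=retweet_score)
print(best["name"])  # "a"
```

The ranking step (pick the top-scoring candidates, then engage them) is what distinguishes these projects from the purely observational studies above.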
Events in social media space involve many‐to‐many interactions among large numbers of people at an unprecedentedly compressed scale of time. Entirely new phenomena are emerging that require thinking about social interactions in a new way. The tools that we have today for awareness and defense in the social media space are heavily dependent on chance. We must eliminate our current reliance on a combination of luck and unsophisticated manual methods by using systematic automated and semi‐automated human operator support to detect, classify, measure, track and influence events in social media at data scale and in a timely fashion.
In particular, SMISC will develop automated and semi‐automated operator support tools and techniques for the systematic and methodical use of social media at data scale and in a timely fashion to accomplish four specific program goals:
1. Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
2. Recognize persuasion campaign structures and influence operations across social media sites and communities.
3. Identify participants and intent, and measure effects of persuasion campaigns.
4. Counter messaging of detected adversary influence operations.
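Goal 1(a) above, detecting the formation and spread of memes, can be caricatured as burst detection over n-gram frequencies across time windows. The threshold values and sample posts below are illustrative assumptions:

```python
from collections import Counter

# Toy sketch of meme detection: flag bigrams whose frequency bursts
# between an earlier and a later batch of posts.
def bigrams(text):
    toks = text.lower().split()
    return [" ".join(pair) for pair in zip(toks, toks[1:])]

def emerging_memes(old_posts, new_posts, ratio=3.0, min_count=2):
    """Return bigrams at least `min_count` frequent and `ratio`-fold grown."""
    old = Counter(g for p in old_posts for g in bigrams(p))
    new = Counter(g for p in new_posts for g in bigrams(p))
    return sorted(g for g, n in new.items()
                  if n >= min_count and n > ratio * old.get(g, 0))

print(emerging_memes(["the weather is fine"],
                     ["storm the location", "storm the location now"]))
# ['storm the', 'the location']
```

A production system would track memes across communities and allow for paraphrase and mutation; the sketch only captures the core "sudden rise against a baseline" signal.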
The development of a new science of social networks and the solutions to the problems posed by SMISC will require the confluence of several technologies including, but not limited to, information theory, massive‐scale graph analytics and natural language processing. While SMISC will not directly support natural language processing development efforts, it will certainly use the results of previous programs as well as contribute new challenges to further stimulate ongoing efforts.
Technology areas particularly relevant to SMISC are shown here grouped to correspond to the four basic goals of the program as described above:
1. Linguistic cues, patterns of information flow, topic trend analysis, narrative structure analysis, sentiment detection and opinion mining;
2. Meme tracking across communities, graph analytics/probabilistic reasoning, pattern detection, cultural narratives;
3. Inducing identities, modeling emergent communities, trust analytics, network dynamics modeling;
4. Automated content generation, bots in social media, crowd sourcing.
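Item 4's concern with bots can be illustrated by a simple behavioral heuristic: automated accounts tend to post implausibly fast or repeat themselves heavily. The signals and thresholds below are illustrative assumptions, not criteria from the program:

```python
# Illustrative sketch of flagging possibly automated accounts (bots)
# from two behavioral signals. Thresholds are assumptions.
def looks_automated(posts_per_hour, distinct_texts, total_posts):
    """Flag accounts that post very fast or repeat themselves heavily."""
    if total_posts == 0:
        return False
    repetition = 1.0 - distinct_texts / total_posts
    return posts_per_hour > 30 or repetition > 0.8

print(looks_automated(posts_per_hour=50, distinct_texts=40, total_posts=200))  # True
print(looks_automated(posts_per_hour=2, distinct_texts=95, total_posts=100))   # False
```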
Recent research has shown that traditional approaches to understanding social media through static network connectivity models often produce misleading results. It is, therefore, necessary to take into account the dynamics of behavior and SMISC is interested in a wide variety of techniques for doing so.
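The gap between static and dynamic views can be made concrete with timestamped edges: a node's degree aggregated over all time may look very different from its degree within a recent window. The edge data below is an illustrative assumption:

```python
# Sketch contrasting a static, all-time network view with a time-windowed
# (dynamic) one. Edges are (src, dst, timestamp) tuples; data is invented.
def static_degree(edges, node):
    """Degree over all time -- can overstate current influence."""
    return sum(node in (s, d) for s, d, _ in edges)

def windowed_degree(edges, node, t_start, t_end):
    """Degree within one time window -- captures behavioral dynamics."""
    return sum(node in (s, d) for s, d, t in edges if t_start <= t < t_end)

edges = [("a", "b", 1), ("a", "c", 2), ("a", "d", 9), ("b", "c", 10)]
print(static_degree(edges, "a"))           # 3 -- looks central overall
print(windowed_degree(edges, "a", 8, 12))  # 1 -- much quieter recently
```

A static model would rank "a" as the hub; the windowed view shows that ranking is stale, which is exactly the kind of misleading result the dynamic techniques aim to avoid.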