
Rising threat of social media as a weapon to influence politics, sway sentiment, and wage psychological warfare

The past several centuries have largely been defined by physical security threats, requiring a nation’s military to respond physically with whatever means it has available. But as explained by Isaiah Wilson III, president of Joint Special Operations University, today we face “compound security threats,” which include physical security threats as well as “communication and information operations that scale with the speed of a social media post that goes viral, as well as cyber warfare, hacking, and theft by our adversaries, both state and non-state actors.” These compound security threats can exploit cybersecurity vulnerabilities as well as the psychological and emotional vulnerabilities of targets, using modern internet platforms to reach those targets worldwide.

 

Tactics used in such attacks include various forms of deception and provocation, from deepfake videos and fake social media accounts to gaslighting, doxing, and trolling. Fake news can be a complete fabrication, but often there is a kernel of truth that has been taken out of context or edited to change its meaning.

 

In recent years, social media have become an important channel for politicians to address the public, making them more accessible to prospective voters. Although social media are often used to disseminate informative content, such as announcements of a candidate’s public appearances, recent studies have shown that they are also used to spread misinformation as part of political propaganda. Analysts have found evidence of doctored or misleading photographs spread via social media in the Syrian Civil War and the 2014 Russian military intervention in Ukraine, possibly with state involvement.

 

There are three main trends driving the conversation about the significant geopolitical threat of digital disruption. The first is the ability of non-state actors, or indeed, even a few individuals, to use new technologies to wield influence or shift political outcomes on a large scale. The second trend is that of authoritarian governments using new technologies, or even simply restricting access to the internet, to repress human rights, quash dissent, and discredit political opponents. The third trend is state actors using new technologies to spread disinformation to discredit political opponents, bury opposing views, and influence foreign affairs with regularity, not just during times of war.

 

Social media has also become an important medium of psychological warfare for actors ranging from terrorists to nation-states. Psychological warfare involves the planned use of propaganda and other psychological operations to influence the opinions, emotions, motives, reasoning, attitudes, and behavior of opposition groups. Social media has enabled the use of disinformation on a wide scale.

 

Terrorists have also used social media successfully to recruit, radicalize, and raise funds. Terrorist groups are increasingly using platforms like YouTube, Facebook, and Twitter to further their goals and spread their message because of the convenience, affordability, and broad reach of social media.

 

Rising social media manipulation by countries

With billions of people around the world accessing platforms such as Facebook, WhatsApp, Twitter, and Instagram, to name a few, social media has become a powerful force for political and cultural change, and one more than capable of influencing global geopolitics. In the past three years alone, propaganda and disinformation spread on social media platforms have enabled Russian interference in the 2016 presidential election, deepened the partisan divide in United States politics, created justification for the Qatar blockade, sown violence against both individuals and religious or ethnic groups in countries such as Myanmar, India, and Sri Lanka, and played a role in a number of protests that have led to changes of government, as in Sudan.

 

A 2019 report from researchers at the University of Oxford found evidence of “social media manipulation campaigns” by governments or political parties in 70 countries, up from 28 countries in 2017, with Facebook the top venue where disinformation is disseminated. Discussion of government-directed campaigns usually starts with Russia, but the Oxford report singles out China as having become “a major player in the global disinformation order.” Along with those two countries, five others (India, Iran, Pakistan, Saudi Arabia, and Venezuela) have used Facebook and Twitter “to influence global audiences,” according to the report.

 

In 2015 and 2016, Russia used disinformation techniques on Facebook, Twitter, Instagram, and other social media to disseminate inflammatory messages intended to divide Americans during the 2016 presidential election. Russia was accused of using thousands of covert human agents and automated bot accounts to spread disinformation referencing Hillary Clinton’s stolen campaign emails, amplifying their effect. Russian influence operations on social media have been reported to alter the course of events in the U.S. by manipulating public opinion. Analysts believe there may be various reasons why Russia chooses to spread disinformation, but note that such activity is “all in support of its underlying foreign policy objectives.”

 

Since then, governments in many other countries, including Bangladesh, Iran, and Venezuela, have also used Facebook and Twitter to sow discord at home and abroad. According to one report, before India’s 2019 elections shadowy marketing groups connected to politicians used the WhatsApp messaging service to spread doctored stories and videos denigrating opponents. The country has also been plagued by deadly violence spurred by rumors that spread via WhatsApp groups.

 

A study of 100,000 political images shared on WhatsApp in Brazil in the run-up to its 2018 election found that more than half contained misleading or flatly false information; it is unclear who was behind them.

 

Russell Hsiao (蕭良其), a researcher and executive director of the Global Taiwan Institute, speaking at an Atlantic Council seminar, explained how he believes Beijing is using popular online social networks, in addition to traditional media, to wage a kind of psychological warfare on the Taiwanese people.

 

Facebook and Twitter accounts that originated in China acted in a coordinated fashion to amplify messages and images that portrayed Hong Kong’s protesters as violent and extreme, the two social media companies said in August 2019. On Facebook, one post from a China-linked account likened the protesters to ISIS fighters. And a Twitter message said, “We don’t want you radical people in Hong Kong. Just get out of here!” “These accounts were deliberately and specifically attempting to sow political discord in Hong Kong, including undermining the legitimacy and political positions of the protest movement on the ground,” Twitter said in a statement. “Based on our intensive investigations, we have reliable evidence to support that this is a coordinated state-backed operation.”

 

Advanced methods

Modern information and communications technology (ICT) has also lowered the barriers to executing misinformation campaigns. One needs only a computer or smartphone and an internet connection to reach a potentially huge audience, whether openly, anonymously, or disguised as someone or something else, such as a genuine grassroots movement. In addition, armies of people, known as trolls, and internet bots, software that performs automated tasks quickly, can be deployed to drive large-scale disinformation campaigns.
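One crude but common defensive signal against such bots is posting tempo: automated accounts often sustain rates no human plausibly produces. Below is a minimal sketch of that idea, assuming an invented `Post` record and an illustrative 30-posts-per-hour threshold; real platform detection combines many more signals.

```python
# Flag accounts whose sustained posting rate exceeds a human-plausible ceiling.
# The Post record and the threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Set

@dataclass
class Post:
    account: str
    timestamp: datetime

def flag_bot_like(posts: List[Post], max_per_hour: float = 30.0) -> Set[str]:
    by_account = {}
    for p in posts:
        by_account.setdefault(p.account, []).append(p.timestamp)
    flagged = set()
    for account, times in by_account.items():
        if len(times) < 2:
            continue
        times.sort()
        hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-6)
        if len(times) / hours > max_per_hour:
            flagged.add(account)
    return flagged

# A bot posting every 10 seconds is flagged; a sporadic human is not.
start = datetime(2019, 8, 1)
feed = [Post("bot", start + timedelta(seconds=10 * i)) for i in range(60)]
feed += [Post("human", start + timedelta(hours=3 * i)) for i in range(3)]
print(flag_bot_like(feed))  # {'bot'}
```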

 

Disinformation campaigns track and collect behavioral data such as personal web browsing habits, location data, and purchasing patterns, and associate this data with personal identifiers, say an email address and phone number. Insights and inferences are then drawn from this behavioral data using analytics and sold or shared by large internet platform companies, digital advertising firms, data brokers, and online services, including to disinformation operators, who often appear as legitimate entities to the other firms in this ecosystem. Because of their shifting online identities and vast numbers, such operators are very difficult to detect, despite the advanced algorithms meant to find them.
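The mechanics of this linkage are simple: every behavioral event is keyed to the same identifier so that separate data sources accumulate into one targetable profile. The sketch below illustrates the idea with invented field names; it is not any real data broker's schema.

```python
# Link behavioral events to a personal identifier to build audience profiles.
from collections import defaultdict

events = [
    {"email": "a@example.com", "kind": "page_view", "topic": "energy"},
    {"email": "a@example.com", "kind": "purchase",  "topic": "outdoor"},
    {"email": "b@example.com", "kind": "page_view", "topic": "politics"},
]

profiles = defaultdict(lambda: defaultdict(int))
for event in events:
    # Key every signal to the same identifier (here, an email), so
    # separate data sources accumulate into one profile.
    profiles[event["email"]][event["topic"]] += 1

# Profiles can now be segmented, e.g. "users interested in politics".
political_audience = [who for who, topics in profiles.items() if topics["politics"] > 0]
print(political_audience)  # ['b@example.com']
```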

 

Sophisticated social media management software has been developed that combines all of these services into an integrated system coordinating data collection, audience formation, and message-testing across multiple channels in real time, enabling operators to determine how to target individuals with specific messages at tremendous speed and efficiency.
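At its core, the message-testing loop such software automates is an A/B test: try several message variants on small samples, then push the best-performing variant to the wider segment. This minimal sketch assumes a stand-in engagement function; in a real system those numbers would come from a platform's reporting APIs.

```python
# Pick the message variant with the best sampled engagement.
# Variant texts and the engagement function are invented placeholders.
import random

variants = {
    "fear":  "Act before it is too late. Share this.",
    "anger": "See what they are not telling you.",
    "pride": "People like us already know the truth.",
}

def simulated_engagement(message: str) -> float:
    """Stand-in for click/share metrics from an ad platform's API."""
    return random.random()

# Test each variant on a small sample, then scale up the winner.
scores = {name: simulated_engagement(text) for name, text in variants.items()}
winner = max(scores, key=scores.get)
print(f"Scaling up variant '{winner}': {variants[winner]}")
```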

 

The most sophisticated disinformation operations use troll farms, artificial intelligence, and internet bots — what the Oxford researchers call “cyber troops” — to flood the zone with social-media posts or messages to make a fake or doctored story appear authentic and consequential.
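One observable fingerprint of zone-flooding is many distinct accounts pushing near-identical text in a short window. The sketch below clusters posts by normalized text to surface that pattern; the threshold and normalization rule are illustrative assumptions, not a production detector.

```python
# Spot coordinated amplification: many accounts posting near-identical text.
from collections import defaultdict

posts = [
    ("acct_01", "Protesters are violent extremists. Get out of here!"),
    ("acct_02", "Protesters are violent extremists!  get out of here"),
    ("acct_03", "Lovely weather in Kaohsiung today."),
    ("acct_04", "protesters are violent extremists. get out of here."),
]

def normalize(text: str) -> str:
    # Collapse case, punctuation, and spacing so trivial edits still match.
    cleaned = "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

clusters = defaultdict(set)
for account, text in posts:
    clusters[normalize(text)].add(account)

# Any message pushed verbatim by many separate accounts is suspect.
for message, accounts in clusters.items():
    if len(accounts) >= 3:
        print(f"Coordinated amplification by {sorted(accounts)}")
```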

 

Cyberwarfare also enables sophisticated misinformation campaigns. According to the ODNI report released on January 6, 2017, the Russian military intelligence service (GRU) hacked the servers of the Democratic National Committee (DNC) and the personal Google email account of Clinton campaign chairman John Podesta and forwarded their contents to WikiLeaks. Although Russian officials have repeatedly denied involvement in the DNC hacks or leaks, there is strong forensic evidence linking the DNC breach to known Russian operations.

 

Harmful Impact

Today’s digital influence attacks have implications across multiple dimensions. Their goals can include disrupting and degrading a target’s societal cohesion, undermining confidence in political systems and institutions (e.g., democratic elections), fracturing international alliances, and much more.

 

The U.S. Department of Defense uses the term “military information support operations” to describe efforts to “convey selected information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals in a manner favorable to the originator’s objectives.”

 

Social media has also become a tool for carrying out information operations and psychological warfare. The Chinese People’s Liberation Army’s (PLA’s) new Strategic Support Force (SSF) is a critical force for dominance in the space, cyber, and electromagnetic domains, and the SSF’s function of “strategic support,” namely information support, will be equally vital to the PLA’s capability to fight and win wars.

 

Disinformation, now known as fake news, has tainted public discourse for centuries, even millennia. It has been amplified in our digital age as a weapon of fearmongers, mob-baiters, and election-meddlers to widen social fissures, subvert democracy, and boost authoritarian regimes. In state-sponsored trolling, for instance, governments create digital hate mobs to smear critical activists or journalists, suppress dissent, undermine political opponents, spread lies, and control public opinion.

 

The Russian government interfered in the 2016 U.S. presidential election in order to increase political instability in the United States and to damage Hillary Clinton’s presidential campaign by bolstering the candidacies of Donald Trump, Bernie Sanders, and Jill Stein. The release of stolen emails and the use of fake Facebook and Twitter accounts were designed to undermine trust in institutions through manipulation, distortion, and disruption.

 

A RAND Corporation study of the conflict in eastern Ukraine, which has claimed some 13,000 lives since 2014, found that the Russian government under President Vladimir Putin ran a sophisticated social media campaign that included fake news, Twitter bots, unattributed comments on web pages, and made-up hashtag campaigns to “mobilize support, spread disinformation and hatred and try to destabilize the situation.”

 

In Myanmar, a study commissioned by Facebook blamed military officials for using fake news to whip up popular sentiment against the Rohingya minority, helping to set the stage for what UN officials have described as genocide.

 

Disinformation has also acquired an economic dimension. A report released by the majority staff of the House Science Committee said it found evidence that Russian-sponsored agents used Facebook, Twitter, and Instagram to suppress the research and development of fossil fuels and stymie efforts to expand the use of natural gas.

 

According to the report, groups such as the Internet Research Agency, a Russian company with links to the Russian government, used propaganda to target Energy Transfer Partners’ Dakota Access Pipeline, TransCanada Corp.’s Keystone XL pipeline, and other projects. Russian-sponsored agents also funneled money to U.S. environmental groups in an attempt to portray energy companies in a negative way and disrupt domestic energy markets, the report said.

 

“This report reveals that Russian agents created and spread propaganda on U.S. social media platforms in an obvious attempt to influence the U.S. energy market,” Texas Representative Lamar Smith, the chairman of the House Science, Space, and Technology Committee, said in a statement. “Russian agents attempted to manipulate Americans’ opinions about pipelines, fossil fuels, fracking, and climate change.”

 

Counterterrorism Strategy

Robert Hannigan, then the new head of GCHQ, the British signals intelligence and security agency, called for greater support from web companies, urging them to meet the security agencies halfway in a joint enterprise that protects the privacy of most users while helping to identify those who would do harm.

 

Although some counterterrorism programs use social media to push back against extremist rhetoric, these efforts are too limited. Extremist propaganda has to be countered vigorously on the internet and social media. A prominent Syrian Sunni cleric condemned the ISIS killing of the American Peter Kassig and said that ISIS chief Abu Bakr al-Baghdadi “is going to hell.” “We have to speak loud and very clear that Muslims and Islam have nothing to do with this,” Shaykh Muhammad al-Yaqoubi told CNN’s Christiane Amanpour. “ISIS has no nationality. Its nationality is terror, savagery, and hatred.”

 

Meanwhile, DHS says it has developed a strategy of supporting people within communities where recruitment is taking place who want to spread a counterterrorism message, rather than trying to put out its own “terrorism doesn’t pay” style communications. In fighting ISIS, that involves supporting “imams and moms,” then-Secretary Kirstjen Nielsen said. In dealing with the threat from white supremacists, she added, the agency looks to people who have left organizations driving that movement to help fight recruitment and calls to violence.

 

Jessica Stern and J.M. Berger, the authors of “ISIS: The State of Terror,” have suggested a six-point plan to defeat ISIS in the propaganda war:

1. Stop exaggerating ISIS’s invincibility: A first step in countering ISIS is to put it in perspective. We should not downplay its threat below a realistic level. But neither should we inflate it. Strikes designed to degrade the group’s real internal strength are good, but our targeting priorities should also aim to expose vulnerabilities for counterpropaganda purposes.

 

2. Amplify the stories of the real wives of ISIS, and other defectors: We need to amplify the stories of defectors and refugees from the areas that ISIS controls. Stories about the horrific real lives of jihadi wives need to be told, by women who manage to run away.

 

3. Take on ISIS’s version of Islam: ISIS has developed convoluted arguments about why it engages in war crimes that are forbidden by Islamic law. Hundreds of religious scholars have taken on ISIS’s interpretation of Islam. Those arguments need to get to the right audience: ISIS’s potential recruits. At least some of those recruits can be reached via social media, including via one-on-one conversations.

 

4. Highlight ISIS’s hypocrisy: ISIS makes much of its supposedly puritanical virtue and promotion of chastity, whipping women who do not wear attire ISIS considers appropriate and executing gay men by throwing them off the tops of buildings. Yet according to the U.N. and ISIS’s own propaganda, its fighters are involved in a wide range of horrifying sexual abuse, from sexual slavery to the reported rape of men and women, including both adults and children. In this area and many others, ISIS’s deranged double standards should be addressed head-on.

 

5. Publicize ISIS’s atrocities against Sunnis: We need to fully exploit aerial and electronic surveillance and remote imaging to show what really happens in the belly of the beast. We should pay particular attention to documenting war crimes and atrocities against Sunni Muslims in regions controlled by ISIS. It is patently obvious that ISIS has no qualms about advertising its war crimes against certain classes of people — Shi’a Muslims primarily, and religious minorities like the Yazidis. ISIS claims to protect Sunnis from sectarian regimes in both Iraq and Syria. While ISIS is happy to flaunt its massacres of Shi’ites and Iraqi military personnel, it has been relatively quiet in regard to its massacres of uncooperative Sunni tribes. Our countermessaging should highlight the murder of Sunnis in particular.

 

6. Aggressively suspend ISIS social-media accounts: There is a robust debate over the merits of suspending extremist social-media accounts, which encompasses a complex set of issues including free speech and the question of who should decide what content is acceptable. What we do know, based on an analysis of tens of thousands of Twitter accounts, is that suspensions do limit the audience for ISIS’s gruesome propaganda. The current rate of suspensions is damaging the ISIS social-media machine. The practice should be maintained at the current rate at the very least — but it would be better to get more aggressive.
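The kind of account-level analysis behind this point can be reduced to simple reach arithmetic: how much of a propaganda network's potential audience do suspensions remove? A minimal sketch, with invented follower counts:

```python
# Estimate the potential reach removed by suspensions across a network
# of propaganda accounts. All numbers here are invented for illustration.
accounts = {
    "isis_media_1":  {"followers": 12000, "suspended": True},
    "isis_media_2":  {"followers": 8000,  "suspended": False},
    "isis_fanboy_3": {"followers": 300,   "suspended": True},
}

total_reach = sum(a["followers"] for a in accounts.values())
removed_reach = sum(a["followers"] for a in accounts.values() if a["suspended"])
print(f"Suspensions removed {removed_reach / total_reach:.0%} of potential reach")
```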

 

The nations fighting ISIS need an organization to run a counternarrative campaign. One model, still in a testing phase, is called P2P: Challenging Extremism. This initiative provides an opportunity for university students from the U.S., Canada, the Middle East, North Africa, Europe, Australia and Asia to create an online community whose goal is to counter the extremist narrative by becoming educated influencers.

 

A UN Security Council meeting summary noted, “In addition to security, legal and intelligence measures, most also stressed the need to provide a counter-narrative to radicalization, addressing root causes and working with communities in that regard.”

 

RAND identifies new strategies for countering Russian social media campaigns

A new RAND Corporation report finds that Russia is waging a social media campaign in the Baltics, Ukraine and nearby states to sow dissent against neighboring governments, as well as NATO and the European Union.

 

In addition to employing a state-funded multilingual television network, operating various pro-government news websites, and working through Russian-backed “civil society” organizations, Russia employs a sophisticated social media campaign that includes news tweets, non-attributed comments on web pages, troll and bot social media accounts, and fake hashtag and Twitter campaigns.

 

“Nowhere is this threat more tangible than in Ukraine, which has been an active propaganda battleground since the 2014 Ukrainian revolution,” said Todd Helmus, lead author on the report and senior behavioral scientist at the RAND Corporation, a nonpartisan research organization. “Other countries in the region look at Russia’s actions and annexation of Crimea and recognize the need to pay careful attention to Russia’s propaganda campaign.” In Estonia, Latvia, Lithuania, Ukraine, Moldova and Belarus, according to the RAND report, Russia aims to divide ethnic Russian or Russian-speaking populations and their host governments.

 

To counter the Russian campaign, RAND researchers recommend that Western countries strengthen and expand the means to track, block, and tag Russian propaganda more quickly; offer alternative television, social media, and other media to help displace the Russian narrative; and develop more compelling arguments for populations to align with the West and to better understand NATO troop deployments in the region. The report also recommends training local journalists and funding the creation of alternative media content to counteract Russian propaganda campaigns.
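The "track and tag" recommendation can be pictured as labeling posts that link to known propaganda outlets. The sketch below uses an invented placeholder domain list, not a vetted blocklist:

```python
# Tag posts that link to domains on a known-propaganda list.
# The domain list is an invented placeholder.
from urllib.parse import urlparse

FLAGGED_DOMAINS = {"propaganda-example.ru", "fake-news-example.com"}

def tag_post(text: str, links: list) -> dict:
    hits = [u for u in links if urlparse(u).netloc.lower() in FLAGGED_DOMAINS]
    return {"text": text, "flagged": bool(hits), "matched_links": hits}

post = tag_post(
    "Shocking truth about NATO deployments!",
    ["https://propaganda-example.ru/story/123"],
)
print(post["flagged"])  # True
```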

 

“We paid special attention to the role of non-attributed social media accounts, which are frequently, but not solely, employed on Twitter and Facebook,” said Elizabeth Bodine-Baron, a report co-author, engineer, and co-director of the RAND Center for Applied Network Analysis and System Science. “Russia has established that during critical moments, such as during the Ukrainian conflict, it can flood news websites with tens of thousands of comments each day.”

 

The report finds that U.S., EU and NATO efforts to counter Russian influence in the region are complicated by the relatively high presence of historically marginalized Russian-speaking populations in the region, which gives Russia a unique opportunity to communicate with a sympathetic audience.

 

Host government policies giving priority to national languages have limited government outreach via the Russian language, thus complicating state outreach to Russian speakers. Furthermore, Russian broadcast media dominates in the region, particularly in the Baltics. Ukraine is the exception as it has censored Russian government broadcasting and social media platforms. Finally, heavy-handed anti-Russian messaging may backfire given local skepticism of Western propaganda.

 

 

Countermeasures by social media companies

The UK Intelligence and Security Committee report points the finger at social media companies, saying they must take action and remove covert hostile state material, and urging the Government to name and shame those that fail to act. “It is the social media companies who hold the key but are failing to play their part,” the report states. But the committee said that during its investigation it was “surprisingly difficult” to establish who has responsibility for what.

 

Under pressure from lawmakers and regulators, Facebook and Google (a unit of Alphabet Inc.) have started requiring political ads in the U.S. and Europe to disclose who is behind them. Google’s YouTube division adjusted its “up next” algorithms to limit recommendations for suspected fake or inflammatory videos, a move it had resisted for years. WhatsApp now limits, to five, how many people or groups a message can be forwarded to.
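The WhatsApp forward limit is a simple counter to viral spread: once a message has been passed along five times, further forwards are blocked. A minimal sketch of that rule, with an invented `Message` class rather than WhatsApp's actual implementation:

```python
# Enforce a per-message forward limit, as WhatsApp's five-forward rule does.
# The Message class and limit handling are illustrative assumptions.
FORWARD_LIMIT = 5

class Message:
    def __init__(self, text: str):
        self.text = text
        self.forward_count = 0

    def forward(self, chats: list) -> list:
        """Forward to as many of the requested chats as the limit allows."""
        allowed = chats[: max(0, FORWARD_LIMIT - self.forward_count)]
        self.forward_count += len(allowed)
        return allowed

msg = Message("Viral rumor")
print(msg.forward(["chat1", "chat2", "chat3", "chat4"]))  # all four allowed
print(msg.forward(["chat5", "chat6"]))                    # only chat5 allowed
```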

 

In August 2018, major social media companies stepped up their policing of online disinformation campaigns. Google disabled dozens of YouTube channels and other accounts linked to a state-run Iranian broadcaster running a political-influence campaign. Facebook removed 652 suspicious pages, groups, and accounts linked to Russia and Iran. Twitter took similar action shortly thereafter.

 

WhatsApp’s parent company, Facebook, said it spent 18 months preparing for India’s 2019 election: it blocked and removed fake accounts, looked for attempts at meddling, and partnered with outside fact-checkers (albeit relatively few) to combat fake news. Facebook has developed artificial intelligence tools to help identify content that is abusive or otherwise violates the site’s policies. In the wake of the March 15, 2019 shooting massacre in Christchurch, New Zealand, Facebook, Google, and Twitter signed a voluntary agreement with world leaders pledging to fight hate speech online.

 

To counter these trends, the companies are tuning their algorithms to spot and limit harmful content. For example, services like YouTube continue to bring on more and more human reviewers to help identify, label, and curate policy-violating content, including extremist videos.
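The algorithm-plus-reviewer pipeline typically works as triage: an automated score routes content to removal, human review, or publication. The sketch below uses an invented keyword heuristic and thresholds as a stand-in for a trained classifier:

```python
# Triage content with an automated score; the score function and
# thresholds are invented placeholders, not any platform's real model.
def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier; here a crude keyword heuristic."""
    bad_terms = {"radical", "exterminate", "attack now"}
    hits = sum(term in text.lower() for term in bad_terms)
    return min(1.0, hits / 2)

def triage(text: str) -> str:
    score = toxicity_score(text)
    if score >= 0.9:
        return "auto-remove"    # confident violations are removed outright
    if score >= 0.4:
        return "human-review"   # borderline content goes to reviewers
    return "publish"

print(triage("Attack now, exterminate them"))  # auto-remove
print(triage("Lovely weather today"))          # publish
```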

 

Facebook chief executive Mark Zuckerberg announced that his company is revamping its flagship News Feed service: the algorithm powering it will now prioritize content shared by friends and family over news stories and viral videos. The company followed up by announcing it will survey users and potentially demote untrusted outlets.
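One way to picture that ranking change is a source-dependent weight multiplying each post's engagement score. The weights and post fields below are illustrative assumptions, not Facebook's actual model:

```python
# Rank feed posts with friend/family content weighted above publisher content.
posts = [
    {"id": 1, "source": "friend",    "engagement": 0.40},
    {"id": 2, "source": "publisher", "engagement": 0.90},
    {"id": 3, "source": "family",    "engagement": 0.25},
]

SOURCE_WEIGHT = {"family": 3.0, "friend": 2.5, "publisher": 1.0}

def feed_score(post: dict) -> float:
    return SOURCE_WEIGHT[post["source"]] * post["engagement"]

ranked = sorted(posts, key=feed_score, reverse=True)
# [1, 2, 3]: the friend's post outranks the higher-engagement publisher post.
print([p["id"] for p in ranked])
```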

 

Facebook has also launched a new tool that lets users see whether they have liked or followed Russian propaganda accounts. The social network says the tool will allow users to see whether they interacted with a Facebook page or Instagram account created by the Internet Research Agency (IRA), the state-backed organisation based in St Petersburg that carries out online misinformation operations.
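Functionally, such a tool is a set intersection between the pages a user follows and a published list of known IRA pages. A minimal sketch (the page names below are drawn from publicly reported IRA pages, but the list is illustrative, not Facebook's actual dataset):

```python
# Check a user's followed pages against a known-IRA page list.
KNOWN_IRA_PAGES = {"Heart of Texas", "Blacktivist", "Secured Borders"}

def ira_exposure(followed_pages: set) -> set:
    """Return which of the user's followed pages appear on the IRA list."""
    return followed_pages & KNOWN_IRA_PAGES

user_follows = {"Heart of Texas", "Local Gardening Club"}
print(ira_exposure(user_follows))  # {'Heart of Texas'}
```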

 

Governments are also passing strict laws to control misinformation. A Singapore law that took effect in October 2019 allows criminal penalties of up to 10 years in prison and a fine of up to S$1 million ($720,000) for anyone convicted of spreading online falsehoods, with the responsibility for identifying falsehoods detrimental to the public interest given to government ministers. Malaysia enacted a similar law that the government elected in 2018 is trying to repeal. Indonesia set up a 24-hour “war room” ahead of its 2019 elections to fight hoaxes and fake news. France has a new law that allows judges to determine what is fake news and order its removal during election campaigns.

 

FireEye is emerging as a key player in the fight against election interference and disinformation campaigns. The company was founded in 2004 by Ashar Aziz, a former Sun Microsystems engineer who set out to spot threats that had not been tracked before, unlike older companies that sold firewalls or anti-virus programs blocking known malware. His system uses software to simulate a computer network and check programs for suspicious behavior before allowing them onto the network itself.
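The sandbox-gating idea can be sketched as: detonate a file in an isolated environment, record its behavior, and admit it only if nothing suspicious is observed. The behaviors and rules below are invented placeholders, not FireEye's detection logic:

```python
# Gate files on behavior observed in an isolated sandbox run.
SUSPICIOUS_BEHAVIORS = {"modifies_boot_record", "contacts_unknown_c2", "disables_av"}

def sandbox_run(file_name: str) -> set:
    """Stand-in for executing the sample in a simulated network and
    returning the set of behaviors observed."""
    observed = {"reads_config"}
    if "invoice_macro" in file_name:
        observed |= {"contacts_unknown_c2"}
    return observed

def admit_to_network(file_name: str) -> bool:
    # Admit only if no observed behavior is on the suspicious list.
    return not (sandbox_run(file_name) & SUSPICIOUS_BEHAVIORS)

print(admit_to_network("report.pdf"))         # True
print(admit_to_network("invoice_macro.doc"))  # False
```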

 

Lee Foster, manager of information operations analysis at FireEye, said his team works within the company’s intelligence outfit, which researches not only “info-ops” — like the Iran-linked social media activity it recently uncovered — but also espionage, financial crime and other forms of vulnerability and exploitation. Specialist teams at FireEye focus on particular areas of cyber threats, each with its own expertise and language capabilities.

 

“Overall, the issue of defending the UK’s democratic processes and discourse has appeared to be something of a ‘hot potato’, with no one organisation recognising itself as having an overall lead,” the UK report says. “DCMS (the Department for Digital, Culture, Media & Sport) is a small Whitehall policy department and the Electoral Commission is an arm’s length body; neither is in the central position required to tackle a major hostile state threat to our democracy. Protecting our democratic discourse and processes from hostile foreign interference is a central responsibility of Government, and should be a ministerial priority.” The committee believes MI5 should have an operational role, drawing on the relationship already built with social media companies in dealing with terrorist content on their platforms.

 

References and Resources also include:

https://eurekalert.org/pub_releases/2018-04/rc-rin040918.php

http://www.latimes.com/business/technology/la-fi-tn-fireeye-20180824-story.html

http://time.com/5112847/facebook-fake-news-unstoppable/

https://www.nytimes.com/2019/08/19/technology/hong-kong-protests-china-disinformation-facebook-twitter.html

https://www.washingtonpost.com/business/facebook-twitter-and-the-digital-disinformation-mess/2019/10/01/53334c08-e4b4-11e9-b0a6-3d03721b85ef_story.html
