
Harnessing Artificial Intelligence and Data Analytics to Combat Terrorism on Social Media: Promoting Cross-Platform and Transnational Cooperation

In today’s digital age, social media platforms have become powerful tools for communication, information sharing, and social interaction. However, they have also become breeding grounds for extremist ideologies and terrorist propaganda. To counter this alarming trend, the integration of Artificial Intelligence (AI) and data analytics is emerging as a promising solution, fostering cross-platform and transnational cooperation to combat terrorists’ presence on social media.


The Rise of Terrorist Groups’ Use of Social Media

Terrorist groups have increasingly used social media platforms like YouTube, Facebook, and Twitter to further their goals and spread their message, drawn by these platforms’ convenience, affordability, and vast reach. These platforms give terrorist organizations an opportunity to disseminate propaganda, recruit members, and gather intelligence. For instance, the Islamic State (IS) has made extensive use of social media, with publications like Rumiyah providing instructions on carrying out attacks and releasing instructional videos promoting violence. Al-Qaeda has maintained an internet presence for almost two decades, using social media platforms to distribute materials anonymously and spread its ideology. The Taliban, fighting in Afghanistan, relies heavily on the internet for propaganda purposes, operating multiple websites and maintaining an active presence on Twitter.

These terrorist groups take advantage of the massive global reach and audience of social media platforms. With billions of users and a constant stream of content, these platforms provide an ideal medium for terrorists to disseminate their message and connect with potential recruits. The widespread use of platforms like YouTube, Twitter, and Facebook allows terrorist organizations to amplify their propaganda and reach a wide audience, potentially inciting violence and fostering radicalization.

The rise of social media as a tool for terrorists highlights the need for increased vigilance, cooperation, and countermeasures. Governments, technology companies, and law enforcement agencies must work together to develop strategies to detect and counter extremist content, disrupt recruitment efforts, and promote counter-narratives. Efforts to improve content moderation algorithms, enhance intelligence sharing, and encourage responsible use of social media can help mitigate the spread of terrorist propaganda and ensure a safer digital environment for all users.


Challenges of Fighting Terrorists’ Online Propaganda

Fighting terrorists’ online propaganda poses several challenges due to the complex nature of their tactics and the diverse platforms they utilize. Terrorist campaigns typically employ multiple platforms: smaller private coordination platforms, cloud storage or file-sharing sites, and larger social media platforms for content dissemination. This interplay between platforms makes it difficult for any single company or platform to have full visibility into, or control over, the trends and content circulating elsewhere online.

The lack of strong on-platform signals and the use of third-party platforms hosting violating content further complicates the identification and removal of terrorist content. Without explicit indicators or obvious signals, such as text or images, it becomes difficult for platforms to automatically detect and take action against such content. Additionally, the ability of users to quickly create new accounts and the wide array of content-hosting platforms make it harder to track and prevent the spread of terrorist propaganda.

Tech companies also face the delicate task of balancing content removal with concerns around free speech. Determining when free speech crosses the line into hate speech requires clear guidelines and effective communication between governments, technology companies, and social media users. Collaboration, communication, and public-private partnerships are essential in tackling these challenges, particularly for smaller platforms that rely on third-party intelligence and tools.

As major tech companies improve their efforts to remove violent and extremist content from their platforms, terrorists adapt by finding new ways to disseminate their messages. Platforms such as blogs, chat rooms, and encrypted chat apps become alternative avenues for radicalization and recruitment, posing ongoing challenges for content moderation and enforcement.

Furthermore, terrorists’ ability to operate on the “dark web” adds another layer of complexity. Using encrypted channels, dark sites, and cryptocurrencies like Bitcoin, terrorist organizations can store propaganda materials and conduct illicit activities beyond the reach of conventional search engines.

Addressing these challenges requires continuous collaboration, technological advancements, clear guidelines, and international cooperation. By fostering cross-sector efforts and strengthening public-private partnerships, there is a greater chance of effectively countering online terrorist propaganda while respecting free speech and human rights.


AI and Data Analytics

Social media platforms have faced immense challenges in identifying and removing terrorist content swiftly and efficiently. Traditional methods of content moderation alone are inadequate, given the sheer volume of user-generated content. This is where AI and data analytics step in, offering advanced technological capabilities to identify, analyze, and combat extremist content on a large scale.

By harnessing the power of AI, algorithms can be trained to automatically detect patterns and recognize key indicators of terrorist content. Machine learning algorithms can analyze vast amounts of text, images, and videos, identifying elements that align with extremist ideologies. These algorithms can continuously learn and adapt, improving their accuracy over time and staying ahead of evolving tactics used by terrorists on social media.
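
As a rough illustration of how such classifiers work, the sketch below trains a minimal text model that routes high-confidence hits to human review. It is only a sketch: it assumes the scikit-learn library and a hypothetical two-example training set, whereas production systems use far larger multilingual models, continuous retraining, and much richer signals.

```python
# Minimal sketch of ML-based text screening, assuming scikit-learn.
# The training data below is hypothetical placeholder text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = violating, 0 = benign.
train_texts = [
    "join our fighters and carry out attacks",   # placeholder violating text
    "photos from our family holiday last week",  # placeholder benign text
]
train_labels = [1, 0]

# TF-IDF features feeding a linear classifier; real platforms would use
# large transformer models trained on far more data.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def flag_for_review(post: str, threshold: float = 0.9) -> bool:
    """Route a post to human reviewers when model confidence is high."""
    violating_prob = model.predict_proba([post])[0][1]
    return violating_prob >= threshold
```

The deliberately high threshold mirrors the practice, described later in this article, of acting automatically only when a model is likely to be more accurate than human reviewers.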

Data analytics further complements AI by providing valuable insights into the networks and behaviors of terrorist organizations. By analyzing data points such as social connections, user engagement, and content dissemination patterns, analysts can uncover hidden links between individuals, organizations, and extremist movements. This data-driven approach enables a deeper understanding of the dynamics of terrorist networks, aiding law enforcement agencies and intelligence communities in tracking, disrupting, and preventing terrorist activities.
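
As a simplified illustration, the sketch below uses the networkx library and a hypothetical edge list of account interactions to surface central accounts and clusters; real analyses would combine many more data points, such as engagement and dissemination timing.

```python
# Minimal sketch of network analytics over account interactions,
# assuming the networkx library; all account names are hypothetical.
import networkx as nx

# Hypothetical interaction data: (account, account) pairs such as reshares.
edges = [
    ("acct1", "acct2"), ("acct2", "acct3"),
    ("acct2", "acct4"), ("acct5", "acct2"),
]
G = nx.Graph(edges)

# Betweenness centrality highlights accounts that bridge content flows,
# a starting point for analysts looking for amplifiers or brokers.
centrality = nx.betweenness_centrality(G)
likely_hubs = sorted(centrality, key=centrality.get, reverse=True)[:2]
print("Candidate hubs for analyst review:", likely_hubs)

# Connected components approximate clusters worth investigating together.
for cluster in nx.connected_components(G):
    print("Cluster:", sorted(cluster))
```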

For a deeper understanding of how social media is mitigating global threats, including terrorism, please visit: The Role of Social Media in Mitigating Global Security Threats

Countering terrorist propaganda

Countering terrorist propaganda requires collaboration between tech companies, governments, and civil society to develop effective frameworks and tools. Initiatives like the Global Internet Forum to Counter Terrorism (GIFCT), Tech Against Terrorism (TAT), and the Global Network on Extremism and Technology (GNET) have facilitated such collaboration, advancing self-regulation and proactive responses by tech companies.

The European Union’s Digital Services Act (DSA) is a landmark development in combating illegal content. It places greater responsibility on tech giants like Google and Meta to swiftly remove illegal content from their platforms, with failure to comply punishable by fines of up to 6% of a company’s global annual revenue.

The DSA aims to curb the distribution of harmful content, including misinformation, hate speech, and violent extremist material. It requires platforms to be more transparent about their algorithms and holds them liable for removing flagged content within shorter timeframes. This entails the removal of videos, pictures, text, or other posts that pose a “societal risk” by referencing sensitive and illegal material such as hate speech, terrorist content, child sexual abuse material, or coordinated misinformation campaigns.

However, challenges remain in countering online extremism. Resource constraints, the role of social media influencers, and the need to address the tools and methods used to promote extremist content are significant concerns that the DSA may not fully address.

Moving forward, ongoing efforts to enhance cooperation, increase transparency, and allocate sufficient resources will be crucial in countering terrorist propaganda effectively. The commitment of tech companies, governments, and civil society, as well as the development of comprehensive frameworks and regulations, will play a vital role in combating the spread of extremist content and ensuring a safer online environment.


Global Internet Forum Fighting Terrorism Online

The Global Internet Forum to Counter Terrorism (GIFCT) was established in 2017 by Facebook, Microsoft, Twitter, and YouTube to counter terrorist propaganda online. The consortium aims to make its members’ consumer services hostile to terrorists and violent extremists through technology, research, and counter-speech strategies.

“Our mission is to substantially disrupt terrorists’ ability to use the internet in furthering their causes, while also respecting human rights. This disruption includes addressing the promotion of terrorism, dissemination of propaganda, and the exploitation of real-world terrorist events through online platforms,” GIFCT states. The forum focuses on sharing knowledge, information, and best practices, as well as conducting research and funding initiatives.

The members of GIFCT have taken significant steps to combat terrorist content on their platforms. They have developed policies and removal practices to take a hard line against such content. Companies like Facebook, Twitter, and Google employ automated systems and machine learning to detect and remove terrorist-related content swiftly. Facebook does this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once the company is aware of a piece of terror content, it also removes “83% of subsequently uploaded copies within one hour of upload”.

They have implemented measures to improve transparency, enhance cooperation with governments and civil society, and engage in workshops to address the challenges of countering online extremism.

Facebook’s Monika Bickert and Brian Fishman say that the human element still needs to play a key role in identifying terrorist content. “We still rely on specialised reviewers to evaluate most posts, and only immediately remove posts when the tool’s confidence level is high enough that its ‘decision’ indicates it will be more accurate than our human reviewers.” They add: “We can reduce the presence of terrorism on mainstream social platforms, but eliminating it completely requires addressing the people and organisations that generate this material in the real world.”

The coalition has made progress in tackling terrorist propaganda through shared industry databases, such as the Global Internet Forum Hash Database, which helps platforms track and remove terrorist content using unique digital fingerprints. GIFCT’s work has expanded to include cooperation with governments, NGOs, academia, and other companies to share information and address the evolving landscape of terrorist activities.

To further strengthen their efforts, GIFCT is transforming into an independent NGO and has established an Independent Advisory Committee (IAC) consisting of representatives from governments, international organizations, and civil society. The IAC provides strategic expertise, assesses performance, and advises on GIFCT priorities.

Moving forward, GIFCT plans to expand its database to include a wider range of threats, including white supremacist and racially motivated violent extremism. By broadening its scope and fostering collaboration, GIFCT aims to combat various forms of extremist content and promote a safer online environment for all users.


To date, GIFCT has focused on cross-platform efforts for its member companies. This includes a hash-sharing database of “digital fingerprints” of photos and videos that have been identified as terrorist content, so that platforms can track and remove matching uploads. Learning from progress made in the child-safety space, hashed versions of labeled terrorist content allow identifiers of that content to be shared without sharing any user data or personally identifiable information.
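
As a simplified illustration of that workflow, the sketch below checks an upload’s fingerprint against a shared database. The file paths are hypothetical, and a cryptographic hash (SHA-256) stands in for the perceptual hashes, such as PDQ, that real systems use so matches survive re-encoding and minor edits.

```python
# Simplified sketch of hash-sharing; file paths are hypothetical and
# SHA-256 stands in for a perceptual hash like PDQ.
import hashlib

def fingerprint(path: str) -> str:
    """Compute a digital fingerprint of a media file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The shared database holds only hashes: no user data, no media, no PII.
shared_hash_db = {fingerprint("known_propaganda_video.mp4")}  # hypothetical file

def is_known_terrorist_content(upload_path: str) -> bool:
    """True if an upload matches a fingerprint in the shared database."""
    return fingerprint(upload_path) in shared_hash_db
```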


Google developing free anti-terrorism moderation tool for smaller websites

Google is reportedly developing a free moderation tool that smaller websites can use to identify and remove terrorist material.

According to the Financial Times, the software is being developed by Google’s research and development unit Jigsaw in collaboration with Tech Against Terrorism, a UN-backed initiative that assists tech companies in combating online terrorism.

“There are a lot of websites that just don’t have any people to do the enforcement. It is a really labour-intensive thing to even build the algorithms [and] then you need all those human reviewers,” Yasmin Green, chief executive of Jigsaw, was quoted as saying.

Meanwhile, Meta has launched a new open-source software tool called “Hasher-Matcher-Actioner” (HMA) that will help platforms stop the spread of terrorist content, child exploitation material, and other violating content.

With HMA, platforms will be able to scan for any violating content and take action as required.

HMA builds on Meta’s previous open-source image and video matching software, which can be used for any type of violating content.
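
Conceptually, the pattern behind HMA decomposes into three pluggable stages. The sketch below is illustrative only and does not reproduce Meta’s actual HMA API; it merely shows how hashing, matching, and actioning compose into one pipeline.

```python
# Conceptual hasher-matcher-actioner pipeline; names are illustrative,
# not Meta's actual HMA API.
import hashlib
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class HasherMatcherActioner:
    hasher: Callable[[bytes], str]      # content bytes -> fingerprint
    hash_db: Set[str]                   # shared database of known hashes
    actioner: Callable[[str], None]     # response to a match

    def process(self, content: bytes, content_id: str) -> None:
        digest = self.hasher(content)   # 1. hash the incoming content
        if digest in self.hash_db:      # 2. match against known hashes
            self.actioner(content_id)   # 3. act: remove, queue review, etc.

# Example wiring with a trivial hasher and a logging action.
hma = HasherMatcherActioner(
    hasher=lambda b: hashlib.sha256(b).hexdigest(),
    hash_db={hashlib.sha256(b"known violating media").hexdigest()},
    actioner=lambda cid: print(f"Removing {cid} and queueing for review"),
)
hma.process(b"known violating media", "post-123")  # triggers the action
```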


Artificial intelligence to deter and remove terrorist propaganda online

Artificial intelligence (AI) plays a crucial role in deterring and removing terrorist propaganda online. Tech giants like Facebook, Twitter, and YouTube have invested in AI technologies to proactively identify and remove extremist content from their platforms. These platforms employ thousands of human reviewers and use AI algorithms for tasks such as image and video matching, language understanding, and identifying terrorist clusters.

AI algorithms can quickly analyze and match images and videos against known terrorist content, preventing them from being uploaded or shared on the platforms. Language-understanding algorithms help identify text that advocates terrorism, enabling swift action against such content. Platforms also use algorithms to identify and remove accounts that support terrorism, taking into account factors like their connections to disabled accounts.

There is no one tool or algorithm to stop terrorism and violent extremism online. Instead, we use a range of tools to address different aspects of how we see dangerous content manifest on our platforms, writes Erin Saltman for Facebook. Some examples of the tooling and AI we use to proactively detect terrorist and violent extremist content include:

  • Image and video matching: When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, for instance, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.
  • Language understanding: We have used AI to understand text that might be advocating for terrorism. This analysis is language-specific and often specific to particular groups.
  • Removing terrorist clusters: We know from studies of terrorists that they tend to radicalise and operate in clusters. This offline trend is reflected online as well. So, when we identify pages, groups, posts or profiles as supporting terrorism, we also use algorithms to “fan out” and try to identify related material that may also support terrorism. We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account (a minimal sketch of this signal follows this list).
  • Recidivism: We are now much faster at detecting new accounts created by repeat offenders (people who have already been blocked from Facebook for previous violations). Through this work, we have been able to dramatically reduce the time that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. We are constantly identifying new ways that terrorist actors try to circumvent our systems, and we update our tactics accordingly.
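
Below is a minimal sketch of the “fan-out” signal from the list above: scoring accounts by how much their network overlaps with accounts already disabled for terrorism. The friendship map, account names, and threshold are hypothetical, and a real system would queue matches for human review rather than act automatically.

```python
# Minimal sketch of the "fan-out" signal; all data here is hypothetical.
friends = {
    "acct_a": {"banned_1", "banned_2", "acct_b"},
    "acct_b": {"acct_a", "acct_c"},
}
disabled_for_terrorism = {"banned_1", "banned_2"}

def disabled_friend_ratio(account: str) -> float:
    """Fraction of an account's friends already disabled for terrorism."""
    contacts = friends.get(account, set())
    if not contacts:
        return 0.0
    return len(contacts & disabled_for_terrorism) / len(contacts)

# Fan out from known violations: flag accounts whose networks overlap
# heavily with disabled accounts, then send them to human review.
review_queue = [acct for acct in friends if disabled_friend_ratio(acct) > 0.5]
print(review_queue)  # -> ['acct_a'] (2 of 3 friends were disabled)
```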

Data analytics can be used to identify vulnerable people and to ensure potential victims are identified quickly and consistently. These tools can also be used to pinpoint and monitor pathways to radicalization, stop the spread of terrorist propaganda, and better identify individuals being radicalized.

The use of AI in combating terrorism online has shown promising results. Tech companies have made significant progress in proactively detecting and removing terrorist content, reducing the time that recidivist accounts are active on their platforms. Continuous collaboration, innovation, and a balance between technology and human expertise are key to effectively countering terrorist propaganda and ensuring a safer online environment for all users.

However, AI alone cannot solve the problem of online extremism. It is complemented by human expertise, as trained reviewers play a crucial role in understanding nuanced language, detecting emerging trends, and reviewing content that may not be obviously violating. Partnerships with external organizations help platforms gain insights into regional trends and enhance counterspeech efforts.

Ultimately this is about finding the right balance between technology, human expertise, and partnerships. Technology helps us manage the scale and speed of online content. Human expertise is needed for a nuanced understanding of how terrorism and violent extremism manifest around the world and to track adversarial shifts. Partnerships allow us to see beyond trends on our own platform, better understand the interplay between online and offline, and build programmes with credible civil society organisations to support counterspeech at scale, writes Erin Saltman for Facebook.

However, combating terrorism on social media requires more than technological advancements alone. The threat is both cross-platform and transnational, which is why partnerships with other technology companies and other sectors will always be key.

Combating this threat necessitates cross-platform and transnational cooperation among governments, social media companies, and law enforcement agencies. Collaboration is crucial for sharing expertise, intelligence, and best practices, ensuring a coordinated and efficient response to the spread of extremist content.

International agreements and partnerships between governments and technology companies are essential for information sharing and joint efforts to combat online radicalization. Platforms must be encouraged to collaborate, developing and implementing standardized protocols for content removal, reporting mechanisms, and sharing of data with law enforcement agencies. Mutual cooperation can help overcome jurisdictional challenges and create a unified front against online terrorism.

Furthermore, investing in public-private partnerships can enhance the effectiveness of AI and data analytics in combating terrorism. Governments can collaborate with technology companies and research institutions to develop sophisticated tools that leverage AI and data analytics while respecting privacy and ethical considerations. Funding initiatives and grants can encourage innovation in this field and support research that focuses on refining algorithms, enhancing detection capabilities, and improving the overall effectiveness of counterterrorism efforts.

In conclusion, the rise of extremist content on social media calls for innovative solutions that transcend traditional content moderation techniques. By harnessing the power of AI and data analytics, we can develop advanced tools to detect and combat terrorist presence online. However, this endeavor requires collaboration, cooperation, and partnerships across platforms, governments, and law enforcement agencies. Together, we can build a safer digital landscape, thwarting terrorist activities and preserving the integrity of social media platforms for positive engagement and communication.

References and Resources also include:

https://www.siliconrepublic.com/companies/twitter-facebook-anti-terrorism-unit

https://link.springer.com/article/10.1007/s12115-017-0114-0

https://www.theatlantic.com/international/archive/2016/02/twitter-isis/460269/

http://www.philly.com/philly/opinion/commentary/terror-attack-sayfullo-saipov-manhattan-technology-artificial-intelligence-radicalization-20171103.html

