AI is a powerful tool that is increasingly being used in both the public and private sectors to make people and society at large healthier, wealthier, safer, and more sustainable. The United Nations Secretary-General, António Guterres, has indicated that, if harnessed appropriately, AI can play a role in the fulfillment of the 2030 Agenda for Sustainable Development, ending poverty, protecting the planet, and ensuring peace and prosperity for all. It is, however, an enormously powerful technology that is not without its challenges. If not used properly and with appropriate safeguards, this technology can hamper fundamental freedoms and infringe upon human rights, such as the rights to privacy, equality, non-discrimination, and freedom of opinion.
In 2020, UNOCT/UNCCT and UNICRI launched a joint research initiative to take stock of advancements in AI from the counter-terrorism perspective. This initiative, which is funded with generous contributions from the Kingdom of Saudi Arabia and the Government of Japan, seeks to explore the dual nature potential of this technology.
Through this initiative, UNOCT/UNCCT and UNICRI have examined some of the more concerning aspects of the advent of this technology, including the possibility of its malicious use by terrorist groups and individuals to spread terrorist propaganda and disinformation, as well as the vulnerability of AI systems embedded in critical infrastructure to cyberattacks targeting the integrity of the data on which these systems rely.
The initiative also explored how AI might be leveraged to support counter-terrorism efforts, in particular to combat terrorist use of the Internet and social media in South and South-East Asia. It is well established that big tech platforms increasingly make use of AI to detect terrorist and extremist content online, but other applications that could support law enforcement and counter-terrorism agencies, such as complex event processing or social network mapping and analysis, remain under-explored.
AI in Terrorism
Terrorists have been observed to be early adopters of emerging technologies, which tend to be under-regulated and under-governed, and AI is no exception. More than two decades into the 21st century, we have seen many examples of terrorists turning to new and emerging technologies such as drones, virtual currencies and social media.
But we have also seen the dark side of AI – a side that has not received as much attention and remains underexplored. The reality is that AI can be extremely dangerous if used with malicious intent. With a proven track record in the world of cybercrime, it is a powerful tool that could conceivably be employed to further or facilitate terrorism and violent extremism conducive to terrorism, by, for instance, providing new modalities for physical attacks with drones or self-driving cars, augmenting cyberattacks on critical infrastructure, or enabling the spread of hate speech and incitement to violence in a faster and more efficient way.
As it continues to be weaponized, AI could prove a formidable threat, allowing adversaries — including nonstate actors — to automate killing on a massive scale. The combination of drone expertise and more sophisticated AI could allow terrorist groups to acquire or develop lethal autonomous weapons, or “killer robots,” which would dramatically increase their capacity to create incidents of mass destruction in Western cities.
According to a February 2018 report, terrorists could benefit from commercially available AI systems in several ways. The report predicts that autonomous vehicles will be used to deliver explosives; low-skill terrorists will be endowed with widely available high-tech products; attacks will cause far more damage; terrorists will create swarms of weapons to “execute rapid, coordinated attacks”; and, finally, attackers will be farther removed from their targets in both time and location. As AI technology continues to develop and begins to proliferate, “AI [will] expand the set of actors who are capable of carrying out the attack, the rate at which these actors can carry it out, and the set of plausible targets.”
AI in Counter-Terrorism
According to the Engineering and Physical Sciences Research Council: "Artificial Intelligence technologies aim to reproduce or surpass abilities (in computational systems) that would require 'intelligence' if humans were to perform them. These include: learning and adaptation; sensory understanding and interaction; reasoning and planning; optimisation of procedures and parameters; autonomy; creativity; and extracting knowledge and predictions from large, diverse digital data."
The vast amount of digital information now generated by the average individual means that more of this routine activity could be understood through analysis. Sources include communications metadata and internet connection records, but also extend to location and activity tracking, purchases and social media activity.
Automated data analytics are used to support the activities of the intelligence and security services, particularly through data visualization. Algorithms prioritize terrorist suspects and routinely assess the risk of air-travel passengers. Information can be collected and stored by default, to be analysed at a later time with a view to revealing patterns and links that expose terrorist networks or suspicious activities. Machine learning approaches allow the interpretation and analysis of otherwise inaccessible patterns in large amounts of data. These approaches may involve filtering, analysis of relationships between entities, or more sophisticated image or voice recognition.
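To illustrate the relationship-analysis approach described above, the following minimal sketch (the entity labels and co-occurrence records are entirely hypothetical) builds an undirected co-occurrence graph from shared records, such as messages or transactions, and ranks entities by degree centrality, a simple measure analysts can use to surface the most connected nodes in a network. Operational systems use far richer data and more sophisticated graph measures; this only shows the basic idea.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical records: each set lists entities that co-occur
# (e.g. parties to the same message or transaction).
records = [
    {"A", "B"},
    {"A", "C"},
    {"A", "B", "D"},
    {"C", "E"},
]

# Build an undirected co-occurrence graph as an adjacency map.
graph = defaultdict(set)
for record in records:
    for u, v in combinations(sorted(record), 2):
        graph[u].add(v)
        graph[v].add(u)

# Degree centrality: entities linked to many others may merit closer review.
centrality = {node: len(neighbours) for node, neighbours in graph.items()}
ranking = sorted(centrality, key=centrality.get, reverse=True)
print(ranking[0])  # the most connected entity in this toy graph: "A"
```

In practice, analysts would weight edges by frequency and recency and use measures such as betweenness centrality to find brokers between otherwise separate clusters, but the data structure is the same.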
AI can be used to make predictions about terrorism based on communications metadata, financial transaction information, travel patterns and internet browsing activity, as well as publicly available information such as social media activity. The development and use of AI in the financial services sector has been spurred by mandatory reporting of suspicious activity in financial transactions.
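The suspicious-activity flagging mentioned above can be reduced to a toy example. The sketch below (the amounts are invented) flags transactions that deviate sharply from an account's baseline using a simple z-score threshold; it is a crude stand-in for the statistical and machine learning models actually deployed in financial monitoring, shown only to make the underlying mechanism concrete.

```python
import statistics

# Hypothetical transaction amounts for a single account (units arbitrary).
amounts = [120.0, 95.0, 110.0, 130.0, 105.0, 9800.0, 115.0]

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

# Flag transactions more than two standard deviations above the mean.
flagged = [a for a in amounts if (a - mean) / stdev > 2.0]
print(flagged)  # the outlier payment: [9800.0]
```

Real systems combine many such signals (counterparties, velocity, geography) and learn thresholds from labelled suspicious-activity reports rather than fixing them by hand.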
AI can perform a myriad of complex tasks that formerly required a human being. Social media companies use AI to help identify and remove terrorist content and materials that violate their terms of service, so far with mixed results at best: many such posts have still made it through, proving that the technology is not yet up to the task. In fact, instead of preventing terrorist content from spreading, the Associated Press recently reported that Facebook's AI was automatically generating videos of, and promoting, the terrorist content it should have been removing.
AI technology in its current form is limited and cannot yet evaluate context when reviewing content. Machine learning excels at identifying subtle patterns in old data and applying them to new data. It fails when those patterns are not fully relevant to the new situation, and it cannot consider context beyond that on which it has been trained.
"In our areas of work in the fields of justice, crime prevention, security and the rule of law, we have seen promising uses of AI, including its ability to help locate long-missing children, scan illicit sex ads to identify and disrupt human trafficking rings, and flag financial transactions that may indicate money laundering," said Antonia Marie De Meo, Director of the United Nations Interregional Crime and Justice Research Institute.
This initiative looks at how emerging, more powerful technologies can affect counter-terrorism: by organizing and interpreting a vast array of seemingly uncorrelated data; automating decision-making; and predicting behaviour and events at the individual and societal levels. The initiative examines the human rights risks that come with the use of such technologies and sets out a framework for human rights-compliant use.
Given the international linkages and cross-border implications of many technological systems, a regional and international approach becomes vital to ensure terrorists do not have the opportunity to exploit regulatory gaps that can expose vulnerabilities in AI systems. We need to build resilient governing structures that can quickly and effectively respond to and mitigate the impact of the malicious use of AI by terrorists.
Additionally, in the context of the Global Counter-Terrorism Coordination Compact Working Group on Promoting and Protecting Human Rights and the Rule of Law while Countering Terrorism and Supporting Victims of Terrorism (the Global Compact Working Group on Human Rights) and with the generous support of seed funding provided by UNOCT/UNCCT, OHCHR in cooperation with UNOCT/UNCCT and UNICRI have analyzed the human rights perspectives on the use of AI in counter-terrorism, with the goal of providing practical guidance and recommendations to Member States, technology providers, and United Nations entities to support them in using AI to counter terrorism in full compliance with human rights.