
The Growing Threat of Deepfakes: AI-Generated Videos and Audio as Tools for Terrorism, Psychological Warfare, and Cyber Attacks

In today’s rapidly evolving technological landscape, artificial intelligence (AI) is driving remarkable advancements. However, as with any powerful tool, there are dark sides to its applications. One of the most alarming developments in recent years is the rise of deepfakes—AI-generated videos and audio that are nearly indistinguishable from real recordings. Originally hailed for creative and entertainment purposes, deepfakes have quickly become a tool of manipulation, deceit, and potential harm. Their use in terrorism, psychological warfare, and cyber attacks is a growing threat that demands urgent attention from governments, organizations, and individuals alike.

What Are Deepfakes?

In recent years, consumer imaging technology—through digital cameras, mobile phones, and other devices—has become ubiquitous, allowing people around the world to take and share images and videos instantly. Historically, falsifying photos and videos required significant skill and resources, either through advanced CGI or painstaking Photoshop work. However, advances in artificial intelligence have dramatically lowered the barrier for creating fake video and audio.

The term “deepfakes” refers to AI-generated media that can make people—often celebrities or political figures—appear to say or do things they never did. Examples include actor Alden Ehrenreich’s face being replaced with Harrison Ford’s in Solo: A Star Wars Story or a deepfake of Mark Zuckerberg bragging about his power to rule the world.

Deepfakes are synthetic media created using deep learning algorithms to swap faces, alter voices, or mimic real people in audio and video content. By feeding the AI large datasets of real video or audio of a target, these algorithms can produce highly realistic, fake representations that can fool even the most trained eye or ear. Advanced image and video editing applications, widely available to the public, enable this manipulation, making it difficult to detect visually or through current image analysis and media forensics tools. As the technology continues to improve, detecting a deepfake becomes increasingly difficult.
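To make the mechanics concrete, the sketch below outlines the shared-encoder, dual-decoder autoencoder design popularized by early face-swap tools. It is a minimal illustration under assumed parameters (the 64x64 input resolution, layer sizes, and latent dimension are arbitrary choices for the example), not the code of any actual deepfake application.

```python
# Minimal sketch of the shared-encoder / dual-decoder autoencoder design
# used by early face-swap deepfake tools. Layer sizes, depths, and the
# 64x64 input resolution are illustrative assumptions, not a real tool's code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 3x64x64 face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns identity-agnostic facial structure;
# separate decoders learn to render each specific identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# After training each (encoder, decoder) pair on its own person's footage,
# the swap is simply: encode person A's frame, decode with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
fake_b = decoder_b(encoder(frame_of_a))
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])
```

Sharing one encoder across both identities is what makes the swap work: the encoder is forced to learn identity-agnostic facial structure (pose, expression, lighting), while each decoder learns to render one specific face, so decoding person A's latent code with person B's decoder yields B's face wearing A's expression.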

Originally created for entertainment or creative purposes, deepfakes have found a darker application: cybercrime. In 2023, there was an unprecedented surge in deepfake-related scams. A study by identity-verification provider Onfido revealed a staggering 3,000% increase in deepfake fraud attempts that year, with financial fraud among the most common uses.

While convincing deepfakes are still relatively rare due to the complexity involved in creating them, the technology is advancing rapidly. As creation tools become more accessible, less expensive, and easier to use, the threat of deepfakes being deployed in large-scale attacks is no longer a distant possibility but an imminent danger. Industries that rely on secure communication, such as finance and government sectors, are particularly vulnerable to these types of attacks, which can lead to significant financial losses and reputational damage.

Economic and Personal Consequences

The manipulation of visual media extends beyond the cosmetic industry, where “touch-ups” are common to make models and skincare products look more appealing. It has seeped into politics and business, where the stakes are significantly higher. While many image manipulations are benign or artistic, others are used for more sinister purposes, such as propaganda or misinformation campaigns.

Deepfake technology can also be exploited for financial scams. For example, hackers could create a convincing audio deepfake of a company executive instructing an employee to transfer funds to a fraudulent account. These attacks, known as Business Email Compromise (BEC) scams, could become even more dangerous with deepfake audio, as employees may not question the legitimacy of the request.

In the corporate world, a deepfake of a CEO making false announcements could affect stock prices, damage a brand’s reputation, or create legal issues. On a personal level, deepfakes have already been used in malicious ways, such as revenge porn or identity theft.

Experts have raised concerns about other potential uses of deepfakes, such as misrepresenting products in online sales or fabricating car accidents to deceive insurance companies. There have also been cases where manipulated images were used to falsify research data. As the technology improves, there are fears that deepfakes could eventually create convincing portrayals of major events that never happened, amplifying misinformation and conspiracy theories.

Deepfakes in Cybercrime

In one widely reported case, cybercriminals created a highly realistic deepfake of a company executive. The impersonator used AI-generated video and audio to trick an employee into authorizing a massive wire transfer, and by the time the scam was uncovered, the company had lost 26 million euros. The attack exemplifies how deepfakes can be weaponized to exploit human trust and manipulate individuals into taking actions that benefit criminals.

This incident also reveals the growing sophistication of cybercriminals. Rather than relying solely on traditional methods like phishing or ransomware, attackers are increasingly using advanced AI tools to create convincing simulations of real people, adding a new layer of complexity to cybersecurity defenses.

Psychological Warfare and Propaganda

Deepfakes present an unprecedented tool for psychological warfare. By creating fake videos or audio clips of leaders, celebrities, or other influential figures, adversaries can manipulate emotions, incite fear, or generate distrust. For instance, deepfake technology could be used to create an inflammatory speech from a prominent leader, spreading panic or unrest in targeted populations.

During an election, a deepfake of a candidate admitting to criminal activity or making derogatory remarks could sway public opinion and undermine the democratic process. The ability to manufacture fake but seemingly real events threatens the integrity of information, undermining trust in media and institutions.

Adversaries could also use deepfakes to psychologically torture individuals by creating realistic fake content that harms their reputation or well-being, fostering fear and paranoia.

The most infamous form of this kind of content is nonconsensual deepfake pornography, which superimposes a celebrity or public figure’s likeness into a compromising scene. Actress Scarlett Johansson, who has unwittingly starred in a supposedly leaked pornographic video that the Washington Post says has been viewed more than 1.5 million times on a major porn site, feels the situation is hopeless. “Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” she told the Post in late 2018. “The Internet is a vast wormhole of darkness that eats itself.”

Cyber Warfare and Disinformation Campaigns

While some deepfakes are created for entertainment, others serve more dangerous purposes. Cybercriminals, for instance, could use AI-generated videos to manipulate stock prices by releasing fake footage of a CEO discussing a company’s financial troubles.

Deepfakes could be a boon to hackers in a couple of ways. AI-generated phishing emails that aim to trick people into handing over passwords and other sensitive data have already been shown to be more effective than ones generated by humans. Now hackers will be able to throw highly realistic fake video and audio into the mix, either to reinforce instructions in a phishing email or as a standalone tactic.

Deepfakes also pose a threat to political stability, as they could spread false information during elections or stoke geopolitical tensions by creating realistic yet fabricated videos of national leaders making aggressive declarations.

Cyber warfare has traditionally involved attacks on digital infrastructure, but with the rise of deepfakes, the scope of cyber warfare is expanding to include the manipulation of human perception. A well-timed, convincing deepfake could be deployed during critical moments—such as military operations, elections, or economic negotiations—causing confusion, delay, or even conflict.

One of the most prominent threats is the ability to use deepfakes as part of disinformation campaigns. A deepfake of a financial executive making false statements could crash markets, or a fabricated video of a military general could lead to real-world consequences in geopolitical conflicts. Combined with the speed and reach of social media, these deepfakes can go viral before they are debunked, leaving lasting damage in their wake.

For instance, the bipartisan Senate Intelligence Committee and Special Counsel Robert Mueller found that Russia’s social media influence operation, beginning in 2014, included Russian intelligence operatives visiting the United States to study how to maximize Moscow’s campaign effectiveness. Their goal was to divide Americans and give one presidential candidate an advantage over another. This campaign marked a turning point in how digital media manipulation could be weaponized.

Information Warfare Tool

This technology lowers the cost of engaging in information warfare at scale and broadens the range of actors able to engage in it. Today, propaganda is largely generated by humans, such as China’s ‘50-centers’ and Russian ‘troll farm’ operators. However, improvements in deepfake technology, especially text-generation tools, could help take humans ‘out of the loop’. The key reason is not that deepfakes are more authentic than human-generated content, but that they can produce ‘good enough’ content faster, and more economically, than current models of information warfare. Deepfake technology will be a particular value-add to the so-called Russian model of propaganda, which emphasizes volume and rapidity of disinformation over plausibility and consistency in order to overwhelm, disorient, and divide a target audience.

“Where things get especially scary is the prospect of malicious actors combining different forms of fake content into a seamless platform,” said Andrew Grotto of the Center for International Security and Cooperation at Stanford University. “Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis.”

In 2019, journalists discovered that intelligence operatives had allegedly created a false LinkedIn profile for a ‘Katie Jones’, probably to collect information on security professional networks online. Researchers exposed the Katie Jones fake through technical photo analysis and a rather old-fashioned mechanism: asking the employer listed on LinkedIn (the Center for Strategic and International Studies) if such a person worked for it.

Deepfakes in Terrorism

One of the most chilling potential uses of deepfakes is in the hands of terrorist organizations. Deepfakes can be weaponized to create false narratives, incite violence, or manipulate public opinion. For example, a deepfake video of a political leader declaring war or surrender could spark chaos, riots, or even violent attacks, particularly in regions already vulnerable to conflict.

Deepfakes can also be used for extortion and blackmail, with attackers threatening to release damaging or embarrassing fabricated content unless their demands are met.

Terrorist groups could also use deepfakes to discredit government officials, spread disinformation, or recruit new members by creating fake testimonials or messages of support from prominent figures. As deepfakes become more sophisticated, these scenarios become more plausible, amplifying the potential for widespread destabilization.

Implications for National Security

The use of deepfakes in cyber warfare and terrorism poses a severe national security threat. Intelligence agencies rely on authentic information to make decisions, and if deepfakes can successfully mimic real events or people, it may lead to erroneous assessments. This problem is compounded by the fact that the public’s trust in the media and government institutions is already at an all-time low. The rapid dissemination of disinformation through social media platforms only adds to the urgency of addressing the deepfake threat.

A realistic-seeming video showing an invasion or a clandestine nuclear program could trigger war between nations. A country’s population is galvanized as newspaper headlines call for war. “We must strike first,” they say in response to alleged footage of another country’s president declaring war on their nation. Is the footage real? Or was it their own country’s intelligence services trying to create a pretext for war?

Governments must invest in developing AI-powered detection tools that can differentiate real content from AI-generated fakes. However, as deepfake technology continues to evolve, the arms race between creators of deepfakes and detectors will only intensify.

Efforts to Combat the Deepfake Threat

Governments, private companies, and researchers are all working to combat the growing threat of deepfakes. Several promising AI-driven tools have been developed to detect deepfake videos and audio, and social media platforms are working on ways to identify and flag manipulated content before it goes viral.

A key issue is that many deepfakes are designed to exploit our trust in visual media, blurring the line between fact and fiction. Startups are working to develop deepfake detection technologies, but their effectiveness remains uncertain. The most reliable defense so far is security awareness training that sensitizes people to the risks.

The rapid evolution of AI has made it possible to generate highly convincing fakes, leaving researchers in a constant race to keep pace. One team, for example, found that early deepfakes had glaring flaws, such as subjects who never blinked because the training data lacked images of closed eyes. Even this flaw was quickly corrected, however, highlighting the speed at which the technology is improving.
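That blinking flaw suggests how simple some early detection heuristics were. One common approach computes the eye aspect ratio (EAR) from facial landmarks: the ratio collapses toward zero when an eye closes, so a long clip whose EAR never dips below a threshold contains a subject who never blinks. The sketch below is a hedged illustration that assumes six (x, y) landmarks per eye have already been extracted by an upstream face-landmark detector such as dlib or MediaPipe; the 0.2 threshold is a conventional choice, not a fixed standard.

```python
# Hedged sketch of the eye-aspect-ratio (EAR) blink heuristic used against
# early deepfakes. Assumes six (x, y) landmarks per eye, ordered p1..p6
# around the eye contour, supplied by an upstream face-landmark detector.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2). EAR falls toward 0 as the eye closes."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # p2-p6
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # p3-p5
    horizontal = np.linalg.norm(eye[0] - eye[3])   # p1-p4
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames with EAR below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:  # count a blink still in progress at clip end
        blinks += 1
    return blinks

# A clip of a real speaker should blink several times per minute; a long
# clip with zero blinks was a red flag for early synthetic footage.
simulated_ears = [0.3] * 100 + [0.1] * 3 + [0.3] * 100  # one blink
print(count_blinks(simulated_ears))  # 1
```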

Legislation is also being considered in many countries to address the misuse of deepfakes, particularly in the context of elections and cybersecurity. However, the problem remains complex. While detection tools and policies are critical, public awareness and media literacy are equally important. Teaching people how to critically assess the content they consume online and recognize potential signs of manipulation will help mitigate the harm caused by deepfakes.

Implications for Trust and Authority

Deepfakes, particularly those that manipulate faces, can be exploited for various malicious purposes, including bullying, revenge, political sabotage, and blackmail. Even the possibility that any recording might be a deepfake can be used to cast doubt on genuine material, suppressing information and eroding confidence in public authorities. Because photographs and videos are commonly used as evidence, the reliability of these sources is compromised by increasingly sophisticated editing techniques. As the capabilities of deepfake technology continue to evolve, the need for robust countermeasures becomes increasingly urgent.

Government Initiatives and Policies

Governments globally are recognizing the need for legislation to address the deepfake threat.

Lawmakers emphasized the imperative of addressing the risks associated with AI-generated deepfake content during a House Oversight subcommittee hearing. A bipartisan consensus acknowledged the government’s role in regulating deceptive deepfake materials, particularly nonconsensual pornography, which constitutes around 96 percent of deepfake videos online and primarily targets women, according to a study by the Dutch AI company Sensity. Representative Gerry Connolly, D-Va., stressed the critical need for additional funding for the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF) to develop advanced and effective deepfake detection tools. Dr. David Doermann of SUNY Buffalo highlighted DARPA’s leadership in tackling deepfakes but underlined the need to explore explainability and trust-and-safety questions from the ground up within the agency.

During the hearing, Connolly commended the Biden administration’s recent AI executive order for taking productive steps to address deepfakes. The order focuses on tools such as watermarking to help users judge the authenticity of online content and distinguish genuine government communications from disinformation. It also instructs the Secretary of Commerce to develop, in collaboration across the federal enterprise, standards and best practices for detecting fake content and tracking the provenance of authentic information.
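Provenance tracking of the kind the order describes typically rests on cryptographic signatures rather than visible marks: a publisher signs a hash of the media file, and anyone holding the publisher’s public key can later verify that the bytes are unchanged. The sketch below is a minimal illustration using Ed25519 from the widely used Python `cryptography` package; it is not the C2PA standard, the executive order’s actual mechanism, or any specific government scheme.

```python
# Minimal sketch of signature-based content provenance: a publisher signs
# the SHA-256 digest of a media file; any holder of the public key can
# verify that the bytes have not been altered since signing. Illustrative
# only; not C2PA or any specific government watermarking scheme.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Sign the digest of the media; the signature ships alongside the file."""
    return private_key.sign(hashlib.sha256(media_bytes).digest())

def verify_media(public_key, media_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the media is byte-for-byte what was signed."""
    try:
        public_key.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

publisher_key = Ed25519PrivateKey.generate()
original = b"...raw video bytes..."  # stand-in for a real media file
sig = sign_media(publisher_key, original)

public_key = publisher_key.public_key()
print(verify_media(public_key, original, sig))              # True
print(verify_media(public_key, original + b"tamper", sig))  # False
```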

In addition to funding, Connolly advocated for targeted legislation to combat deepfakes. Representative Joe Morelle, D-N.Y., introduced the “Preventing Deepfakes of Intimate Images Act” in May 2023, which seeks to criminalize the sharing of nonconsensual deepfake pornography. The proposed bill includes provisions ensuring that consent to create an AI-generated image does not imply consent to share it, and it aims to safeguard the anonymity of plaintiffs seeking protection from deepfake content. As the threat of AI-generated deepfakes continues to evolve, comprehensive measures, including financial support, legislative initiatives, and technological solutions, will be crucial to addressing this growing challenge effectively.

  1. Legislation and Regulation: Several countries have introduced or updated laws to criminalize the creation and dissemination of malicious deepfakes, with penalties for offenders that serve as a deterrent.
  2. International Collaboration: Given the borderless nature of the internet, international collaboration is crucial. Governments are increasingly working together to share intelligence, best practices, and technologies to combat the global spread of deepfakes.
  3. Investment in Research and Development: Governments are allocating resources for research and development to stay ahead of emerging deepfake techniques. Funding is directed towards developing cutting-edge detection technologies and fostering collaboration between the public and private sectors.
  4. Public Awareness Campaigns: Governments are launching public awareness campaigns to educate citizens about the existence of deepfakes and the potential risks they pose. These campaigns aim to empower individuals to critically evaluate the authenticity of online content.

Together, these measures (legislation that penalizes offenders, collaboration across a borderless internet, sustained research investment, and public education) form a multifaceted approach to mitigating the risks associated with deepfake technology.

Conclusion

The rise of deepfake technology presents serious challenges to the integrity of visual media. From undermining political campaigns to stoking international conflict, the potential consequences of this technology are far-reaching.  The weaponization of AI-generated fake videos and audio for terrorism, psychological warfare, and cyber attacks is a significant and evolving threat.

As deepfake technology becomes more accessible and sophisticated, the potential for misuse will continue to grow.  While detection and prevention efforts are underway, the responsibility to address this issue lies with governments, tech companies, and individuals alike.

While AI has enabled the creation of these fakes, it is also being leveraged to counter them. As researchers race to develop more sophisticated detection tools, it is essential for individuals, businesses, and governments to remain vigilant and invest in new solutions. Only by staying ahead of this rapidly evolving threat can we preserve trust in the images and videos that shape our understanding of the world.

As we move further into the digital age, safeguarding the integrity of information must become a top priority. The deepfake threat is not just a technological challenge but a societal one that requires a coordinated and proactive response.

Deepfakes are the latest battleground in the fight for truth and security, and the stakes are higher than ever.
