Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation, medical image analysis, and driverless cars to digital assistants for nurses and doctors and AI-enabled drones for expediting disaster relief operations. AI is lowering barriers to entry and empowering organizations around the world to deliver services at a previously inaccessible scale and speed. Unfortunately, this same power is proving attractive to cyber attackers, and cyber criminals are now adopting AI for their own ends.
“We think attackers have been leveraging automation in building their attacks for a long time,” said Brian Witten, senior director at Symantec Research Labs. Symantec expects artificial intelligence-enabled cyber attacks to cause an explosion of network penetration, personal data theft, and an epidemic-level spread of intelligent viruses in the coming years. According to a Symantec report published in April this year, the number of malware variants rose to 357 million in 2016 from 275 million two years earlier, while the email malware rate soared from 1 in 244 to 1 in 131 over the same period. Ransomware detections reached 463,841 in 2016.
“At Darktrace, we have seen the early signs of threat actors using AI – whether it is to supercharge spoofing emails or to create advanced malware that adapts its behavior to blend into the background noise of the network,” said Sanjay Aurora, Managing Director, Asia Pacific, Darktrace. Take, for example, the creation of spoof emails. By using AI, an attacker could generate communication that, for the average person, is virtually indistinguishable from genuine correspondence. And by leveraging the speed and scale made possible by AI, it would take only two attackers to create code that could generate two million emails a day with an 85% success rate – ultimately making attacks significantly more profitable.
The threat is compounded by the rapid rate of AI adoption: a growing number of open-source and commercial AI tools, libraries, and platforms are becoming available that can be exploited by hackers and cyber criminals. These include the cloud-based Azure Machine Learning service, which provides tooling for deploying predictive models as analytic solutions; Caffe, an open-source deep learning framework developed by Yangqing Jia that supports software architectures designed for image classification and image segmentation; and Deeplearning4j, an open-source, distributed deep learning library for the JVM.
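Just how low the barrier to entry has become can be illustrated with a toy example: a single-neuron perceptron that learns a simple rule in a few lines of pure Python, with no framework at all. This is an illustrative sketch only, not tied to any of the tools named above; real frameworks handle vastly larger models, but the underlying machinery of machine learning really is this accessible.

```python
# Toy perceptron: learns the logical AND function in pure Python.
# Purely illustrative of how accessible basic ML has become.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias with the classic perceptron update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: the AND truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

The point is not the sophistication of this particular model but that the same building blocks, scaled up by the frameworks listed above, are equally available to defenders and attackers.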
But AI attacks won’t just target emails and corporate networks. There is a more worrying type of attack on the horizon – the sabotage of critical infrastructure. Advanced threat actors are turning away from simple data theft and looking instead to cause mass disruption. And as cities and nations trend towards ‘smart city’ infrastructure, the attack surface has grown exponentially – meaning that the risk has never been higher. Attackers can use AI to bypass traditional security tools and slowly and subtly damage the operations of that infrastructure – all whilst going undetected.
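The defensive counterpoint to such ‘low and slow’ sabotage is behavioral baselining: learning what normal looks like for a system and flagging readings that drift outside that range. The following is a minimal sketch in pure Python of one such technique, a rolling z-score check; the sensor trace, window size, and threshold are hypothetical, and production systems such as Darktrace’s use far more sophisticated models.

```python
import statistics

def find_anomalies(readings, window=10, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        # Guard against a perfectly flat baseline (stdev == 0).
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical sensor trace: steady around 50, then a sudden excursion.
trace = [50, 51, 49, 50, 52, 50, 49, 51, 50, 50, 51, 50, 90, 50, 49]
print(find_anomalies(trace))  # -> [12], the index of the excursion
```

An attacker deliberately moving slowly stays under such a static threshold, which is precisely why defenders are turning to AI models that adapt the baseline itself.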
Just recently, an unknown group of hackers launched a massive distributed denial of service (DDoS) attack that took down part of the internet in the West. Analysis of the incident confirmed that the hackers used a huge botnet – a network of compromised computers – made up of simple internet of things (IoT) devices to overload the systems of Dynamic Network Services (Dyn), a firm that is part of the internet address system. Elon Musk said in a tweet that such DDoS attacks might not need human hackers at all, and that in the future they may be simple feats for advanced AI systems.
AI will also enable sophisticated cyber warfare, in which countries can destroy critical infrastructure such as power, telecommunications, or banking by damaging the computer systems that control it. It is widely acknowledged that offensive cyber attacks will be a necessary component of any future military campaign, and extreme cyberweapons are being developed now. Stuxnet, discovered in 2010, was the first such cyberweapon; subsequent information leaks confirmed that the trojan was indeed state-sponsored malware designed to damage industrial control systems for a specific type of centrifuge equipment at a particular nuclear facility in Iran. Developing such malware takes substantial resources, skill, and time.
The UK’s intelligence and security agency GCHQ commissioned a study into the use of AI for national security purposes. It warns that while the emergence of AI creates new opportunities for boosting national security and keeping members of the public safe, it also presents potential new challenges, including the risk of the same technology being deployed by attackers. “Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities,” says the report from the Royal United Services Institute for Defence and Security Studies (RUSI).
Adversaries can use AI to shorten the development time of cyber weapons, employing it to discover weaknesses in their targets; the resulting weapon can also be made adaptive to each target. Nation states will have to be on high alert to protect their energy grids, manufacturing plants, and airports from sophisticated cyber threats. Ultimately, the future almost certainly holds the reality of AI-driven cyber attacks, where malware will have the ability to self-propagate via a series of autonomous decisions and intelligently tailor itself to the parameters of the infected system in order to become stealthier and evade detection, said Sanjay Aurora.
Developments in the field of artificial intelligence, together with a recent string of attacks on numerous websites, signal a terrifying future of cyber warfare, Elon Musk told his five million Twitter followers. His dire warning concerns the combination of machine-learning AI and the rather vulnerable systems that form the foundation of the internet. Musk said that future cyber warfare may be waged not with humans and our weapons, but with AI systems.