An AI race has ensued between countries such as the US, China and Russia to take the lead in this strategic technology. The US has launched its Third Offset Strategy to leverage technologies such as artificial intelligence, autonomous systems and human-machine networks to offset the advances made by the nation's opponents in recent years.
In July 2017, China's government issued a sweeping new strategy with a striking aim: draw level with the US in artificial intelligence technology within three years, and become the world leader by 2030. China aims to dominate the next generation of "intelligentized" warfare, relying on "long-range, precise, smart, stealthy and unmanned weapons platforms."
Russian President Vladimir Putin has warned: "Artificial intelligence is the future, not only for Russia but for all of humankind. Whoever becomes the leader in this sphere will become the ruler of the world." The Russian military is also developing robots, anti-drone systems, and cruise missiles that would be able to analyze radars and make decisions on the altitude, speed and direction of their flight, according to state media.
"We think attackers have been leveraging automation in building their attacks for a long time," said Brian Witten, senior director at Symantec Research Labs. "In that sense, it is only a matter of time before they start leveraging artificial intelligence (AI) a lot more aggressively. It will be their AI against our AI, and whoever builds the smartest AI will end up winning the arms race."
Symantec expects artificial intelligence-enabled cyber attacks to cause an explosion of network penetration, personal data theft, and an epidemic-level spread of intelligent viruses in the coming years. The number of malware variants rose to 357 million in 2016 from 275 million two years earlier, while the email malware rate also soared from 1 in 244 to 1 in 131 during the same period, as per a report by Symantec in April this year. Ransomware detections touched 463,841 in 2016.
Developments in the field of artificial intelligence and a recent string of attacks on numerous websites signal a terrifying future of cyber warfare, Elon Musk told his five million Twitter followers. His dire warning pertains to a mixture of machine-learning AI and the rather "vulnerable" systems that form the foundation of the internet. Musk said that the future of cyber warfare may not be waged with humans and our weapons, but with AI systems.
AI Enabling Cyber Security
AI has recently been used to fight cyber crime and cyber warfare. Many cyber security firms are using recent advances in AI and machine learning (ML) to secure their clients' systems and data as attacks become more complex and sophisticated, causing unprecedented levels of disruption.
"It is important to recognise that a lot of companies in the security industry have started leveraging AI to make individual products more effective, not only for detecting malware, spam and phishing but also for security operations," said Witten. "Cyber criminals are getting smarter and they are relying on artificial intelligence to stage attacks."
Organizations and intelligence agencies are using User Behavior Analytics (UBA) to detect when legitimate user accounts or identities have been compromised by external attackers or are being abused by insiders for malicious purposes. DARPA had earlier launched a program known as Cyber Insider Threat (CINDER) that proposed to monitor the "keystrokes, mouse movements, and visual cues" of insider threats.
"In the cyber security context, AI definitely helps perceive and identify events and patterns in a much more predictive way, so we can get a well defined output. The whole point of AI is you use pattern recognition software algorithms and deep learning algorithms to detect an anomaly early on and much faster than a human being would," said Burgess Cooper, Partner – Cyber Security, EY.
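The core idea behind UBA-style detection is simple: build a statistical baseline of a user's normal behaviour, then flag observations that deviate from it. A minimal sketch of such an anomaly check (the baseline data, function name and threshold here are hypothetical illustrations, not any vendor's actual method):

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation deviating from the user's baseline
    by more than `threshold` standard deviations (a z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observed != mu
    z = abs(observed - mu) / sigma
    return z > threshold

# Baseline: a user's typical number of file accesses per day.
baseline = [42, 39, 45, 41, 40, 44, 38, 43]

print(is_anomalous(baseline, 41))    # within the normal range
print(is_anomalous(baseline, 400))   # far outside the baseline, so flagged
```

Real UBA products replace this single z-score with models over many signals (login times, locations, keystroke dynamics), but the principle of learning "normal" and alerting on deviation is the same.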
"One of the things driving them to apply AI and ML to security operations is that there are not many security experts in the world for hire. AI doubles the effectiveness of human security experts. It is amazing. Humans with the help of AI are able to detect all kinds of attacks that humans alone could not detect," said Witten. Witten believes that AI should handle the bulk of the data, letting humans focus on strategy.
In a recent blog post, McAfee's chief technology officer Steve Grobman said that in the field of cyber security, as long as there is a shortage of human talent, the industry must rely on technologies such as artificial intelligence and ML to amplify the capabilities of humans.
However, he added, as long as there are human adversaries behind cyber crime and cyber warfare, there will always be a critical need for human intellect teamed with technology.
AI-enabled Cyber Crime and Cyber Warfare
Just recently, an unknown group of hackers launched a massive “distributed denial of service” (DDoS) attack that took down part of the internet in the West. Analysis of the incident confirmed that the hackers used a huge “botnet,” or a system of computers, that comprised simple internet of things (IoT) devices to overload the systems of Dynamic Network Services (Dyn), a firm that is part of the internet address system. However, Musk said in a tweet that these DDoS attacks might not need human hackers, and in the future, they may be simple feats for advanced AI systems.
As AI adoption grows rapidly, a growing number of open-source and commercial AI tools, libraries and platforms are becoming available that can be exploited by hackers and cyber criminals. These include the cloud-based Azure Machine Learning service, which provides tooling for deploying predictive models as analytic solutions; Caffe, an open-source deep learning framework developed by Yangqing Jia that supports various software architectures designed for image segmentation and image classification; and Deeplearning4j, an open-source, distributed deep learning library for the JVM.
Cyber warfare has developed into a more sophisticated type of combat between countries, in which critical infrastructure such as power, telecommunications or banking can be destroyed by damaging the computer systems that control it. It is widely acknowledged that offensive cyber attacks will be a necessary component of any future military campaign, and extreme cyberweapons are being developed now.
Stuxnet, discovered in 2010, was the first cyberweapon; subsequent information leaks confirmed that the trojan was indeed state-sponsored malware designed to damage the industrial control systems for a specific type of centrifuge equipment at a nuclear facility in Iran. Developing such malware takes a lot of resources, skill and time.
Adversaries can use AI to cut short the development time of cyber weapons, employing it to discover the areas of weakness that may exist in targets. The cyber weapon can also be made adaptive to its targets.
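Automated weakness discovery can be illustrated with a toy fuzzer: randomly mutated inputs are thrown at a target routine, and any input that crashes it is recorded as a candidate weakness. This sketch uses blind random mutation for simplicity; the point of applying AI is to guide this search intelligently instead (the `toy_parser` target and `fuzz` helper are hypothetical illustrations):

```python
import random

def toy_parser(data: bytes) -> bytes:
    """A deliberately fragile target: trusts a length byte in the input."""
    if len(data) < 1:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:]
    if length > len(payload):
        raise IndexError("declared length exceeds payload size")
    return payload[:length]

def fuzz(target, seed: bytes, trials: int = 1000):
    """Randomly mutate the seed and collect inputs that crash the target."""
    rng = random.Random(0)  # fixed seed so the run is reproducible
    crashes = []
    for _ in range(trials):
        data = bytearray(seed)
        for _ in range(rng.randint(1, 4)):          # flip 1-4 random bytes
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except Exception as exc:
            crashes.append((bytes(data), type(exc).__name__))
    return crashes

seed = bytes([4, 10, 20, 30, 40])   # valid input: length=4, four payload bytes
found = fuzz(toy_parser, seed)
print(f"{len(found)} crashing inputs found in 1000 trials")
```

Modern coverage-guided fuzzers work on the same loop but prioritise mutations that reach new code paths, which is where machine learning can shrink the search dramatically.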