
Artificial Intelligence and Machine Learning are enabling the future of fully automated Cyber Security

AI emulates human cognition, i.e. learning based on experience and patterns rather than by inference (cause and effect). Artificial intelligence covers everything from machine learning to business intelligence. Machine learning is a branch of AI that refers to technologies enabling computers to learn and adapt through experience. Today, advances in deep learning allow machines to teach themselves how to build models for pattern recognition, rather than relying on humans to build them, said Kris Lahiri, Co-founder and Chief Security Officer of Egnyte, on Quora. The last five years have seen a marked rise in AI and ML technologies for enterprises, most of which can be attributed to advances in computing power and the evolution of paradigms like distributed computing, big data and cloud computing. AI and its subset machine learning are now being hailed by experts as a means to fight cyber-attacks.

 

Rising Sophistication of Cyber Attacks

Cyber attacks are continuously increasing in number, becoming more varied, more sophisticated, and more impactful.

 

Nation-sponsored organizations continue to develop cyber-attack technologies for defense and offense; financially driven criminal groups seek ways to monetize cyber-attacks; hacktivists use cyber to convey their messages; terrorist groups are also shifting to cyberspace; and finally, people with no apparent motive who seek to demonstrate their technical skills will continue “contributing” to the attacker ecosystem, said Gil Shwed, Founder and CEO of Check Point Software Technologies Ltd. Last but not least, many cyber-attacks are run automatically by “bots” that scan the entire network and find the weakest spot, so an organization does not need to look like an “attractive target”; it simply needs to have a vulnerable point.

 

Attacks are becoming more and more dangerous despite the advancements in cybersecurity. The main challenges of cybersecurity include:

  • Geographically-distant IT systems—geographical distance makes manual tracking of incidents more difficult. Cybersecurity experts need to overcome differences in infrastructure to successfully monitor incidents across regions.
  • Manual threat hunting—can be expensive and time-consuming, resulting in more unnoticed attacks.
  • Reactive nature of cybersecurity—companies can resolve problems only after they have already happened. Predicting threats before they occur is a great challenge for security experts.
  • Hackers often hide and change their IP addresses—hackers use tools such as virtual private networks (VPNs), proxy servers, and Tor browsers to stay anonymous and undetected.

 

Cybersecurity is one of the many applications of artificial intelligence. A report by Norton put the global cost of typical data breach recovery at $3.86 million, and indicated that companies need 196 days on average to recover from a data breach. For this reason, organizations should invest more in AI to avoid wasted time and financial losses.

 

AI/ML for Cyber Security

Cyber security defense systems will need to become more sophisticated to cope with huge amounts of data. Human analysts cannot process all of this information on their own, so we will increasingly rely on artificial intelligence to help make decisions. Cyber security companies deal with very large volumes of high-dimensional data.

 

Machine learning is at its best when processing huge volumes of data, fast. For example, identifying personally identifiable information (PII) within huge amounts of data is a good use for AI in cyber security, says Joan Pepin, CISO and VP of operations at Auth0. This is especially important in regions such as Europe, where the EU’s General Data Protection Regulation (GDPR) is “mandating new levels of data governance”, she says.
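To make Pepin’s example concrete, here is a minimal, illustrative sketch of the rule-based core of a PII scanner (not Auth0’s implementation; the patterns and names are assumptions for illustration). Production systems layer ML-based entity recognition and validation on top of patterns like these.

```python
import re

# Illustrative patterns only; real PII detection combines many more rules
# with ML-based entity recognition and validation (e.g. Luhn checks).
PII_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ipv4":        re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return every PII-like match found in a block of text, keyed by type."""
    return {kind: pat.findall(text)
            for kind, pat in PII_PATTERNS.items() if pat.search(text)}

print(scan_for_pii("Contact jane@example.com from 10.0.0.12, SSN 123-45-6789"))
```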

 

Data science / machine learning techniques to process and analyze large collections of threat data

Organizations are already beginning to use AI to bolster cybersecurity and offer more protections against sophisticated hackers. AI helps by automating complex processes for detecting attacks and reacting to breaches. These applications are becoming more and more sophisticated as AI is deployed for security.

 

The most established cyber security companies have a long history of utilizing AI. F-Secure’s researcher Andrew Patel says: Cyber security companies have been using data science techniques to process and analyze large collections of both historic and fresh threat intelligence data for many years. F-Secure have been utilizing machine learning algorithms to solve classification, clustering, dimensionality reduction, and regression problems for over a decade, and nowadays, many of us use data science techniques in our everyday work. Recent advances in neural network architectures, such as generative adversarial networks, have opened new and interesting paths to solving problems in the cyber security space. We’re enthusiastically exploring many of these new paths right now.

 

Detection and response solutions are an excellent example of the use of AI and machine learning. F-Secure collects billions of events every month from its customers’ computers, and only a fraction of these events are real attacks. Machine learning helps narrow the number of events down to a level a human can handle, making it possible to identify the real attacks and contain them quickly.
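A minimal sketch of this kind of triage, assuming synthetic endpoint events with hypothetical features (an illustration of the idea, not F-Secure’s actual pipeline): an unsupervised anomaly detector ranks a flood of events so that only the most suspicious handful reach a human analyst.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical numeric features per endpoint event, e.g.
# [processes_spawned, bytes_out, distinct_dest_ips, runs_at_night]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5, 2e4, 3, 0], scale=[2, 5e3, 1, 0.1], size=(10000, 4))
odd    = rng.normal(loc=[40, 5e6, 90, 1], scale=[5, 1e6, 10, 0.1], size=(10, 4))
events = np.vstack([normal, odd])

# Unsupervised anomaly detector: rank events so analysts see the riskiest first.
model = IsolationForest(contamination=0.001, random_state=0).fit(events)
scores = model.score_samples(events)     # lower score = more anomalous
suspicious = np.argsort(scores)[:10]     # top 10 events for human review
print(suspicious)
```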

 

Included in the Endpoint Detection and Response (EDR) service, F-Secure’s Broad Context Detection™ uses real-time behavioral, reputational and big data analysis with machine learning to automatically place detections into context. It evaluates risk levels, affected host criticality and the prevailing threat landscape to understand the scope of a targeted attack. Machine learning is an integral building block of the EDR service, helping detect and respond to targeted attacks efficiently.

 

Cyber security is one of the key domains where machine learning is extremely helpful, and ML techniques can be applied to several specific use cases within it. At the moment AI – or more specifically, machine learning – is mostly used for anomaly detection, says Etienne Greeff, CTO and co-founder of SecureData. He says the most useful systems are those that solve “specific problems”.

 

AI algorithms / Machine Learning (ML) to find new attacks

Traditionally, cyber security has protected companies against threats we have seen before. But the cyber threat landscape is getting more complicated. New generations of malware and cyber-attacks can be difficult to detect with conventional cybersecurity protocols, and they evolve over time, so more dynamic approaches are necessary. It is difficult to build a rule for something we do not yet know exists.

 

Machine learning systems can be trained to find attacks that are similar to known attacks. This way we can detect even the first intrusions of their kind and develop better security measures. New AI algorithms use machine learning to adapt over time. Simon Crosby, co-founder and CTO at Bromium, writes that ML makes it easier to respond to cybersecurity risks: solutions that rely on ML use data from prior cyber-attacks to respond to newer but somewhat similar risks.
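A toy sketch of this approach using scikit-learn (synthetic data stands in for labeled historical traffic; nothing here reflects Bromium’s products): a classifier trained on known attacks also flags held-out samples that resemble them, without needing a signature for each variant.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in for labeled historical traffic: 1 = known attack, 0 = benign.
X, y = make_classification(n_samples=5000, n_features=12, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Held-out samples stand in for "new" attacks resembling known ones:
# the model scores them without an explicit signature for each variant.
print("recall on unseen attack-like samples:",
      clf.score(X_te[y_te == 1], y_te[y_te == 1]))
```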

 

AI can also collect intelligence about new threats, attempted attacks and successful breaches, and learn from it all, says Dan Panesar, VP EMEA at Certes Networks. Indeed, current iterations of machine learning have proven to be more effective at finding correlations in large data sets than human analysts, says Sam Curry, chief security officer at Cybereason. “This gives companies an improved ability to block malicious behaviour and reduce the dwell time of active intrusions,” he says. “AI technology has the ability to pick up abnormalities within an organisation’s network and flag them more quickly than a member of the cyber security or IT team could.”

 

Traditional security techniques use signatures or indicators of compromise to identify threats. These techniques work well for previously encountered threats, but they are not effective against threats that have not yet been discovered. Signature-based techniques can detect about 90% of threats. Replacing them entirely with AI can push detection rates up toward 95%, but at the cost of an explosion of false positives. The best solution is to combine traditional methods and AI; together they can maximize detection rates while minimizing false positives.
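A hedged sketch of such a hybrid detector (the event fields, thresholds and model interface are illustrative assumptions, not a specific product’s API): signatures give cheap, precise verdicts on known threats, while an ML score handles the rest.

```python
def detect(event, signatures, ml_model, threshold=0.9):
    """Hybrid detection: exact signatures first, ML scoring as a fallback.

    `signatures` is a set of known indicator hashes; `ml_model` is any
    trained classifier exposing predict_proba (names are illustrative).
    """
    if event["sha256"] in signatures:           # known threat: cheap, precise
        return "block (signature match)"
    p = ml_model.predict_proba([event["features"]])[0][1]
    if p >= threshold:                          # likely novel threat
        return f"block (ML score {p:.2f})"
    if p >= 0.5:                                # uncertain: route to an analyst
        return f"quarantine for review (ML score {p:.2f})"
    return "allow"
```

The middle band routes uncertain events to an analyst instead of forcing a binary verdict, which is where the false-positive savings come from.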

 

Companies can also use AI to enhance the threat hunting process by integrating behavioral analysis. For example, AI models can develop profiles of every application within an organization’s network by processing high volumes of endpoint data, and then flag activity that deviates sharply from each application’s profile, as in the sketch below.
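A minimal per-application baseline might look like the following (toy data and thresholds; real products model many more behavioral dimensions):

```python
from collections import defaultdict
from statistics import mean, stdev

# Toy endpoint telemetry: (application, bytes sent per hour).
telemetry = [("backup.exe", 9e5), ("backup.exe", 1.1e6), ("backup.exe", 9.5e5),
             ("notepad.exe", 1e3), ("notepad.exe", 2e3), ("notepad.exe", 1.5e3)]

baseline = defaultdict(list)
for app, bytes_out in telemetry:
    baseline[app].append(bytes_out)

def is_abnormal(app, bytes_out, k=3.0):
    """Flag behavior more than k standard deviations above the app's profile."""
    history = baseline[app]
    return bytes_out > mean(history) + k * stdev(history)

print(is_abnormal("notepad.exe", 5e7))   # True: a text editor exfiltrating data?
```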

 

Using Machine Learning to Hunt Down IP Hijackers

Researchers at MIT and the University of California at San Diego (UCSD) have now used machine learning to hunt down IP hijackers. By illuminating some of the common qualities of what they call “serial hijackers,” the team trained their system to identify roughly 800 suspicious networks, and found that some of them had been hijacking IP addresses for years. The work was led by Cecilia Testart, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

 

IP hijacking

Hijacking IP addresses is an increasingly popular form of cyber-attack. This is done for a range of reasons, from sending spam and malware to stealing Bitcoin. It’s estimated that in 2017 alone, routing incidents such as IP hijacks affected more than 10 percent of all the world’s routing domains. There have been major incidents at Amazon and Google and even in nation-states: a study last year suggested that a Chinese telecom company used the approach to gather intelligence on western countries by rerouting their internet traffic through China. A further challenge is attribution: because of the inherent architecture of the internet and threat actors’ ability to obfuscate the source of an attack, it is nearly impossible to attribute attacks with a high degree of certainty.

 

IP hijackers exploit a key shortcoming in the Border Gateway Protocol (BGP), a routing mechanism that essentially allows different parts of the internet to talk to each other. Through BGP, networks exchange routing information so that data packets find their way to the correct destination.

 

In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That’s unfortunately not very hard to do, since BGP itself doesn’t have any security procedures for validating that a message is actually coming from the place it says it’s coming from. “It’s like a game of Telephone, where you know who your nearest neighbor is, but you don’t know the neighbors five or 10 nodes away,” says Testart.
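One classic warning sign that follows directly from this weakness is a multiple-origin-AS (MOAS) conflict, where the same prefix is suddenly announced by two different networks. The sketch below (simplified, with made-up announcements) flags such conflicts; note that MOAS events can also be legitimate, e.g. for multi-homed or anycast networks.

```python
from collections import defaultdict

# Simplified BGP announcements: (prefix, origin AS number).
announcements = [("203.0.113.0/24", 64500),
                 ("203.0.113.0/24", 64500),
                 ("198.51.100.0/24", 64501),
                 ("198.51.100.0/24", 65551)]   # a second origin appears

origins = defaultdict(set)
for prefix, asn in announcements:
    origins[prefix].add(asn)

# Multiple-origin-AS (MOAS) conflicts are a classic hijack warning sign,
# though they can also be legitimate (multi-homing, anycast).
for prefix, asns in origins.items():
    if len(asns) > 1:
        print(f"possible hijack: {prefix} announced by ASes {sorted(asns)}")
```

The MIT/UCSD work goes further, profiling the long-term behaviour of the networks behind such announcements rather than judging single events.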

 

To better pinpoint serial attacks, the group first pulled data from several years’ worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.

 

The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use (a simplified classifier sketch based on these features follows the list):

  • Volatile changes in activity: Hijackers’ address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network’s prefix was under 50 days, compared to almost two years for legitimate networks.
  • Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as “network prefixes.”
  • IP addresses in multiple countries: Most networks don’t have foreign IP addresses. In contrast, the address blocks that serial hijackers advertised were much more likely to be registered in different countries and continents.
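A simplified sketch of a classifier built on features like those above (the numbers are invented for illustration, and this is not the MIT/UCSD team’s actual model):

```python
from sklearn.ensemble import RandomForestClassifier

# Features per network, mirroring the qualities above (values illustrative):
# [avg_prefix_duration_days, num_prefixes_advertised, num_countries]
X = [[700,   8, 1],    # long-lived, few prefixes, one country   -> legitimate
     [650,  12, 2],
     [ 35, 210, 6],    # short-lived, many prefixes, many countries -> hijacker
     [ 20, 340, 9]]
y = [0, 0, 1, 1]       # 0 = legitimate network, 1 = serial hijacker

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[45, 180, 5]]))   # score a newly observed network
```

A real deployment would train on thousands of labeled networks and validate carefully, since, as discussed below, legitimate events can closely mimic hijacks.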

 

In 1998 the U.S. Senate’s first-ever cybersecurity hearing featured a team of hackers who claimed that they could use IP hijacking to take down the Internet in under 30 minutes. Alberto Dainotti, a research scientist at UCSD’s Center for Applied Internet Data Analysis (CAIDA), says that, more than 20 years later, the lack of deployment of security mechanisms in BGP is still a serious concern.

 

Identifying false positives

Testart said that one challenge in developing the system was that events that look like IP hijacks can often be the result of human error, or otherwise legitimate. For example, a network operator might use BGP to defend against distributed denial-of-service attacks in which there are huge amounts of traffic going to their network. Modifying the route is a legitimate way to shut down the attack, but it looks virtually identical to an actual hijack.

 

Because of this issue, the team often had to manually jump in to identify false positives, which accounted for roughly 20 percent of the cases identified by their classifier. Moving forward, the researchers are hopeful that future iterations will require minimal human supervision and could eventually be deployed in production environments.

 

“The authors’ results show that past behaviors are clearly not being used to limit bad behaviors and prevent subsequent attacks,” says David Plonka, a senior research scientist at Akamai Technologies who was not involved in the work. “One implication of this work is that network operators can take a step back and examine global Internet routing across years, rather than just myopically focusing on individual incidents.”

 

As people increasingly rely on the Internet for critical transactions, Testart says that she expects IP hijacking’s potential for damage to only get worse. But she is also hopeful that it could be made more difficult by new security measures. In particular, large backbone networks such as AT&T have recently announced the adoption of resource public key infrastructure (RPKI), a mechanism that uses cryptographic certificates to ensure that a network announces only its legitimate IP addresses.

 

“This project could nicely complement the existing best solutions to prevent such abuse that include filtering, antispoofing, coordination via contact databases, and sharing routing policies so that other networks can validate it,” says Plonka. “It remains to be seen whether misbehaving networks will continue to be able to game their way to a good reputation. But this work is a great way to either validate or redirect the network operator community’s efforts to put an end to these present dangers.”

 

The project was supported, in part, by the MIT Internet Policy Research Initiative, the William and Flora Hewlett Foundation, the National Science Foundation, the Department of Homeland Security, and the Air Force Research Laboratory.

 

Concerns

At the same time, there are concerns about the accuracy of AI and machine learning: If the technology gets something wrong – or if people misinterpret a valid security alert – it can actually decrease business efficiency. False positives can be incredibly damaging to a security team, says Simon Whitburn, SVP cyber security services at Nominet. “Hackers, hostile nations, and wannabes are constantly trying to overwhelm cyber defences and false positives can distract security teams from these threats and increase complacency.”

 

An ML algorithm builds a model that represents the behavior of a real-world system from data that samples that behavior. Training can be supervised, with prelabeled example data, or unsupervised. Either way, the data needs to be representative of the real world: without representative data, no algorithm can offer useful and generalizable insights.
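The sketch below illustrates why representative data matters, under the assumption of two synthetic attack families: a model trained on only one family confidently detects it, and completely misses the other.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# "Real world": two kinds of attacks, but training data samples only one kind.
benign   = rng.normal(0.0, 1.0, size=(500, 4))
attack_a = rng.normal(4.0, 1.0, size=(500, 4))    # seen during training
attack_b = rng.normal(-4.0, 1.0, size=(500, 4))   # never seen in training

X_train = np.vstack([benign, attack_a])
y_train = np.array([0] * 500 + [1] * 500)
clf = LogisticRegression().fit(X_train, y_train)

print("fraction of sampled attack type detected:  ", clf.predict(attack_a).mean())
print("fraction of unsampled attack type detected:", clf.predict(attack_b).mean())
```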

 

ML performs poorly when there is massive variation in the data, which makes training useless. For example, in antivirus, polymorphism makes every attack using the same underlying malware look different, and ML cannot easily adapt to this variance. Moreover, ML is not perfect: depending on the techniques used and the domain of application, it will miss some attacks and may falsely classify benign activity.

 

The only way to avoid potentially disastrous consequences is to let malware execute in isolation to study it and map its behavior. ML, coupled with application isolation, prevents the downside of malware execution — isolation eliminates the breach, ensures no data is compromised and that malware does not move laterally onto the network.

 

“With cybersecurity, as our models become effective at detecting threats, bad actors will look for ways to confuse the models. It’s a field we call adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and work to either confuse the models — what we call poisoning the models, or machine learning poisoning – or focus on a wide range of evasion techniques, essentially looking for ways they can circumvent the models.”
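A minimal sketch of the evasion side of adversarial ML (purely illustrative; real-world evasion is constrained by what an attacker can actually change): an attacker who can query a linear model nudges a malicious sample, step by step, until the verdict flips.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Toy training data: benign samples near 0, malicious samples near 3.
X = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(3, 1, (500, 5))])
y = np.array([0] * 500 + [1] * 500)          # 1 = malicious
clf = LogisticRegression().fit(X, y)

# Evasion: nudge a malicious sample against the model's weight vector until
# the classifier flips its verdict, keeping each step small.
x = rng.normal(3, 1, 5)                      # a malicious sample
direction = -clf.coef_[0] / np.linalg.norm(clf.coef_[0])
steps = 0
while clf.predict([x])[0] == 1 and steps < 200:
    x = x + 0.1 * direction
    steps += 1
print(f"verdict flipped after {steps} steps "
      f"(total perturbation {0.1 * steps:.1f} in feature space)")
```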

 

AI-based cyber security systems also need large amounts of computing power, memory, and data to build and maintain. AI models are trained on learning data sets, so security teams need to get their hands on many different data sets of malicious code, malware samples, and anomalies. Some companies simply don’t have the resources and time to obtain all of these accurate data sets.

 

On the other hand, AI can open vulnerabilities as well, particularly when it depends on interfaces within and across organizations that inadvertently create opportunities for access by “bad actors” or disreputable agents. According to Matti Aksela, F-Secure’s VP of Artificial Intelligence, cyber criminals are most likely using AI as well. They might, for example, want to learn which phishing emails work best, how to hide inside the target network for months, and how to automate their actions. As attackers begin to deploy AI that can make decisions on their behalf, they will gradually develop automated hacks that can study and learn about the systems they target, and identify vulnerabilities, on the fly.

 

At the International Defence Conference, Omar bin Sultan Al Olama, UAE’s Minister of State for AI, Digital Economy and Teleworking Applications, noted that securing systems is as critical as defending the sovereignty of the country. Al Olama underlined that there is a range of challenges in deploying artificial intelligence: “Principally, ignorance within the decision process. Additionally, if accurate variables are not set, and an artificial intelligence programme does not have the correct data sets, then the system’s decision-making processes will be hindered. Finally, if artificial intelligence software is not developed locally, or the country itself is not involved in the development process, there is always the chance of backdoor access. This can lead to data sets being impaired by malicious third parties, which can impact these systems and have a detrimental impact on the nation.”

 

AI/ML will make cyber security fully automated in the future

As cyber-attacks grow in scale and sophistication, AI and machine learning might go some way in helping to keep up with cyber criminals. However, Kenyon warns there could eventually be “an arms race”, with one computer protecting the environment and another attacking it: “It could mean the network is changing rapidly and no one knows what’s going on. You need some kind of human mediation or a predetermined limit that you don’t allow the automated response to transcend.”

 

We will need to interconnect our defense systems so that they can act in real time. For example, a network gateway will need to share information with personal devices. In the future, intelligent systems incorporating these technologies will be able to accurately detect and remediate attacks in real time.

 

Data analysis in a cyber security platform for automated network defence

CACI are working with HMG on a prototype cyber security platform to improve capability in the area of automated network defence. The platform ingests messages from disparate sources, including mainstream cyber security sensors and several bespoke sensors unique to the customer, and analyses the resulting data so that a corresponding action can be automated. Before you can respond to a cyber threat you must first detect it in real time, and with attackers aiming to subvert detection by any means necessary, ‘real time’ can sometimes mean data spread across days, months or years, depending on the value of the target.

 

Our ingest pipeline was designed with these factors in mind: a scalable solution using message queues, with data stored in Elasticsearch, which allows us to receive a large amount of fine-grained data from sensors. The data received into the system is characteristically small and noisy. The natural background noise on these networks can make it difficult to decipher a legitimate cyber-attack, and sensors often report only small snippets of information. The first step is to normalise the data, extracting common features from the sensor data such as the devices involved, files, URLs, timings and severity.
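A sketch of what such normalisation can look like (the sensor field names and schema are illustrative assumptions, not CACI’s actual model):

```python
from datetime import datetime, timezone

def normalise(raw: dict, source: str) -> dict:
    """Map heterogeneous sensor messages onto one common event model.

    The per-sensor field names below are illustrative, not CACI's schema.
    """
    if source == "ids":
        return {"device": raw["src_host"], "url": raw.get("http_url"),
                "severity": raw["priority"], "timestamp": raw["ts"]}
    if source == "endpoint":
        return {"device": raw["hostname"], "url": None,
                "severity": {"low": 1, "medium": 2, "high": 3}[raw["level"]],
                "timestamp": datetime.fromtimestamp(
                    raw["epoch"], tz=timezone.utc).isoformat()}
    raise ValueError(f"unknown sensor type: {source}")

print(normalise({"hostname": "ws-042", "level": "high", "epoch": 1700000000},
                "endpoint"))
```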

 

Once all data is normalised into a common model, we seek to understand more about it by passing it through an enrichment process. In an ideal world you would perform a high level of enrichment on every message, but this is computationally expensive, especially if it requires a third-party service such as a DNS lookup. We aim to perform a basic level of enrichment on every message; for example, using internal databases we can geocode external IPs to their country. An asset library of all known devices within the system is a valuable resource, adding in device information, physical location and operational status. We can optionally use third-party enrichment, or even query back into the network using tools like Osquery, on an on-demand basis. This allows us to decide how far to enhance the dataset while balancing the load on the system and network.
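A sketch of tiered enrichment under these constraints (the lookup tables and helpers are stand-ins, not the platform’s real services):

```python
import socket

# Illustrative enrichment tiers; the tables below are stand-in data sources.
GEO_DB = {"203.0.113.7": "NL"}                                 # IP-to-country table
ASSETS = {"ws-042": {"location": "London", "critical": True}}  # asset library

def reverse_dns(ip):
    """Expensive third-party-style lookup, used only on demand."""
    try:
        return socket.gethostbyaddr(ip)[0] if ip else None
    except OSError:
        return None

def enrich(event: dict, deep: bool = False) -> dict:
    # Basic, cheap enrichment applied to every message.
    event["geo"] = GEO_DB.get(event.get("external_ip"), "unknown")
    event["asset"] = ASSETS.get(event["device"], {})
    # Costly enrichment (DNS, live Osquery) only when requested, keeping the
    # load on the system and the network bounded.
    if deep:
        event["rdns"] = reverse_dns(event.get("external_ip"))
    return event

print(enrich({"device": "ws-042", "external_ip": "203.0.113.7"}))
```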

 

Once the ingest process has finished enrichment, we have a large pool of data to analyse. To reduce the burden on the Cyber Analyst we make use of ML techniques. Using recommendation engines, we can look at the previous actions performed by the user for similar messages and suggest the correct response. If the confidence of the recommendation is high, a response can be automated by the system, for example blocking an attacker’s access to the network.
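The confidence gate might look like the following sketch (the recommender interface, action names and threshold are illustrative assumptions):

```python
def execute(action, event):
    """Stand-in for a real response hook, e.g. pushing a firewall block rule."""
    print(f"executing {action} for {event['device']}")

def respond(event, recommender, auto_threshold=0.95):
    """Suggest a response based on past analyst actions for similar messages,
    and automate it only when the recommender's confidence is high."""
    action, confidence = recommender.suggest(event)   # hypothetical interface
    if confidence >= auto_threshold:
        execute(action, event)                        # e.g. cut attacker access
        return f"automated: {action} (confidence {confidence:.2f})"
    return f"queued for analyst: {action} suggested (confidence {confidence:.2f})"
```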

 

Our UI is a key part of the application and allows Cyber Analysts to browse the data within the system. They follow threads through the data, spotting patterns that could indicate the presence of an attacker, so it is important that the UI enables this workflow of pivoting on the data and following threads. We provide tools to group the data together to provide context, and the system has built-in tasks to replicate these groups and present them to the user should a similar pattern of events occur in the future. The sequence of events can be as important as the events themselves, and the UI accounts for this with a timeline view. Ultimately, the automated data analysis and ML applied in this project reduce the Cyber Analyst’s caseload; responses to cyber threats can be made at all hours of the day with significant levels of trust, and can result in the fast, automatic removal of a malicious entity from an unmonitored network.

 

Data centers

AI can optimize and monitor many essential data center processes, such as backup power, cooling filters, power consumption, internal temperatures, and bandwidth usage. The computational power and continuous monitoring capabilities of AI provide insights into which values would improve the effectiveness and security of hardware and infrastructure.

In addition, AI can reduce the cost of hardware maintenance by alerting you when equipment needs fixing, enabling repairs before failures become more severe. In fact, Google reported a 40 percent reduction in cooling costs and a 15 percent reduction in power consumption after implementing AI technology within its data centers in 2016.
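A toy version of such an alerting loop (illustrative thresholds and data; this is not Google’s approach): flag a reading that deviates sharply from the recent baseline, so equipment can be serviced before it fails outright.

```python
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=60)   # rolling baseline of recent temperature readings

def check(temp_c: float):
    """Alert when a reading deviates sharply from the recent baseline."""
    if len(window) >= 10:
        mu, sigma = mean(window), stdev(window)
        if sigma and abs(temp_c - mu) > 3 * sigma:
            print(f"ALERT: {temp_c:.1f}C vs baseline {mu:.1f}+/-{sigma:.1f}")
    window.append(temp_c)

for t in [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 21.1, 20.8, 21.2, 21.0, 35.5]:
    check(t)
```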

 

Conclusion

It is true that AI increases efficiency, but the technology isn’t intended to completely replace human security analysts. “It’s not to say we are replacing people – we are augmenting them,” says Neill Hart, head of productivity and programs at CSI.

 

F-Secure’s Aksela describes the future role of a cyber security expert as a “Cyber Centaur”: combining the best of man and machine to protect customers better. Because AI is used on both sides, the good and the bad, it will be even more important for man and machine to work together and learn from one another to make better cyber security solutions.

References and Resources also include:

https://business.f-secure.com/artificial-intelligence-and-machine-learning-in-cyber-security

https://www.forbes.com/sites/quora/2018/02/15/how-will-artificial-intelligence-and-machine-learning-impact-cyber-security/

https://www.information-age.com/ai-a-new-route-for-cyber-attacks-or-a-way-to-prevent-them-123481083/

https://scienceblog.com/511168/using-machine-learning-to-hunt-down-cybercriminals/

https://www.techuk.org/insights/opinions/item/17734-data-analysis-in-a-cyber-security-platform

https://www.computer.org/publications/tech-news/trends/the-impact-of-ai-on-cybersecurity
