
What Will the Cyber Future Look Like? AI, Persistent Surveillance, the War for Data, the Internet of Emotions, and Human Hacking

Cyber attacks are continuously increasing in number, becoming more varied, more sophisticated, and more impactful. What will cybersecurity look like 10 years from now? According to Gil Shwed, Founder and CEO of Check Point Software Technologies Ltd., the future of cybersecurity is tightly connected to the future of information technology and the advancement of cyberspace. “Today, most of our critical systems are interconnected and driven by computers. In the future, this connection will be even tighter. More decisions will be automated. Our personal lives will be reliant on virtual assistants, and IoT connected devices will be part of almost every function of our daily lives. Connected cars will make our daily commute easier, and virtually all of our personal data will reside in cloud computing, where we don’t fully control the dataflow and access to information.”

In the coming ten years, nation-sponsored organizations will continue to develop cyber-attack technologies for defense and offense; financially driven criminal groups will continue to seek ways to monetize cyber-attacks; hacktivists will continue to use cyber attacks to convey their messages; terrorist groups will also shift to cyberspace; and finally, people with no apparent motive, who seek to demonstrate their technical skills, will continue “contributing” to the attacker ecosystem, Shwed added.

“Another challenge we will encounter in cyber defense is that, unlike the physical world where we kind of know who our potential adversaries are and what “weapons” they use, in cyber space anyone could be our enemy. Last but not least, many cyber-attacks are run automatically by “bots” that scan the entire network and find the weakest spot, so we won’t need to look like an “attractive target”. We simply need to have a vulnerable point. Yes, we are all targets.”

Cybersecurity defense systems will need to become more sophisticated in order to cope with huge amounts of data. First, we will need to interconnect our defense systems so they can act in real time. For example, our network gateway will need to share information with our personal devices. Second, the human analyst will not be able to cope with all this information, so we will rely more on artificial intelligence to help us make decisions.
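
To make this concrete, here is a minimal sketch of how a gateway might push a newly observed indicator of compromise to every connected device in real time. The ThreatIndicator, Gateway, and Device names are hypothetical illustrations for this article, not a real product API.

```python
# Minimal sketch of real-time threat-indicator sharing between a network
# gateway and endpoint devices. All names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ThreatIndicator:
    """A single indicator of compromise (IoC) observed at the gateway."""
    kind: str          # e.g. "ip", "domain", "file_hash"
    value: str
    seen_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class Device:
    """An endpoint that keeps a local blocklist fed by the gateway."""
    def __init__(self, name: str):
        self.name = name
        self.blocklist: set[str] = set()

    def receive(self, indicator: ThreatIndicator) -> None:
        self.blocklist.add(indicator.value)
        print(f"{self.name}: now blocking {indicator.kind} {indicator.value}")


class Gateway:
    """The network gateway pushes every new indicator to subscribed devices."""
    def __init__(self):
        self.devices: list[Device] = []

    def register(self, device: Device) -> None:
        self.devices.append(device)

    def publish(self, indicator: ThreatIndicator) -> None:
        for device in self.devices:
            device.receive(indicator)


if __name__ == "__main__":
    gw = Gateway()
    for name in ("laptop", "phone", "smart-tv"):
        gw.register(Device(name))
    # The gateway spots a malicious domain and shares it immediately.
    gw.publish(ThreatIndicator(kind="domain", value="malware.example.com"))
```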

“So overall, we will see systems that are smarter, sophisticated, able to handle large populations and large amounts of data, systems that can update themselves rapidly, that can take decisions in real time and that connect to shared-intelligence centers that will keep us guarded.”

 

What will be the state of digital security in five and ten years? Will it be a “Wild West” where every person and organization must fight to protect their own personal data? Will the Internet of Things advance so far into our homes and cities that everyone – at all times – is under surveillance? Are sensors going to be smart enough to determine and predict human feelings – opening the door to cybercriminals hacking human emotion? These are scenarios from the University of California, Berkeley’s Center for Long-Term Cybersecurity, which has modeled what the Internet and cybersecurity could look like in 2020 and beyond.

 

Cybersecurity solutions that rely on AI can use existing data to handle new generations of malware and cybersecurity attacks. AI systems that handle threats directly do so according to a standardized procedure or playbook, which removes much of the variability (and potential inaccuracy) that comes with a human touch, so each threat is met with a consistent, repeatable response. AI will always have limits, however, which is why Steve Grobman, CTO at McAfee, argues that human-machine teams will be key to solving increasingly complex cybersecurity challenges.
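
As a rough illustration of the playbook idea, the Python sketch below maps each alert type to a fixed sequence of response actions, so the same kind of threat always triggers the same handling. The alert types and response actions are invented for this example.

```python
# Hedged sketch of a playbook-driven response: each alert type maps to a
# fixed sequence of actions, so identical threats always get identical
# handling. Alert types and actions here are illustrative only.
from typing import Callable


def isolate_host(alert: dict) -> str:
    return f"isolated host {alert['host']}"


def block_hash(alert: dict) -> str:
    return f"blocked file hash {alert['file_hash']}"


def reset_credentials(alert: dict) -> str:
    return f"reset credentials for {alert['user']}"


# The playbook: a deterministic mapping from alert type to response steps.
PLAYBOOK: dict[str, list[Callable[[dict], str]]] = {
    "ransomware": [isolate_host, block_hash],
    "credential_theft": [reset_credentials, isolate_host],
}


def respond(alert: dict) -> list[str]:
    """Apply every step of the playbook for this alert type, in order."""
    steps = PLAYBOOK.get(alert["type"], [])
    return [step(alert) for step in steps]


if __name__ == "__main__":
    alert = {"type": "ransomware", "host": "10.0.0.7", "file_hash": "abc123"}
    for action in respond(alert):
        print(action)
```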

 

AI is the Future of Cybersecurity

New AI algorithms use machine learning (ML) to adapt over time. Simon Crosby, co-founder and CTO at Bromium, writes that ML makes it easier to respond to cybersecurity risks. New generations of malware and cyber-attacks can be difficult to detect with conventional cybersecurity protocols; they evolve over time, so more dynamic approaches are necessary. Cybersecurity solutions that rely on ML use data from prior cyber-attacks to respond to newer but somewhat similar risks.

 

An ML algorithm builds a model that represents the behavior of a real-world system from data that represents samples of its behavior. Training can be supervised — with prelabeled example data — or unsupervised. Either way, the data needs to be representative of the real world: without representative data, no algorithm can offer useful and generalizable insights.
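
Here is a minimal supervised-learning sketch along these lines, using scikit-learn on synthetic, prelabeled feature vectors; the features and data are invented and merely stand in for real network or endpoint telemetry.

```python
# Minimal supervised-learning sketch: train a classifier on labeled samples
# of "behavior" (here, synthetic feature vectors) and score it on held-out
# data. The features and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic dataset: 3 features per sample (e.g. bytes sent, distinct ports,
# failed logins), label 1 = malicious, 0 = benign.
benign = rng.normal(loc=[200, 3, 1], scale=[50, 1, 1], size=(500, 3))
malicious = rng.normal(loc=[900, 40, 15], scale=[100, 10, 5], size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# If the training data is not representative of real traffic, these numbers
# say little about how the model will behave in production.
print(classification_report(y_test, model.predict(X_test)))
```

Note that if the training data here were unrepresentative of real traffic, the reported scores would still look good while the deployed model failed, which is exactly the caveat above.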

 

To summarize, ML performs poorly when there is massive variation in the data that makes training useless. For example, in anti-virus, polymorphism makes every attack using the same underlying malware look different, and ML struggles to adapt to this variance. Moreover, ML is not perfect: depending on the techniques used and the domain of application, it will miss some attacks and may falsely classify benign activity as malicious.
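
The polymorphism problem is easy to see with a toy example: two byte-level variants of the same payload hash to completely unrelated values, so any signature or learned feature keyed on the file hash transfers nothing between them.

```python
# Sketch of why polymorphism breaks signature-style features: two payloads
# with identical behavior but different bytes produce unrelated hashes, so a
# model (or blocklist) keyed on the file hash learns nothing transferable.
import hashlib

payload = b"MALICIOUS-LOGIC"               # stand-in for the real malware body
variant_a = payload + b"\x00" * 16          # padded / repacked variant
variant_b = bytes(b ^ 0x5A for b in payload)  # XOR-"encrypted" variant

for name, blob in [("variant_a", variant_a), ("variant_b", variant_b)]:
    print(name, hashlib.sha256(blob).hexdigest()[:16])
# The two digests share nothing, even though the underlying behavior is the
# same once the variant unpacks or decodes itself at run time.
```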

The only way to avoid potentially disastrous consequences is to let malware execute in isolation in order to study it and map its behavior. ML, coupled with application isolation, removes the downside of letting malware execute: isolation contains the breach, ensures no data is compromised, and prevents the malware from moving laterally onto the network.
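
The sketch below only gestures at the detonate-and-observe idea: it runs a suspicious command in a separate process with a throwaway working directory and a hard timeout, then records what happened. A real isolation product would use a disposable virtual machine or micro-VM (Bromium-style micro-virtualization, for example) rather than a bare subprocess, so treat this strictly as an illustration.

```python
# Toy illustration of "execute in isolation and observe": run a suspicious
# command in a separate process with a strict timeout and capture its output.
# This is NOT real isolation; it only sketches the detonate-then-inspect idea.
import subprocess
import tempfile

SUSPICIOUS_COMMAND = ["python3", "-c", "print('pretend-malware ran')"]

with tempfile.TemporaryDirectory() as scratch:
    try:
        result = subprocess.run(
            SUSPICIOUS_COMMAND,
            cwd=scratch,            # confine file writes to a throwaway dir
            capture_output=True,
            timeout=5,              # kill anything that runs too long
            text=True,
        )
        print("observed stdout:", result.stdout.strip())
        print("exit code:", result.returncode)
    except subprocess.TimeoutExpired:
        print("sample hit the timeout; flag for deeper analysis")
```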

“With cybersecurity, as our models become effective at detecting threats, bad actors will look for ways to confuse the models. It’s a field we call adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and work to either confuse the models — what we call poisoning the models, or machine learning poisoning – or focus on a wide range of evasion techniques, essentially looking for ways they can circumvent the models.”
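
Evasion is easy to demonstrate against a simple model. The hedged sketch below trains a linear classifier on synthetic data, then computes the smallest feature change, moving against the model's weight vector, that pushes a clearly malicious sample across the decision boundary. Real-world evasion is harder, but the logic is the same.

```python
# Hedged sketch of evasion-style adversarial ML against a linear classifier:
# find the smallest feature change, opposite the weight vector, that pushes a
# malicious sample across the decision boundary. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign = rng.normal([200, 3], [40, 1], size=(300, 2))     # e.g. bytes, ports
malicious = rng.normal([800, 30], [80, 5], size=(300, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 300 + [1] * 300)

clf = LogisticRegression().fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
sample = np.array([820.0, 32.0])          # starts out clearly malicious
score = w @ sample + b                    # positive => predicted malicious

# Minimal move against w that crosses the boundary (plus a small margin).
adversarial = sample - (score / (w @ w) + 1e-3) * w

print("original prediction:   ", clf.predict(sample.reshape(1, -1))[0])
print("adversarial prediction:", clf.predict(adversarial.reshape(1, -1))[0])
print("crafted features:", adversarial.round(1))
```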

 

 

Cyber Security Futures

In April 2016, the UC Berkeley Center for Long-Term Cybersecurity (CLTC) released “Cybersecurity Futures 2020,” a series of five scenarios detailing possible futures for humans and technology in the year 2020: “The New Normal,” “Omega,” “Bubble 2.0,” “Intentional Internet of Things,” and “Sensorium (Internet of Emotion).”

 

Among the questions considered are: How might individuals function in a world where literally everything they do online will likely be hacked or stolen? How could the proliferation of networked appliances, vehicles, and devices transform what it means to have a “secure” society? What would be the consequences of almost unimaginably powerful algorithms that predict individual human behavior at the most granular scale?

 

These questions are explored through a set of five scenarios developed by the Center for Long-Term Cybersecurity (CLTC), a research and collaboration center founded at UC Berkeley’s School of Information with support from the Hewlett Foundation.

 

These scenarios are not predictions—it’s impossible to make precise predictions about such a complex set of issues. Rather, the scenarios paint a landscape of future possibilities, exploring how emerging and unknown forces could intersect to reshape the relationship between humans and technology—and what it means to be “secure.”

 

The scenarios will inform CLTC’s research agenda and serve as a starting point for conversation among academic researchers, industry practitioners, and government policymakers. They provide a framework for questions we should be asking today to ensure a more secure information technology environment in the future.

 

SCENARIO 1: THE NEW NORMAL

Following years of mounting data breaches, internet users in 2020 now assume that their data will be stolen and their personal information broadcast. Law enforcement struggles to keep pace as larger-scale attacks continue, and small-scale cyberattacks become entirely commonplace—and more personal.

Governments are hamstrung by a lack of clarity about jurisdiction in most digital-crime cases. Hackers prove adept at collaborating across geographies while law enforcement agencies do not. Individuals and institutions respond in diverse ways: a few choose to go offline; some make their data public before it can be stolen; and others fight back, using whatever tools they can to stay one step ahead of the next hack. Cyberspace in 2020 is the new Wild West, and anyone who ventures online with the expectation of protection and justice ultimately has to provide it for themselves.

 

SCENARIO 2: OMEGA

With accelerated developments in machine learning, algorithms, and sensors that track human action and enable datasets to feed off one another, the internet of 2020 has embedded within it profoundly powerful models capable of predicting—and manipulating—a surprising range of human behavior. Data scientists can now forecast the actions of single individuals with a high degree of accuracy.

 

The ability of algorithms to predict when and where a specific person will undertake particular actions is considered by some to be a signal of the last—or “omega”—algorithm, the final step in humanity’s handover of power to ubiquitous technologies.

 

For those responsible for cybersecurity, the stakes have never been higher. Illicit actors (indifferent on the philosophical point) will simply take advantage of these new technologies and the controversies they create to more precisely target and differentiate their attacks, making security even harder to achieve than it is today.  Individual predictive analytics generate new security vulnerabilities that outmatch existing concepts and practices of defense, focus increasingly on people rather than infrastructure, and prove capable of causing irreparable damage, financial and otherwise.

 

SCENARIO 3: BUBBLE 2.0

Two decades after the first dot-com bubble burst, the advertising-driven business model for major internet companies falls apart. As overvalued web companies large and small collapse, criminals and companies alike race to gain ownership of underpriced but potentially valuable data assets. It’s a “war for data” under some of the worst possible circumstances: financial stress and sometimes panic, ambiguous property rights, opaque markets and data trolls everywhere.

 

How might cybercriminals adapt to a more open and raucous data market? If governments want to prevent certain datasets from having a “for-sale” sign attached to them, what kinds of options will they have? What new systems or standards could emerge to verify the legitimacy or provenance of data? What does “buyer beware” look like in a fast-moving market for data? What role should government play in making markets for data more efficient and secure?
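
On the provenance question, one deliberately simplified illustration is for a publisher to tag a dataset with an integrity code that buyers recompute before trusting it. The sketch below uses an HMAC with a placeholder shared key; a real scheme would more likely rely on public-key signatures and signed metadata.

```python
# Illustrative sketch of dataset provenance checking: the publisher tags a
# dataset with an HMAC over its bytes, and a buyer recomputes the tag before
# trusting it. The key and data below are placeholders, not a real scheme.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-shared-secret"   # placeholder, not a real key


def tag_dataset(data: bytes) -> str:
    """Publisher side: compute an integrity/provenance tag for the dataset."""
    return hmac.new(PUBLISHER_KEY, data, hashlib.sha256).hexdigest()


def verify_dataset(data: bytes, tag: str) -> bool:
    """Buyer side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag_dataset(data), tag)


if __name__ == "__main__":
    dataset = b"user_id,purchase,amount\n1,book,12.50\n"
    tag = tag_dataset(dataset)
    print("untampered:", verify_dataset(dataset, tag))                    # True
    print("tampered:  ", verify_dataset(dataset + b"2,car,9999\n", tag))  # False
```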

 

In this world, cybersecurity and data security become inextricably intertwined. There are two key assets that criminals exploit: the datasets themselves, which become the principal targets of attack; and the humans who work on them, as the collapse of the industry leaves unemployed data scientists seeking new frontiers.

 

SCENARIO 4: INTENTIONAL INTERNET OF THINGS

In 2020, the Internet of Things (IoT) is a profound social force that proves powerful in addressing problems in education, the environment, health, work productivity, and personal well-being. California leads the way with its robust “smart” system for water management, and cities adopt networked sensors to manage complex social, economic, and environmental issues such as healthcare and climate change that used to seem unfixable. Not everyone is happy, though. Critics assert their rights and autonomy as “nanny technologies” take hold, and international tensions rise as countries grow wary of integrating standards and technologies. Hackers find countless new opportunities to manipulate and repurpose the vast network of devices, often in subtle and undetectable ways. Because the IoT is everywhere, cybersecurity becomes just “security” and essential to daily life.

 

SCENARIO 5: SENSORIUM (INTERNET OF EMOTION)

What if, in 2020, wearable devices did not care about how many steps you took, and instead were concerned with your real-time emotional state? With networked devices tracking hormone levels, heart rates, facial expressions, voice tone and more, the Internet could become a vast system of “emotion readers,” touching the most intimate aspects of human psychology. What if these technologies allowed people’s underlying mental, emotional and physical states to be tracked – and manipulated?

 

These technologies allow people’s underlying mental, emotional, and physical states to be tracked—and manipulated. Whether for blackmail, “revenge porn,” or other motives, cybercriminals and hostile governments find new ways to exploit data about emotion. The terms of cybersecurity are redefined, as managing and protecting an emotional public image and outward mindset appearance become basic social maintenance.

 

Imagining scenarios

“At the heart of our approach is scenario thinking, a proven methodology for identifying important driving forces and unexpected consequences that could shape the future. This approach often leads to more questions than answers, but what we identify can help guide us toward solutions as society and technology evolve.”

In our scenario about emotion-sensing, for example, many questions arise:

  • How might biosensing technologies evolve, and what would be the effect of having sensors tracking massive numbers of individuals’ emotions and mental states?
  • How will people respond when their most private and intimate experiences are understood by the Internet better than they themselves understand them?
  • How might virtual reality, sentiment analysis, wearable devices and other “sensory” technologies intersect with domains such as marketing, politics and the workforce?
  • What are the potential cybersecurity risks and benefits that could come with the proliferation of sensors capable of capturing and interpreting emotions?

 

Conclusion

Because scenarios are models, not predictions, no single scenario that we have described in this work, nor any single implication, will necessarily “come true.” Cybersecurity in 2020 will likely include elements of all these scenarios, in some indeterminate mix. Whatever that mix will look like, this work helps to demonstrate that “cybersecurity” will be stretched and broadened far beyond its meaning at present.

 

The cybersecurity world of 2020 will still be talking about malware, firewalls, network security, and social engineering. But it will also be talking about personal memories, new distinctions between what is public and private, the power of prediction, faith in public institutions, the provision of public good, psychological stability, the division of labor between humans and machines, coercive power (both visible and invisible), what it means for a human-machine system to have “intention,” and more.

 

 

References and Resources also include:

https://cltc.berkeley.edu/scenarios/

https://www.sbs.ox.ac.uk/cybersecurity-capacity/system/files/cltcReport_04-27-04a_pages.pdf

http://phys.org/news/2016-04-year-2020how-cybersecurity.html

https://www.forbes.com/sites/quora/2017/09/14/what-will-cybersecurity-look-like-10-years-from-now/#798ce8a36e6e

https://www.infosecurity-magazine.com/next-gen-infosec/ai-future-cybersecurity/

 
