
Cyber future may include elements of persistent surveillance, a war for data, an Internet of Emotions, and human hacking, according to Cybersecurity Futures 2020

What will be the state of digital security five and ten years from now? Will it be a “Wild West” where every person and organization must fight to protect their own personal data? Will the Internet of Things advance so far into our homes and cities that everyone, at all times, is under surveillance? Will sensors become smart enough to determine and predict human feelings, opening the door to cybercriminals hacking human emotion? These are scenarios from the University of California, Berkeley’s Center for Long-Term Cybersecurity, which has modeled what the internet and cybersecurity could look like in 2020 and beyond.


In April 2016, the UC Berkeley Center for Long-Term Cybersecurity (CLTC) released “Cybersecurity Futures 2020,” a series of five scenarios detailing possible futures for humans and technology in the year 2020: “The New Normal,” “Omega,” “Bubble 2.0,” “Intentional Internet of Things,” and “Sensorium (Internet of Emotion).”


Among the questions considered are: How might individuals function in a world where literally everything they do online will likely be hacked or stolen? How could the proliferation of networked appliances, vehicles, and devices transform what it means to have a “secure” society? What would be the consequences of almost unimaginably powerful algorithms that predict individual human behavior at the most granular scale?


These are among the questions considered through a set of five scenarios developed by the Center for Long-Term Cybersecurity (CLTC), a new research and collaboration center founded at UC Berkeley’s School of Information with support from the Hewlett Foundation.


These scenarios are not predictions—it’s impossible to make precise predictions about such a complex set of issues. Rather, the scenarios paint a landscape of future possibilities, exploring how emerging and unknown forces could intersect to reshape the relationship between humans and technology—and what it means to be “secure.”

The scenarios will inform CLTC’s research agenda and serve as a starting point for conversation among academic researchers, industry practitioners, and government policymakers. They provide a framework for questions we should be asking today to ensure a more secure information technology environment in the future.



The New Normal

Following years of mounting data breaches, internet users in 2020 now assume that their data will be stolen and their personal information broadcast. Law enforcement struggles to keep pace as larger-scale attacks continue, and small-scale cyberattacks become entirely commonplace—and more personal.

Governments are hamstrung by a lack of clarity about jurisdiction in most digital-crime cases. Hackers prove adept at collaborating across geographies while law enforcement agencies do not. Individuals and institutions respond in diverse ways: a few choose to go offline; some make their data public before it can be stolen; and others fight back, using whatever tools they can to stay one step ahead of the next hack. Cyberspace in 2020 is the new Wild West, and anyone who ventures online with the expectation of protection and justice ultimately has to provide it for themselves.



Omega

With accelerated developments in machine learning, algorithms, and sensors that track human action and let datasets feed off one another, the data scientists of 2020 have embedded within the internet profoundly powerful models capable of predicting, and even manipulating, the behavior of single individuals with a high degree of accuracy.


The ability of algorithms to predict when and where a specific person will undertake particular actions is considered by some to be a signal of the last—or “omega”—algorithm, the final step in humanity’s handover of power to ubiquitous technologies.
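As a toy illustration of the kind of behavioral prediction the scenario describes, the sketch below fits a first-order Markov model to observed action sequences and guesses a user's most likely next action. Everything here (the `ToyBehaviorModel` class, the example action names) is invented for illustration; real predictive-analytics systems rely on far richer data and far more sophisticated models.

```python
from collections import Counter, defaultdict

class ToyBehaviorModel:
    """Minimal first-order Markov sketch: predicts a user's next action
    from previously observed action sequences. Purely illustrative."""

    def __init__(self):
        # transitions[prev][next] = number of times `next` followed `prev`
        self.transitions = defaultdict(Counter)

    def observe(self, actions):
        """Record each consecutive pair of actions in a sequence."""
        for prev, nxt in zip(actions, actions[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, last_action):
        """Return the most frequently observed follow-up action, or None."""
        counts = self.transitions.get(last_action)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

model = ToyBehaviorModel()
model.observe(["wake", "coffee", "email", "commute"])
model.observe(["wake", "coffee", "news", "commute"])
print(model.predict("wake"))  # "coffee"
```

Even this trivial counter hints at the scenario's concern: every additional observed sequence sharpens the model's guesses about a specific individual, and the same structure that enables prediction enables targeting.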


For those responsible for cybersecurity, the stakes have never been higher. Illicit actors, indifferent to the philosophical point, simply take advantage of these new technologies and the controversies they create to more precisely target and differentiate their attacks, making security even harder to achieve than it is today. Individual predictive analytics generate new security vulnerabilities that outmatch existing concepts and practices of defense, focus increasingly on people rather than infrastructure, and prove capable of causing irreparable damage, financial and otherwise.



Bubble 2.0

Two decades after the first dot-com bubble burst, the advertising-driven business model for major internet companies falls apart. As overvalued web companies large and small collapse, criminals and companies alike race to gain ownership of underpriced but potentially valuable data assets. It’s a “war for data” under some of the worst possible circumstances: financial stress and sometimes panic, ambiguous property rights, opaque markets, and data trolls everywhere.


  • How might cybercriminals adapt to a more open and raucous data market?
  • If governments want to prevent certain datasets from having a “for-sale” sign attached to them, what kinds of options will they have?
  • What new systems or standards could emerge to verify the legitimacy or provenance of data?
  • What does “buyer beware” look like in a fast-moving market for data?
  • What role should government play in making markets for data more efficient and secure?


In this world, cybersecurity and data security become inextricably intertwined. Criminals exploit two key assets: the datasets themselves, which become the principal targets of attack, and the humans who work on them, as the industry’s collapse leaves unemployed data scientists seeking new frontiers.



Intentional Internet of Things

In 2020, the Internet of Things (IoT) is a profound social force that proves powerful in addressing problems in education, the environment, health, work productivity, and personal well-being. California leads the way with its robust “smart” system for water management, and cities adopt networked sensors to manage complex social, economic, and environmental issues such as healthcare and climate change that used to seem unfixable. Not everyone is happy, though. Critics assert their rights and autonomy as “nanny technologies” take hold, and international tensions rise as countries grow wary of integrating standards and technologies. Hackers find countless new opportunities to manipulate and repurpose the vast network of devices, often in subtle and undetectable ways. Because the IoT is everywhere, cybersecurity becomes just “security” and essential to daily life.



Sensorium (Internet of Emotion)

What if, in 2020, wearable devices did not care about how many steps you took, and instead were concerned with your real-time emotional state? With networked devices tracking hormone levels, heart rates, facial expressions, voice tone and more, the Internet could become a vast system of “emotion readers,” touching the most intimate aspects of human psychology. What if these technologies allowed people’s underlying mental, emotional and physical states to be tracked – and manipulated?


These technologies allow people’s underlying mental, emotional, and physical states to be tracked—and manipulated. Whether for blackmail, “revenge porn,” or other motives, cybercriminals and hostile governments find new ways to exploit data about emotion. The terms of cybersecurity are redefined, as managing and protecting an emotional public image and outward mindset appearance become basic social maintenance.
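To make the notion of an “emotion reader” concrete, here is a deliberately simplistic rule-based sketch that maps a few biosensor readings to a coarse emotional label. The function name, thresholds, and sensor inputs are all invented for illustration; real affective-computing systems use learned models over far richer signals, which is precisely why the scenario treats them as both powerful and dangerous.

```python
def classify_emotion(heart_rate_bpm, voice_pitch_hz, skin_conductance):
    """Toy rule-based 'emotion reader'.

    Maps three hypothetical biosensor readings to a coarse label.
    The thresholds are arbitrary and illustrative only.
    """
    # Elevated heart rate plus high skin conductance: stress response
    if heart_rate_bpm > 100 and skin_conductance > 0.8:
        return "stressed"
    # Raised voice pitch with a quickened pulse: excitement
    if voice_pitch_hz > 220 and heart_rate_bpm > 90:
        return "excited"
    # Low arousal on both channels: calm
    if heart_rate_bpm < 65 and skin_conductance < 0.3:
        return "calm"
    return "neutral"

print(classify_emotion(110, 180, 0.9))  # "stressed"
```

Even a crude classifier like this, run continuously against always-on sensors, would produce exactly the kind of intimate behavioral record the scenario imagines attackers exploiting.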


Imagining scenarios

“At the heart of our approach is scenario thinking, a proven methodology for identifying important driving forces and unexpected consequences that could shape the future. This approach often leads to more questions than answers, but what we identify can help guide us toward solutions as society and technology evolve.”

In our scenario about emotion-sensing, for example, many questions arise:

  • How might biosensing technologies evolve, and what would be the effect of having sensors tracking massive numbers of individuals’ emotions and mental states?
  • How will people respond when their most private and intimate experiences are understood by the Internet better than they themselves understand them?
  • How might virtual reality, sentiment analysis, wearable devices and other “sensory” technologies intersect with domains such as marketing, politics and the workforce?
  • What are the potential cybersecurity risks and benefits that could come with the proliferation of sensors capable of capturing and interpreting emotions?



Because scenarios are models, not predictions, no single scenario that we have described in this work, nor any single implication, will necessarily “come true.” Cybersecurity in 2020 will likely include elements of all these scenarios, in some indeterminate mix. Whatever that mix will look like, this work helps to demonstrate that “cybersecurity” will be stretched and broadened far beyond its meaning at present.


The cybersecurity world of 2020 will still be talking about malware, firewalls, network security, and social engineering. But it will also be talking about personal memories, new distinctions between what is public and private, the power of prediction, faith in public institutions, the provision of public good, psychological stability, the division of labor between humans and machines, coercive power (both visible and invisible), what it means for a human-machine system to have “intention,” and more.





