Technology applies scientific knowledge in the form of new processes, materials, devices, systems, or tools. Most technologies are dual-use: they can be used to mitigate threats, or they can serve malicious and even lethal purposes in the hands of hackers and terrorists.
Technology improves the way we communicate, share, and learn. The Internet allows us to obtain real-time information from all over the world, mobile phones and social media help us communicate, and online networks and YouTube help us learn everything from basic to advanced topics.
On the other hand, mobile devices and social media also affect some people negatively, for example by reducing face-to-face interaction. There are risks of exposure to inappropriate content, cyberbullying, identity theft, and email hacking. There is also a loss of privacy, because anyone can find you anywhere, at any time.
Significant technological advances are being made across a range of fields, including information and communications technology (ICT); artificial intelligence (AI), particularly machine learning and robotics; nanotechnology; space technology; biotechnology; and quantum computing. These advances are driven by the digital revolution and by the need to gather, process, and analyze enormous volumes of data. They promise significant social and economic benefits, increased efficiency, and enhanced productivity across a host of sectors.
Human Security
Human security means freedom from threats to our lives, safety, and rights. The United Nations has identified seven essential elements of human security: economic security, food security, health security, environmental security, personal security, community security, and political security.
Human rights are inherent to all human beings. They are defined and established in more than 80 international legal instruments and include fundamental protections of human dignity, needs, and freedoms, such as food, housing, privacy, personal security, and democratic participation.
Technology threats to human security
Emerging technologies, such as artificial intelligence, may significantly expand the availability and quality of data that informs policy and healthcare decisions for the benefit of society. At the same time, rapid developments in artificial intelligence, automation, and robotics raise questions about their impacts on human rights and the future of work. And in the background, mass data collection can lead to violations of the right to privacy and inhibit free and fair societies.
From a practical perspective, technology can help move the human rights agenda forward. For instance, the use of satellite data can monitor the flow of displaced people; artificial intelligence can assist with image recognition to gather data on rights abuses; and the use of forensic technology can reconstruct crime scenes and hold perpetrators accountable. Yet for the multitude of areas in which emerging technologies advance the human rights agenda, technological developments have equal capacity to undermine efforts.
It has long been a truism that today’s innovations can become tomorrow’s threats. But the current speed of technological change has resulted in a world in which emerging dangers are rapidly outpacing our defenses. New technologies, from artificial intelligence to unmanned aerial systems, have the potential to disrupt the status quo and fundamentally alter the security landscape.
From authoritarian states using surveillance technologies to monitor political dissidents, to the phenomenon of “deepfakes” destabilizing the democratic public sphere, the ethical and policy implications of technological innovations must be taken into consideration as those innovations are developed.
Privacy Threat
One of the main threats from emerging technologies is to privacy. Data is now stored primarily on computer systems, and with the advent of internet technology the world has become interconnected, so data can be accessed remotely by people who are not authorized to do so.
ICT such as Wi-Fi hotspots, mobile internet, and broadband connections keep us online for most of our daily lives. The networking of objects, devices, people, and organizations to create the so-called “internet of things” is enabling a wide range of new products, services, and solutions, such as smart cities, sustainable agriculture, self-driving cars, connected healthcare, and more efficient industrial processes.
While living in an ever-more connected world gives us easier access to a lot of useful services and information, it also exposes large amounts of our personal information, habits, and lives to a wider world. Depending on your browsing habits and the websites and services you visit, all manner of data, from your birthday and address to your marital status, can be harvested from your online presence.
A new recommendation industry has also grown up, in which companies such as Google, Facebook, or Twitter track what people read, watch, and post, and assign scores to rank their interests and even their likely political leanings, so that the information they sell to advertisers is highly accurate and commands top dollar. Services like Google Maps can also track your real-time and historical location by default, which can amount to being constantly followed by faceless tech companies.
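To make this kind of profiling concrete, the sketch below shows, in very simplified form, how engagement events can be aggregated into a ranked interest profile. It is a minimal illustration only; the categories, weights, and scoring logic are invented for this example and do not describe any particular company's actual system.

from collections import defaultdict

# Hypothetical browsing/activity events: (content category, engagement weight).
# Real trackers log far richer signals (clicks, dwell time, shares, location).
events = [
    ("politics/left", 1.0),   # read an article
    ("sports", 0.5),          # brief visit
    ("politics/left", 2.0),   # shared a post
    ("travel", 1.0),
    ("politics/right", 0.5),
]

def build_interest_profile(events):
    """Aggregate engagement weights into normalized per-category interest scores."""
    scores = defaultdict(float)
    for category, weight in events:
        scores[category] += weight
    total = sum(scores.values()) or 1.0
    # Normalize so profiles can be compared across users and ranked for advertisers.
    return {cat: round(score / total, 3) for cat, score in scores.items()}

profile = build_interest_profile(events)
for category, score in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {score}")

Even this toy example shows how a handful of weighted observations quickly yields a ranked picture of a person's interests and possible leanings, which is the basic raw material of targeted advertising.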
These opportunities are accompanied by new risks and challenges, such as the difficulty of obtaining informed consent from citizens for data use, or the need to establish privacy protocols for who has access to data, who controls data, and how data is used.
We’re also seeing greater market demand, evidenced by the significant growth of the privacy tech industry. Will companies simply do the minimum required to comply with data-related regulations, or will they go above and beyond to collect, use, and protect data in a more equitable way for everyone?
Hate Speech and Countering Violent Extremism
As set out in Article 19 of the Universal Declaration of Human Rights (UDHR), everyone has the right to freedom of opinion and expression, including the right to seek, receive, and impart information and ideas through any media and regardless of frontiers. However, governments are increasingly interested in proactively monitoring, surveilling, removing, and blocking certain types of content, especially terrorist content and hate speech. These content restrictions are important for human rights protection but must be “necessary and proportionate” and the least intrusive restrictions capable of achieving the desired result. Access to appeal and remedy in the event of over-blocking is crucial.
Environmental sustainability
New products arrive and render existing ones obsolete. In fact, technological change and innovation are at the heart of consumerism, which is harmful to the economy and the environment in general; the recent economic downturn is a good example.
Technological products increasingly contribute to environmental degradation. Computer screens, keyboards, and printer ink are among the ways in which technology pollutes the environment, and all of them produce toxins that cannot easily be decomposed.
There’s a push for technology companies to go beyond what’s required by law on environmental sustainability. There are those who challenge the industry for its energy use, supply chains that could be more efficient, manufacturing waste, and water use in semiconductor fabrication. The good news is technology companies have the market power to create significant change. Tech companies are some of the largest buyers of renewable energy in the world and are working to run their massive data centers off that energy. Some focus on zero waste initiatives, improving recycling and promoting circular economy principles. Cisco’s Takeback and Reuse program and Microsoft’s 2030 zero waste goal are examples. Others work toward net-zero carbon through The Climate Pledge, spearheaded by Amazon, or individual efforts, such as Apple’s pledge to become carbon-neutral across its businesses by 2030.
Trustworthy AI
The related developments of AI and big data analytics have been enabled by more powerful computing and the ability to utilize large and complex data sets. These developments present tremendous opportunities, such as in medical diagnostics, retail, and law enforcement. However, a variety of new risks emerge with their use, such as automated systems making discriminatory decisions (such as in housing, credit, employment, and health), the automation of jobs impacting labor rights by reducing demand for certain skills, or the misuse of personal data.
The rapid deployment of AI into societal decision-making—from health care recommendations to hiring decisions and autonomous driving—has catalyzed an ongoing ethics conversation. It’s increasingly important that AI-powered systems operate under principles that benefit society and avoid issues with bias, fairness, transparency, and explainability.
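As a concrete illustration of the bias and fairness concern, the short sketch below computes a simple demographic-parity check on the output of a hypothetical automated screening system. The data, group labels, and the 0.2 threshold are invented for illustration; real fairness audits rely on multiple metrics, much larger samples, and careful statistical and legal analysis.

# Minimal sketch of a demographic-parity audit for an automated decision system.
# All data here is synthetic and purely illustrative.
decisions = [
    # (group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(decisions, group):
    """Fraction of approved outcomes for the given group."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")
rate_b = approval_rate(decisions, "group_b")

# Demographic parity difference: how far apart the approval rates are.
parity_gap = abs(rate_a - rate_b)
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {parity_gap:.2f}")

# A common (and debatable) rule of thumb flags large gaps for human review.
if parity_gap > 0.2:
    print("Flag: approval rates differ substantially between groups; review the model and its data.")

Such checks do not by themselves prove or disprove discrimination, but they show how the abstract demand for "fair" automated decisions can be translated into measurable, reviewable quantities.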
To address these issues, we’ve seen tech industry players establish advisory panels and guiding principles and sponsor academic programs. We’ve also seen action beyond statements of principle. Some larger tech players decided in 2020 to stop providing AI-powered facial recognition systems to police departments until clear guidelines, or legislation, are in place. This is a solid foundation to build on, but faith in the industry is low. As a consequence, we see a growing potential for government action and regulation, such as the EU’s proposed Artificial Intelligence Act and recent statements from the Federal Trade Commission in the United States.
Threats to truth
There are hordes of people and groups using disinformation, misinformation, deepfakes, and the weaponizing of data to attack, manipulate, and influence for personal gain, or to sow chaos.
Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive. Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit their use.
To help address this intractable issue, technology companies have asked governments to pass regulations clearly outlining responsibilities and standards. They’re also cooperating more with law enforcement and intelligence agencies, publishing public reports of their findings, and increasing overall vigilance and action. In addition, many companies have signed up to the EU’s voluntary Code of Practice on Disinformation, which is currently being strengthened. Is this all happening fast and comprehensively enough, and with enough forethought?
Physical and mental health
Technology has also made inroads into medicine and life care. The technology industry affects physical and mental well-being not only through customers who use and overuse its products and services, but also through its direct involvement in health care, which has been accelerated by the pandemic.
New cloning techniques, genetic modifications, and other life-saving treatments require continuous monitoring and oversight. Bioethics has thus emerged as the ethics of medical technology.
We’re still working to better understand the impacts of technology on health, and a lot of research and debate are ongoing. Although measuring these impacts is difficult and complex, the technology industry has shown it can improve health-related areas with technologies such as wearables; better access to providers through telehealth; sensors, devices, and apps for chronic disease monitoring; and improved diagnoses through advanced analytics and AI.
Mitigating technology risks
In 2011, the UN Human Rights Council unanimously endorsed the UN Guiding Principles on Business and Human Rights (Guiding Principles), stating that governments must put in place good policies, laws, and enforcement measures to prevent companies from violating rights, that companies must refrain from negatively impacting rights, and that victims of corporate abuses must have access to effective remedy. As part of this responsibility, the Guiding Principles require companies to undertake due diligence to identify and manage their negative human rights impacts.
The growth of these technologies raises important questions about whether our current policies, legal systems, and documentation and advocacy strategies are sufficient to mitigate the human rights risks that may result, many of which are still unknown.
According to the UN, 128 of 194 countries have enacted some form of data protection and privacy legislation. Even more regulation and increased enforcement are being considered. This attention is due to multiple industry problems, including abuse of consumer data and massive data breaches. Until clear and universal standards emerge, the industry continues to work toward addressing this dilemma. This includes making data privacy a core tenet and competitive differentiator, as Apple has done with its recently released App Tracking Transparency feature.
Technological advancements also introduce new actors to the human rights framework. The movement has historically focused on the role of the state in ensuring rights and justice. Today, technological advancements and the rise of artificial intelligence and machine learning, in particular, necessitate interaction, collaboration, and coordination with leaders from business and technology in addition to government.
The role of businesses, which will both create and utilize new technologies, is a critical issue. Will the private sector develop and deploy technologies in a way that is consistent with respect for human rights, and that builds in appropriate safeguards to prevent and mitigate negative human rights outcomes? At the same time, governments must also focus on their duty and examine how to ensure that businesses act responsibly.
Ethics of technology or Technoethics (TE)
Ethics addresses the questions of what is ‘right’, what is ‘just’, and what is ‘fair’. Ethics describes the moral principles that influence conduct; accordingly, the study of ethics focuses on the actions and values of people in society (what people do and how they believe they should act in the world).
On one view, technology is merely a tool, like any device or gadget. If technology is just a device or gadget, it cannot itself possess a moral or ethical quality; on this view, it is the toolmaker or end user who determines the morality or ethicality of a device or gadget.
The ethics of technology or Technoethics (TE) is a sub-field of ethics addressing the ethical questions specific to the Technology Age, the transitional shift in society wherein personal computers and subsequent devices provide for the quick and easy transfer of information.
The term “technoethics” was coined in 1977 by the philosopher Mario Bunge to describe the responsibilities of technologists and scientists to develop ethics as a branch of technology. Bunge argued that the state of technological progress at the time was guided by ungrounded practices based on limited empirical evidence and trial-and-error learning. He recognized that “the technologist must be held not only technically but also morally responsible for whatever he designs or executes: not only should his artifacts be optimally efficient but, far from being harmful, they should be beneficial, and not only in the short run but also in the long term.” He saw a pressing need in society to create a new field, technoethics, to discover rationally grounded rules for guiding science and technological progress.
Technoethics (TE) is an interdisciplinary research area that draws on theories and methods from multiple knowledge domains (such as communications, social sciences, information studies, technology studies, applied ethics, and philosophy) to provide insights on the ethical dimensions of technological systems and practices for advancing a technological society.
Using theories and methods from multiple domains, technoethics provides insights on ethical aspects of technological systems and practices, examines technology-related social policies and interventions, and provides guidelines for how to ethically use new advancements in technology.
Two broad sets of questions arise: the ethics of developing and producing new technologies, and the ethics of the human activities those technologies alter. In the former case, the ethics of such things as computer security and computer viruses asks whether the very act of innovation is an ethically right or wrong act. Similarly, does a scientist have an ethical obligation to produce, or to refuse to produce, a nuclear weapon? What are the ethical questions surrounding the production of technologies that waste or conserve energy and resources? What are the ethical questions surrounding the production of new manufacturing processes that might inhibit employment, or might inflict suffering in the third world?
In the latter case, the ethics of technology quickly break down into the ethics of various human endeavors as they are altered by new technologies.
Artificial intelligence is one of the most talked-about challenges when it comes to ethics. To address these ethical challenges, several principles have been proposed. First and foremost, AI should be developed for the common good and benefit of humanity. Second, it should operate on principles of intelligibility and fairness. It should not be used to diminish the data rights or privacy of individuals, families, or communities. All citizens should have the right to be educated about artificial intelligence so that they can understand it. Finally, the autonomous power to hurt, destroy, or deceive humans should never be vested in artificial intelligence.
For example, bioethics is now largely consumed with questions that have been exacerbated by new life-preserving technologies, new cloning technologies, and new technologies for implantation. In law, the right to privacy is being continually eroded by the emergence of new forms of surveillance and anonymity. The old ethical questions of privacy and free speech are given new shape and urgency in an Internet age. Tracking technologies such as RFID, biometric analysis and identification, and genetic screening all take old ethical questions and amplify their significance.
The fundamental problem is that as society produces and advances technology that we use in all areas of our lives, from work and school to medicine and surveillance, we receive great benefits, but those benefits come with underlying costs. As technology evolves further, some technological innovations will be seen by some as inhumane, while others see the same innovations as creative, life-changing, and innovative.
References and Resources also include:
https://www2.deloitte.com/us/en/insights/industry/technology/ethical-dilemmas-in-technology.html
https://www.bsr.org/en/our-insights/primers/10-human-rights-priorities-for-the-ict-sector