
Countries plan to mitigate the high risks of AI technology through strict regulation and the development of trustworthy, explainable AI

Artificial Intelligence (AI) is rapidly transforming industries, from healthcare to finance, autonomous vehicles to cybersecurity, promising groundbreaking advancements in technology and efficiency. However, this immense potential is accompanied by significant risks. AI technologies, if not regulated properly, could lead to unintended consequences, such as biased decision-making, invasions of privacy, security vulnerabilities, or even the weaponization of AI.

From self-driving cars to medical devices, AI systems are now deeply embedded in industries ranging from healthcare to transportation. However, as AI becomes increasingly integrated into these critical sectors, its opacity and potential for harm pose significant challenges. With these growing concerns, many countries around the world are moving toward implementing strict regulations and promoting the development of trustworthy and explainable AI systems to mitigate the high risks associated with this transformative technology.

The Evolution of AI: From Rule-Based to Deep Learning

The first wave of AI was primarily rule-based, relying on pre-programmed logic to perform tasks. These systems were limited in scope and unable to adapt or learn from new data. The second wave, powered by machine learning (ML), enabled AI systems to improve through experience. Recent advancements in deep learning, a subset of ML, have propelled AI to unprecedented heights, enabling systems to handle complex tasks like autonomous driving, healthcare diagnostics, and data analysis with remarkable accuracy.

However, deep learning algorithms are often criticized as “black boxes.” These systems can make highly accurate predictions and decisions, but their internal workings are opaque, making it difficult for humans to understand the reasoning behind their actions. This lack of transparency is particularly problematic in high-stakes applications where human lives are on the line. For example, in healthcare, doctors need to understand the reasoning behind an AI’s treatment recommendation before they can trust it. Similarly, judges must be able to interpret the factors influencing AI predictions about recidivism before basing sentencing decisions on them.

David Gunning, Program Manager for DARPA’s Explainable Artificial Intelligence (XAI) initiative, emphasizes the importance of transparency in AI, especially when decisions have a profound impact on people’s lives. The goal of XAI is to make AI systems not only more understandable but also more predictable, enabling users to see why a particular decision was made. For example, an AI system could explain that it classified an image as a cat because of features like fur, whiskers, and claws.
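
To make this concrete, here is a minimal, illustrative Python sketch (not taken from DARPA's XAI program or any specific system) of how an interpretable model can report which features drove a "cat" decision. The feature names, the weights, and the explain() helper are invented for illustration.

```python
# Illustrative sketch only: a toy, interpretable "cat vs. not-cat" classifier whose
# per-feature contributions can be reported in plain language. Feature names,
# weights, and the explain() helper are hypothetical.
import numpy as np

FEATURES = ["fur", "whiskers", "claws", "wings"]

# Hand-picked weights for illustration: positive weights push toward "cat".
WEIGHTS = np.array([1.2, 1.5, 0.8, -2.0])
BIAS = -1.0

def predict_proba(x: np.ndarray) -> float:
    """Return the probability that the input is a cat (simple logistic model)."""
    score = float(WEIGHTS @ x + BIAS)
    return 1.0 / (1.0 + np.exp(-score))

def explain(x: np.ndarray) -> list[tuple[str, float]]:
    """List each detected feature's signed contribution to the decision, largest first."""
    contributions = WEIGHTS * x
    ranked = sorted(zip(FEATURES, contributions), key=lambda p: -abs(p[1]))
    return [(name, round(float(c), 2)) for name, c in ranked if c != 0.0]

# An image in which fur, whiskers, and claws were detected, but no wings.
sample = np.array([1.0, 1.0, 1.0, 0.0])
print(f"P(cat) = {predict_proba(sample):.2f}")
for name, contribution in explain(sample):
    print(f"  '{name}' contributed {contribution:+.2f} to the score")
```

The point of the sketch is the explanation step: instead of only returning a label, the system can tell the user which detected features pushed it toward that label.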

The Need for Strict AI Regulation

AI’s rapid advancement has outpaced the creation of regulatory frameworks, leaving many nations scrambling to keep up. The European Union (EU) has emerged as a global leader in implementing AI regulations, recognizing that innovation must be balanced with protection against potential harm. In 2021, the EU proposed the Artificial Intelligence Act, which aims to establish a comprehensive legal framework for AI across its member states. The act categorizes AI applications into four risk levels—unacceptable, high, limited, and minimal—ensuring that the most high-risk applications, such as those in healthcare or critical infrastructure, are subject to strict compliance measures, including transparency, accountability, and oversight.
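
The risk-based structure of the act can be sketched in a few lines of code. The tiers below follow the four categories named above, but the example use cases and the summarised obligations are simplified illustrations, not a restatement of the legal text.

```python
# A minimal, illustrative sketch of a four-tier, risk-based classification.
# Use cases and obligations are simplified examples, not legal definitions.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: transparency, human oversight, conformity assessment"
    LIMITED = "transparency obligations (e.g. disclose that users interact with AI)"
    MINIMAL = "no additional obligations beyond existing law"

# Hypothetical mapping used only to illustrate risk-based classification.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI triage in a hospital": RiskTier.HIGH,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations_for(case))
```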

Europe’s Regulatory Approach to AI

In response to these challenges, the European Commission unveiled plans to regulate AI more strictly. In 2020, Europe set itself apart from the U.S. and China by adopting a cautious approach to AI, focusing on public trust rather than unchecked technological advancement. The Commission aims to create binding rules for “high-risk” AI applications, such as those used in healthcare, transport, and criminal justice. High-risk AI systems will be required to be interpretable, with clear human oversight and transparent decision-making processes.

The European Commission’s plan also includes measures to regulate the large datasets that train AI systems, ensuring that they are legally procured, traceable, and sufficiently diverse. The overarching goal is to create AI systems that are robust, accurate, and trustworthy, fostering public confidence in their use.

One of the major highlights of the plan is the introduction of a “trustworthy AI” certification for low-risk applications. This certification encourages voluntary compliance with ethical guidelines, providing a framework for companies to self-regulate. However, any systems found to breach the rules could face fines or other penalties. This approach aims to strike a balance between fostering innovation and ensuring public safety.

Similarly, the United States, though lagging behind the EU in terms of formal AI regulation, has initiated discussions around AI governance through legislative proposals such as the Algorithmic Accountability Act. Countries like Canada, Japan, and the UK are also establishing frameworks to ensure AI technologies are developed and deployed responsibly. These regulatory frameworks are critical not only to prevent malicious uses of AI but also to ensure that AI systems are developed with ethical considerations at the forefront, fostering public trust and safety.

Building Trustworthy and Explainable AI Systems

While regulation is crucial, it is equally important for AI systems to be trustworthy and transparent. One of the primary concerns surrounding AI technologies is the “black-box” nature of many AI algorithms, where decisions are made without clear reasoning or understanding, even by the developers. This lack of transparency raises questions about accountability, fairness, and the potential for bias in AI decision-making.

To address these concerns, many countries are focusing on the development of explainable AI (XAI). Explainable AI refers to systems that can provide clear, understandable explanations for their decisions or actions. This is particularly important in high-stakes areas like criminal justice, hiring practices, and lending, where decisions made by AI could have significant consequences for individuals’ lives. By designing AI systems that are not only accurate but also interpretable, regulators can ensure that AI systems are held accountable for their actions, while also enabling the detection and mitigation of biases.

For example, in the EU’s AI regulations, there is a strong emphasis on ensuring that high-risk AI systems provide clear explanations for their actions and decisions. This transparency is essential to ensure that individuals and organizations impacted by AI systems can challenge or contest decisions made by these systems when necessary. In addition, various research initiatives worldwide are focused on enhancing the interpretability of AI models. Techniques like attention mechanisms, rule-based systems, and model-agnostic methods are all being explored to make AI more explainable, thus fostering greater trust in its use.
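
As a concrete example of a model-agnostic method, the following Python sketch implements permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The synthetic dataset and the random-forest model are purely illustrative.

```python
# Sketch of permutation feature importance, a widely used model-agnostic
# explanation technique. Data and model are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data: only the first two features actually drive the label.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = model.score(X, y)

def permutation_importance(feature: int, repeats: int = 10) -> float:
    """Mean drop in accuracy when the given feature column is shuffled."""
    drops = []
    for _ in range(repeats):
        X_perm = X.copy()
        X_perm[:, feature] = rng.permutation(X_perm[:, feature])
        drops.append(baseline - model.score(X_perm, y))
    return float(np.mean(drops))

for i in range(X.shape[1]):
    print(f"feature {i}: importance = {permutation_importance(i):.3f}")
```

Because the technique only needs the model's inputs and outputs, it can be applied to any "black box" predictor without access to its internals.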

AI’s opacity becomes even more concerning as machines begin to work alongside humans in collaborative or assistive roles. Trust in these systems will be hard to establish if users cannot understand how or why decisions are made. Researchers at UCLA have developed a robotic system that generates human-readable explanations of its actions, a step toward improving trust in AI systems. Funded by DARPA’s XAI program, the project aims to make AI systems capable of understanding their environment and providing real-time explanations that enhance user trust.

The Road Ahead: Global Cooperation and Ethical AI Development

As the global conversation around AI regulation and development continues, it is becoming increasingly clear that addressing the risks associated with AI technology requires international cooperation. AI’s borderless nature means that national regulations alone may not be sufficient to tackle the global challenges posed by the technology. Countries must collaborate to develop international standards and frameworks for ethical AI development, ensuring that innovations in AI benefit society at large without compromising safety, fairness, or accountability.

In addition to international collaboration, ethical considerations must remain central to the development of AI. The EU has taken strides to ensure that AI is developed with respect for fundamental rights, such as privacy and non-discrimination. Furthermore, nations must invest in AI literacy and education to create a workforce that understands the implications of AI technologies and can participate in the governance of AI systems.

Conclusion

AI is a transformative technology, offering vast potential across industries, but also posing significant risks. To ensure that AI develops in a way that maximizes its benefits while mitigating its risks, countries must take a proactive approach in regulating AI and fostering the development of trustworthy, explainable AI systems. By prioritizing transparency, accountability, and ethical considerations, we can build a future where AI is not only intelligent but also fair, secure, and aligned with the values of society. The road ahead involves a delicate balance of innovation and regulation, where collaboration at both national and international levels will be key to ensuring that AI remains a force for good in the world.

The first wave of AI was rule-based, while the "second wave" was based on statistical learning. Machine learning (ML) methods have demonstrated outstanding recent progress, and as a result AI systems can now be found in myriad applications, including autonomous vehicles, industrial applications, search engines, computer gaming, health record automation, and big data analysis.

But the problem with deep learning is that it is a black box: it is very difficult to investigate the reasoning behind the decisions it makes. The opacity of AI algorithms complicates their use, especially where mistakes can have severe impacts. For instance, if a doctor wants to trust a treatment recommendation made by an AI algorithm, they have to know the reasoning behind it. The same goes for a judge who wants to pass sentence based on a recidivism prediction made by a deep learning application.

These are decisions that can have a deep impact on the lives of the people affected by them, and the person assuming responsibility must have full visibility into the steps that go into those decisions, says David Gunning, Program Manager of XAI, DARPA's initiative to create explainable artificial intelligence models. For instance, a second-wave AI system can perform image classification: given an image, it runs calculations to detect what is in it. However, the agency would prefer a system that not only says what the image contains but also explains why it came to that conclusion. If the image is of a cat, the system not only knows there is a cat in the image, but can explain that it reached this conclusion because it detected fur, whiskers, claws and other features.

In February 2020, the European Commission unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from more freewheeling approaches to the technology in the United States and China. The commission will draft new laws—including a ban on “black box” AI systems that humans can’t interpret—to govern high-risk uses of the technology, such as in medical devices and self-driving cars. Europe is taking a more cautious approach to AI than the United States and China, where policymakers are reluctant to impose restrictions in their race for AI supremacy. But EU officials hope regulation will help Europe compete by winning consumers’ trust, thereby driving wider adoption of AI.

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.


Countries are also giving a push to the development of explainable and trustworthy AI systems. Among DARPA’s many exciting projects is Explainable Artificial Intelligence (XAI), an initiative launched in 2016 aimed at solving one of the principal challenges of deep learning and neural networks, the subset of AI that is becoming increasingly prominent in many different sectors. “XAI is trying to create a portfolio of different techniques to tackle [the black box] problem, and explore how we might make these systems more understandable to end users. Early on we decided to focus on the lay user, the person who’s not a machine learning expert,” Gunning says.

The problem becomes even riskier as humans start to work more closely with robots in collaborative, social, or assistive contexts, since it will be hard for people to trust them if their autonomy makes it difficult to understand what they are doing. In a paper published in Science Robotics, researchers from UCLA developed a robotic system that can generate different kinds of real-time, human-readable explanations about its actions, and then ran tests to determine which explanations were most effective at improving a human’s trust in the system. This work was funded by DARPA’s Explainable AI (XAI) program, which has a goal of AI systems that can “understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena.”

Europe plans to strictly regulate high-risk AI technology

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.


For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.
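
How such traceability might look in practice can be sketched with a simple provenance record attached to every training dataset. The fields, example values, and the fingerprint() helper below are hypothetical, not a prescribed EU compliance format.

```python
# Illustrative sketch of dataset provenance record-keeping, so that training
# data remains traceable to its source. Fields and checks are hypothetical.
import hashlib
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DatasetRecord:
    name: str
    source_url: str
    license: str
    collected_on: date
    sha256: str             # hash of the raw data file, for tamper evidence
    demographic_notes: str  # free-text note on coverage/diversity of the data

def fingerprint(raw_bytes: bytes) -> str:
    """Content hash used to tie a trained model back to the exact data it saw."""
    return hashlib.sha256(raw_bytes).hexdigest()

record = DatasetRecord(
    name="chest-xray-v2",
    source_url="https://example.org/datasets/chest-xray-v2",
    license="CC BY 4.0",
    collected_on=date(2020, 1, 15),
    sha256=fingerprint(b"...raw dataset bytes..."),  # placeholder payload
    demographic_notes="Five hospitals, three countries; age range 18-90.",
)
print(record)
```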


The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.


Although the regulations would be broader and stricter than any previous EU rules, European Commission President Ursula von der Leyen said at the press conference announcing the plan that the goal is to promote “trust, not fear.” The plan also includes measures to update the European Union’s 2018 AI strategy and pump billions into R&D over the next decade.

The proposals are not final: Over the next 12 weeks, experts, lobby groups, and the public can weigh in on the plan before the work of drafting concrete laws begins in earnest. Any final regulation will need to be approved by the European Parliament and national governments, which is unlikely to happen this year.


“The EU tries to exercise leadership in what they’re best at, which is a very solid and comprehensive regulatory framework,” says Andrea Renda, a member of the commission’s independent advisory group on AI, and an AI policy researcher at the Centre for European Policy Studies. Eleonore Pauwels, an AI ethics researcher at the Global Center on Cooperative Security, says the regulations are a good idea. She says there could be public “backlash” if policymakers don’t find alternatives to what she calls “surveillance capitalism” in the United States and the “digital dictatorship” being built in China.


The commission says it will also “launch a broad European debate” on facial recognition systems, a form of AI that can identify people in crowds without their consent. Although EU countries such as Germany have announced plans to deploy these systems, officials say they often violate EU privacy laws, including special rules for police work. Pauwels, a former commission official, says the AI industry has so far demonstrated a “pervasive lack of normative vision.” But Vestager points out that 350 businesses have expressed a willingness to comply with the ethical principles drawn up by its AI advisory group.


The new AI plan is not only about regulation. The commission will come up with an “action plan” for integrating AI into public services such as transport and health care, and will update its 2018 AI development strategy, which plowed €1.5 billion into research. The commission is calling for more R&D, including AI “excellence and testing centres” and a new industrial partnership for AI that could invest billions. Alongside its AI plan, the commission also outlined a separate strategy to promote data sharing, in part to support the development of AI.


EU nations call for ‘soft law solutions’ in future Artificial Intelligence regulation

In a position paper spearheaded by Denmark and signed by digital ministers from other EU tech heavyweights such as France, Finland and Estonia, the signatories call on the Commission to incentivise the development of next-gen AI technologies, rather than put up barriers. “We should turn to soft law solutions such as self-regulation, voluntary labelling and other voluntary practices as well as robust standardisation process as a supplement to existing legislation that ensures that essential safety and security standards are met,” the paper noted.


“Soft law can allow us to learn from the technology and identify potential challenges associated with it, taking into account the fact that we are dealing with a fast-evolving technology,” it continued. Along with Denmark, the paper has also been signed by Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden.

Of particular note in the executive’s plans, a series of ‘high-risk’ technologies were earmarked for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’ The critical sectors include healthcare, transport, policing, recruitment, and the legal system, while technologies of critical use include those carrying a risk of death, damage or injury, or with legal ramifications. Sanctions could be imposed should certain technologies fail to meet such requirements. At the time, the Commission had also floated the idea of introducing a ‘voluntary labelling scheme’ for AI technology not considered to be particularly high-risk.

DoD Adopts AI Ethics Principles

The Defense Innovation Board promulgated five broad principles for the ethical use of artificial intelligence, which Defense Secretary Mark Esper officially endorsed, in modified form, in February 2020.

Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.

Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.

Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
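
As a rough illustration of the “Governable” principle, the sketch below wraps a model in a guard that suppresses out-of-bounds outputs and automatically disengages the system after repeated violations. The thresholds, the GovernableModel class, and the toy model are invented for this example and do not represent any DoD implementation.

```python
# Hypothetical sketch of a "governable" wrapper: monitor a model's outputs and
# disengage it when behaviour drifts outside explicitly defined bounds.
from typing import Callable, Optional

class GovernableModel:
    def __init__(self, model: Callable[[list[float]], float],
                 low: float, high: float, max_violations: int = 3):
        self.model = model
        self.low, self.high = low, high          # acceptable output range
        self.max_violations = max_violations
        self.violations = 0
        self.engaged = True

    def predict(self, features: list[float]) -> Optional[float]:
        if not self.engaged:
            return None                          # system has been disengaged
        output = self.model(features)
        if not (self.low <= output <= self.high):
            self.violations += 1
            if self.violations >= self.max_violations:
                self.engaged = False             # automatic deactivation
                print("Guard: unintended behaviour detected, model disengaged.")
            return None                          # suppress the out-of-bounds output
        return output

# Toy model that occasionally misbehaves.
guarded = GovernableModel(model=lambda x: sum(x), low=0.0, high=10.0)
for sample in ([1.0, 2.0], [50.0, 60.0], [3.0], [99.0], [120.0], [1.0]):
    print(guarded.predict(sample))
```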


Misapplication of AI raises the potential for “rapid escalation and strategic instability,” said Groen, the new director of the Pentagon’s Joint AI Center.

References and Resources also include:

https://www.sciencemag.org/news/2020/02/europe-plans-strictly-regulate-high-risk-ai-technology

