
Countries plan to mitigate the high risks of AI technology through strict regulation and the development of trustworthy, explainable AI

The first wave of AI was rule-based, and the “second wave” was based on statistical learning. Machine learning (ML) methods have demonstrated outstanding recent progress and, as a result, artificial intelligence (AI) systems can now be found in myriad applications, including autonomous vehicles, industrial applications, search engines, computer gaming, health record automation, and big data analysis.

 

But the problem with deep learning is that it is a black box, which means it is very difficult to investigate the reasoning behind the decisions it makes. The opacity of AI algorithms complicates their use, especially where mistakes can have severe impacts. For instance, if a doctor wants to trust a treatment recommendation made by an AI algorithm, they need to know the reasoning behind it. The same goes for a judge who wants to pass sentence based on a recidivism prediction made by a deep learning application.
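One widely used way to probe such a black box is post-hoc feature attribution: train the opaque model, then measure how much its accuracy degrades when each input is shuffled. The sketch below is only a minimal illustration using scikit-learn’s permutation importance on synthetic data; the feature names are hypothetical stand-ins for real clinical variables, not part of any system described in this article.

```python
# Minimal sketch: surface which inputs a black-box classifier relies on,
# using permutation importance on synthetic "patient" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical feature names; real clinical data would be used in practice.
feature_names = ["age", "blood_pressure", "cholesterol", "bmi", "glucose"]
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:15s} importance = {score:.3f}")
```

An explanation of this kind does not open the network itself, but it gives the doctor or judge a ranked list of the factors that actually drove the recommendation.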

 

These are decisions that can have a deep impact on the lives of the people affected by them, and the person assuming responsibility must have full visibility into the steps that go into those decisions, says David Gunning, program manager of XAI, DARPA’s initiative to create explainable artificial intelligence models. For instance, a second-wave AI system can perform image classification: given an image, it runs calculations to detect what is in it. The agency, however, would prefer a system that not only says what the image shows but also explains why it came to that conclusion. If the image is of a cat, the system would not only know there is a cat in the image, but could point out that it reached that conclusion because the cat has fur, whiskers, claws and other features.
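As a rough sketch of what such an explanation can look like in practice, the snippet below computes a simple gradient “saliency map” for an off-the-shelf image classifier: it highlights which pixels most influenced the predicted class, loosely analogous to pointing at the fur and whiskers that led the network to answer “cat”. It assumes a pretrained torchvision ResNet and a hypothetical input file cat.jpg; it is an illustration of one common technique, not DARPA’s XAI method.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained classifier used purely for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "cat.jpg" is a hypothetical input image.
img = preprocess(Image.open("cat.jpg")).unsqueeze(0)
img.requires_grad_(True)

scores = model(img)                      # class scores for the image
top_class = scores.argmax(dim=1)
scores[0, top_class].backward()          # gradient of the winning score w.r.t. pixels

# Per-pixel influence: large values mark regions that drove the decision.
saliency = img.grad.abs().max(dim=1)[0]  # shape (1, 224, 224)
print("predicted class index:", top_class.item())
print("strongest pixel influence:", saliency.max().item())
```

Overlaying the saliency values on the original image gives a lay user a visual answer to “what did the network look at?”, which is one of the simpler building blocks in the XAI portfolio of techniques.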

 

In February 2020, the European Commission unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from more freewheeling approaches to the technology in the United States and China. The commission will draft new laws—including a ban on “black box” AI systems that humans can’t interpret—to govern high-risk uses of the technology, such as in medical devices and self-driving cars. Europe is taking a more cautious approach to AI than the United States and China, where policymakers are reluctant to impose restrictions in their race for AI supremacy. But EU officials hope regulation will help Europe compete by winning consumers’ trust, thereby driving wider adoption of AI.

 

The commission also plans to offer a “trustworthy AI” certification, to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

 

Countries are also giving a major push to the development of explainable and trustworthy AI systems. Among DARPA’s many exciting projects is Explainable Artificial Intelligence (XAI), an initiative launched in 2016 aimed at solving one of the principal challenges of deep learning and neural networks, the subset of AI that is becoming increasingly prominent in many different sectors. “XAI is trying to create a portfolio of different techniques to tackle [the black box] problem, and explore how we might make these systems more understandable to end users. Early on we decided to focus on the lay user, the person who’s not a machine learning expert,” Gunning says.

 

The problem will become even riskier as humans start to work more closely with robots, whether in collaborative tasks or in social and assistive contexts, because it will be hard for people to trust robots whose autonomy makes it difficult to understand what they are doing. In a paper published in Science Robotics, researchers from UCLA developed a robotic system that can generate different kinds of real-time, human-readable explanations about its actions, and then ran tests to determine which of the explanations were most effective at improving a human’s trust in the system. This work was funded by DARPA’s Explainable AI (XAI) program, which aims for systems that can “understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena.”

Europe plans to strictly regulate high-risk AI technology

The commission wants binding rules for “high-risk” uses of AI in sectors like health care, transport, or criminal justice. The criteria to determine risk would include considerations such as whether someone could get hurt—by a self-driving car or a medical device, for example—or whether a person has little say in whether they’re affected by a machine’s decision, such as when AI is used in job recruitment or policing.

 

For high-risk scenarios, the commission wants to stop inscrutable “black box” AIs by requiring human oversight. The rules would also govern the large data sets used in training AI systems, ensuring that they are legally procured, traceable to their source, and sufficiently broad to train the system. “An AI system needs to be technically robust and accurate in order to be trustworthy,” the commission’s digital czar Margrethe Vestager said at the press conference.

 

The law will also establish who is responsible for an AI system’s actions—such as the company using it, or the company that designed it. High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union.

 

Although the regulations would be broader and stricter than any previous EU rules, European Commission President Ursula von der Leyen said at a press conference today announcing the plan that the goal is to promote “trust, not fear.” The plan also includes measures to update the European Union’s 2018 AI strategy and pump billions into R&D over the next decade.

 

The proposals are not final: Over the next 12 weeks, experts, lobby groups, and the public can weigh in on the plan before the work of drafting concrete laws begins in earnest. Any final regulation will need to be approved by the European Parliament and national governments, which is unlikely to happen this year.

 

“The EU tries to exercise leadership in what they’re best at, which is a very solid and comprehensive regulatory framework,” says Andrea Renda, a member of the commission’s independent advisory group on AI, and an AI policy researcher at the Centre for European Policy Studies. Eleonore Pauwels, an AI ethics researcher at the Global Center on Cooperative Security, says the regulations are a good idea. She says there could be public “backlash” if policymakers don’t find alternatives to what she calls “surveillance capitalism” in the United States and the “digital dictatorship” being built in China.

 

The commission says it will also “launch a broad European debate” on facial recognition systems, a form of AI that can identify people in crowds without their consent. Although EU countries such as Germany have announced plans to deploy these systems, officials say they often violate EU privacy laws, including special rules for police work. Pauwels, a former commission official, says the AI industry has so far demonstrated a “pervasive lack of normative vision.” But Vestager points out that 350 businesses have expressed a willingness to comply with the ethical principles drawn up by its AI advisory group.

 

The new AI plan is not only about regulation. The commission will come up with an “action plan” for integrating AI into public services such as transport and health care, and will update its 2018 AI development strategy, which plowed €1.5 billion into research. The commission is calling for more R&D, including AI “excellence and testing centres” and a new industrial partnership for AI that could invest billions. Alongside its AI plan, the commission also outlined a separate strategy to promote data sharing, in part to support the development of AI.

 

EU nations call for ‘soft law solutions’ in future Artificial Intelligence regulation

In a position paper spearheaded by Denmark and signed by digital ministers from other EU tech heavyweights such as France, Finland and Estonia, the signatories call on the Commission to incentivise the development of next-gen AI technologies, rather than put up barriers. “We should turn to soft law solutions such as self-regulation, voluntary labelling and other voluntary practices as well as robust standardisation process as a supplement to existing legislation that ensures that essential safety and security standards are met,” the paper noted.

 

“Soft law can allow us to learn from the technology and identify potential challenges associated with it, taking into account the fact that we are dealing with a fast-evolving technology,” it continued. Along with Denmark, the paper has also been signed by Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden.

 

Of particular note in the executive’s plans, a series of ‘high-risk’ technologies were earmarked for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’ The critical sectors include healthcare, transport, policing, recruitment, and the legal system, while technologies of critical use are those carrying a risk of death, damage or injury, or having legal ramifications. Sanctions could be imposed should certain technologies fail to meet such requirements. At the time, the Commission had also floated the idea of introducing a ‘voluntary labelling scheme’ for AI technology not considered to be particularly high-risk.

DoD Adopts AI Ethics Principles

The Defense Innovation Board promulgated five broad principles for the ethical use of artificial intelligence, which Defense Secretary Mark Esper officially endorsed, in modified form, in February 2020:

Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.

Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.

Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

 

Misapplication of AI raises the potential for “rapid escalation and strategic instability,” said Groen, the new director of the Pentagon’s Joint AI Center.

References and Resources also include:

https://www.sciencemag.org/news/2020/02/europe-plans-strictly-regulate-high-risk-ai-technology

 
