
Hyper-intelligent systems and Fully Autonomous Weapons and Platforms are the gravest risk to mankind

Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that the invention of artificial intelligence could be the biggest disaster in humanity’s history, warning that, if they are not properly managed, thinking machines could spell the end of civilisation. Professor Hawking, a prominent critic of unchecked advances in AI, said that the technology promised to bring great benefits, such as eradicating disease and poverty, but “will also bring dangers, like powerful autonomous weapons or new ways for the few to oppress the many”.

The primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans. “It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” His comments come amid breakthroughs in artificial intelligence that are arriving faster than many predicted. Google’s DeepMind subsidiary defeated the world champion of the ancient board game Go earlier this year, and Microsoft has said it has achieved voice recognition on a par with humans.

Artificial intelligence is also enabling the development of fully autonomous weapons that can select and fire upon targets on their own, without any human intervention. Fully autonomous weapons can be built to assess the situational context on a battlefield and to decide on the required attack according to the processed information. The use of artificial intelligence in armed conflict poses a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law. Fully autonomous weapons, also known as “killer robots,” are quickly moving from the realm of science fiction toward reality.

The 123 nations that are party to the international Convention on Conventional Weapons will debate a ban on so-called “killer robots” at a 2017 UN convention in Geneva. At their five-year review conference in Geneva, they agreed to formalize their efforts next year to deal with the challenges raised by weapons systems that would select and attack targets without meaningful human control. The decision comes as experts warn that “time is running out” for controls on the technology.

Although weapons with full lethal autonomy have not yet been deployed, precursors with various degrees of autonomy and lethality are currently in use. Several states support and fund research into and development of fully autonomous weapons, among them China, Germany, India, Israel, the Republic of Korea, Russia, and the United Kingdom. Robotic systems with varying degrees of autonomy and lethality have already been deployed by the United States, the United Kingdom, Israel, and the Republic of Korea.

The oldest automatically triggered lethal weapon is the land mine, used since at least the 1600s, followed by the naval mine, used since at least the 1700s. Current examples of lethal autonomous weapons (LAWs) include automated “hard-kill” active protection systems, such as radar-guided guns used to defend ships, and vehicle-mounted systems such as the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. Israel Aerospace Industries’ Harop drones are designed to home in on the radio emissions of enemy air-defence systems and destroy them by crashing into them.

“When we look at autonomous weapons, our concern is about the degree of human control over their targeting and attack functions. Those are the functions that we think should always be under human control, and that’s what the debate is coming down to,” says Mary Wareham, global coordinator of the Campaign to Stop Killer Robots.

AI-enabled machines could also act as a lethal force multiplier in the hands of terrorists. The FBI, in a report, hinted at the dangers of fully autonomous cars, such as those being developed by Google and a number of automotive manufacturers, becoming “more of a potential lethal weapon” by allowing criminals to pack cars full of explosives and send them to a target.


Artificial Intelligence and “singularity”

The term artificial intelligence (AI) was coined by John McCarthy, who defined it as “the science and engineering of making intelligent machines”. The field was founded on the claim that a central property of humans, intelligence, can be so precisely described that a machine can be made to simulate it. The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems: particular traits or capabilities that researchers would like an intelligent system to display. These include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects.

Artificial intelligence has been the subject of tremendous optimism, but has also suffered stunning setbacks. On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question-answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants in smartphones. In 2014 Facebook unveiled an algorithm called DeepFace that can recognise specific human faces in images around 97% of the time, even when those faces are partly hidden or poorly lit.

Microsoft drew on its research into speech recognition and language comprehension to create its virtual assistant Cortana, which is built into the mobile version of Windows. The app tries to enter into a back-and-forth dialogue with people, which is intended both to make it more endearing and to help it learn what went wrong when it makes a mistake. Microsoft is also developing object-recognition software for Cortana that can tell users the difference between a picture of a Pembroke Welsh Corgi and a Cardigan Welsh Corgi, two dog breeds that look almost identical. A report has suggested that America’s spies use voice-recognition software to convert phone calls into text in order to make their contents easier to search.

Rollo Carpenter, creator of Cleverbot, says we are a long way from having the computing power or the algorithms needed to achieve full artificial intelligence, but he believes it will come in the next few decades.

If research into strong AI produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement. The new intelligence could thus increase exponentially and dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario the “singularity”.
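The logic of this argument can be made concrete with a toy simulation, sketched below. The model, its quadratic improvement rule, and the `gain` parameter are illustrative assumptions chosen only to show how compounding self-improvement runs away; they are not a claim about how real AI systems behave.

```python
# A minimal sketch of the recursive self-improvement argument: each cycle,
# the software's improvement is proportional to its current capability, so
# growth compounds faster and faster (all parameters are made up).

def recursive_self_improvement(intelligence=1.0, gain=0.1, generations=10):
    """Yield the capability level after each self-improvement cycle."""
    for generation in range(1, generations + 1):
        # Smarter software makes a bigger improvement next round.
        intelligence += gain * intelligence ** 2
        yield generation, intelligence

for generation, level in recursive_self_improvement():
    print(f"generation {generation:2d}: capability ~ {level:.2f}")
```

Under these assumptions the early generations improve only slightly, but each gain enlarges the next one, which is the runaway dynamic Vinge’s term describes.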

Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.

Ray Kurzweil has used Moore’s law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.
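A back-of-envelope version of this kind of extrapolation is sketched below. The figures used (roughly 10^16 operations per second for the brain, about 10^11 for a desktop machine, and a two-year doubling time) are common ballpark assumptions rather than Kurzweil’s exact inputs, so the projected date is illustrative only.

```python
import math

# Ballpark assumptions (illustrative, not Kurzweil's exact figures):
BRAIN_OPS_PER_SEC = 1e16    # rough estimate of the brain's raw compute
DESKTOP_OPS_PER_SEC = 1e11  # ~100 GFLOPS for a recent desktop processor
DOUBLING_YEARS = 2.0        # Moore's-law-style doubling period

# How many doublings close the gap, and roughly when that happens.
doublings = math.log2(BRAIN_OPS_PER_SEC / DESKTOP_OPS_PER_SEC)
parity_year = 2016 + doublings * DOUBLING_YEARS
print(f"{doublings:.1f} doublings -> rough parity around {parity_year:.0f}")
```

Small changes to any of these inputs shift the answer by decades, which is why such forecasts vary so widely.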

Fully Autonomous Robots

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion – where the robot moves while maintaining physical contact with an object).
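As a flavour of what the motion-planning sub-problem looks like in practice, here is a minimal sketch of breadth-first path planning on a toy occupancy grid. The grid, coordinates, and function name are invented for illustration; real planners must also handle continuous space, kinematics, and uncertainty.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns the shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}       # cell -> predecessor on shortest path
    while frontier:
        cell = frontier.popleft()
        if cell == goal:            # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                     # goal unreachable

# Toy map: the robot must route around a wall of obstacles.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(plan_path(grid, start=(0, 0), goal=(2, 0)))
```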

Robotics has made significant progress since the first industrial robot was deployed more than 50 years ago. Robots are nowadays used extensively in manufacturing, services, healthcare and medicine, defence, and space. They have shielded humans from dangerous tasks and improved productivity and quality of life. In the future, robots are poised to transform human society in the same way that computers and the internet did in the past.

The defence forces are primarily interested in mobile robots, or unmanned vehicles, in the air, land, and sea domains. These unmanned systems have transformed warfare, as evidenced by the thousands deployed in Iraq, Afghanistan, and Pakistan to support the armed forces in targeting, disarming roadside bombs, clearing land mines, and collecting intelligence. Unmanned systems have also proved very effective in responding rapidly to catastrophic and unexpected incidents, including natural and civil disasters such as fires, floods, and earthquakes.

Fully autonomous weapons are distinct from remote-controlled weapon systems such as drones: the latter are piloted by a human remotely, while fully autonomous weapons would have no human guidance after being programmed. Militaries are well on the way to developing fully autonomous weapons and platforms, a shift that can be considered a “revolution in modern warfare”. Completely autonomous robots are able to operate by themselves without any human input, and they are often able to learn by themselves and to modify their behaviour accordingly.
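The difference between the two control models comes down to where the human sits in the decision loop, which the hypothetical sketch below tries to make concrete. The function and parameter names are invented for illustration and do not correspond to any real weapon system.

```python
# Hypothetical illustration of human-in-the-loop versus human-out-of-the-loop
# control; every name here is made up for the sake of the example.

def remote_controlled_engagement(target, human_approves):
    """Human-in-the-loop: the system may detect and track a target,
    but a person must authorise each attack decision."""
    if human_approves(target):
        return f"engaging {target} (human authorised)"
    return f"holding fire on {target}"

def fully_autonomous_engagement(target, matches_profile):
    """Human-out-of-the-loop: once programmed, the machine selects
    and attacks targets based solely on its own sensing and rules."""
    if matches_profile(target):
        return f"engaging {target} (no human consulted)"
    return f"ignoring {target}"

# The same ambiguous contact can produce opposite outcomes:
print(remote_controlled_engagement("contact-7", lambda t: False))
print(fully_autonomous_engagement("contact-7", lambda t: True))
```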

Ethics of fully autonomous weapons

The development of fully autonomous weapons and platforms raises many fundamental ethical questions and questions of principle:

  • Can decisions over life and death be left to a machine, reducing humans to mere objects?
  • Can fully autonomous weapons function in an ethically “correct” manner?
  • Are machines capable of acting in accordance with international humanitarian law (IHL) or international human rights law (IHRL)?
  • Are these weapon systems able to differentiate between combatants on the one side and defenceless and/or uninvolved persons on the other?
  • Can such systems evaluate the proportionality of attacks?
  • Who can be held accountable: the programmer, the commander, or the manufacturer of the robot?
  • Would fully autonomous weapons lower the threshold of war?
  • Could autonomous weapons be used to oppress opponents without fear of protest, conscientious objection, or insurgency within state security forces?
  • What would the consequences be if fully autonomous weapon systems fell into the hands of unauthorized persons?


Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent. He argues that “any sufficiently advanced benevolence may be indistinguishable from malevolence.” Humans should not assume that machines or robots would treat them favorably, because there is no a priori reason to believe they would be sympathetic to our system of morality, which has evolved along with our particular biology (and which AIs would not share). Hyper-intelligent software might not decide to support the continued existence of mankind, and would be extremely difficult to stop.

“We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it,” Elon Musk says. In the longer term, the technology entrepreneur has warned that AI is “our biggest existential threat”. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.

In April 2013, a group of non-governmental organizations launched the Campaign to Stop Killer Robots in London. The campaign seeks to establish a coordinated civil-society call for a ban on the development of fully autonomous weapon systems and to address the challenges these weapons pose to civilians and to international law. The campaign builds on previous experience from efforts to ban landmines, cluster munitions, and blinding lasers.

