
Hyper-intelligent systems and Fully Autonomous Weapons are gravest risk to mankind

Prof Stephen Hawking, one of Britain’s pre-eminent scientists, has said that the invention of artificial intelligence could be the biggest disaster in humanity’s history, warning that if thinking machines are not properly managed, they could spell the end for civilisation. Professor Hawking, a prominent critic of unchecked advances in AI, said that the technology promised to bring great benefits, such as eradicating disease and poverty, but “will also bring dangers, like powerful autonomous weapons or new ways for the few to oppress the many”.

 

The primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans. “It would take off on its own, and re-design itself at an ever-increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” His comments come amid breakthroughs in artificial intelligence that are being achieved faster than many predicted. Google’s DeepMind subsidiary defeated the world champion of the ancient board game Go earlier this year, and Microsoft has said it has achieved speech recognition on a par with humans.

 

AI-enabled machines can also act as a lethal force multiplier in the hands of terrorists. The FBI, in a report, has also hinted at the dangers of fully autonomous cars, such as those being developed by Google and a number of automotive manufacturers, becoming “more of a potential lethal weapon”, allowing criminals to pack cars full of explosives and send them to a target.

 

Lethal autonomous weapon systems (LAWS) are capable of automatically selecting and attacking their targets without any human in the loop; they are also known as robotic weapons or killer robots. In November 2017, academics, non-governmental organisations and representatives of over 80 governments gathered at the Palais des Nations for a decisive meeting on the future of lethal autonomous weapons. Organised under the Convention on Certain Conventional Weapons (CCW), the meeting agreed to formalise efforts the following year to deal with the challenges raised by weapon systems that would select and attack targets without meaningful human control. It comes as experts warn ‘time is running out’ for controls on the technology.

 

In the Robot Baby Project at Vrije Universiteit Amsterdam, scientists have developed a way for robots to have ‘sex’ and pass on their ‘DNA’ to offspring. Doing so allows them to ‘develop their bodies through evolution’, producing successive generations with more advanced physical and behavioural capabilities. As the process continues, the researchers say, the robots can become better suited to unknown environments that could be hazardous to humans, such as deep-sea mines or even other planets. Self-evolving, self-replicating robots could be a boon to militaries and terrorists alike: they could deploy a few robots on a battlefield or in cities, and those robots would multiply, evolve, and learn as they carry out their mission to destroy or damage military targets. They could also rapidly learn to defeat any weapons or countermeasures employed against them and prove very hard to stop.
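
The mechanism behind such projects is the classic evolutionary algorithm: recombine the ‘genomes’ of two parents, mutate the result, and keep the fittest individuals. The Python sketch below shows that select-recombine-mutate cycle on plain parameter vectors; the genome length, population size, and toy fitness function are hypothetical stand-ins (the Amsterdam project evolves physical robot bodies, not abstract vectors).

```python
import random

GENOME_LEN = 8     # number of morphology/controller parameters per robot
POP_SIZE = 20      # robots alive at any one time
GENERATIONS = 50

def fitness(genome):
    # Toy stand-in for a real-world evaluation (e.g. distance walked):
    # here we simply reward genomes whose values approach 1.0.
    return -sum((g - 1.0) ** 2 for g in genome)

def crossover(mum, dad):
    # One-point crossover: the child inherits a prefix from one parent
    # and the remainder from the other.
    cut = random.randrange(1, GENOME_LEN)
    return mum[:cut] + dad[cut:]

def mutate(genome, rate=0.1, scale=0.2):
    # Each gene is perturbed with probability `rate`.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Rank robots by fitness and let the better half reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```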

 

The Defense Innovation Board is working with ethicists within the Defense Department and is in the process of bringing together more relevant experts. The work began in July, when Defense Secretary James Mattis asked the board to draft a set of AI principles the department can follow as it develops this nascent technology and begins to deploy it in the Pentagon and on the battlefield. The promise of AI is huge, but so are the potential pitfalls if the technology is misused, such as introducing implicit biases during the development phase, as defense and academic experts noted during the meeting. “The stakes are high in the field of medicine and in banking. But nowhere are they higher than in national security,” said Joshua Marcuse, the board’s executive director. “This process will involve law, policy, strategy, doctrine and practice… We are taking care to include not only experts who often work with the department, but AI skeptics, department critics and leading AI engineers who have never worked with DOD before.”

 

Threat of fully autonomous weapons

The term artificial intelligence (AI) was coined by John McCarthy, who defined it as “the science and engineering of making intelligent machines”. The field was founded on the claim that a central property of humans, intelligence, can be described so precisely that a machine can be made to simulate it. The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems: particular traits or capabilities that researchers would like an intelligent system to display. These include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects.

 

Artificial intelligence is also enabling the development of fully autonomous weapons that can select and fire upon targets on their own, without any human intervention. Fully autonomous weapons could assess the situational context on a battlefield and decide on the required attack according to the processed information. The use of artificial intelligence in armed conflict poses a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law.

 

The oldest automatically triggered lethal weapons are the land mine, used since at least the 1600s, and the naval mine, used since at least the 1700s. Current examples of lethal autonomous weapons (LAWS) include automated “hardkill” active protection systems, such as radar-guided guns that defend ships and vehicle-mounted systems like the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. Israel Aerospace Industries’ Harop drones are designed to home in on the radio emissions of enemy air-defense systems and destroy them by crashing into them.

 

Although weapons with full lethal autonomy have not yet been deployed, precursors with various degrees of autonomy and lethality are currently in use. On the southern edge of the Korean Demilitarized Zone, South Korea has deployed the Super aEgis II, a sentry gun that can detect, target, and destroy enemy threats. It was developed to operate on its own, although so far it reportedly cannot fire without human intervention.
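
The difference between the Super aEgis II as deployed and a fully autonomous version of it is, in essence, one branch in the engagement loop. The hypothetical Python sketch below makes that concrete: detection, classification, and tracking are automated either way, but in the supervised configuration the engage step blocks on explicit human authorisation. All names and the simulated sensor data here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    range_m: float
    classified_as: str   # e.g. "person", "vehicle", "unknown"

def detect_and_track():
    # Stand-in for the automated sensor chain (thermal/motion detection,
    # classification, tracking) that both modes share.
    return [Track(1, 1800.0, "vehicle"), Track(2, 950.0, "person")]

def human_confirms(track: Track) -> bool:
    # In a supervised system a trained operator reviews the track;
    # here we simply ask on the console.
    answer = input(f"Engage track {track.track_id} ({track.classified_as}, "
                   f"{track.range_m:.0f} m)? [y/N] ")
    return answer.strip().lower() == "y"

def engage(track: Track, human_in_the_loop: bool):
    if human_in_the_loop and not human_confirms(track):
        print(f"track {track.track_id}: held fire (no human authorisation)")
        return
    print(f"track {track.track_id}: weapon released")

for t in detect_and_track():
    # Flipping this flag to False is, in miniature, the policy question
    # the LAWS debate is about.
    engage(t, human_in_the_loop=True)
```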

 

Several states support and fund research and development of fully autonomous weapons. Among them are China, Germany, India, Israel, the Republic of Korea, Russia, and the United Kingdom.

 

“When we look at autonomous weapons, our concern is about the degree of human control over their targeting and attack functions. Those are the functions that we think should always be under human control, and that’s what the debate is coming down to,” says Mary Wareham, global coordinator of the Campaign to Stop Killer Robots.

 

Fully Autonomous Robots

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion – where the robot moves while maintaining physical contact with an object).
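
Of these sub-problems, path planning is the most compact to show in code. The sketch below runs a breadth-first search over a small occupancy grid, a toy stand-in (with a made-up map) for the planners mobile robots actually use once localization and mapping have produced a map of the environment.

```python
from collections import deque

# 0 = free cell, 1 = obstacle; a toy occupancy grid (the "map" the
# robot has built during the mapping step).
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan_path(start, goal):
    """Breadth-first search: shortest 4-connected path on the grid."""
    rows, cols = len(GRID), len(GRID[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent pointers back to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

print(plan_path((0, 0), (4, 4)))
```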

 

Robotics has made significant progress since the first industrial robot was deployed more than 50 years ago. Robots are nowadays used extensively in manufacturing, services, healthcare and medicine, defense, and space. They have shielded humans from dangerous tasks and improved productivity and quality of life. In the future, robots are poised to transform human society in the same way that computers and the internet did in the past.

 

The defense forces are primarily interested in mobile robots, or unmanned vehicles, in the air, land, and sea domains. Mobile robots and unmanned systems have transformed warfare, as evidenced by the thousands of them deployed in Iraq, Afghanistan and Pakistan, where they have supported armed forces in targeting, disarming roadside bombs, clearing land mines, surveying, intelligence collection, and more. Unmanned systems have also proved very effective in fast response to catastrophic and unexpected incidents, including natural and civil disasters such as fires, floods, and earthquakes.

 

Fully autonomous weapons are distinct from remote-controlled weapon systems such as drones: the latter are piloted by a human remotely, while fully autonomous weapons would have no human guidance after being programmed. Militaries are well on the way to developing fully autonomous weapons and platforms, which can be considered a “revolution in modern warfare”. Completely autonomous robots are able to operate by themselves without the need for any human input; they are often able to learn by themselves and to modify their behavior accordingly.

 

Fully autonomous weapons, also known as ‘killer robots’, are quickly moving from the realm of science fiction toward reality. Robotic systems with varying degrees of autonomy and lethality have already been deployed by the United States, the United Kingdom, Israel, and the Republic of Korea.

 

Britain’s Taranis, an experimental prototype for future stealth drones, has an autonomous mode in which it flies and carries out missions on its own, including searching for targets. The US Office of Naval Research has been testing the Sea Hunter, the Navy’s next-generation unmanned surface vessel, which can operate autonomously or by remote control. Currently oriented toward detecting mines and ultraquiet diesel-electric submarines, the drone is expected to be outfitted with weapons at some point.

 

Currently Employed LAWS

The automated systems available today can defend designated assets through active protection systems such as the US Phalanx Close-In Weapon System (CIWS), the Israeli Trophy and Iron Dome, the Russian Arena, and the German AMAP Active Defence System (ADS).

 

The Iron Dome missile defence system has the capability to identify and destroy projectiles before they land in Israeli territory and is considered one of the most effective anti-missile systems in the world. Russia’s new active protection system, Arena-M, for T-72 and T-90 tanks is capable of protecting armoured vehicles from US Tube-launched, Optically-tracked, Wire-guided (TOW) missiles.
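
At the heart of any such system is trajectory prediction: extrapolate a radar track of the incoming projectile, estimate its impact point, and engage only if it threatens a defended area. The sketch below does this for idealised drag-free ballistics; real fire-control models are far richer, and every number here is made up.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_impact(x0, y0, vx, vy):
    """Predict the ground impact point (y = 0) of a projectile currently
    at (x0, y0) with velocity (vx, vy), assuming drag-free ballistics."""
    # Solve y0 + vy*t - 0.5*G*t^2 = 0 for the positive root t.
    disc = vy * vy + 2 * G * y0
    t_impact = (vy + math.sqrt(disc)) / G
    return x0 + vx * t_impact, t_impact

def should_intercept(impact_x, defended_zone=(4000.0, 6000.0)):
    # Fire an interceptor only if the shot threatens the defended area;
    # projectiles predicted to fall elsewhere are ignored.
    lo, hi = defended_zone
    return lo <= impact_x <= hi

# Hypothetical radar track: 2 km downrange, 1.5 km up, flying at
# 300 m/s horizontally and descending at 50 m/s.
impact_x, t = predict_impact(x0=2000.0, y0=1500.0, vx=300.0, vy=-50.0)
print(f"impact at x = {impact_x:.0f} m in {t:.1f} s; "
      f"intercept: {should_intercept(impact_x)}")
```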

 

Brimstone is an advanced air-to-ground radar-guided missile developed by MBDA for the UK Royal Air Force (RAF). The missile can effectively strike fixed and moving ground-based targets with high accuracy. Brimstone works on the fire-and-forget principle and can be used against massed enemy armour. Laser guidance was added to the missile for more precise target designation after heavy collateral damage caused problems during the Afghan war.

 

Dedicated to the Suppression of Enemy Air Defence (SEAD) mission, the Harpy is an operational loitering attack weapon; the current version is deployed as a fire-and-forget weapon. South Korean forces have installed a team of robots with heat and motion detectors that can identify potential targets more than 2 miles away, although the SGR-1 needs a human operator to give it the go-ahead to fire. Norway has developed the Joint Strike Missile (JSM), a modern weapon that can be carried externally or internally in the bomb bay of the F-35. Most of these weapons need only one-time programming by a human operator, which enables them to automatically select, engage and destroy their targets without any human in the loop.

 

Debate over LAWS

The debate regarding LAWS usually revolves around several threads: ethical and legal issues raised by non-governmental organisations (NGOs); political debate pertaining to the reduction of casualties during an armed conflict; military debate about maintaining superiority on the battlefield by countering the enemy’s autonomous weapons; debate surrounding technological limitations, as AI is not yet fully developed; and the issue of cost.

 

Ethical and Legal considerations of fully autonomous weapons

Article 36 of the First Additional Protocol to the Geneva Conventions of August 12, 1949 rests on two questions. First, are these weapons indiscriminate? Second, do they inflict unnecessary suffering? The first refers to the requirement that, during an armed conflict, the general population and military targets must be distinguished. The second concerns the limit an army may not cross in pursuing its aims: civilian loss should not exceed military gains. A further provision of the First Protocol, known as the Martens Clause, requires weapon systems and their use to meet the dictates of public conscience.

The development of fully autonomous weapons and platforms raises many fundamental ethical questions:

  • Can the decision over life and death be left to a machine, reducing humans to mere objects?
  • Can fully autonomous weapons function in an ethically “correct” manner?
  • Are machines capable of acting in accordance with international humanitarian law (IHL) or international human rights law (IHRL)?
  • Are these weapon systems able to differentiate between combatants on the one side and defenceless and/or uninvolved persons on the other?
  • Can such systems evaluate the proportionality of attacks?
  • Who can be held accountable: the programmer, the commander, or the manufacturer of the robot?
  • Would fully autonomous weapons lower the threshold of war? The extensive use of drone strikes by the United States in Afghanistan and Pakistan is a case in point.
  • Could autonomous weapons be used to oppress opponents without fear of protest, conscientious objection, or insurgency within state security forces?
  • What would be the consequences if fully autonomous weapon systems fell into the hands of unauthorized persons? LAWS could be acquired and used by non-state armed groups, including terrorist entities and rogue states. Christof Heyns, the UN’s former Special Rapporteur on Lethal Autonomous Robotics (LARs), is of the opinion that LARs can also be hacked like other computer systems, in which case it would be impossible to determine who should be held responsible for the damage, as it would not be clear who was controlling the machine.

 

In April 2013, a group of non-governmental organizations launched the Campaign to Stop Killer Robots in London. The campaign seeks to establish a coordinated civil-society call for a ban on the development of fully autonomous weapon systems and to address the challenges these weapons pose to civilians and to international law. The campaign builds on previous experience from efforts to ban landmines, cluster munitions, and blinding lasers.

 

Noel Sharkey of the International Committee for Robot Arms Control explained the group’s intentions: “The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world. Rather we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”

 

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent. He argues that “any sufficiently advanced benevolence may be indistinguishable from malevolence.” Humans should not assume machines or robots would treat us favorably, because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of mankind, and would be extremely difficult to stop.

 

“We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it,” Elon Musk says. In the longer term, the technology entrepreneur has warned that AI is “our biggest existential threat”. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.

 

