Robotics has made significant progress since the first industrial robot was deployed more than 50 years ago. Robots are nowadays used extensively in manufacturing, services, healthcare and medicine, defense, and space. They have improved human safety by taking over dangerous tasks, raised productivity, and improved the quality of life. In the future, robots are poised to transform human society in the same way that computers and the internet did in the past.
Defense forces are primarily interested in mobile robots or unmanned vehicles in the air, land, and sea domains. These unmanned systems have transformed warfare, as evidenced by the thousands deployed in Iraq, Afghanistan, and Pakistan, where they have supported armed forces in targeting, disarming roadside bombs, clearing land mines, and surveillance and intelligence collection. Unmanned systems have also proved very effective in responding rapidly to catastrophic and unexpected incidents, including natural or civil disasters such as fires, floods, and earthquakes.
Fully autonomous weapons are distinct from remote-controlled weapon systems such as drones: the latter are piloted by a human remotely, while fully autonomous weapons would have no human guidance after being programmed. Militaries are moving toward the development of fully autonomous weapons and platforms, a shift that can be considered a “revolution in modern warfare.” Completely autonomous robots are able to operate by themselves without any human input; they are often able to learn on their own and to modify their behavior accordingly.
Lethal autonomous weapon systems (LAWS)
Lethal autonomous weapon systems (LAWS) are a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon to engage and destroy it without manual human control. They are also known as robotic weapons or killer robots. Several states support and fund research and development of fully autonomous weapons, among them China, Germany, India, Israel, the Republic of Korea, Russia, and the United Kingdom.
Although there is no internationally agreed definition of lethal autonomous weapon systems, U.S. Department of Defense Directive (DODD) 3000.09 defines LAWS as a class of weapon systems capable of both independently identifying a target and employing an onboard weapon to engage and destroy it without manual human control.
This concept of autonomy is also known as “human out of the loop” or “full autonomy.” The directive contrasts LAWS with human-supervised, or “human on the loop,” autonomous weapon systems, in which operators have the ability to monitor and halt a weapon’s target engagement. Another category is semi-autonomous, or “human in the loop,” weapon systems that “only engage individual targets or specific target groups that have been selected by a human operator.” Some analysts have noted that LAWS could additionally “allow weapons to strike military objectives more accurately and with less risk of collateral damage” or civilian casualties.
The directive does not cover “autonomous or semiautonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; [and] unexploded explosive ordnance,” nor subject them to its guidelines.
The field of robotics is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you and building a map of the environment), and motion or path planning (figuring out how to get from one point in space to another, which may involve compliant motion, where the robot moves while maintaining physical contact with an object).
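As a purely illustrative sketch of the path-planning sub-problem described above, the following Python snippet finds a route across a small occupancy grid using breadth-first search. The grid, start, and goal are invented for the example; real robotic planners use richer maps and algorithms such as A* or sampling-based methods.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid: list of strings, '.' = free cell, '#' = obstacle.
    start, goal: (row, col) tuples.
    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links backwards to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == '.' and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

# Toy map: the planner must route around the obstacle wall.
grid = ["....#....",
        "....#....",
        "....#....",
        ".........",
        "....#...."]
print(plan_path(grid, (0, 0), (0, 8)))
```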
Lethal AWS may create a paradigm shift in how we wage war. This revolution will be one of software: with advances in technologies such as facial recognition and computer vision, autonomous navigation in congested environments, and cooperative autonomy or swarming, these systems could be fielded on a variety of platforms, from tanks and ships to small commercial drones.
LAWS enabled by AI
Artificial intelligence is a branch of computer science dealing with the simulation of intelligent behavior in computers. The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. These include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.
AI is further divided into two categories: narrow AI and general AI. Narrow AI systems can perform only the specific task that they were trained to perform, while general AI systems would be capable of performing a broad range of tasks, including those for which they were not specifically trained. General AI systems do not yet exist. Machine learning is a subset of AI. That is, all machine learning counts as AI, but not all AI counts as machine learning. For example, symbolic logic – rules engines, expert systems and knowledge graphs – could all be described as AI, and none of them are machine learning.
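The distinction between symbolic AI and machine learning can be made concrete with a toy sketch (all thresholds, data, and names below are invented for illustration): a hand-written rules engine counts as AI but not machine learning, whereas even a trivial threshold estimated from labelled examples is machine learning, because its behaviour comes from data rather than from rules an engineer wrote down.

```python
# Symbolic AI: behaviour is fixed by a hand-written rule.
def rule_based_classifier(signal_strength):
    # An engineer chose this threshold; nothing is learned from data.
    return "target" if signal_strength > 0.7 else "clutter"

# Machine learning: the same kind of decision boundary is estimated from labelled examples.
def learn_threshold(examples):
    """examples: list of (signal_strength, label) pairs, label in {'target', 'clutter'}.
    Picks the threshold that classifies the training data best (a 1-D 'decision stump')."""
    candidates = sorted(s for s, _ in examples)

    def accuracy(t):
        return sum((s > t) == (label == "target") for s, label in examples) / len(examples)

    return max(candidates, key=accuracy)

data = [(0.2, "clutter"), (0.4, "clutter"), (0.55, "target"), (0.8, "target"), (0.9, "target")]
learned_t = learn_threshold(data)
# The hand-written rule and the learned rule can disagree on the same input.
print(rule_based_classifier(0.6), learned_t, 0.6 > learned_t)
```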
Narrow AI is currently being incorporated into a number of military applications by both the United States and its competitors. Such applications include but are not limited to intelligence, surveillance, and reconnaissance; logistics; cyber operations; command and control; and semiautonomous and autonomous vehicles. These technologies are intended in part to augment or replace human operators, freeing them to perform more complex and cognitively demanding work. In addition, AI-enabled systems could (1) react significantly faster than systems that rely on operator input; (2) cope with an exponential increase in the amount of data available for analysis; and (3) enable new concepts of operations, such as swarming (i.e., cooperative behavior in which unmanned vehicles autonomously coordinate to achieve a task) that could confer a warfighting advantage by overwhelming adversary defensive systems, according to a U.S. congressional report.
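To make the swarming concept concrete, the minimal sketch below simulates a handful of notional agents that coordinate using only simple local rules, moving toward the group's centroid while keeping a minimum separation. The parameters and scenario are invented for illustration and do not describe any real system.

```python
import random

def step(positions, cohesion=0.05, separation=1.0):
    """One update of a minimal 2-D flocking rule.

    Each agent moves a little toward the swarm centroid (cohesion) and is pushed
    away from any neighbour closer than `separation` (collision avoidance).
    """
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    new_positions = []
    for i, (x, y) in enumerate(positions):
        dx, dy = (cx - x) * cohesion, (cy - y) * cohesion
        for j, (ox, oy) in enumerate(positions):
            if i != j and abs(x - ox) < separation and abs(y - oy) < separation:
                dx += (x - ox) * 0.1   # push away from a too-close neighbour
                dy += (y - oy) * 0.1
        new_positions.append((x + dx, y + dy))
    return new_positions

random.seed(0)
swarm = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(8)]
for _ in range(50):
    swarm = step(swarm)
# After 50 steps the agents have converged into a loose cluster.
print([(round(x, 1), round(y, 1)) for x, y in swarm])
```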
Artificial intelligence is also enabling the development of fully autonomous weapons that can select and fire upon targets on their own, without any human intervention. Fully autonomous weapons could assess the situational context on a battlefield and decide on the required attack according to the processed information. The use of artificial intelligence in armed conflict poses a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law.
AI-enabled machines could also act as lethal force multipliers in the hands of terrorists. The FBI, in a report, has hinted at the dangers of fully autonomous cars, such as those being developed by Google and a number of automotive manufacturers, becoming “more of a potential lethal weapon” by allowing criminals to pack cars full of explosives and send them to a target.
Prof Stephen Hawking, one of Britain’s pre-eminent scientists, said that the invention of artificial intelligence could be the biggest disaster in humanity’s history, warning that if they are not properly managed, thinking machines could spell the end of civilisation. Professor Hawking, a prominent critic of unchecked advances in AI, said that the technology promised to bring great benefits, such as eradicating disease and poverty, but “will also bring dangers, like powerful autonomous weapons or new ways for the few to oppress the many”.
The primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans. “It would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” His comments come amid breakthroughs in artificial intelligence that are being achieved faster than many predicted. Google’s DeepMind subsidiary defeated the world champion of the ancient board game Go earlier this year. Microsoft has said it had achieved voice recognition on a par with humans.
In the Robot Baby Project at Vrije Universiteit Amsterdam, scientists have developed a way for robots to have ‘sex’ and pass on their ‘DNA’ to offspring. This allows them to ‘develop their bodies through evolution,’ producing successive generations with more advanced physical and behavioural capabilities. As the process continues, the researchers say, the robots can become more suitable for use in unknown environments that could be hazardous to humans, such as deep-sea mines or even other planets. Self-evolving, self-replicating robots could be a boon to militaries and terrorists: a few robots deployed on a battlefield or in cities could multiply, evolve, and learn as they carry out their mission to destroy or damage military targets. They could also rapidly learn to defeat any weapons or countermeasures employed against them, making them extremely difficult to stop.
Unlike any weapon seen before, they could also allow for the selective targeting of a particular group based on parameters such as age, gender, ethnicity, or political leaning (if such information were available). Because lethal AWS would greatly decrease personnel costs and could be obtained cheaply (as in the case of small drones), small groups of people could potentially inflict disproportionate harm, making lethal AWS a new class of weapon of mass destruction.
Employment of LAWS
The oldest automatically triggered lethal weapons are land mines, used since at least the 1600s, and naval mines, used since at least the 1700s. Current examples of lethal autonomous weapons include automated “hardkill” active protection systems, such as radar-guided guns that defend ships, and vehicle protection systems like the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. Israel Aerospace Industries’ Harop drones are designed to home in on the radio emissions of enemy air-defense systems and destroy them by crashing into them.
Using pictures out of Ukraine showing a crumpled metallic airframe, open-source analysts of the conflict say they have identified images of a new sort of Russian-made drone, one that the manufacturer says can select and strike targets either through inputted coordinates or autonomously. When soldiers upload an image to the Kalashnikov ZALA Aero KUB-BLA loitering munition, the system is capable of “real-time recognition and classification of detected objects” using artificial intelligence (AI), according to the Netherlands-based organization PAX for Peace (citing Jane’s International Defence Review). In other words, analysts appear to have spotted a killer robot on the battlefield.
Likewise, whether this is Russia’s first use of AI-based autonomous weapons in conflict is also unclear: some published analyses suggest the remains of a mystery drone found in Syria in 2019 were from a KUB-BLA (though, again, the drone may not have used its autonomous function).
The KUB-BLA is not the first AI-based autonomous weapon to be used in combat. In 2020, during the conflict in Libya, a United Nations report said the Turkish Kargu-2 “hunted down and remotely engaged” logistics convoys and retreating forces. The Turkish government denied the Kargu-2 was used autonomously (and, again, it’s quite tough to know either way), but the Turkish Undersecretary for Defense and Industry acknowledged Turkey can field that capability.
Although weapons with full lethal autonomy have not yet been deployed, precursors with various degrees of autonomy and lethality are currently in use. On the southern edge of the Korean Demilitarized Zone, South Korea has deployed the Super aEgis II, a sentry gun that can detect, target, and destroy enemy threats. It was developed so it could operate on its own, although so far the system reportedly cannot fire without human intervention.
Fully autonomous weapons, also known as ‘killer robots,’ are quickly moving from the realm of science fiction toward reality. Robotic systems with varying degrees of autonomy and lethality have already been deployed by the United States, the United Kingdom, Israel, and the Republic of Korea.
A satellite-controlled machine-gun with “artificial intelligence” was used to kill Iran’s top nuclear scientist, a Revolutionary Guards commander says. Mohsen Fakhrizadeh was shot dead in a convoy outside Tehran on 27 November 2020. Brig-Gen Ali Fadavi told local media that the weapon, mounted in a pick-up truck, was able to fire at Fakhrizadeh without hitting his wife beside him. The claim could not be verified. Gen Fadavi, the deputy commander of the Revolutionary Guards, told a ceremony in Tehran that a machine-gun mounted on the Nissan pick-up was “equipped with an intelligent satellite system which zoomed in on martyr Fakhrizadeh” and “was using artificial intelligence”.
The machine-gun “focused only on martyr Fakhrizadeh’s face in a way that his wife, despite being only 25cm [10 inches] away, was not shot”, he said. The general reiterated that no human assailants had been present at the scene, saying that “in total 13 bullets were fired and all of them were shot from the [weapon] in the Nissan”. Four bullets struck Fakhrizadeh’s head of security “as he threw himself” on the scientist, he added. Iran has blamed Israel and an exiled opposition group for the attack. Israel has neither confirmed nor denied responsibility.
US Defence Secretary Mark Esper revealed recently that China was selling drones programmed to decide for themselves who lives and who dies, without any form of human ethical oversight. A state-controlled Chinese defence company is negotiating the sale of its Blowfish A3 armed helicopter drone, its control equipment, and fully autonomous software to nations in the Middle East.
“As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East, as it prepares to export its next-generation stealth UAVs when those come online,” Esper told the National Security Commission on Artificial Intelligence conference. “In addition, Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal, targeted strikes.”
U.S. policy does not prohibit the development or employment of LAWS. Some senior military and defense leaders have stated that the United States may be compelled to develop LAWS in the future if potential U.S. adversaries choose to do so. At the same time, a growing number of states and nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.
In November 2017, academics, non-governmental organisations, and representatives of over 80 governments gathered at the Palais des Nations for a decisive meeting on the future of lethal autonomous weapon systems (LAWS). Organised under the Convention on Certain Conventional Weapons (CCW), the meeting sought to formalize efforts the following year to deal with the challenges raised by weapon systems that would select and attack targets without meaningful human control. It came as experts warned that ‘time is running out’ for controls on the technology.
“This isn’t a surprise,” says Australian Strategic Policy Institute senior analyst Dr Malcolm Davis. “Authoritarian adversaries do not need to conduct the same domestic debate on lethal autonomous weapons as western liberal democracies, because they are not answerable to their people. There is no ‘ban killer robots’ movement in China or Russia. The regimes are simply developing and deploying the weapons, and, in this case, exporting them to similar regimes in the Middle East.”
Some believe that lethal AWS could make war more humane and reduce civilian casualties by being more precise and taking more soldiers off the battlefield. Others worry about accidental escalation and global instability, and the risks of these weapons falling into the hands of non-state actors. Over 4,500 AI and robotics researchers, 250 organizations, 30 nations, and the Secretary-General of the UN have called for a legally binding treaty banning lethal AWS. They have been met with resistance from countries developing lethal AWS, which fear the loss of strategic superiority.
Britain’s Taranis, an experimental prototype for future stealth drones, has an autonomous mode where it flies and carries out missions on its own, including searching for targets. The US Office of Naval Research has been testing the Sea Hunter, the Navy’s next-generation submarine drone that can operate autonomously or by remote control. Currently oriented toward detecting mines and ultraquiet diesel-electric submarines, the drone is expected to be outfitted with weapons at some point.
The automated systems available today can defend designated assets through active protection systems such as the US Phalanx Close-In Weapon System (CIWS), the Israeli Trophy and Iron Dome, the Russian Arena, and the German AMAP Active Defence System (ADS).
The Iron Dome missile defence system can identify and destroy projectiles before they land in Israeli territory and is considered one of the most effective anti-missile systems in the world. Russia’s new active protection system, Arena-M, for T-72 and T-90 tanks is capable of protecting armoured vehicles from US Tube-launched, Optically-tracked, Wire-guided (TOW) missiles.
Brimstone is an advanced air-to-ground radar-guided missile developed by MBDA for the UK Royal Air Force (RAF). The missile can effectively strike fixed and moving ground-based targets with high accuracy. Brimstone works on the fire-and-forget principle and can be used against massed enemy armour. Laser guidance was added to the missile to allow operators to designate specific targets, after problems with heavy collateral damage during the Afghan war.
South Korean forces have installed a team of robots with heat and motion detectors that can identify potential targets more than 2 miles away. The SGR-1, however, needs a human operator to give it the go-ahead to fire. Norway has developed the modern Joint Strike Missile (JSM), which can be carried externally or internally in the bomb bay of the F-35. Most of these weapons need only one-time programming by a human operator, which enables them to automatically select, engage, and destroy their targets without any human in the loop.
According to U.S. Secretary of Defense Mark Esper, some Chinese weapons manufacturers, such as Ziyan, have advertised their weapons as having the ability to select and engage targets autonomously. It is unclear whether these claims are accurate; however, China has no prohibition on the development of LAWS, which it has characterized as weapons that exhibit—at a minimum—five attributes:
1. Lethality: sufficient payload (charge) and means to be lethal.
2. Autonomy: absence of human intervention and control during the entire process of executing a task.
3. Impossibility of termination: once started, there is no way to terminate the device.
4. Indiscriminate effect: the device will execute the task of killing and maiming regardless of conditions, scenarios, and targets.
5. Evolution: through interaction with the environment, the device can learn autonomously and expand its functions and capabilities in a way exceeding human expectations.
Russia has noted that LAWS could “ensure the increased accuracy of weapon guidance on military targets, while contributing to lower rate of unintentional strikes against civilians and civilian targets.” Although Russia has not publicly stated that it is developing LAWS, Russian weapons manufacturer Kalashnikov has reportedly built a combat module for unmanned ground vehicles capable of autonomous target identification and, potentially, target engagement.
Russia’s ZALA Aero Group, the unmanned aircraft systems (UAS) division of Kalashnikov, unveiled a “kamikaze” drone — the KUB-BLA — at the International Defense Exhibition and Conference (IDEX) in Abu Dhabi. The small UAS is designed to have a maximum speed of about 80 miles per hour, an endurance of 30 minutes, and an explosive payload of 7 pounds against “remote ground targets.”
Loitering munitions can have a dwell time of up to six hours and are equipped with sensors that allow the drones to detect and attack targets independently. Early 1980s-era examples include Israel Aircraft Industries’ Harpy suppression of enemy air defenses (SEAD) drone and the U.S. Air Force AGM-136 “Tacit Rainbow” SEAD system by Northrop Grumman, a $4 billion development program canceled in 1991. Harpy remains an operational loitering attack weapon; the current version is deployed as a fire-and-forget weapon.
“Especially significant are the developments related to loitering munitions, which are able to operate for longer amounts of time and over larger areas in order to select and attack targets,” according to last month’s PAX for Peace report. “Major efforts related to swarm technologies multiply the potential of such weapons. These developments raise serious questions of how human control is guaranteed over these weapon systems.”
The Turkish state-owned firm STM is “improving the capabilities of its KARGU loitering munitions through using AI, including facial recognition,” the report said. “According to the company, the KARGU can ‘autonomously fire-and-forget through the entry of target coordinates.’ It has been suggested that these systems will be deployed on the border with Syria.” A September article in New Scientist magazine reported that KARGU positions Turkey “to become the first nation to use drones able to find, track and kill people without human intervention.” The Turkish newspaper Hürriyet has said that some 30 STM “kamikaze” drones will be deployed early next year to the Turkish-Syrian border region.
The United States is not known to be developing LAWS currently, nor does it currently have LAWS in its inventory; however, there is no prohibition on their development, fielding, or employment, according to a congressional report. DODD 3000.09 establishes department guidelines for the future development and fielding of LAWS to ensure that they comply with “the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.” This directive includes a requirement that LAWS be designed to “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” “Human judgment over the use of force” does not require manual human “control” of the weapon system, as is often reported, but instead requires broader human involvement in decisions about how, when, where, and why the weapon will be employed.
DODD 3000.09 requires that the software and hardware of all systems, including lethal autonomous weapons, be tested and evaluated to ensure they:
- function as anticipated in realistic operational environments against adaptive adversaries;
- complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement; and
- are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.
Any changes to the system’s operating state, for example due to machine learning, would require the system to go through testing and evaluation again to ensure that it has retained its safety features and ability to operate as intended.
Debate over LAWS
The debate regarding LAWS usually revolves around several strands: ethical and legal issues raised by non-governmental organisations (NGOs); political debate about the reduction of casualties during armed conflict; military debate about maintaining superiority on the battlefield by countering the enemy’s autonomous weapons; debate surrounding technological limitations, as AI is not yet fully developed; and the issue of cost.
“When we look at autonomous weapons, our concern is about the degree of human control over their targeting and attack functions. Those are the functions that we think should always be under human control, and that’s what the debate is coming down to,” says Mary Wareham, global coordinator of the Campaign to Stop Killer Robots.
The PAX for Peace report on LAWS listed 30 “high concern” companies that “work on increasingly autonomous weapon systems and do not appear to have a policy in place [to ensure meaningful human control over such weapons] and did not respond in a meaningful way to our survey.” These companies include Lockheed Martin, Boeing, and Raytheon in the United States; China’s AVIC and CASC; Russia’s Rostec; Israel’s IAI, Elbit Systems, and Rafael; and Turkey’s STM, according to the report.
The U.S. government does not currently support a ban on LAWS and has addressed ethical concerns about the systems in a March 2018 white paper, “Humanitarian Benefits of Emerging Technologies in the Area of Lethal Autonomous Weapons.” The paper notes that “automated target identification, tracking, selection, and engagement functions can allow weapons to strike military objectives more accurately and with less risk of collateral damage” or civilian casualties.
Ethical and Legal considerations of fully autonomous weapons
Article 36 of the First Additional Protocol to the Geneva Conventions of August 12, 1949 rests on two rules. First, are these weapons indiscriminate? Second, do they inflict unnecessary suffering? The first refers to the requirement that, during an armed conflict, the civilian population and military targets must be distinguished. The second concerns the limit an army may not cross in pursuing its aims: civilian loss should not exceed the military gain. A further provision of the First Protocol, known as the Martens Clause, requires weapon systems and their use to meet the dictates of public conscience.
The development of fully autonomous weapons and platforms raises many fundamental ethical questions and questions of principle:
- Can the decision over life and death be left to a machine, reducing humans to mere objects?
- Can fully autonomous weapons function in an ethically “correct” manner?
- Are machines capable of acting in accordance with international humanitarian law (IHL) or international human rights law (IHRL)?
- Are these weapon systems able to differentiate between combatants on the one side and defenceless and/or uninvolved persons on the other side?
- Can such systems evaluate the proportionality of attacks?
- Who can be held accountable: the programmer, the commander, or the manufacturer of the robot?
- Would fully autonomous weapons lower the threshold of war? The extensive use of drone strikes by the United States in Afghanistan and Pakistan is a case in point.
- Could autonomous weapons be used to oppress opponents without fear of protest, conscientious objection, or insurgency within state security forces?
- What would be the consequences if fully autonomous weapon systems fell into the hands of unauthorized persons? LAWS could be acquired and used by non-state armed groups, including terrorist entities and rogue states. Christof Heyns, the UN’s former Special Rapporteur on Lethal Autonomous Robotics (LARs), is of the opinion that LARs can also be hacked like other computer systems; in that case, it would be impossible to determine who should be held responsible for the damage, as it would not be clear who was controlling the machine.
Others, including approximately 25 countries and 100 nongovernmental organizations, have called for a preemptive ban on LAWS due to ethical concerns such as a perceived lack of accountability for use and a perceived inability to comply with the proportionality and distinction requirements of the laws of armed conflict. Some analysts have also raised concerns about the potential operational risks posed by lethal autonomous weapons. These risks could arise from “hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors.” Although such risks could be present in automated systems, they could be heightened in autonomous systems, in which the human operator would be unable to physically intervene to terminate engagements, potentially resulting in wider-scale or more numerous instances of fratricide, civilian casualties, or other unintended consequences.
In April 2013, a group of non-governmental organizations launched the Campaign to Stop Killer Robots in London. The campaign seeks to establish a coordinated civil society call for a ban on the development of fully autonomous weapon systems and to address the challenges to civilians and the international law posed by these weapons. The campaign builds on previous experiences from efforts to ban landmines, cluster munitions, and blinding lasers. Noel Sharkey of the International Committee for Robot Arms Control explained the group’s intentions: “The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world. Rather we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”
Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent. He argues that “any sufficiently advanced benevolence may be indistinguishable from malevolence.” Humans should not assume machines or robots would treat us favorably, because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of mankind, and would be extremely difficult to stop.
“We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it,” Elon Musk says. In the longer term, the technology entrepreneur has warned that AI is “our biggest existential threat”. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.
Ethical AI
The Defense Innovation Board is working with ethicists within the Defense Department and is in the process of bringing together more relevant experts. The work began in July when Defense Secretary James Mattis asked the board to begin work on a set of AI principles the department can follow as it develops this nascent technology and begins to deploy it in the Pentagon and on the battlefield. The promise of AI is huge but so are the potential pitfalls if the technology is misused, such as introducing implicit biases during the development phase, as Defense and academic experts noted during the meeting. “The stakes are high in the field of medicine and in banking. But nowhere are they higher than in national security,” Marcuse said. “This process will involve law, policy, strategy, doctrine and practice… We are taking care to include not only experts who often work with the department, but AI skeptics, department critics and leading AI engineers who have never worked with DOD before.”
A military advisory committee has endorsed a list of principles for the use of artificial intelligence by the Department of Defense, contributing to an ongoing discussion on the ethical use of AI and AI-enabled technology for combat and noncombat purposes. “We do need to provide clarity to people who will use these systems, and we need to provide clarity to the public so they understand how we want the department to use AI in the world as we move forward,” Michael McQuade, vice president for research at Carnegie Mellon University, said during an Oct. 2019 discussion on AI ethics.
For the purpose of the report, AI was defined as “a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task,” which the DIB said is comparable to how the department has thought about AI over the last four decades.
Here are the five principles endorsed by the board:
1. Responsible: Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use and outcomes of AI systems.
2. Equitable: The DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or noncombat AI systems that would inadvertently cause harm to individuals.
3. Traceable: The DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes and operational methods of its AI systems. That includes transparent methodologies that can stand up to audits as well as data sources, concrete design procedures and documentation.
4. Reliable: AI systems should have an explicit, well-defined domain of use, and the safety, security and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
5. Governable: The DoD’s AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption. Human-executed or automatic means to disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior should exist.
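As a purely illustrative reading of the “governable” principle, the sketch below wraps a notional autonomous process in a supervisory check that disengages it when a monitored value leaves an approved envelope. All names, thresholds, and the monitored quantity are hypothetical and invented for this example.

```python
class GovernableController:
    """Toy supervisor: runs a system step-by-step and disengages it
    if a monitored value leaves the approved operating envelope."""

    def __init__(self, system_step, max_allowed):
        self.system_step = system_step      # callable producing the next monitored value
        self.max_allowed = max_allowed      # boundary of the approved envelope
        self.engaged = True

    def run(self, steps):
        history = []
        for _ in range(steps):
            if not self.engaged:
                break
            value = self.system_step()
            history.append(value)
            if value > self.max_allowed:    # unintended escalation detected
                self.engaged = False        # automatic disengagement ("kill switch")
        return history, self.engaged

# Notional system whose output drifts upward over time.
counter = iter(range(100))
ctrl = GovernableController(lambda: next(counter) * 1.5, max_allowed=10.0)
print(ctrl.run(steps=20))   # stops as soon as the monitored value exceeds the envelope
```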
References and Resources also include:
- http://www.reachingcriticalwill.org/resources/fact-sheets/critical-issues/7972-fully-autonomous-weapons
- http://spectrum.ieee.org/automaton/robotics/military-robots/autonomous-weapons-could-be-developed-for-use-within-years
- https://www.csmonitor.com/USA/Military/2017/0831/Why-killer-robots-are-becoming-a-real-threat-and-an-ethics-test
- http://www.ipripak.org/wp-content/uploads/2018/01/art2gbj22.pdf
- https://www.news.com.au/technology/innovation/military/impossible-to-defend-china-goes-rogue-with-new-weapon/news-story/85a295cbb4928a2afecfe2ff91f86650
- https://assets.documentcloud.org/documents/6999083/Emerging-MIlitary-Technologies.pdf
- https://thebulletin.org/2022/03/russia-may-have-used-a-killer-robot-in-ukraine-now-what/