Can the laws of war constrain robot warriors? Is international humanitarian law adaptable to the use of weapons that possess artificial intelligence? To what extent can such weapon systems determine who is, and who is not, a combatant? To what extent must humans control the decision to kill the enemy?
These questions and others fostered a fascinating discussion at “Legal Implications of Autonomous Weapon Systems,” a workshop at the Naval War College in Newport, Rhode Island, this past Thursday and Friday. The four dozen or so of us in attendance were drawn from the armed forces of the United States, Australia, Britain, Canada, and Israel, from the International Committee of the Red Cross, and from a global array of academic institutions.
As one who reserves just a couple days for the topic in my Laws of War course, I came to the workshop with more questions than answers about the actual and potential uses in armed conflict of robots, the shorthand term I’ll use here for “autonomous weapons systems.” The military, characteristically, prefers an acronym: AWS.
The actual use of such weapons already is significant. Bombs fitted with JDAM precision-guidance kits steer themselves to their targets, while a WALL·E-looking machine called SWORDS has, as the U.S. Department of Defense wrote in 2004, “march[ed] into battle” alongside troops.
In fact, such machines tend not to be used in a fully independent manner (though with a little reprogramming, some could be). They are, we were told, semi-autonomous – humans are kept “in” or “on” the loop that leads to the choice of target and other decisions.
This mention of human supervision, like the WALL·E-on-the-march metaphor above, pointed to a pivotal workshop topic:
► Is it appropriate, as a matter of law or of ethics, to indulge in the human tendency to anthropomorphize these machines?
Apparently, some lab robots can recognize – or at least can mimic the act of recognizing – themselves in a mirror. Does this mean they are, or soon will be, sufficiently human-like to conduct operations wholly without oversight by actual humans? Might human-like robots evolve an ability to refuse programmed orders – orders that limited action to the boundaries of international humanitarian law? The answers to these questions, like many at the workshop, seemed to be “perhaps yes, perhaps no.”
At one end of the spectrum, this uncertainty has spurred a call for an outright ban. Emblematic is the headline of a notice about the November 2012 release of the Human Rights Watch report, Losing Humanity:
‘Ban ‘Killer Robots’ Before It’s Too Late: Fully Autonomous Weapons Would Increase Danger to Civilians’
At the other end of the spectrum, some would prefer to let the technology develop before the onset of any new legal regulation.
Many seem to fall somewhere in between, acknowledging challenges such as these:
► Does the precautions requirement of Article 57 of Additional Protocol I (1977) to the four Geneva Conventions (1949) preclude the use of a fully autonomous weapon?
► Would a war crime committed by a robot be subject to sanction by global justice mechanisms like the International Criminal Court, and if not, what effective sanctions and deterrents would remain?
Persons falling in the vast middle of the regulatory spectrum harbored concerns about such questions, yet seemed to lean toward the view that if due care is taken, international humanitarian law can – and should – be applied. Documents discussed in this vein included the:
► U.S. Department of Defense Directive 3000.09, ¶ 4(a) (November 21, 2012), which states as “DoD policy” the following:
‘Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.’
► April 9, 2013 report to the U.N. Human Rights Council by University of Pretoria Law Professor Christof Heyns, who has served since 2010 as the Special Rapporteur on extrajudicial, summary or arbitrary executions. At ¶ 108 of his report, Heyns described the 2012 Defense Directive as “imposing a form of moratorium” with respect to what he termed “lethal autonomous robotics,” or LARs. Heyns’ 2013 U.N. report (¶ 35) favored a broader scope for delay:
‘The present report … calls on States to impose national moratoria on certain activities related to LARs.’
A reprise of such issues likely will occur at the Meeting of Experts on Lethal Autonomous Weapons Systems set for May 13 to 16 in Geneva under the auspices of the 1980 Convention on Certain Conventional Weapons. Named in full the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2001, this treaty has 117 states parties, including the United States.
The Naval War College International Law Department workshop’s vital and timely discussion exposed many avenues for study – study sooner rather than later, so that the legal regulatory framework may be settled before fully autonomous robots are widely deployed.