LEIDEN – Children, or kinderen in Dutch, have been the watchword these days in this city, where Leiden University’s been hosting a whirlwind of activities to mark the 25th anniversary of the Convention on the Rights of the Child. A film festival, moot court competition,* art exhibit, and commemoration by Princess Beatrix were just some of the events.

I was honored to take part in “25 years CRC,” a 2-day conference that brought to Leiden hundreds of children’s rights experts, from Auckland to Zagreb and many places in between. Plenary presentations included Corinne Dettmeijer-Vermeulen’s fascinating comparison of U.S. and Dutch laws against online sexual exploitation of children. Then scholars and practitioners met in nearly a dozen parallel sessions, where they tackled an array of topics.

The session I chaired featured: Claire Achmad’s outline of her Ph.D. dissertation, a children’s rights approach to the regulation of international commercial surrogacy; Mies Grijn’s anthropological account of child marriage practices in a village in Java, Indonesia; and Emily Waller’s discussion of children, sexual violence-related stigmatization, and reparations. A common thread in these talks was the difficulty of drafting, adapting, and enforcing laws meant to be applied in societies marked by change and cultural variation.

In a session on children and armed conflict, Olga Jurasz explored the treatment of children in cases before the International Criminal Court. Aurélie Roche-Mair followed, with an emphasis on the interrelation between the Children’s Convention and the Rome Statute of the ICC. Concluding was Gloria Atiba-Davies, head of the Gender and Children Unit in the ICC Office of the Prosecutor. Together, their presentations underscored the legal and practical challenges to achieving the goal of ending wartime crimes against children – a goal to which ICC Prosecutor Fatou Bensouda recommitted her office, in her October speech on “Children & International Criminal Justice,” and in a statement yesterday marking the Convention’s anniversary. It’s a goal to be pursued as her office continues consultations with experts in the course of developing its Policy Paper on Children.

* Congratulations to the students of the Law Society of Ireland for winning 1st place at yesterday’s finals. And kudos to Leiden Professors Ton Liefaard and Julia Sloth-Nielsen for the vision and hard work that produced this amazing week.

Can the laws of war constrain robot warriors? Is international humanitarian law adaptable to the use of weapons that possess artificial intelligence? To what extent can such weapon systems determine who is, and who is not, a combatant? To what extent must humans control the decision to kill the enemy?

These questions and others fostered a fascinating discussion at “Legal Implications of Autonomous Weapon Systems,” a workshop at the Naval War College in Newport, Rhode Island, this past Thursday and Friday. We four dozen or so attendees were drawn from the armed forces of the United States, Australia, Britain, Canada, and Israel, from the International Committee of the Red Cross, and from a global array of academic institutions.

As one who reserves just a couple days for the topic in my Laws of War course, I came to the workshop with more questions than answers about the actual and potential uses in armed conflict of robots, the shorthand term I’ll use here for “autonomous weapons systems.” The military, characteristically, prefers an acronym: AWS.

The actual use of such weapons already is significant. Bombs fitted with JDAM guidance kits – “smart” munitions – steer themselves to a target, while a WALL·E-looking machine called SWORDS has, as the U.S. Department of Defense wrote in 2004, “march[ed] into battle” alongside troops.

In fact, such machines tend not to be used in a fully independent manner (though with a little reprogramming, some could be). They are, we were told, semi-autonomous – humans are kept “in” or “on” the loop leading to choice of target and other decisions.

This mention of human supervision, like the WALL·E-on-the-march metaphor above, pointed to a pivotal workshop topic:

►  Is it appropriate, as a matter of law or of ethics, to indulge in the human tendency to anthropomorphize these machines?

Apparently, some lab robots can recognize – or at least can mimic the act of recognizing – themselves in a mirror. Does this mean they are, or soon will be, sufficiently human-like to conduct operations wholly without oversight by actual humans? Might human-like robots evolve an ability to refuse programmed orders – orders meant to keep their actions within the boundaries of international humanitarian law? The answers to these questions, like many at the workshop, seemed to be “perhaps yes, perhaps no.”

At one end of the spectrum, this uncertainty has spurred a call for an outright ban. Emblematic is the headline of a notice about the November 2012 release of the Human Rights Watch report, Losing Humanity:

‘Ban “Killer Robots” Before It’s Too Late: Fully Autonomous Weapons Would Increase Danger to Civilians’

At the other end of the spectrum, some would prefer to let the technology develop before the onset of any new legal regulation.

Many seem to fall in between, acknowledging some challenges; for instance:

► Does compliance with the precautions requirement of Article 57 of Additional Protocol I (1977) to the four Geneva Conventions (1949) preclude the use of a fully autonomous weapon?

► Would the robotic commission of a war crime be susceptible to sanctions by global justice mechanisms like the International Criminal Court, and if not, what effective sanctions and deterrents would there be?

Persons falling in the vast middle of the regulatory spectrum harbored concerns about such questions, yet seemed to lean toward the view that if due care is taken, international humanitarian law can – and should – be applied. Documents discussed in this vein included the following:

► U.S. Department of Defense Directive 3000.09, ¶ 4(a) (November 21, 2012), which states as “DoD policy” the following:

‘Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.’

► April 9, 2013 report to the U.N. Human Rights Council by University of Pretoria Law Professor Christof Heyns, who’s served since 2010 as the Special Rapporteur on extrajudicial, summary or arbitrary executions. At ¶ 108 of his report, Heyns described the 2012 Defense Directive as “imposing a form of moratorium” with respect to what he termed “lethal autonomous robotics,” or LARs. His report (¶ 35) favored a broader scope for delay:

‘The present report … calls on States to impose national moratoria on certain activities related to LARs.’

A reprise of such issues likely will occur at the Meeting of Experts on Lethal Autonomous Weapons Systems set for May 13 to 16 in Geneva under the auspices of the 1980 Convention on Certain Conventional Weapons. Named in full the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, as amended on 21 December 2001, this treaty has 117 states parties, including the United States.

The Naval War College International Law Department workshop’s vital and timely discussion exposed many avenues for study – study sooner rather than later, so that the legal regulatory framework may be determined before fully autonomous robots are deployed.

‘[S]uggestions that cyber means and methods of warfare exist in an extra-normative space beyond the reach of IHL are completely counter-normative.’

Michael N. Schmitt, contributing a post to a series on “International Humanitarian Law & New Technologies” sponsored at Intercross, the blog of the International Committee of the Red Cross. Schmitt, who heads the U.S. Naval War College International Law Department and is a Senior Fellow at the NATO Cooperative Cyber Defence Centre of Excellence, is among the experts who maintain that international humanitarian law enjoys what he calls “inherent adaptability”; therefore, consideration of what uses of new technology are lawful ought to occur within the frame of that body of law. It’s the stance he took on release of the Tallinn Manual on the International Law Applicable to Cyber Warfare (2013), about which I previously posted, and to which he refers in his Intercross post. Schmitt does not argue that IHL is static. Rather, he predicts that some legal concepts may be “reinterpreted”; for instance, what constitutes an “attack” within cyberspace. The weapons I’ve titled “human-free” – autonomous or robotic weapons able to make targeting decisions without human intervention – pose particular interpretive challenges. Schmitt notes others’ posts in the series and “join[s] the ICRC in calling for further informed examination of the issues the systems raise.”