International human rights lawyers and military aviation specialists held their collective breath recently when, 80 miles off the coast of Virginia, "Salty Dog 502" executed a flawless landing on the deck of the nuclear aircraft carrier USS Bush, catching the third wire before coming to a clean halt. The successful maneuver, which to the untrained eye appeared rather unexceptional, ushered in a new era for weapons systems and international humanitarian law: it marked the first time an unmanned, autonomous drone landed on an aircraft carrier.
Other large, first-generation drones currently deployed by the CIA and Air Force (which robot expert Peter W. Singer likens to the "Model T Ford or the Wright Brothers' Flyer") require a human pilot operating a joystick to fly, but "Salty Dog 502," the culmination of an eight-year, $1.4 billion military project, is designed to launch, land, and refuel in midair without human intervention.
"It is not often you get a chance to see the future, but that us what we got to see today," Navy Secretary Ray Mabus said shortly after the autonomous demonstrator touched down. "We didn't have someone ... with a stick and throttle and rudder to fly this thing," added program manager Rear Adm. Mat Winter. "We have automated routines and algorithms."
International human rights lawyers were not nearly as ebullient.
Human Rights Watch issued an unequivocal report last November calling for an absolute ban on the development, production and use of autonomous weapons systems. The report concluded that "such revolutionary weapons would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict."
A report by the Special Rapporteur to the United Nations, issued in April, came to a similar conclusion, stating that "[autonomous weapons] may seriously undermine the ability of the international legal system to preserve a minimum world order."
Developing useful systems that pass muster under the principle of distinction is particularly problematic for the U.S., which, for years, has been engaged in asymmetrical, urban counterinsurgencies where enemies are often indistinguishable from civilians. Soldiers engage enemies only after observing subtle, contextual factors or taking direct fire. In an environment where most individuals are not combatants (think: Baghdad or Kabul), autonomous weapons' inability to assess individual intention – a butcher chopping meat in a busy market, say, or a child playing with a toy gun – makes their presence on the battlefield an international legal liability.
Likewise, the proportionality of a military attack is predominantly dictated by split-second, value-based judgments, limited by the requirement of "humanity." The sudden presence of a school bus, for instance, may change a human soldier's proportionality calculus, deterring him from engaging.
Human soldiers, however, are not perfect. In the heat of battle, technical indicators have, at times, proven more reliable than human judgment. In 1988, for instance, the USS Vincennes shot down an Iranian airliner after the warship's crew believed the aircraft was descending to attack when, in fact, computers on board accurately indicated it was ascending to pass by harmlessly.
And while lethal engagement can be restrained by human compassion, it is just as often fueled by our basest instincts: rage and revenge. One need look no further than the civilian atrocities perpetrated by soldiers in Darfur, Rwanda, or Syria to see the possible effects of unchecked human emotions.
Given the U.S. Department of Defense's stated goal of increasing the autonomy of weapons systems over the next decade, the real question becomes how best to ensure compliance with customary international legal standards.