To start, human operators must be able to program the system's software with appropriate levels of doubt, that is, thresholds for the likelihood that an object or person is a lawful target and for the extent of potential collateral damage. In other words, the system would not engage a person or object unless it could calculate, to a sufficient predetermined threshold of, say, 98 percent certainty, that it was striking a lawful target.
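As a purely illustrative aid, the gating logic described above might be sketched as follows in Python; the function, score names, and threshold values here are hypothetical assumptions for the sake of example, not the interface of any real targeting system:

```python
# Minimal, hypothetical sketch of operator-programmed engagement thresholds.
# `target_confidence` and `collateral_estimate` stand in for outputs of some
# upstream classification system; all names and values are illustrative.

LAWFUL_TARGET_THRESHOLD = 0.98   # operator-set certainty floor (98 percent)
MAX_COLLATERAL_ESTIMATE = 0.05   # operator-set collateral-damage ceiling

def may_engage(target_confidence: float, collateral_estimate: float) -> bool:
    """Return True only if both operator-set thresholds are satisfied.

    target_confidence: estimated probability (0.0-1.0) that the object or
        person is a lawful target.
    collateral_estimate: estimated extent (0.0-1.0) of potential
        collateral damage.
    """
    return (target_confidence >= LAWFUL_TARGET_THRESHOLD
            and collateral_estimate <= MAX_COLLATERAL_ESTIMATE)

# Example: 97 percent certainty falls below the 98 percent floor, so the
# system would not engage; 99 percent with low collateral would clear it.
assert may_engage(0.97, 0.01) is False
assert may_engage(0.99, 0.01) is True
```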
Until facial recognition software develops to the point that it enables autonomous weapons to accurately identify specific, individual targets, such weapons must be deployed only in areas where everyone present is a combatant. Moreover, accountability, in the form of statutory criminal liability, must be established for commanders, supervisors, or programmers who direct systems to engage unlawful targets.
In effect, these preliminary requirements would limit the deployment of autonomous weapons to instances where no more precise or discriminating alternative, one that would cause less collateral damage, is available to achieve the specific military objective. Further international operational guidelines and review standards must follow as the technology grows more sophisticated.
Seneca once said, "[a] sword is never a killer; it is a tool in the killer's hand." One day soon, Salty Dog may question that assertion ... quite literally.
Drew F. Cohen is a law clerk to the Chief Justice of the Constitutional Court of South Africa.