The United Nations is set to debate the use of “killer robots” during the UN Convention on Certain Conventional Weapons this week as human rights activists call for them to be banned.
“Shaking the Foundations: The Human Rights Implications of Killer Robots,” a new report by Human Rights Watch and Harvard Law School’s International Human Rights Clinic, calls for a ban on the development of killer robots before they lead to an inevitable 21st-century arms race between nations.
The fully autonomous weapons described in the report do not yet exist, but they have been under development, both inadvertently and deliberately, since the introduction of unmanned vehicles, or drones, into militaries worldwide.
According to an IBT report, fully autonomous weapons are defined as having “the ability to identify and fire on targets without human intervention,” and they raise questions about compliance with international humanitarian law.
“In policing, as well as war, human judgment is critically important to any decision to use a lethal weapon,” Human Rights Watch arms division director Steve Goose said in the report. “Governments need to say no to fully autonomous weapons for any purpose and to preemptively ban them now, before it is too late.”
The report raises legal concerns not only about the robots’ direct use but also about the ramifications of their potential mistakes. In terms of criminal liability, it would be very difficult to hold military personnel, programmers, manufacturers, or others accountable if a machine killed unnecessarily.
It also highlighted the difficulty of addressing a robot’s lack of judgment, asserting that it would be hard to program a machine to match a human being’s capacity to assess information, reason, judge, and make a nuanced, possibly moral decision with a human life hanging in the balance.
The Universal Declaration of Human Rights, which was adopted by the UN shortly after World War II, addressed such concerns in its first Article long before any such technological development, stating “All human beings … are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.”
Critics of autonomous weapons cited law enforcement to argue against such machines, explaining that the degree of force an officer chooses depends on a multitude of variables to be weighed, including a suspect’s apparent mental state, background, history, and demands.
They claim that without inherent compassion, sympathy and the ability to identify with others, machines could devalue life and actually undermine law enforcement.
“Proponents of fully autonomous weapons might argue that technology could eventually help address the problems identified in this report, and it is impossible to know where science will lead,” the report said.
Human rights activists aren’t the first to address the dangers posed by autonomous weapons. Last year, more than 270 experts in robotics, artificial intelligence, and related computing fields across 37 countries endorsed a letter on the International Committee for Robot Arms Control (ICRAC) website calling for a ban on the development of such weapons.
“Given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment,” the letter said. “Decisions about the application of violent force must not be delegated to machines.”
“Governments need to listen to the experts’ warnings and work with us to tackle this challenge together …” ICRAC chair Professor Noel Sharkey said in the letter. “It is urgent that international talks get started now to prevent the further development of autonomous robot weapons before it is too late.”