The Morality of Robotic War

Earlier this month, “Avengers: Age of Ultron” was released in theaters across the United States, featuring Marvel comics superheroes battling evil robots powered by artificial intelligence and hellbent on destroying humanity.

Sentient military machines remain in the realm of science fiction, but some autonomous weapons are already technically possible today. And the role of autonomy in military systems is likely to grow as militaries take advantage of the same basic technology used in self-driving cars and household robots. Autonomous weapons are not the same as drones. A drone is remotely piloted by a person, who makes the decision to fire its weapons. By contrast, an autonomous weapon is one that, once activated, can select and engage targets on its own.

In mid-April, 90 countries and dozens of nongovernmental organizations met in Geneva to discuss the challenges raised by lethal autonomous weapons, and a consortium of more than 50 NGOs called for a pre-emptive ban.

Advocates of a ban on autonomous weapons often claim that the technology today isn’t good enough to discriminate reliably between civilian and military targets, and therefore can’t comply with the laws of war. In some situations, that’s true. For others, it’s less clear. Over 30 countries already have automated defensive systems to shoot down rockets and missiles. They are supervised by humans but, once activated, select and engage targets without further human input. These systems work quite effectively and have been used without controversy for decades.

Autonomous weapons should not be banned based on the state of the technology today, but governments must start working now to ensure that militaries use autonomous technology in a safe and responsible manner that retains human judgment and accountability in the use of force.

Greater autonomy could even reduce…
