Don't Let Robots Pull the Trigger

The killer machines are coming. Robotic weapons that target and destroy without human supervision are poised to start a revolution in warfare comparable to the invention of gunpowder or the atomic bomb. The prospect poses a dire threat to civilians—and could lead to some of the bleakest scenarios in which artificial intelligence runs amok. A prohibition on killer robots, akin to bans on chemical and biological weapons, is badly needed. But some major military powers oppose it.

The robots are no technophobic fantasy. In July 2017, for example, Russia's Kalashnikov Group announced that it had begun development of a camera-equipped 7.62-millimeter machine gun that uses a neural network to make “shoot/no-shoot” decisions. An entire generation of self-controlled armaments, including drones, ships and tanks, is edging toward varying levels of autonomous operation. The U.S. appears to hold a lead in R&D on autonomous systems—with $18 billion slated for investment from 2016 to 2020. But other countries with substantial arms industries are also making their own investments.

Military planners contend that “lethal autonomous weapons systems”—a more anodyne term—could, in theory, bring a detached precision to war fighting. Such automatons could diminish the need for troops and reduce casualties by leaving the machines to battle it out. Yet control by algorithm can morph into “out of control.” Existing AI cannot deduce the intentions of others or make critical decisions by generalizing from past experience in the chaos of war. The inability to read the behavioral subtleties that distinguish civilian from combatant or friend from foe should call into question whether AIs ought to replace GIs in any foreseeable mission. A killer robot of any kind would be a trained assassin, not unlike Arnold Schwarzenegger in The Terminator. After the battle is done, moreover, who would be held responsible when a machine does the killing? The robot? Its owner? Its maker?

Even with all these drawbacks, a fully autonomous robot built with near-term technology could pose a novel threat in the hands of smaller nations or terrorists with scant expertise or financial resources. Swarms of tiny, weaponized drones, perhaps even made using 3-D printers, could wreak havoc in densely populated areas. Prototypes are already being tested: the U.S. Department of Defense demonstrated a nonweaponized swarm of more than 100 micro drones in 2016. Stuart Russell of the University of California, Berkeley, a prominent figure in AI research, has suggested that “antipersonnel micro robots” deployed by just a single individual could kill many thousands and constitute a potential weapon of mass destruction.

Since 2013 the United Nations Convention on Certain Conventional Weapons (CCW), which regulates incendiary devices, blinding lasers and other armaments thought to be overly harmful, has debated what to do about lethal autonomous weapons systems. Because of opposition from the U.S., Russia and a few others, the discussions have not advanced to the stage of drafting formal language for a ban. The U.S., for one, has argued that its policy already stipulates that military personnel retain control over autonomous weapons and that premature regulation could put a damper on vital AI research.

A ban need not be overly restrictive. The Campaign to Stop Killer Robots, a coalition of 89 nongovernmental organizations from 50 countries that has pressed for such a prohibition, emphasizes that it would be limited to offensive weaponry and would not extend to antimissile and other defensive systems that fire automatically in response to an incoming warhead.

The current impasse has prompted the campaign to consider rallying at least some nations to agree to a ban outside the forum provided by the CCW, an option used before to kick-start multinational agreements that prohibit land mines and cluster munitions. A preemptive ban on autonomous killing machines, with clear requirements for compliance, would stigmatize the technology and help keep killer robots out of military arsenals.

A pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons” was first presented at the International Joint Conference on Artificial Intelligence in Stockholm in July; since then, 244 organizations and 3,187 individuals have signed it. The rationale for making such a pledge was that laws had yet to be passed to bar killer robots. Without such a legal framework, the day may soon come when an algorithm makes the fateful decision to take a human life.