It’s not a Hollywood movie. “Fully autonomous weapons,” robots that can select and attack targets without human intervention, could be a military reality in 20 to 30 years, according to a report by Harvard Law School’s International Human Rights Clinic and the non-profit group Human Rights Watch.
Precursors to fully autonomous weapons already exist, like a South Korean “sentry robot” that can detect movement in the Demilitarized Zone with North Korea and automatically fire weapons, if a human gives the order. There’s also the U.S. Navy’s X-47B prototype, an unmanned aircraft that will be able to fly missions and land on aircraft carriers with limited human supervision.
“From the military’s perspective,” says Bonnie Docherty, a lecturer at Harvard Law School and lead author of the report, “these weapons will save soldiers’ lives by keeping them out of combat — that’s a laudable goal.”
She adds that autonomous weapons are especially attractive at a time when America’s armed forces struggle to attract new recruits. A fleet of armed, independently operating drones or a division of unmanned ground vehicles could help project American military power into hot spots around the globe.
But Docherty tells Latitude News that these robots also pose a grave threat to civilians and could violate international law, arguing that they “lack the ability to determine the difference in actions between combatants and non-combatants.” Her report calls for a global pre-emptive ban on fully autonomous weapons.
“I’m concerned about drones in terms of how they’re used today,” Docherty explains, “but I’m concerned about these weapons in terms of them existing at all.”
According to the report, this isn’t just an American issue: China, Germany, Israel, South Korea, Russia and the United Kingdom are also attempting to develop fully autonomous weapons.
What would happen if Assad had killer robots?
One obvious concern: the prospect of dictators getting their hands on fully autonomous weapons. The Arab Spring would surely have wilted if Mubarak had planted autonomous machine gun turrets in Tahrir Square.
“If someone like [Syrian President Bashar] al-Assad had this kind of technology, it could be a real disaster for the civilian population,” says Docherty.
One of the selling points of military robots is that, unlike humans, they won’t suffer from the anger or mental stress that can lead to civilian massacres. But Docherty believes that argument fails to grasp the importance of emotion.
“Compassion is one of the greatest checks on the killing of civilians,” she says. “If a dictator had emotionless robots, they could deploy them to kill their own people, whereas human soldiers would rebel.”
That’s been the case in Syria, where former soldiers make up the backbone of armed resistance to Assad’s rule.
Docherty says the Department of Defense shares her concern over civilian casualties and the ability of these weapons to recognize non-combatants. At this point, it requires humans to be “in the loop” on the use of lethal force by robots. But she says a recent directive from the Pentagon leaves the door open for the development of weapons that wouldn’t require human supervision.
The U.S. has experimented with remote controlled weapons, like the MAARS robot, in war zones. MAARS carries a machine gun and up to four grenade launchers, and was deployed to Iraq. But according to Wired magazine, the weaponized robots never fired a shot because “no one could guarantee that the bots wouldn’t go berserk and mow down friendly troops or otherwise malfunction, even though they have lots of safeguards.”
Despite early hiccups, this technology will only continue to grow. In testimony to the House of Representatives, the robotic warfare expert Peter W. Singer reminded the government that “while they often seem like they are straight out of science fiction, such [weapons] are merely the first generation — the equivalent of the Model T Ford or the Wright Brothers’ Flyer. Even more, they are being armed with everything from Hellfire missiles to 50-caliber machine guns.”
Unless, of course, the nations of the world decide that armed autonomous robots are more trouble than they’re worth. Docherty says that’s happened before, when a United Nations protocol banning permanently blinding laser weapons took effect in 1998.
“People felt it was unacceptable and unnecessary to permanently blind soldiers,” she says.
The U.S. didn’t sign on to the treaty until 2009. It has, however, fielded a weapon known as the “Dazzler,” which only temporarily blinds its victims and so doesn’t violate the agreement. The laser has led to a series of friendly-fire casualties in Iraq, including the permanent blinding of an American soldier in one eye.