Daniel Van Boom
Libyan forces were “hunted down and remotely engaged” by an autonomous drone, a UN report reads.
The fear of killer robots is as old as robots themselves — thinkers like Elon Musk and Sam Harris have long argued that AI poses a serious threat to human civilization.
But if you’re at all panicked about AI or robots, a new UN report may deepen your anxiety: it describes a drone that attacked (and possibly killed) soldiers entirely on its own.
It’s thought to be the first recorded case of an autonomous drone attack.
The incident occurred in March 2020 in Libya, then in the midst of a civil war. Turkey, a key combatant in the war, deployed the STM Kargu-2 drone, according to the UN Security Council’s Panel of Experts on Libya report.
The drone, which the report refers to as a “lethal autonomous weapon,” then found and attacked Libya’s Haftar Armed Forces.
Logistics convoys and retreating forces were “hunted down and remotely engaged by lethal autonomous weapons systems such as the STM Kargu-2,” the report reads.
“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”
The creator of the Kargu drone, STM, says the device “can be effectively used against static or moving targets through its indigenous and real-time image processing capabilities and machine learning algorithms embedded on the platform.”
In 2018, the UN attempted to begin negotiations on a treaty banning autonomous weapons, but the move was blocked by both the US and Russia, Politico reported at the time.
Human Rights Watch has been campaigning against such weapons since 2013, and has backed a campaign to stop their spread.
“Killer robot proliferation has begun,” tweeted Max Tegmark, a machine learning researcher at MIT. “It’s not in humanity’s best interest that cheap slaughterbots are mass-produced and widely available to anyone with an axe to grind. It’s high time for world leaders to step up and take a stand.”
A Military Drone With a Mind of its Own Was Used in Combat
Military-grade autonomous drones can fly themselves to a specific location, pick their own targets and kill without the assistance of a remote human operator.
Such weapons are known to be in development, but until recently there were no reported cases of autonomous drones killing fighters on the battlefield.
Now, a United Nations report about a March 2020 skirmish in the military conflict in Libya says such a drone, known as a lethal autonomous weapons system — or LAWS — has made its wartime debut. But the report does not say explicitly that the LAWS killed anyone.
“If anyone was killed in an autonomous attack, it would likely represent an historic first known case of artificial intelligence-based autonomous weapons being used to kill,” Zachary Kallenborn wrote in the Bulletin of the Atomic Scientists.
The assault came during fighting between the U.N.-recognized Government of National Accord and forces aligned with Gen. Khalifa Haftar, according to the report by the U.N. Panel of Experts on Libya.
“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 … and other loitering munitions,” the panel wrote.
The Kargu-2 is an attack drone made by the Turkish company STM that can be operated both autonomously and manually and that purports to use “machine learning” and “real-time image processing” against its targets.
The U.N. report goes on: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”
“Fire, forget and find” refers to a weapon that once fired can guide itself to its target.
The idea of a “killer robot” has moved from fantasy to reality
Drone warfare itself is not new. For years, military forces and rebel groups have used remote-controlled aircraft to carry out reconnaissance, target infrastructure and attack people. The U.S. in particular has used drones extensively to kill militants and destroy physical targets.
Azerbaijan used armed drones to gain a major advantage over Armenia in recent fighting for control of the Nagorno-Karabakh region. Just last month, the Israel Defense Forces reportedly used drones to drop tear gas on protesters in the occupied West Bank, while Hamas launched loitering munitions — so-called kamikaze drones — into Israel.
What’s new about the incident in Libya, if confirmed, is that the drone used had the capacity to operate autonomously, with no human controlling it: essentially a “killer robot,” formerly the stuff of science fiction.
Not everyone in the security world is concerned, however.
“I must admit, I am still unclear on why this is the news that has gotten so much traction,” Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations, wrote on Twitter.
Franke noted that loitering munitions have been used in combat for “a while” and questioned whether the autonomous weapon used in Libya actually caused any casualties.
Jack McDonald, a lecturer in war studies at King’s College London, noted that the U.N. report did not make clear whether the Kargu-2 was operating autonomously or manually at the time of the attack.
While this incident may or may not represent the first battlefield killing by an autonomous drone, the idea of such a weapon is disquieting to many.
A global survey commissioned by the Campaign to Stop Killer Robots last year found that a majority of respondents — 62% — said they opposed the use of lethal autonomous weapons systems.