A recent report suggests that the deployment of AI-powered drones capable of autonomously deciding whether to kill human targets is becoming a real possibility.
The New York Times recently reported that lethal autonomous weapons, capable of selecting targets using artificial intelligence, are being developed in countries including the United States, Israel, and China.
According to Business Insider, critics argue that deploying AI-powered drones, also described as “killer robots,” could pose a serious danger, as autonomous machines might eventually make lethal decisions without adequate human oversight.
The New York Times reported that multiple governments are lobbying the United Nations to pass a binding resolution that would restrict the use of AI killer drones. However, the United States, Israel, Russia, Australia, and other nations have resisted a binding resolution and have instead favored a non-binding one.
“This is really one of the most significant inflection points for humanity,” Austria’s chief negotiator on the issue of AI-powered drones, Alexander Kmentt, told The New York Times. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”
According to The New York Times, the Pentagon is currently working to deploy thousands of AI-powered drones.
In August, U.S. Deputy Secretary of Defense Kathleen Hicks highlighted the Pentagon’s “Replicator” initiative, which includes the deployment of thousands of autonomous AI-controlled vehicles by 2026 to help the U.S. military “overcome the PRC’s biggest advantage, which is mass.”
“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” Hicks said. She added, “So now is the time to take all-domain, attritable autonomy to the next level: to produce and deliver capabilities to warfighters at the volume and velocity required to deter aggression, or win if we’re forced to fight.”
U.S. Air Force Secretary Frank Kendall told The New York Times that the U.S. military’s AI-powered drones will need to be capable of making lethal targeting decisions while under human supervision.
“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said. “I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”
The New Scientist reported in October that Ukraine has deployed AI-powered drones in its military response to Russia’s invasion. However, Business Insider reported that it remains unclear whether those drones have caused human casualties.