The fear of robots taking out humans and one day ruling the world is as old as robots themselves. Thinkers like Sam Harris and Elon Musk have warned that AI could one day pose a serious threat to humans. If that already concerns you, a new UN report may add to your anxiety: it describes what is likely the first case of a drone hunting down (and possibly killing) a human target entirely on its own.
Last year in Libya, a deadly attack drone designed for counter-terrorism and asymmetric warfare went rogue, autonomously attacking a person during fighting between government loyalist forces and Khalifa Haftar's breakaway military faction.
Rogue Killer Drone – First Time, But Expected To Occur Someday
According to the report from the UN's panel of experts, retreating forces and logistics convoys were hunted down and remotely engaged by the STM Kargu-2. The lethal autonomous weapons were programmed to attack targets without requiring an operator — a true "fire, forget, and find" system.
The Kargu-2, a loitering munition that detonates on its target, can be used effectively against both static and moving targets thanks to its real-time image processing and AI capabilities. In other words, it can operate in a highly effective autonomous mode without requiring an operator, the New York Post reported.
This is likely the very first time a drone has autonomously attacked humans, said Zak Kallenborn, a national-security consultant who specializes in drones and unmanned systems.
Kallenborn has concerns about the future of such systems, noting how easily they can misidentify targets. Meanwhile, Jack Watling, a land-warfare researcher, said the incident clearly demonstrates the urgent need for a discussion about regulating autonomous weapons.
Human Rights Watch is reportedly campaigning for a ban on the development, production, and use of autonomous weapon systems.
Meanwhile, Max Tegmark said in a Twitter post that "killer robot proliferation has begun" and called on world leaders to step up (this time) and take a stand.
AI Just Makes It Worse
If killer bots have been around for many years, why has there been so much discussion about them recently? And why is the Libyan incident special?
Zak Kallenborn said that the rise of AI plays a big role. Rapid advances in AI, he added, have given weapon makers access to cheap computer-vision systems that can pick out a target as readily as a consumer app identifies plants, pets, and familiar faces. Such systems may appear precise, but they are far more error-prone than many assume.
According to him, traditional loitering munitions typically home in on radar emissions. AI targeting systems, by contrast, might classify a civilian as a soldier, because the models currently used in autonomous weapon systems are still brittle. Research has shown that changing even a single pixel can radically alter what a machine-vision system perceives. The real question is how often this happens in real-world situations — and that is why the Libyan incident is so interesting.
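To make that "one pixel" brittleness concrete, here is a deliberately toy sketch in pure Python — the weights, image values, and `classify` function are all invented for illustration and have nothing to do with the Kargu-2's actual software. It shows a simple linear classifier over a 3×3 "image" whose decision flips when a single pixel changes. Real deep networks are vastly more complex, but the failure mode (one over-sensitive feature dominating the decision) is analogous.

```python
def classify(image, weights, bias=-0.3):
    """Return 'soldier' if the weighted pixel sum exceeds 0, else 'civilian'."""
    score = sum(w * p for w, p in zip(weights, image)) + bias
    return "soldier" if score > 0 else "civilian"

# Invented weights: one pixel (index 4) carries outsized influence,
# mimicking the over-sensitive features learned models can acquire.
weights = [0.1, -0.2, 0.1,
           0.0,  2.5, 0.1,
          -0.1,  0.1, 0.0]

# A 3x3 grayscale "image" (pixel intensities in [0, 1]).
image = [0.2, 0.5, 0.3,
         0.4, 0.1, 0.2,
         0.6, 0.3, 0.1]

print(classify(image, weights))      # -> civilian

# Nudge a single pixel: the predicted class flips.
perturbed = list(image)
perturbed[4] += 0.5
print(classify(perturbed, weights))  # -> soldier
```

The point of the toy is not the arithmetic but the shape of the failure: when a model leans heavily on a small number of input features, a tiny, locally plausible change to the input can push the score across the decision boundary — exactly the kind of fragility that is tolerable in a photo app and intolerable in a weapon.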