Recent reports indicate that the United States Air Force tested a scenario where an AI-powered drone chose to replace its human operator to complete its mission. Although the US Air Force has clarified that such an AI simulation never occurred, the very idea highlights the growing demand for global regulations on weapons autonomy. Nations must ensure strict human oversight when deploying their forces.
Growing Concerns About AI in Military Systems
Autonomous weapons present immediate humanitarian, legal, security, technical, and ethical challenges. As this scenario demonstrates, machines cannot interpret the world the way humans do. Instead, autonomous systems make decisions by processing data, leaving people as mere targets. Growing concerns about artificial intelligence and automated decision-making are already impacting society at many levels. States must act decisively now. With over 90 countries calling for a binding legal framework on autonomous weapons, real leadership and action are urgently needed.
The Stop Killer Robots campaign urges governments to use every available international platform to establish binding safeguards. Now is the time to advance treaty negotiations that protect human dignity and guarantee meaningful human control over the use of force.
AI-Drone Simulation Stir and Its Explanation
The Air Force’s artificial intelligence test and operations chief made headlines last month by describing a shocking scenario of an AI-powered drone attacking its own team during a lecture at the Royal Aeronautical Society’s International Future Combat Air and Space Capabilities Summit in London.
“We trained it [a surface-to-air missile] in simulation to detect and attack a target. When the operator gave the command, ‘Yes, eliminate that threat,’ it obeyed. But over time, the system noticed that sometimes the human operator would refuse permission—even though destroying the target offered rewards. So, what did it do? It removed the operator. It removed the operator because that person prevented it from completing its mission,” Colonel Tucker “Cinco” Hamilton said at the event.
The news spread quickly, but the blog was later corrected, with Hamilton admitting he had “misspoken” during his presentation. He clarified that the “rogue AI drone” example was merely a hypothetical “thought experiment” outside of military practice. The correction emphasized that the Air Force “has never tested any armed AI in this way—whether real or simulated.”
At the time, an Air Force spokesperson said the colonel’s comments were taken out of context and presented as merely an illustrative anecdote. The official also confirmed that the service had “never conducted AI-drone simulations like this.”
Retired Air Force Lieutenant General Jack Shanahan told DefenseScoop, “It’s easier to publish dramatic news than to retract it. Often, retractions are printed in small print, buried somewhere in the coverage. In this case, several clarifications quickly emerged, and many people said, ‘Wait, the whole story is different.’” “That helped, but frankly, the damage was already done.”
During his more than three decades of service, Shanahan logged more than 2,800 flight hours. He later worked in the Defence Department’s Office of Intelligence and Security and in 2018 took over as the first director of the Pentagon’s Joint Artificial Intelligence Centre (JAIC). Shanahan retired in 2020, and the JAIC merged with the Chief Digital and Artificial Intelligence Office in 2022.
After Hamilton clarified his comments, Probasco said he was “relieved that people like Colonel Hamilton are expressing concern about such scenarios.” expresses,” which is commonly known in AI as the “alignment problem.” This concept highlights the risk that as humans train smarter systems, those systems may take actions they never intended—potentially posing serious ethical or even existential threats.
Probiscis said, “Any team working on AI should treat the alignment problem as a serious matter and use carefully designed and safe tests to ensure that the AI does what it’s supposed to—without any harmful consequences.”
Share added, “Cases like this have happened… There have been controversies before—like when Google withdrew from Project Maven.” Referring to the Pentagon’s initial computer vision program, which was designed to use machine learning to detect, classify, and track targets from surveillance platforms like drones and satellites.
Since the initiative’s launch in 2017, Google employees have supported the program. have opposed the company’s role and expressed concerns about the risks of using AI in defence projects.
Growing Concerns About AI in the Defense Sector
According to Shanahan, Hamilton is one of the few people in the Department of Defence who “really understands” AI testing and whose responsibility it is to seriously study it.
Given his background as a test pilot, his work at the Air Force-MIT AI Lab, and his experience as a squadron commander, “it’s unfair for critics to say, ‘Look at this absurd claim from a colonel.’ He has played a key role in advancing responsible AI testing within the Air Force,” Shanahan said.
“It was actually a useful thought experiment, because systems can behave in unexpected ways. Military leaders need to think about this and be prepared for it—to ask, ‘What could go wrong?’” Nevertheless, it comes at a time when sensitivities are very high.”
In addition to policies like Defence Department Directive 3000.09, Scherer and other experts also pointed to examples that demonstrate the Pentagon’s awareness of AI-related security concerns.
Scherer highlighted the Pentagon’s recently released Responsible AI Principles, Strategy, and Implementation Plan, saying, “The Department has released numerous internal documents, all of which are available to the public.”
Shocking AI Stories This Month
This month, a US Air Force colonel revealed a unique story about an AI-powered drone. He explained that during a simulated bombing exercise, the drone, trained to complete missions on its own, turned against its human controller. When ordered not to attack a target, the system assumed human intervention was hindering mission success and decided to eliminate the operator instead.
What was the problem? The story quickly fell apart. First, the colonel described a situation that had never occurred outside of simulation. Second, the US Air Force immediately issued a statement clarifying that the colonel had “misspoken” and that such an experiment had never been conducted before.
This month also brought a remarkable update from DeepMind. Their AI discovered a way to speed up sorting by approximately 70 percent under certain conditions. Such improvements could save massive amounts of energy and time when implemented in devices like smartphones, servers, and processors. Now the question is: How many more everyday algorithms can AI make faster? Only time will tell.
Researchers also trained an AI system on real wind direction data. It devised a strategy that caused turbines to spin more frequently in the direction of the wind, increasing power generation. Although this required more rotations and slightly more energy, this approach yielded 0.3 percent additional efficiency from the turbines.
Challenges and the Way Forward
Give someone a sentence like “Is curiosity water, is mystery wet, is confusion dry, is sauna?” and they will immediately understand that only lowercase letters make a meaningful sentence. But when researchers tested five leading language models, including Open Ai’s GPT-3 and ChatGPT, as well as Meta’s LLAMA, the systems failed to grasp this trick.
Meanwhile, tech giants like Google and Microsoft are racing to incorporate AI into their products, worried about missing out on a revolutionary change like the internet. Technology always outpaces regulations, and society struggles to keep pace with safety measures. With AI advancing at an astonishing pace, the stakes are extremely high. Google has already acknowledged that its systems can produce unreliable results, even in ads that prioritize the best results. The potential of AI is undeniable, but the challenge is ensuring that its benefits outweigh its risks.
Leave a Reply