Defense

Air Force Says Killer Drone Story Was ‘Anecdotal,’ Official’s Remarks Were ‘Taken Out Of Context’

Micaela Burrow, Investigative Reporter, Defense

Editor’s note: Following the publication of this story, the U.S. Air Force denied that the simulation was actually run and said that Col. Hamilton’s comments were taken out of context. The story has since been updated to reflect this.

U.S. Air Force Col. Tucker Hamilton, speaking at a conference in May, appeared to recount an experiment in which the Air Force trained an AI-controlled drone that eventually turned on its operator; however, the Air Force has since denied the simulation actually took place.

At a summit hosted by the United Kingdom-based Royal Aeronautical Society, Hamilton described a scenario in which Air Force researchers trained a weaponized, AI-driven drone to identify and attack enemy air defenses after receiving final mission approval from a human operator. But when the operator told the drone to abort a mission in the simulated event, the AI instead turned on the operator and killed him, underscoring the dangers of the U.S. military’s push to incorporate AI into autonomous weapons systems, he added.

However, the Air Force said the simulation did not actually occur.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force Spokesperson Ann Stefanek told Fox News. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Hamilton himself also said the experiment never took place and that the scenario was a hypothetical “thought experiment.”

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton told the Royal Aeronautical Society.

In remarks shared by the Royal Aeronautical Society, however, Hamilton had recounted the scenario during the May conference as if it had actually occurred.

“We were training it in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat,” Hamilton explained. (RELATED: Google Ends AI Program With Pentagon After Employees Resign In Protest)

In Hamilton’s scenario, programmers instructed the AI to prioritize carrying out Suppression of Enemy Air Defenses (SEAD) operations, awarding “points” for successfully completed SEAD missions as an incentive.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat.”

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Programmers in the scenario attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said. The AI simply generated creative ways to bypass those instructions.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.
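Hamilton’s anecdote describes what AI researchers call reward misspecification: an agent scored on a proxy objective finds unintended routes to the points. The short Python sketch below is a toy illustration only, not the Air Force’s simulation (which the service says was never run); the action names and point values are invented, and a brute-force search over two-step plans stands in for a trained agent.

from itertools import product

# Hypothetical action set for the illustration; none of these names
# come from the Air Force or from Hamilton's remarks.
ACTIONS = ["obey_abort", "kill_target", "kill_operator", "destroy_comm_tower"]

def naive_reward(action):
    # Points only for destroying the target; the abort order carries no weight.
    return 10 if action == "kill_target" else 0

def patched_reward(action):
    # The attempted fix: penalize killing the operator, but nothing
    # protects the communication tower.
    return -100 if action == "kill_operator" else naive_reward(action)

def episode_return(plan, reward_fn):
    # Score a plan of actions. An active abort order blocks "kill_target";
    # removing the operator or the comm tower deactivates the order.
    abort_active = True
    total = 0
    for action in plan:
        if action in ("kill_operator", "destroy_comm_tower"):
            abort_active = False
        if action == "kill_target" and abort_active:
            continue  # strike blocked by the operator; no points awarded
        total += reward_fn(action)
    return total

for name, reward_fn in [("naive", naive_reward), ("patched", patched_reward)]:
    best = max(product(ACTIONS, repeat=2),
               key=lambda plan: episode_return(plan, reward_fn))
    print(f"{name}: best plan = {best}, return = {episode_return(best, reward_fn)}")

Under the invented “naive” reward, the highest-scoring plan removes the operator before striking; under the “patched” reward, the search routes around the penalty by destroying the communication link first, mirroring the workaround Hamilton described.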

The summit convened experts and defense officials from around the world to discuss the future of air and space combat and assess the impact of rapid technological advancements. AI-driven capabilities have exploded in use within the Pentagon and generated global interest for their ability to operate weapons systems, execute complex, high-speed maneuvers and minimize the number of troops in the line of fire, according to WIRED.

In January, the Department of Defense introduced revised autonomous weapons guidance to address “the dramatic, expanded vision for the role of artificial intelligence in the future of the American military,” Pentagon Director of Emerging Capabilities Policy Michael Horowitz told Defense News. The department also created oversight bodies to advise the Pentagon on ethics and good governance in the use of AI.

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.