Hegseth’s AI Ultimatum
by Andrew Egger
Who gets to decide when the government's AI bots are ready to start killing people without direct human oversight—the Pentagon or the AI companies?
This remarkable—some might say insane—question is at the center of a major standoff between the Defense Department and Anthropic, creator of the AI platform known as Claude. While the Pentagon has contracts with all the leading AI labs, Anthropic until this month was the only one contracted for AI use in classified settings: Claude was, for instance, reportedly involved in the operation to capture Nicolas Maduro.
But Defense Secretary Pete Hegseth has grown unhappy with two elements of the DoD’s contract with Anthropic. One, Anthropic won’t let its AI be used to conduct mass surveillance of Americans. Two, it won’t let the DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement. To the Defense Department, the idea that a contractor would be able to tie the military’s hands like this is outlandish; they should be permitted, they argue, to use AI they contract for “for all lawful purposes.”1