Discussion
Zhi Zhu 🕸️
@ZhiZhu@newsie.social · 12 hours ago

"Defense Secretary #PeteHegseth has grown unhappy with two elements of the DoD’s contract with #Anthropic. One, Anthropic won’t let its #AI be used to conduct mass surveillance of Americans. Two, it won’t let the #DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement. To the Defense Dept, the idea that a contractor would be able to tie the military’s hands like this is outlandish"
https://www.thebulwark.com/i/189250040/hegseths-ai-ultimatum

#Tech #News #US #USA

Headline and text from article:
Hegseth’s AI Ultimatum

by Andrew Egger

Who gets to decide when the government AI-bots are ready to start killing people without direct human oversight—the Pentagon or the AI companies?

This remarkable—some might say insane—question is at the center of a major standoff between the Defense Department and Anthropic, creator of the AI platform known as Claude. While the Pentagon has contracts with all the leading AI labs, Anthropic until this month was the only one contracted for AI use in classified settings: Claude was, for instance, reportedly involved in the operation to capture Nicolas Maduro.

But Defense Secretary Pete Hegseth has grown unhappy with two elements of the DoD’s contract with Anthropic. One, Anthropic won’t let its AI be used to conduct mass surveillance of Americans. Two, it won’t let the DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement. To the Defense Department, the idea that a contractor would be able to tie the military’s hands like this is outlandish; they should be permitted, they argue, to use AI they contract for “for all lawful purposes.”1

AI Death Machines. No Human Oversight. What Could Go Wrong?

Pete Hegseth is trying to bully Anthropic out of objecting to “lethal autonomous weapons systems” and mass surveillance.
Zhi Zhu 🕸️
@ZhiZhu@newsie.social · 12 hours ago

“Anthropic is trying to... put their own guardrails in place in the absence of legislation,” she added. “It should go without saying that #AI #technology should not be making potentially lethal decisions without human involvement. I fear what #America will become if the #DoD is given this unrestricted power.”

“...it’s insane that such questions as ‘how much killing will we let the killer robots do on their own’ are being hashed out as back-room handshakes”
https://www.thebulwark.com/i/189250040/hegseths-ai-ultimatum

#News #US #USA

Text from article:
“Anthropic is trying to do the right thing and put their own guardrails in place in the absence of legislation,” she added. “It should go without saying that AI technology should not be making potentially lethal decisions without human involvement. I fear what America will become if the DoD is given this unrestricted power.”

Maybe that’s the biggest takeaway from this whole crazy story: While it’s nice that Anthropic is digging in their heels here, it’s insane that such questions as “how much killing will we let the killer robots do on their own” are being hashed out as back-room handshakes between the military and its AI contractors in the first place. This seems like a matter of public policy if ever there was one. Have we got a legislature or what?

BT Free Social

BT Free is a non-profit organization founded by @ozoned@btfree.social . Its goal is digital privacy rights advocacy and consulting, pursued by hosting open, moderated platforms that let others seamlessly join the Fediverse, and by helping others join the Fediverse.
