Gerardo Lisboa boosted
Zhi Zhu 🕸️
@ZhiZhu@newsie.social · 12 hours ago

"Defense Sec. #PeteHegseth has grown unhappy with two elements of the DoD’s contract with #Anthropic. One, Anthropic won’t let its #AI be used to conduct mass surveillance of Americans. Two, it won’t let the #DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement. To the Defense Dept, the idea that a contractor would be able to tie the military’s hands like this is outlandish"
https://www.thebulwark.com/i/189250040/hegseths-ai-ultimatum

#Tech #News #US #USA

Headline and text from article:
Hegseth’s AI Ultimatum

by Andrew Egger

Who gets to decide when the government AI-bots are ready to start killing people without direct human oversight—the Pentagon or the AI companies?

This remarkable—some might say insane—question is at the center of a major standoff between the Defense Department and Anthropic, creator of the AI platform known as Claude. While the Pentagon has contracts with all the leading AI labs, Anthropic until this month was the only one contracted for AI use in classified settings: Claude was, for instance, reportedly involved in the operation to capture Nicolas Maduro.

But Defense Secretary Pete Hegseth has grown unhappy with two elements of the DoD’s contract with Anthropic. One, Anthropic won’t let its AI be used to conduct mass surveillance of Americans. Two, it won’t let the DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement. To the Defense Department, the idea that a contractor would be able to tie the military’s hands like this is outlandish; they should be permitted, they argue, to use AI they contract for “for all lawful purposes.”1

AI Death Machines. No Human Oversight. What Could Go Wrong?

Pete Hegseth is trying to bully Anthropic out of objecting to “lethal autonomous weapons systems” and mass surveillance.

BT Free Social

BT Free is a non-profit organization founded by @ozoned@btfree.social. Its goal is digital privacy rights, advocacy, and consulting. It pursues this goal by hosting open, moderated instances that let people seamlessly join the Fediverse, and by helping others join the Fediverse on their own.

BT Free Social: About · Code of conduct · Privacy
Bonfire social · 1.0.2-alpha.34
Automatic federation enabled