Discussion
Taggart
@mttaggart@infosec.exchange · 15 hours ago

They want to tell you that the models train themselves now. That it's clean. It's pure.

It never is, is it? There's always a damnable human cost.

https://www.theguardian.com/global-development/2026/feb/05/in-the-end-you-feel-blank-indias-female-workers-watching-hours-of-abusive-content-to-train-ai

the Guardian

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI

Women in rural communities describe trauma of moderating violent and pornographic content for global tech companies
Fish Face
@FishFace@ioc.exchange · 8 hours ago

@mttaggart presumably it wouldn't be better to *not* have anyone look at material that might be abusive, though?

Like, how can Mastodon instance admins solve the problem of abusive content without checking it?

This task is thankless, low paid, and takes a heavy emotional toll. I don't see a way to get rid of it though without either deleting online platforms (not just big tech, but fediverse ones too) or farming it out to AI. And the AI needs to be trained.

Taggart
@mttaggart@infosec.exchange · 8 hours ago

@FishFace I really think there's a material difference between moderating a server like a Mastodon instance and creating an entire subindustry of misery for your inhuman product—all while presenting the thing as some magical entity that ends toil.

Fish Face
@FishFace@ioc.exchange · 7 hours ago

@mttaggart I'm not sure, but it sounds like you're seeing this as something to do with AI ("magical entity"). The task of checking content for things like abuse is one that exists whether or not AI is involved, though. Am I getting you right, and if so, do you think the situation is different if there's no AI training involved?

Taggart
@mttaggart@infosec.exchange · 6 hours ago

@FishFace I'm fully aware that content classification is a job performed independent of the AI industry. That is not the point I'm making, but to directly answer you, I don't think it's ethical to make people just slog through this material day in and day out. Does that mean platforms like YouTube and global social media shouldn't exist? Yep, probably. One of many reasons.

But again, not the point I'm making here. The point here is that in the context of AI, this inhuman labor is obscured behind a product that directly claims to end the need for such labor, which is just a bald-faced lie.
