They want to tell you that the models train themselves now. That it's clean. It's pure.
It never is, is it? There's always a damnable human cost.
@mttaggart presumably it wouldn't be better to *not* have anyone look at material that might be abusive, though?
Like, how can Mastodon instance admins solve the problem of abusive content without checking it?
This task is thankless, low paid, and takes a heavy emotional toll. I don't see a way to get rid of it though without either deleting online platforms (not just big tech, but fediverse ones too) or farming it out to AI. And the AI needs to be trained.
@FishFace I really think there's a material difference between moderating a server like a Mastodon instance and creating an entire subindustry of misery for your inhuman product—all while presenting the thing as some magical entity that ends toil.
@mttaggart I'm not sure but it sounds like you're seeing this as something to do with AI ("magical entity") but the task of checking content for things like abuse is one that exists whether or not AI is involved. Am I getting you right, and if so, do you think the situation is different if there's no AI training involved?
@FishFace I'm fully aware that content classification is a job performed independent of the AI industry. That is not the point I'm making, but to directly answer you, I don't think it's ethical to make people just slog through this material day in and day out. Does that mean platforms like YouTube and global social media shouldn't exist? Yep, probably. One of many reasons.
But again, not the point I'm making here. The point here is that in the context of AI, this inhuman labor is obscured behind a product that directly claims to end the need for such labor, which is just a bald-faced lie.