It will be fascinating to see if Ofcom will use its powers under the Online Safety Act 2023 against a service provider providing a service which is currently used by multiple branches of the UK government.
@neil If nothing else, if Twitter does get blocked it will prob be a net positive for the country, a reason to actually move to an alternative for more than 2 weeks
Something something force the network effect on an entire country
@david_chisnall @neil While I’m not conversant regarding the finer points of UK law in the digital space, it seems that the regulator might have various tools at its disposal, should it decide to use them.
@david_chisnall That wasn't Ofcom - that was the Internet Watch Foundation, and it is not done on the basis of a legal obligation.
It looks like the Internet Watch Foundation is saying that Grok is close to crossing the line but not quite there yet, if I'm reading this correctly:
@neil honestly this is going to be fascinating to watch.
If the UK gov + others decide it’s too toxic to stay on, there could be a huge rebalancing of social media reach.
Having 200k+ followers on X won’t mean you have that elsewhere.
Either way, I’m going to sit back with a bag of popcorn and watch this all unfold. It’s so… so dumb. And broken. And gross.
I am also unclear if Ofcom actually has powers which it can wield under the OSA, in terms of Twitter and Grok's CSAM/"nudification".
As I understand it - and boy have I not gone looking at the site - the content at issue on Twitter is content provided by Twitter itself (i.e. the output of its own tool).
It might - I'm not saying that it definitely does not - just that it didn't immediately jump out to me.
@neil strict liability crimes seem like a bit of an issue for reporting and enforcement!
@neil it is strange that, due to a stupid law, I (as a middle aged man) cannot read my messages in Bluesky without sending confidential information to an untrusted US company for age verification[*], and my teenage children are rapidly having information that they should have access to withdrawn; but any (age unverified) Twitter user can use their dodgy AI to make adult or CSAM content.
What a state the world is in.
[*] or without a workaround
@neil here in Ireland we have a law against knowingly producing/disseminating child porn, on the books since 1998
what we don't have is the political will to enforce it
@neil Yep, it's not clear whether it falls under s55(4) (user-generated) or s55(7) (provider-generated).
And if it does count as user-generated, does nudification count as 'humiliating or degrading' content and therefore caught by the rules on bullying content and priority content? (I would say yes!)
And "nudification" - I hate that term, and I wonder if "intimate image abuse" is more appropriate - is now a priority offence in itself.
@neil Yes I think 'intimate image abuse' is more appropriate (and I wonder if the operators of systems for generating them will ever be considered to have accessory liability as it wouldn't be possible to commit the offence without them).
Lastly - for now, I think, anyway - while many of us here do not use Twitter, my understanding is that lots of people do, including lots of people in the UK, for all sorts of everyday purposes that are nothing to do with CSAM/nudification.
I wonder if one can treat Grok as a separate service. And even if one can, legally, can one differentiate Grok *technically* when it comes to blocking a service?
I suspect that there will be careful consideration as to whether it is appropriate to seek to block a large, widely-used communications service, causing disruption to lots of people in the UK who use it for "normal" communications.
@neil on the UK government use of Twitter / X: wouldn’t it be better for the UK, and the government, for its departments to switch to services over which it has some control, rather than relying on a US based service?
The Chinese are trying this with the Xinchuang policies.
As an expat I’m a little out of touch: does the UK have something similar? I don’t know, but do they also use Mastodon?
@neil probably the first case in history when a government block would actually do good. So it won't happen
I am not a lawyer so this might be a silly question, but:
Could what happened to imgur be used as a precedent for this?
@neil “Too big to fail”? I hope not
Less "too big to fail", and more "what proportion of content needs to be problematic to justify blocking the whole site".
This came up in the Digital Economy Act 2017, in terms of how much pornography a site could host before it was a porn site. e.g. in the context of popular online encyclopaedias.
@neil As I understand it, there is still no understanding of how the OSA applies to popular online encyclopedias
@neil the irony is there are way too many government information services that only distribute time-sensitive info via X.
They shouldn’t, but they do.
And this might all be moot, as I understand that something has been turned off for some users.
But exactly what, and for whom, I do not know.
@neil
> "average curlystrawworld.net forum user is a paedophile" factoid is stastistical error. CSAM Greok, who posts 10,000 AI-generated CSAM images daily to the hobbyist-loops subforum, is an outlier adn should not have been counted
I'm not sure that'd be a convincing argument in most circumstances
@jackeric Strawmen don't always make good arguments :)
@neil hmm; AIUI it's produced *at the direction of* users though, so it's probably a blurry question
@ahnlak Indeed - that's why I say that I don't find it wholly clear.
@neil I don't really get why the OSA / Ofcom are involved anyway - some of the images (allegedly) generated would appear to be clearly illegal.
I imagine your local cycling forum generating deepfake CSAM would get more than a frown from Ofcom.
@neil doesn't it then become an improperly age-gated porn site?
@neil and, if they don’t, will that inaction be cited in other cases they do bring?