Re-engineering AI for the Public Good

I was happy to help Greg Barradale at the Big Issue with his explainer report today on how the UK regulatory regime applies to the use of X's Grok AI to generate and spread non-consensual sexualised images.

I restricted my comments to the practicalities of the law and the potential penalties for X, because I think there has been some confusion on this point in the public debate so far. (There is a broader debate about a social media ban for under-16s, which is related but ultimately a different matter. A further issue is that the UK has existing law that potentially applies, but the Government has not yet fully enacted some of it, and is also discussing future legislation that could apply to cases such as this.)

As for the situation right now, the Online Safety Act covers material that is illegal and material that is harmful to children and, importantly, it applies to user-generated content such as these Grok-generated images on X.

At the very least, under the OSA, X has a clear legal responsibility to act quickly to respond to user complaints and remove illegal content from its platform.

Beyond this, a key question is whether this feature of the Grok AI can itself be the target of any action by Ofcom. From public reports so far, it appears that X wants to continue offering Grok's “nudify” feature but restrict it to paid subscribers, while in the meantime continuing to try to remove illegal content that users create with Grok, as part of its obligations under the OSA.

But it will be extremely difficult for X to identify and quickly remove all of the images and practically fulfil its legal obligations. And just as importantly, the Grok feature is now an integral part of X and is embedded in how the platform works.

So I think it is fair to ask whether it will be possible for X to comply with the OSA without re-engineering Grok's model to remove the “nudify” feature entirely.

As for the practicality of a UK restriction of access to X, should it come to that, there would be due regulatory and legal process under the OSA. It would initially need to focus on whether X is in fact effectively removing illegal content. This would involve a series of investigations, notices, and potentially fines from Ofcom. But if those do not work, the OSA would ultimately be enforced through restrictions on advertising, restrictions on payment processors, reduced app store availability, and potentially then restricting access at the level of UK internet service providers.

Have a read.