What a new law and an investigation could mean for Grok AI deepfakes

Zoe Kleinman, Technology editor

Here’s me, at the end of a pier in Dorset in the summer.
Two of these images were generated using the artificial intelligence tool Grok, which is free to use and belongs to Elon Musk.
It’s pretty convincing. I’ve never worn the rather fetching yellow ski suit or the red and blue jacket – the middle photo is the original – but, because those pictures look so real, I don’t know how I could prove that if I needed to.
Of course, Grok is under fire for undressing rather than redressing women. And doing so without their consent.
It made pictures of people in bikinis, or worse, when prompted by others. And shared the results in public on the social network X.
There is also evidence it has generated sexualised images of children.
Following days of outrage and condemnation, the UK’s online regulator Ofcom has said it is urgently investigating whether Grok has broken British online safety laws.
The government wants Ofcom to get on with it – and fast.
But Ofcom will have to be thorough and follow its own processes if it wants to avoid criticism of attacking free speech, which has dogged the Online Safety Act from its earliest stages.
Elon Musk has been uncharacteristically quiet on the subject in recent days, which suggests even he realises how serious this all is.
But he did fire off a post accusing the British government of seeking “any excuse” for censorship.
Not everyone accepts that free speech defence on this occasion.
Back-and-forth
“AI undressing people in photos isn’t free speech – it’s abuse,” says campaigner Ed Newton-Rex.
“When every photo a woman posts of themselves on X immediately attracts public replies in which they’ve been stripped down to a bikini, something has gone very, very wrong.”
With all this in mind, Ofcom’s investigation could take time, and a lot of back-and-forth – testing the patience of both politicians and the public.
It’s a major moment not only for Britain’s Online Safety Act, but for the regulator itself.
It can’t afford to get this wrong.
Ofcom has previously been accused of lacking teeth. The Act, which was years in the making, only came fully into force last year.
It has so far issued three relatively small fines for non-compliance, none of which have been paid.
The Online Safety Act doesn’t specifically mention AI products either.
And while it is currently illegal to share intimate, non-consensual images, including deepfakes, it is not currently illegal to ask an AI tool to create them.
That’s about to change. The government will this week bring into force a law which will make it illegal to create these images.
And the UK says it will amend another piece of legislation – currently going through parliament – to make it illegal for companies to supply the tools designed to make them, too.
These rules have been around for a while. They’re not actually part of the Online Safety Act but of a completely different piece of legislation, the Data (Use and Access) Act.
They’ve not been brought into force, despite repeated announcements from the government over many months that they were imminent.
Today’s announcement shows a government determined to quell criticism that regulation moves too slowly by demonstrating it can act quickly when it wants to.
It’s not just Grok that will be affected.
A political bombshell?
The new law coming into force this week could prove to be a headache for the owners of other AI tools, most of which are technically capable of generating these images as well.
And there are already questions around how on earth it will be enforced – Grok only came under the spotlight because it was publishing its output on X.
If an individual uses a tool privately, finds a way around the guardrails and shares the resulting content only with those who want to see it, how will it come to light?
If X is found to have broken the law, Ofcom could issue it with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.
It could even seek to block Grok or X in the UK. But this could also be a political bombshell.
I was at the AI Action Summit in Paris last year and watched US Vice President JD Vance thunder that the US administration was “getting tired” of foreign countries attempting to regulate its tech companies.
His audience, which included a huge number of world leaders, sat in stony silence.
But the tech firms have a lot of firepower inside the White House – and several of them have also invested billions of dollars in AI infrastructure in the UK.
Can the country afford to fall out with them?