Woman felt ‘dehumanised’ after Musk’s Grok AI used to digitally remove her clothes

A woman has told the BBC she felt “dehumanised and reduced into a sexual stereotype” after Elon Musk’s AI Grok was used to digitally remove her clothing.

The BBC has seen several examples on the social media platform X of people asking the chatbot to undress women to make them appear in bikinis without their consent, as well as putting them in sexual situations.

xAI, the company behind Grok, did not respond to a request for comment, other than with an automatically generated reply stating "legacy media lies".

Ms Smith, the woman who spoke to the BBC, shared a post on X about her image being altered, which was met with comments from women who had experienced the same – before others asked Grok to generate more images of her.

“Women are not consenting to this,” she said.

“While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”

A Home Office spokesperson said it was legislating to ban nudification tools, and under a new criminal offence, anyone who supplied such tech would “face a prison sentence and substantial fines”.

The regulator Ofcom said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI images.

Grok is a free AI assistant – with some paid-for premium features – which responds to X users’ prompts when they tag it in a post.

It is often used to react to or add context to other posters’ remarks, but people on X are also able to edit an uploaded image through its AI image-editing feature.

It has been criticised for allowing users to generate photos and videos with nudity and sexualised content, and it was previously accused of making a sexually explicit clip of Taylor Swift.

Clare McGlynn, a law professor at Durham University, said X or Grok “could prevent these forms of abuse if they wanted to”, adding they “appear to enjoy impunity”.

“The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators,” she said.

xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.

In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material” and confirmed this included sexual deepfakes created with AI.

It said platforms such as X were required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content on their platforms, and to take it down quickly once they became aware of it.

Additional reporting by Chris Vallance.
