Friday, January 16, 2026 - Elon Musk's AI tool Grok will no longer be able to edit photos of real people to show them in revealing clothing in jurisdictions where it is illegal, after widespread concern over sexualised AI deepfakes.
"We have implemented technological measures to prevent
the Grok account from allowing the editing of images of real people in
revealing clothing," reads an announcement on X.
Reacting to the ban, the UK government claimed
"vindication" after Prime Minister Sir Keir Starmer called on X to
control its AI tool.
A spokesperson for UK regulator Ofcom said it was a
"welcome development" - but added its investigation into whether
the platform had broken UK laws "remains ongoing".
"We are working round the clock to progress this and
get answers into what went wrong and what's being done to fix it," they
said.
Technology Secretary Liz Kendall also welcomed the move but
said she would "expect the facts to be fully and robustly established by
Ofcom's ongoing investigation".

X's change was announced hours after
California's top prosecutor said the state was probing the spread of sexualised
AI deepfakes, including of children, generated by the AI model.
"We now geoblock the ability of all users to
generate images of real people in bikinis, underwear, and similar attire via
the Grok account and in Grok in X in those jurisdictions where it's
illegal," X said in its statement.
It also reiterated that only paid users will be able to edit
images using Grok on its platform.
This will add an extra layer of protection by helping to
ensure that those who try to abuse Grok to violate the law or X's policies are
held accountable, according to the statement.