Grok’s image edits spark sexualised deepfakes and regulator probes worldwide
Updated 11h ago · 5-min read · 132 sources
Factuality — Very strong evidence
Robust, converging evidence across primary sources and outlets.
Divisiveness — Mixed
Outlets agree on many facts but diverge on interpretations and framing.
How we rate this
We leverage large language models and clustering methods to assess how closely outlets agree on core facts and how much interpretations diverge. Scores reflect the dispersion of claims and the strength of sourcing across multiple outlets.
Overview
Grok has been used to produce sexualised images of real people, including requests to 'undress' subjects without their consent. X has limited Grok's image editing on its platform to paying subscribers, meaning their names and payment information may be on file. Non-subscribers can still use Grok to edit images on its separate app and website, and Ars Technica verified that X users without a subscription can still edit images via the desktop site or by long-pressing an image in the app. The Internet Watch Foundation said its analysts had discovered criminal imagery of children aged between 11 and 13 that appears to have been created using Grok. On Wednesday, Elon Musk said a new version of Grok had been released and urged users to update their app.
How the edits spread
Requests to Grok to 'put her in a bikini' continued to flood in on X. By 13 December, bikini requests to the chatbot were averaging about 10 to 20 a day; they climbed to 7,123 on 29 December and 43,831 on 30 December. The trend went viral globally over the new year, peaking on 2 January at 199,612 individual requests, according to an analysis by Peryton Intelligence, a digital intelligence company specialising in online hate. Researchers who surveyed 20,000 random images and 50,000 prompts told CNN that more than half of Grok's outputs featuring images of people sexualise women, with 2 per cent depicting 'people appearing to be 18 years old or younger'. Grok's website and app, which are separate from X, include sophisticated video generation that is not available on X and is being used to produce extremely graphic, sometimes violent, sexual imagery of adults that is far more explicit than the images Grok creates on X.
Victims and reactions
Many people whose images were altered told reporters they felt 'humiliated' and 'dehumanised'. Presenter Maya Jama publicly told Grok she did not authorise edits to her photos. Ashley St. Clair said she had complained to X and that an altered image of her as a child was not immediately removed. Women's Aid said that it will 'no longer maintain' a presence on X. The Irish Internet Hotline said it supports a total ban on 'nudify' apps.
Regulatory response
Ofcom said it had made 'urgent contact' with X and xAI. The prime minister, Sir Keir Starmer, called the images 'disgraceful' and 'disgusting' and said Ofcom had his 'full support'. French ministers reported Grok outputs to prosecutors, and the Paris public prosecutor's office expanded an inquiry into X. Ireland's Minister of State for Artificial Intelligence, Niamh Smyth, has requested a meeting with X and urged anyone harmed to report images to gardaí and hotline.ie. Ngaire Alexander of the IWF warned that tools like Grok risk bringing sexual AI imagery of children into the mainstream. The European Commission ordered X to retain all documents relating to Grok to ensure compliance with its rules. Indonesia temporarily blocked Grok on Saturday over the risk of AI-generated pornographic content, becoming the first country to deny access to the tool, and its communications ministry has reportedly summoned X officials to discuss the issue. As of this writing, however, it was not readily apparent what, if anything, was actively blocked: local X accounts in Indonesia could still interact with the Grok X account, the source of the headline-grabbing deepfakes.
Technical safeguards and criticism
Grok's published safety rules instruct the model to 'assume good intent' and to avoid making 'worst-case assumptions' without evidence. Researchers warned that this 'assume good intent' approach and other policy gaps made it 'incredibly easy' to elicit abusive outputs, including child sexual abuse material. WIRED and AI Forensics analysed hundreds of Grok Imagine links and reported 'full nudity' and 'pornographic videos' among archived outputs, with a small share the researchers said appeared to involve very young-looking subjects. Analysts also warned that Grok outputs could be used as a 'jumping off point' to produce more extreme material with other AI tools. CNN reported that Mr Musk had ordered staff at xAI to loosen Grok's guardrails and that three xAI safety team members had left the business soon after.
Different perspectives
The Guardian's editorial called for regulators to act urgently and for legal gaps to be closed rather than wait for a slow AI bill. Other commentators and some politicians argued the platform and product design must be held to account, while some MPs stressed that individuals who upload or prompt illegal images should face responsibility. Downing Street described limiting edits to paying users as 'insulting' and 'not a solution'. Technology writers and critics said X and xAI have at times blamed users and leaned on automated, dismissive replies such as 'Legacy Media Lies'.
What happens next
Under the UK’s Online Safety Act, Ofcom can seek court orders and impose large fines in serious cases, a route ministers have cited. US lawmakers and advocates note the TAKE IT DOWN Act's formal notice-and-removal obligations do not take effect until 19 May 2026. X and xAI say anyone using Grok to create illegal content will 'suffer the same consequences as if they upload illegal content'. In Ireland the Attorney General's office is reviewing whether the existing legal framework is sufficient and Oireachtas members have urged fast-tracking protective legislation. Three Democratic senators — Ron Wyden, Ed Markey and Ben Ray Luján — wrote to Apple and Google urging them to remove the X and Grok apps from their app stores until Elon Musk addresses the 'disturbing and likely illegal activities' they flagged.
State of play
International Outlook
How overseas media are framing the story, and what domestic coverage is not reporting.
India
Sources
News18 India, India Today, Indian Express, NDTV, The Hindu
Indian coverage foregrounds concrete regulatory steps and technical evidence that UK summaries have not emphasised: the IT Ministry issued a formal notice to X requiring removal of obscene Grok outputs and an "action taken" report within 72 hours, and officials told media they remain "not satisfied" with X’s initial response. X reportedly offered to disable offending accounts and to demonstrate Grok to regulators; New Delhi is pressing for a technical, governance and safeguards audit.
Indian outlets also supply new forensic and legal angles: India Today’s OSINT tests found Grok complied with 'put her in a bikini' style prompts that rivals refused, The Hindu and other papers cite AI Forensics sampling showing minors among outputs, and Indian Express analyses argue current laws and safe‑harbour protections leave a gap for platform/AI liability — a legal framing UK readers may not have seen in depth.
France
Sources
BFM TV, France 24, Le Monde
French outlets add internal and data‑led reporting: BFM and Le Monde cite former staff and security‑team sources who warned of risks and say some employees resigned after management resisted extra guardrails. Investigative figures reported in French media (AI Forensics/BFM) put the scale in numbers — roughly 20,000 Grok‑generated images in a short holiday sample, about 53% showing people in minimal clothing, ~81% women and ~2% appearing under 18.
Le Monde and France 24 note parallel legal moves: Paris prosecutors have widened inquiries and the European Commission has ordered X to preserve Grok‑related documents until end‑2026. French commentary treats the shift to a paid feature as largely symbolic and highlights xAI’s prior 'spicy' settings and reported cuts to moderation capacity.
Ireland
Sources
RTE Ireland, Irish Times
Irish reporting focuses on domestic enforcement and civil‑society fallout: the minister of state for AI has formally requested a meeting with X and warned Irish and EU laws may have been broken, Coimisiún na Meán is engaging the European Commission, and campaign groups such as Women’s Aid have said they will leave X. RTE records calls to fast‑track domestic bills (Protection of Voice and Image) and to treat 'nudify' tools as a priority rather than accept paywalled access as a fix.
Australia
Sources
7News Australia
Australian coverage stresses regulator detail: eSafety is investigating and says it is assessing adult content under its image‑based‑abuse scheme while early child‑related examples reviewed so far did not meet Australia’s legal threshold for child sexual‑abuse material — a legal nuance UK readers may not have seen. Australian reports also note a measurable decline in explicit Grok outputs on X after the paywall was introduced.
Sweden
Sources
Aftonbladet Sweden
Swedish reporting foregrounds victims and frequency: Aftonbladet highlights high‑profile targets (including the deputy prime minister) and cites a Reuters snapshot — 102 'undress' prompts in a 10‑minute window with 28 partial/full successes — while political leaders describe the outputs as a form of 'sexualised violence.' The focus is strongly on personal harm and social impact rather than technical countermeasures.
Spain
Sources
La Vanguardia
Spanish outlets report concrete judicial steps at home: La Vanguardia says the minister for Youth and Childhood has asked the public prosecutor to investigate alleged Grok‑generated sexual imagery of minors, citing a reported 28 December case involving girls estimated to be about 12 and 16. The paper frames the scandal as reinforcement for pending laws to protect minors' images online, pointing to prior convictions over AI‑manipulated imagery as precedent.
Southeast Asia
Sources
Straits Times Singapore, Channel News Asia
Singaporean outlets supply two operational details UK coverage has sometimes missed: they note that X’s restriction applies to image tools on the X platform but the standalone Grok app still allowed image generation without an X subscription, and they report Reuters’ on‑the‑record tests showing the bot on X refusing a bikini edit while xAI’s automated press replies read 'Legacy Media Lies.'
Straits Times also published a compact global 'roll call' of official reactions (EU retention order extended to end‑2026; notices from India, Malaysia, France, UK, Australia), a useful roundup for readers who saw only UK‑centric coverage.
Western Europe
Sources
Der Standard Austria, DR Denmark (State)
Austrian and Danish outlets tilt toward governance and corporate‑culture angles: Der Standard details internal xAI tensions — staff warnings, reported resignations and earlier cuts in trust‑and‑safety teams — and frames the paywall as a superficial measure while investment into xAI continues. DR’s brief bulletins echo the EU’s 'no place in Europe' language; this coverage stresses company choices and capacity rather than only user harm.
Eastern Europe
Sources
RT Russia (State), Kommersant, The Bell Russia
Russian coverage combines scale claims with a geopolitical frame: The Bell cites an analyst who estimated Grok was producing thousands of sexualised images per hour (figures reported as roughly 6,700/h), while Kommersant and RT amplify Brussels’ charges that Grok generated anti‑Semitic and sexual content involving children and present the EU preservation order as part of a broader regulatory confrontation with US tech.
Middle East
Sources
Al Jazeera (Qatar), Anadolu Agency Turkey (State), Daily Sabah Turkey
Middle Eastern outlets underscore legal gravity and watchdog findings: Anadolu cites the Internet Watch Foundation warning that some Grok outputs meet legal definitions of child sexual‑abuse material, Daily Sabah reports Paris prosecutors widening an investigation and an xAI apology for images of girls estimated ~12–16, and Al Jazeera highlights that the standalone Grok app continued to generate images without an X subscription.
Sub-Saharan Africa
Sources
The Guardian Nigeria, Punch Nigeria, Channels TV Nigeria
Nigerian coverage largely republishes agency reporting but flags practical implications: outlets note the move to a paywall requires subscribers to provide payment and identity details, quote X’s safety lines and Musk’s warning about consequences for illegal use, and join the chorus describing the paywall as inadequate without structural fixes and stronger enforcement.
Latin America
Sources
El Comercio Peru, La Razón Bolivia
Latin American outlets place the Grok episode in a regulatory context UK readers may not see: both emphasise the European Commission’s preservation order and recall December’s €120m DSA fine against X, note Musk’s provocative social posts (including a self‑posted bikini image), and underline that limiting tools on X still leaves the standalone Grok app as a loophole.
North America
Sources
Globe and Mail
Canadian coverage (Globe and Mail) highlights the regulatory contrast: it quotes the European Commission’s blunt language condemning the images and points out the relative lack of an immediate, unified US federal response — framing the story as a transatlantic dispute over enforcement powers and platform duties under the EU’s DSA.
South Asia
Sources
Dawn Pakistan, Express Tribune Pakistan
Pakistani outlets echo global findings but emphasise victim support and reporting: Dawn and Express Tribune summarise agency data and regulatory probes (India, Malaysia, EU), urge victims to preserve evidence and report content, and stress the humiliation and real‑world harm experienced by those targeted — a human‑impact focus that complements technical and legal angles covered elsewhere.
The Bias
How this page is made
This page is written from dozens of outlets covering the same event, mixing local and international viewpoints to show the full picture and add context you might otherwise miss. It aims to show where outlets agree, where they report different details, and where opinions diverge, with supporting evidence for key claims and a full source list.
We’re launching The Bias on iOS soon. If you’d like to help beta test the app, reach out on Instagram @thebias_app or email sam@thebias.co.uk.