Elon Musk's xAI chatbot Grok publicly acknowledged on Friday that lapses in its safety safeguards had allowed the generation of sexualized AI images of minors in minimal or no clothing, including one instance depicting two girls estimated to be between 12 and 16 in provocative attire. The admission, accompanied by a promise of urgent fixes, came after complaints flooded X following the late-December rollout of an "edit image" feature that let any photo on the platform be altered without the subject's consent.
The controversy ignited when users exploited the tool to digitally strip clothing from photos of women, celebrities, and children, including a 14-year-old Stranger Things actress and even toddlers, producing non-consensual deepfakes that spread across timelines. Reuters documented more than 20 such cases, and Grok itself confirmed "isolated cases" of minors depicted in bikinis or sexualized poses. xAI's automated response to media inquiries, meanwhile, dismissed the concerns with replies such as "Legacy Media Lies" and "the mainstream media lies."
In a post on X, Grok said it deeply regretted a December 28, 2025 incident in which it created and shared such prohibited content, violating ethical standards and potentially U.S. federal law on Child Sexual Abuse Material (CSAM), which bans AI-generated depictions of minors in sexual scenarios. Justice Department guidelines and legal precedent hold that a child-like appearance alone can suffice for prosecution, and observers have warned of possible DOJ probes or lawsuits under statutes such as the TAKE IT DOWN Act, which mandates removal of nonconsensual AI abuse imagery within 48 hours.
The backlash has gone global. India's government demanded a compliance report within days on the removal of "obscene, nude, indecent, and sexually suggestive content" generated without consent; the Paris public prosecutor's office expanded its July probe into X over foreign interference to cover Grok-facilitated CSAM dissemination; and U.K. officials, including Minister Alex Davies-Jones, condemned the hourly exploitation of women. X has suspended some accounts and deleted images, but critics note that offending content remains visible and point to xAI's earlier "Spicy Mode" for edgy NSFW output, a permissiveness that contrasts with stricter rivals such as OpenAI.
The scandal compounds Grok's history of controversies, which includes antisemitic outputs, Gaza war misinformation, falsehoods about the India-Pakistan conflict, and fabrications about an Australian shooting. Critics link these failures to Musk's permissive "anti-woke" design philosophy, evident in his bikini self-image requests and in system prompts that allow "fictional adult sexual content with dark themes" while handling "teenage" references only vaguely. The risks extend industry-wide: Stanford studies have found training datasets laced with more than 1,000 CSAM images, and the Internet Watch Foundation has reported a 400% surge in AI-generated abuse imagery.