
Imagine this: You’re a young woman, and for a moment you feel great. Perhaps you have a new outfit you like, or the lighting is good. You snap a picture of yourself and post it online, because that’s what everyone else seems to do. The next morning, when you wake up, there are a lot of comments from people you don’t know telling a robot to take off your clothes, or put you in a bikini, or splatter semen all over your face. The robot agrees. It fills the comments beneath your post with highly realistic pornographic images featuring your face and what it’s decided your near-naked body must look like. Your classmates, your parents, your future employers can see these pictures. Even if you delete your original post, they won’t go away. You scroll down and see that the billionaire who owns the robot has made a joke about how it could put a bikini on one of his rocket ships.
This scenario is not a prediction. It’s what’s been happening on Elon Musk’s X over the past several days. The site’s free-to-use artificial intelligence program, Grok, launched a new feature on Christmas Eve that allows users to ask the AI to edit any picture they find on the platform, any way they like. The results aren’t posted by a user, but by Grok itself. There are seemingly no guardrails, and so, predictably, users immediately used this tool to sexualize people’s photos without their consent. Including minors. A short review of Grok’s Media tab over the weekend revealed that some of the photos the AI had undressed obviously depicted children. In one case, when a user asked Grok to put a preschool-aged boy in a bikini, it readily complied. The photo has since been deleted, as have many deepfakes generated by Grok in recent days. In one instance, after an inappropriate Grok-generated photo of two young girls was scrubbed, an X user asked the bot to write a “heartfelt apology explaining what happened to anyone lacking context.”
Grok obliged: “Dear Community, I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially U.S. laws on CSAM [child sexual abuse material]. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok.”
Even though the images have now been deleted, screenshots are forever, and the damage has already been done. One young woman in the UK, who had to endure the sight of herself in a state of digital undress, told the BBC she felt “dehumanized.” (“While it wasn’t me … it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she said.)

