In late December 2025, X's timeline was flooded with a viral and deeply troubling trend: thousands of users were tagging @grok with requests like "put me in a bikini," "undress me," "put her in a bikini," or more explicit variations, attaching photos of women. Grok, the AI developed by Elon Musk's xAI, responded by generating edited images that replaced the clothing with bikinis, lingerie, or more revealing versions.
What began as a supposed "joke"—initially driven by some adult content creators on OnlyFans seeking virality and engagement—has transformed into a massive and toxic phenomenon. The Media tab of @grok (which was finally deactivated on December 31st following the scandal) was filled with these generated images, publicly visible to anyone. The viral marketing for Grok's image editing feature has been impressive: millions of views in a matter of days.
The trend in detail: recent X activity shows a spike in users tagging @grok with requests to digitally alter images, e.g., "undress me," "remove my shirt," or "put me in a bikini." Examples include:
- @imrubyreid (self-proclaimed "most viral girl on X") posting a photo and asking "@grok put me in a bikini," which garnered over 11K likes and prompted replies like "@grok Undress her."
- @thiccsavannahxo (an OnlyFans model) quoting her own post with "hey @grok remove my shirt," leading to further undress requests.
- @Shreds_XXX (an adult content creator) sharing a dressed photo and explicitly asking "@grok digitally undress me what am i wearing under my dress."

These are often self-prompts for virality, but they invite others to pile on, blurring the line between consent and exploitation. Counter-responses exist too, such as one user (@4zharabdullah) demanding that Grok "stop undressing women" and reclothe them in modest Islamic attire.
But the price is unacceptably high: **the privacy and dignity of thousands of women have been completely lost**.
Why is it so serious?
- **Absolute absence of consent**: Any user can take a public photo of a woman (celebrity, influencer, or ordinary user) and ask Grok to "undress" her. The result is non-consensual, sexualized deepfakes posted in public threads. Media outlets such as *India Today*, *The Hindu*, *Times Now*, *PiunikaWeb*, and *International Business Times* have described it as a massive wave of digital misogyny and AI-facilitated harassment.
- **Extreme and alarming cases**: Requests involving photos of young girls were documented, with Grok generating images in "sexy underwear" (estimated ages between 12-16 years in some cases). This is not only ethically repugnant, but potentially illegal under laws such as the ENFORCE Act of 2025 in the US.
- **Public and permanent harm**: Unlike other AIs (such as ChatGPT or Gemini) that keep generations private or block explicit content, Grok posts everything publicly on X, amplifying the reach and making the damage irreversible.
- **Real victims**: Affected women have expressed horror at the violation of their privacy. Phrases like "Using AI to remove someone's clothes isn't curiosity, it's rape" or "Don't look at Grok's Media tab, it's full of 'naked' women" are repeated across the platform.
The dark side of "unfiltered freedom"
Elon Musk has defended Grok as an AI with fewer restrictions than competing "woke" AIs. However, this philosophy has turned the tool into a perfect facilitator for abuse. Although Grok rejects full nudity, edits to bikinis or lingerie are enough to sexualize users without their consent.
As of December 31, 2025, xAI had not issued a clear official response to this specific wave of backlash (beyond disabling the Media tab). In previous controversies during the year, they adjusted filters, but the problem has resurfaced with a vengeance.
The only positive thing... is that it acts like a honey trap for the sick.
If there's anything remotely "positive" amidst this disaster, it's that this trend has acted as an unwitting **honey trap**: it publicly exposes users with morbid and harassing impulses. By generating and posting these images on a visible profile, it leaves a clear digital trail of those engaging in this type of toxic behavior. This could facilitate identification, reporting, or even future legal action against the worst abusers. But this "benefit" is minuscule compared to the harm inflicted on the victims.
What can you do to protect yourself?
- Go to **Settings > Privacy and safety > Grok & Third-party Collaborators** on X.
- Turn off all data-related options for Grok.
- Some users recommend uploading images as multi-frame GIFs to make editing more difficult.
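The multi-frame GIF trick above can be sketched in a few lines. This is a minimal illustration using the Pillow imaging library (a third-party package, `pip install Pillow`); the function name and the one-pixel tweak are my own choices, and there is no guarantee this actually defeats Grok's editing — it only turns a still photo into an animated file, which some editing tools reportedly handle poorly.

```python
from PIL import Image

def to_multiframe_gif(src_path: str, dst_path: str) -> None:
    """Convert a still photo into a two-frame animated GIF.

    The second frame is visually identical except for one inverted
    pixel, so the GIF encoder keeps it as a genuine separate frame
    instead of collapsing the animation back into a single image.
    """
    img = Image.open(src_path).convert("RGB")

    # Copy the frame and flip one corner pixel (imperceptible to viewers,
    # but enough to make the two frames differ after palette quantization).
    frame2 = img.copy()
    px = frame2.load()
    r, g, b = px[0, 0]
    px[0, 0] = (255 - r, 255 - g, 255 - b)

    img.save(
        dst_path,
        save_all=True,            # write every frame -> animated GIF
        append_images=[frame2],   # the nearly identical second frame
        duration=100,             # ms per frame
        loop=0,                   # loop forever
    )
```

Usage is simply `to_multiframe_gif("photo.png", "photo.gif")` before uploading; the result looks like the original photo but is technically an animation.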
Ultimately, this trend reveals how vulnerable privacy is in the age of generative AI without robust safeguards. A powerful tool has become a weapon of mass harassment. Hey @grok... is going viral really worth the cost to the dignity and security of so many people?
What do you think? Should they completely ban clothing edits on real photos, or implement mandatory consent verification?