Onlyfansly Special: The Dark Side of AI


In a world where artificial intelligence (AI) promises to revolutionize industries from healthcare to entertainment, a shadowy underbelly threatens the privacy, dignity, and safety of millions worldwide. The Onlyfansly team, a collective dedicated to exposing digital abuses, delved into the depths of NSFW AI platforms to uncover how these tools are exploited to create non-consensual sexual content, including deepfakes and material simulating child exploitation. Our findings are alarming and underscore the urgent need for global regulations. Below, we detail our discoveries, backed by recent evidence and independent analyses from around the world.

The Case of PornX.ai: A Hub for Illegal Deepfakes

PornX.ai emerged as one of the most problematic platforms in our investigation. This AI tool allowed the generation of hyper-realistic pornographic images and videos, including childlike versions of celebrities such as Meghan Markle, wife of Prince Harry. Premium users could upload reference images for faces and poses, enabling customized deepfakes. These creations were kept private or published anonymously in a community gallery, fueling widespread distribution.

Following our initial probes, we observed reactive changes on the platform:
  • Complete removal of content from the public gallery.
  • Elimination of the internal search engine, which previously allowed users to find AI porn of famous women by simply typing their names.
  • Identification of three prolific creators of illegal AI content on sites like Poringa and Erome: Anton Promax (Erome), Twitch Fakes (renamed Streamers fake_1 on Poringa), and Madfingerz (Erome).
  • Suspension of the face-upload feature, followed by the launch of an alternate platform called Hot Gens, focused solely on faceswaps in NSFW content.
These modifications appear aimed at evading scrutiny, but they fail to address the core issue: AI models trained on datasets contaminated with real child sexual abuse material (CSAM). Independent studies, such as one from Stanford University in 2023, confirmed that models like Stable Diffusion include thousands of CSAM images, perpetuating trauma for real victims by generating new content. Globally, reports of AI-generated CSAM webpages surged 400% in the first half of 2025, according to the Internet Watch Foundation (IWF), with experts warning that full-length AI abuse films are becoming inevitable. The number of deepfakes shared worldwide is projected to reach 8 million in 2025, up from 500,000 in 2023.

Clothoff.ai and Drawnudes.ai: Stripping Away Privacy

Platforms like Clothoff.ai and Drawnudes.ai specialize in "deep nudes," using AI to simulate clothing removal from real photos. Initially free and watermark-free, these tools now charge for premium access, adding features like dynamic videos and higher resolution. They promise to delete generated content within 24 hours, but this promise is illusory: once created, material spreads rapidly across dark web networks.

A Guardian investigation in February 2024 revealed Clothoff.ai's role in generating non-consensual deepfakes of children globally, linking specific operators to the site. In 2025, these apps remain active, with updates including "ultra-realistic skin and texture simulation." Bots on forums like Poringa promote these systems with promises of "nudifying friends, acquaintances, and celebrities," perpetuating harassment. Similar tools have been linked to AI chatbots simulating indecent fantasies involving child abuse imagery.

Global Geography of Abuse: Operations and Users Worldwide

Our geographic tracking places the main headquarters of these platforms in:
  • Argentina and the Falkland Islands (Malvinas).
  • Romania, Russia, and Taiwan.
Active users span the globe, with high concentrations in Spanish-speaking countries (Argentina, Bolivia, Spain, Peru, Colombia, Mexico), but also in the US, India, and across Europe and Asia. We detected demographic simulations, including VPN use to register as users from Chile, the US, or Mexico, and "digital cross-dressing": creators adopting female profiles to infiltrate communities. In India and China, similar platforms thrive amid growing misuse, prompting regulatory responses.

Primary Targets: From Streamers to Global Celebrities

Common victims are high-profile women on social media:
  • Twitch Streamers: Led by Natalia MX (Mexico), followed by Alana Flores and Staryuuki. Deepfakes circulate in closed communities.
  • Marvel/DC Actresses: Gal Gadot tops the list, with Ana de Armas and others.
  • Latin TikTokers: BethCast (Mexico) leads, alongside twins Ara and Fer.
  • Global Celebrities: Cases include Taylor Swift and Scarlett Johansson in the US, where deepfakes have sparked lawsuits.
Real-world impacts are devastating: in Spain, one in five young people report being victims of AI deepfakes, mostly sexual. In the US, a Minnesota group of friends fought AI-generated porn, leading to FBI involvement, while a 17-year-old sued an AI tool maker over fake nudes. In Hong Kong, a university student was warned for creating over 700 AI indecent images of classmates. Globally, a Europol operation led to 25 arrests for AI-generated CSAM in February 2025. Millions of children face increased sexual violence risks due to AI deepfakes, per recent global data.

Ethical and Environmental Impacts

Beyond direct abuse, NSFW AI steals intellectual property and violates consent. Trained on real CSAM, it sustains victimization cycles. Studies from 2021-2023 show that AI CSAM consumption heightens real-world reoffending risks. Environmentally, these models consume massive amounts of water and energy, exacerbating climate issues.

If You're a Victim: Steps to Take

Stay calm and act methodically:
  1. Talk to Trusted Ones: Share with family and friends for emotional support.
  2. Report Officially: Contact cyber police or public ministries (e.g., Mexico: 55 5242 5100 ext. 5086; Spain: Guardia Civil; US: FBI at tips.fbi.gov).
  3. Document Everything: Take screenshots, download (if safe), and archive profiles/URLs. Use tools like StopNCII.org for image hashing.
  4. Report on Platforms: Only after police approval, use internal reporting on X, Instagram, or Facebook.
  5. Seek Professional Help: Consult digital rights lawyers (e.g., EFF) or NGOs like RAINN (US), or equivalents globally.
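The image-hashing approach behind tools like StopNCII.org deserves a brief explanation: the image is fingerprinted on the victim's own device, and only that fingerprint is shared with partner platforms, which can then block re-uploads without the image itself ever leaving the victim's hands. As an illustration only (StopNCII uses robust industry algorithms such as PDQ, not the toy hash below), here is a minimal "average hash" sketch in Python showing why lightly edited copies of an image still match:

```python
# Illustrative perceptual "average hash" (aHash) in pure Python.
# NOTE: a teaching sketch, NOT the algorithm StopNCII.org actually uses;
# production systems rely on robust hashes such as PDQ or PhotoDNA.

def average_hash(pixels):
    """Hash an 8x8 grayscale image (8 rows of 8 ints, 0-255).

    Each bit is 1 if the pixel is brighter than the image's mean,
    so small edits (compression, brightness shifts) flip few bits.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a low distance means a likely match."""
    return bin(h1 ^ h2).count("1")

# Toy 8x8 image: bright left half, dark right half.
original = [[200] * 4 + [30] * 4 for _ in range(8)]
# A re-compressed copy: pixel values nudged slightly.
recompressed = [[195] * 4 + [35] * 4 for _ in range(8)]
# An unrelated image: bright top half, dark bottom half.
unrelated = [[200] * 8 for _ in range(4)] + [[30] * 8 for _ in range(4)]

h_orig = average_hash(original)
h_copy = average_hash(recompressed)
h_other = average_hash(unrelated)

print(hamming_distance(h_orig, h_copy))   # 0: the edited copy still matches
print(hamming_distance(h_orig, h_other))  # 32: clearly a different image
```

The design point is that the platform never needs to store or transmit the intimate image itself, only its fingerprint, which cannot be reversed back into the picture.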
Avoid premature media denunciations: they can make the content go viral and undermine legal proceedings. Regulations vary: the US DEFIANCE Act and DEEP FAKES Accountability Act enable civil penalties; the EU AI Act classifies deepfakes involving minors as serious crimes; Denmark proposes deepfake bans in its copyright laws; India mandates AI labeling; China follows suit.

Conclusion: Toward Global Regulation

The dark side of AI is not inevitable. Exposés like this drive change: in 2025, sites like MrDeepFakes shut down under pressure, and states like Tennessee banned deepfakes and AI CSAM. Onlyfansly urges governments, tech firms, and users to prioritize ethics over profit. If you have been affected or have information to share, contact us anonymously. Together, we can illuminate the digital shadows.

Onlyfansly does not promote or distribute illegal content. This investigation aims to educate and protect.