Stanford University report uncovers networks of minors promoting self-generated child sexual abuse images.
Meta has started a task force to investigate how its photo-sharing app Instagram facilitates the spread and sale of child sexual abuse material.
The new effort by the Facebook parent company follows a report from the Stanford Internet Observatory, which found large networks of accounts, apparently operated by minors, openly advertising self-generated child sexual abuse material for sale.
Buyers and sellers of self-generated child sexual abuse material connected through Instagram’s direct messaging feature, and the platform’s recommendation algorithms made advertisements for the illicit material more effective, the researchers found.
“Due to the widespread use of hashtags, relatively long life of seller accounts and, especially, the effective recommendation algorithm, Instagram serves as the key discovery mechanism for this specific community of buyers and sellers,” the researchers wrote.
The findings offer more insight into how internet companies have struggled for years to find and prevent sexually explicit images that violate their rules from spreading on their platforms. Experts have highlighted how intimate image abuse, or so-called revenge porn, rose sharply during the pandemic, prompting tech companies, porn sites, and civil society groups to bolster their moderation tools.
The Stanford researchers said the seller network comprises between 500 and 1,000 accounts at any given time. They began their investigation after a tip from the Wall Street Journal, which first reported the findings.
Meta said it has strict policies and technology to prevent predators from finding and interacting with teens. In addition to the task force, the company said it had dismantled 27 abusive networks between 2020 and 2022 and, in January, disabled more than 490,000 accounts for violating its child safety policies.
“Child exploitation is a horrific crime,” Meta spokesman Andy Stone said in a statement. “We work aggressively to fight it on and off our platforms and to support law enforcement in its efforts to arrest and prosecute the criminals behind it.”
While Instagram is a central player in facilitating the spread and sale of child-sexualized imagery, other tech platforms also play a role, the report found. Accounts promoting self-generated child sexual abuse material were also prevalent on Twitter, for instance, although that platform appears to be taking them down more aggressively.
Some of the Instagram accounts also advertised links to groups on Telegram and Discord, some of which appeared to be managed by individual sellers, the report found.
Ofcom Sets July 2025 Age Check Deadline for Adult Sites
As the UK’s powerful media and communications regulator, the Office of Communications—better known as Ofcom—has officially laid down a hard deadline for adult websites to introduce robust age verification systems for UK users.
Established by the Office of Communications Act 2002 and empowered through the Communications Act 2003, Ofcom serves as the government-approved authority overseeing the UK’s broadcasting, telecoms, internet, and postal sectors. With a statutory duty to protect consumers and uphold content standards, Ofcom regulates a wide range of services including TV, radio, broadband, video-sharing platforms, and wireless communications. One of its core responsibilities is ensuring that the public—particularly minors—is shielded from harmful or inappropriate material online.
In its latest move under the UK’s Online Safety Act, Ofcom announced that all pornography providers accessible from the UK must implement “highly effective” age verification processes by July 25, 2025. On April 24, the regulator issued letters to hundreds of adult sites warning them of non-compliance consequences and clarifying that the law applies even to platforms based outside the UK.
“If people are visiting your site from the UK, you’ll likely be in scope, wherever in the world you’re based,” the agency stated.
The action builds on earlier requirements directed at porn content producers who self-host, some of whom were already expected to comply earlier this year. The July deadline now puts the entire online adult sector under one enforcement umbrella.
In addition to enforcing universal age checks, Ofcom is requiring any platform that verifies age for only part of its content to complete a children’s risk assessment covering the sections that remain accessible. That assessment is due by July 24, just one day before the compliance deadline.
Sites found to be in breach of the new requirements face significant penalties—fines of up to 10% of global annual revenue or £18 million, whichever is greater. Ofcom also signaled the possibility of escalating enforcement by seeking court orders to compel third parties like banks and internet service providers to block access to non-compliant platforms.
As part of its broader safety initiative, Ofcom is exploring the use of AI-driven facial age estimation tools to support verification processes, a move reflecting the increasing intersection between artificial intelligence and adult content regulation.
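Facial age estimation of this kind typically runs a visitor’s selfie through a vision model and gates access on the predicted age. Below is a minimal sketch of what that might look like, using the open-source deepface library; the 18/25 thresholds and the “escalate” fallback are illustrative assumptions, not anything Ofcom has specified.

```python
# Illustrative sketch of facial age estimation as an access gate.
# Uses the open-source "deepface" library; the thresholds and the
# "escalate" fallback are hypothetical policy choices, not Ofcom rules.
from deepface import DeepFace

def age_gate(selfie_path: str, min_age: int = 18, challenge_age: int = 25) -> str:
    """Return 'allow', 'deny', or 'escalate' from the estimated facial age."""
    faces = DeepFace.analyze(img_path=selfie_path, actions=["age"])
    estimated_age = faces[0]["age"]  # analyze() returns one dict per detected face

    if estimated_age >= challenge_age:
        return "allow"     # comfortably above the legal threshold
    if estimated_age < min_age:
        return "deny"      # model estimates the visitor is underage
    return "escalate"      # borderline estimate: fall back to document checks

print(age_gate("visitor_selfie.jpg"))
```

The buffer between the legal minimum and the “challenge” age reflects a common design choice in such systems: estimation error is highest near the boundary, so borderline visitors are routed to stronger checks rather than being waved through.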
Earlier this year, the UK government also announced plans to make the country the first in the world to criminalize the creation, possession, or distribution of AI tools intended to generate child sexual abuse material (CSAM), signaling an even more aggressive stance toward digital harms involving minors.
Ofcom’s July deadline now stands as a critical compliance milestone for the global adult industry. For any site with UK traffic, there is no longer room for delay—age verification must be implemented, or the consequences will be severe.
Alibaba’s latest AI video generation model, Wan 2.1, was meant to be a breakthrough in open-source technology. However, within a day of its release, it was adopted by AI porn creators, sparking concerns over its potential for misuse. While open AI models democratize access to powerful tools, they also raise ethical issues, particularly in the creation of non-consensual content. The rapid adoption of Wan 2.1 highlights this ongoing challenge.
Alibaba, the Chinese tech giant, recently released its new AI video generation model, Wan 2.1, making it freely accessible to those with the necessary hardware and expertise. While this open-source approach empowers developers and researchers, it also comes with a dark side. Within just 24 hours, the AI porn community seized the opportunity to produce and share dozens of explicit videos using the new software.
Even more concerning is the reaction from a niche online community dedicated to creating nonconsensual AI-generated intimate media of real people. Users on Telegram and similar platforms quickly celebrated Wan 2.1’s capabilities, praising its ability to handle complex movements and enhance the quality of AI-generated adult content. One user, referring to Tencent’s Hunyuan AI model (another tool popular in these circles), noted, “Hunyuan was released just in December, and now we have an even better text-to-video model.”
This is the ongoing dilemma of open AI models. On one hand, they offer groundbreaking possibilities, allowing developers to experiment, innovate, and improve AI technology. On the other, they can be easily exploited to create unethical and harmful content, including deepfake pornography.
Rapid Adoption in AI Porn Communities
The speed at which Wan 2.1 was adapted for explicit content was staggering. The first modifications of the model appeared almost immediately on Civitai, a site known for hosting AI-generated models. By the time initial reports surfaced, multiple variations of Wan 2.1 had already been downloaded hundreds of times. Users on Civitai enthusiastically shared AI-generated pornographic videos, many of which were created using these modified models.
Civitai’s policies prohibit the sharing of nonconsensual AI-generated pornography, but loopholes remain. While the site does not host nonconsensual content directly, it allows users to download models that can be used elsewhere for illicit purposes. Previous investigations have shown that once these models are accessible, there is little stopping users from misusing them in private or unregulated online spaces.
The Bigger Issue: Ethics of Open AI Models
The release of open-source AI models like Wan 2.1 is a double-edged sword. Open models promote innovation, allowing developers to refine AI technology for legitimate purposes such as filmmaking, animation, and content creation. However, as seen with Wan 2.1, early adopters often push the boundaries of ethical use, leading to misuse in inappropriate or even illegal ways.
Despite mounting concerns, Alibaba has remained silent on the issue. The company has yet to respond to inquiries regarding the misuse of its AI model. This raises questions about the responsibilities of tech giants when it comes to the unintended consequences of their AI releases. Should companies impose stricter regulations on how their AI models are used? Or is it the responsibility of platforms and communities to enforce ethical guidelines?
What Comes Next?
As AI-generated content becomes increasingly sophisticated, the challenge of regulating its use grows more complex. Open-source AI models are powerful tools, but they must be released with safeguards in place to prevent misuse. Without proper oversight, the line between innovation and exploitation will continue to blur, leaving room for ethical dilemmas and legal concerns.
For now, Wan 2.1 stands as yet another example of how quickly AI technology can be both a breakthrough and a battleground. The question remains—how will companies like Alibaba address these issues moving forward?
SexLikeReal (SLR) has launched SLR For Women, its first dedicated VR porn vertical offering a female-first perspective. This initiative utilizes the platform’s chroma suit passthrough technology to create immersive experiences tailored for female viewers.
A New Approach to VR Adult Content
SLR For Women debuted with a VR porn scene featuring Danny Steele and Alicia Williams, filmed using chroma passthrough technology: the female performer wears a chroma suit that leaves only her genitals visible, preserving the first-person perspective.
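Chroma passthrough rests on standard chroma keying: pixels matching the suit’s color are masked out and replaced, in this case by the headset’s camera feed. Here is a minimal sketch of that keying step in Python with OpenCV, assuming a green suit and hand-tuned HSV bounds; SLR’s actual pipeline is unpublished.

```python
# Minimal chroma-key sketch: detect a (hypothetical) green chroma suit and
# swap those pixels for the headset's passthrough camera feed. The HSV
# bounds are hand-tuned assumptions; production keyers are more elaborate.
import cv2
import numpy as np

def key_out_suit(frame_bgr: np.ndarray, passthrough_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Pixels inside this range are treated as chroma suit.
    suit_mask = cv2.inRange(hsv, np.array([40, 80, 80]), np.array([80, 255, 255]))

    # Feather the mask edge to reduce green fringing around the keyed region.
    suit_mask = cv2.GaussianBlur(suit_mask, (5, 5), 0)
    alpha = suit_mask.astype(np.float32)[..., None] / 255.0

    # Blend: keep the performer, show passthrough where the suit was.
    return (frame_bgr * (1.0 - alpha) + passthrough_bgr * alpha).astype(np.uint8)
```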
While female-perspective VR porn exists across various platforms, SLR’s entry is notable due to its technological advancements and strong user engagement. The company is inviting female users to submit scripts, with the best ideas set to be produced as POV VR scenes by its top production team.
Future Expansion & User Involvement
Currently, the SLR For Women section features just one scene, posted more than three weeks ago. While SLR does not expect a rush of female subscribers yet, it has indicated plans for more female-focused content and is encouraging user feedback to shape future releases.
SLR has previously introduced AI-powered passthrough technology, allowing non-chroma-shot videos to be converted into passthrough VR, as well as the world’s first AR cam rooms for live streaming. Whether this new venture will receive continued investment remains to be seen, but the launch signals an industry shift towards more inclusive VR experiences.
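Conceptually, the AI-powered variant replaces the chroma suit with a segmentation model that predicts the alpha matte directly from ordinary footage. The sketch below uses the open-source rembg library purely as a stand-in for that segmentation step; SLR has not disclosed which models power its pipeline.

```python
# Sketch of AI passthrough on footage shot without a chroma suit: a
# person-segmentation model supplies the alpha matte that a chroma key
# would otherwise provide. "rembg" is used purely as an illustration;
# SLR has not disclosed which models power its pipeline.
from PIL import Image
from rembg import remove

frame = Image.open("vr_frame.png")  # one frame of ordinary, non-chroma footage
matted = remove(frame)              # RGBA output: background becomes transparent

# At playback, the transparent region is filled with the headset's camera
# feed, so the performers appear inside the viewer's real room.
matted.save("vr_frame_passthrough.png")
```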