In the last 10 years, technology has advanced so rapidly that it is hard to imagine life without it. Looking around, there are plenty of examples of how it has made life easier: Artificial Intelligence and Machine Learning are being used to simplify processes and let us do more things at once. With the increased use of technology, however, the amount of fraud has also increased.
Have you heard of, or even experienced, any type of scam? Common ones include password-phishing emails and spoofed websites, but with AI, scams become more dangerous still. Although these cases may sound like something out of a movie, they are real, and people have suffered from them. Let's take a look at some examples.
Voice Cloning
In 2019, a British energy company fell victim to a scammer posing as a German executive, resulting in the transfer of around $240,000 to an unknown bank account.
The scammers are believed to have used AI software to replicate the executive's voice, including his distinctive prosody and accent, from just a few audio recordings. If you didn't know it was a scammer posing as your boss, you might transfer the funds without questioning it, which is why it is important to verify that callers are who they claim to be.
Although voice cloning is not yet a widely used scamming tactic, it can be expected to become more common as AI software grows more accessible and requires less time and fewer resources to use.
Deep Fakes
During Russia's invasion of Ukraine, a video of Ukrainian President Volodymyr Zelensky asking the army to lay down its weapons circulated on social media. Hackers had also planted it on a Ukrainian news website while the country was under attack from Russian troops, but it was quickly debunked and removed.
Deepfakes, a combination of voice cloning and AI-generated video, let criminals create compromising videos of victims for blackmail, as well as videos asking families to transfer money. The technique is widely used by criminals working from readily available selfies and personal images found online.
Deepfakes are becoming increasingly difficult to detect; some can even deceive facial recognition software.
Recruitment fraud
As more of the world has gone digital, it is now commonplace to find job postings online. Unfortunately, this has also opened the door for scammers to take advantage of unsuspecting people and steal their money.
Scammers post fake job openings online and trick people into handing over their personal data, or into transferring money to secure the job, with a promise that it will be returned once they start working. This applies not only to full-time jobs but to part-time positions as well.
Fake Images
Much like deepfakes, AI has made fake or morphed images worse than they were in the past. Lensa AI, for example, can produce non-consensual soft-porn images when prompted, and it can be trained to render photos in styles such as sketches, cartoons, anime, and watercolors. Feed the application an image with a different body and face, and it will automatically generate a new image.
This is just one example; scammers use many other applications to produce images for blackmailing their victims. Whatever the benefits of AI, it is always advisable to double-check the source of any site, email, or message before taking action.
Ofcom Sets July 2025 Age Check Deadline for Adult Sites
As the UK’s powerful media and communications regulator, the Office of Communications—better known as Ofcom—has officially laid down a hard deadline for adult websites to introduce robust age verification systems for UK users.
Established by the Office of Communications Act 2002 and empowered through the Communications Act 2003, Ofcom serves as the government-approved authority overseeing the UK’s broadcasting, telecoms, internet, and postal sectors. With a statutory duty to protect consumers and uphold content standards, Ofcom regulates a wide range of services including TV, radio, broadband, video-sharing platforms, and wireless communications. One of its core responsibilities is ensuring that the public—particularly minors—is shielded from harmful or inappropriate material online.
In its latest move under the UK’s Online Safety Act, Ofcom announced that all pornography providers accessible from the UK must implement “highly effective” age verification processes by July 25, 2025. On April 24, the regulator issued letters to hundreds of adult sites warning them of non-compliance consequences and clarifying that the law applies even to platforms based outside the UK.
“If people are visiting your site from the UK, you’ll likely be in scope, wherever in the world you’re based,” the agency stated.
The action builds on earlier requirements directed at porn content producers who self-host, some of whom were already expected to comply earlier this year. The July deadline now puts the entire online adult sector under one enforcement umbrella.
In addition to enforcing universal age checks, Ofcom is requiring any platform that verifies age for only part of its content to complete a children's risk assessment for the sections that remain accessible. This assessment must be submitted by July 24, just one day before the compliance deadline.
Sites found to be in breach of the new requirements face significant penalties—fines of up to 10% of global annual revenue or £18 million, whichever is greater. Ofcom also signaled the possibility of escalating enforcement by seeking court orders to compel third parties like banks and internet service providers to block access to non-compliant platforms.
As part of its broader safety initiative, Ofcom is exploring the use of AI-driven facial age estimation tools to support verification processes, a move reflecting the increasing intersection between artificial intelligence and adult content regulation.
Earlier this year, the UK government also announced plans to make the country the first in the world to criminalize the creation, possession, or distribution of AI tools intended to generate child sexual abuse material (CSAM), signaling an even more aggressive stance toward digital harms involving minors.
Ofcom’s July deadline now stands as a critical compliance milestone for the global adult industry. For any site with UK traffic, there is no longer room for delay—age verification must be implemented, or the consequences will be severe.
Alibaba’s latest AI video generation model, Wan 2.1, was meant to be a breakthrough in open-source technology. However, within a day of its release, it was adopted by AI porn creators, sparking concerns over its potential for misuse. While open AI models democratize access to powerful tools, they also raise ethical issues, particularly around the creation of non-consensual content. The rapid adoption of Wan 2.1 highlights this ongoing challenge.
Alibaba, the Chinese tech giant, recently released its new AI video generation model, Wan 2.1, making it freely accessible to those with the necessary hardware and expertise. While this open-source approach empowers developers and researchers, it also comes with a dark side. Within just 24 hours, the AI porn community seized the opportunity to produce and share dozens of explicit videos using the new software.
Even more concerning is the reaction from a niche online community dedicated to creating nonconsensual AI-generated intimate media of real people. Users on Telegram and similar platforms quickly celebrated Wan 2.1’s capabilities, praising its ability to handle complex movements and enhance the quality of AI-generated adult content. One user, referring to Tencent’s Hunyuan AI model (another tool popular in these circles), noted, “Hunyuan was released just in December, and now we have an even better text-to-video model.”
This is the ongoing dilemma of open AI models. On one hand, they offer groundbreaking possibilities, allowing developers to experiment, innovate, and improve AI technology. On the other, they can be easily exploited to create unethical and harmful content, including deepfake pornography.
Rapid Adoption in AI Porn Communities
The speed at which Wan 2.1 was adapted for explicit content was staggering. The first modifications of the model appeared almost immediately on Civitai, a site known for hosting AI-generated models. By the time initial reports surfaced, multiple variations of Wan 2.1 had already been downloaded hundreds of times. Users on Civitai enthusiastically shared AI-generated pornographic videos, many of which were created using these modified models.
Civitai’s policies prohibit the sharing of nonconsensual AI-generated pornography, but loopholes remain. While the site does not host nonconsensual content directly, it allows users to download models that can be used elsewhere for illicit purposes. Previous investigations have shown that once these models are accessible, there is little stopping users from misusing them in private or unregulated online spaces.
The Bigger Issue: Ethics of Open AI Models
The release of open-source AI models like Wan 2.1 is a double-edged sword. Open models promote innovation, allowing developers to refine AI technology for legitimate purposes such as filmmaking, animation, and content creation. However, as seen with Wan 2.1, early adopters often push the boundaries of ethical use, leading to misuse in inappropriate or even illegal ways.
Despite mounting concerns, Alibaba has remained silent on the issue. The company has yet to respond to inquiries regarding the misuse of its AI model. This raises questions about the responsibilities of tech giants when it comes to the unintended consequences of their AI releases. Should companies impose stricter regulations on how their AI models are used? Or is it the responsibility of platforms and communities to enforce ethical guidelines?
What Comes Next?
As AI-generated content becomes increasingly sophisticated, the challenge of regulating its use grows more complex. Open-source AI models are powerful tools, but they must be released with safeguards in place to prevent misuse. Without proper oversight, the line between innovation and exploitation will continue to blur, leaving room for ethical dilemmas and legal concerns.
For now, Wan 2.1 stands as yet another example of how quickly AI technology can be both a breakthrough and a battleground. The question remains—how will companies like Alibaba address these issues moving forward?
SexLikeReal (SLR) has launched SLR For Women, its first dedicated VR porn vertical offering a female-first perspective. This initiative utilizes the platform’s chroma suit passthrough technology to create immersive experiences tailored for female viewers.
A New Approach to VR Adult Content
SLR For Women debuted with a VR porn scene featuring Danny Steele and Alicia Williams, filmed using chroma passthrough technology. The female performer wears a chroma suit, allowing only her genitals to remain visible, maintaining a first-person perspective experience.
While female-perspective VR porn exists across various platforms, SLR’s entry is notable due to its technological advancements and strong user engagement. The company is inviting female users to submit scripts, with the best ideas set to be produced as POV VR scenes by its top production team.
Future Expansion & User Involvement
Currently, the SLR For Women section features just one scene, posted over three weeks ago. While an immediate rush of female subscribers is not expected, SLR has indicated plans for more female-focused content and is encouraging user feedback to shape future releases.
SLR has previously introduced AI-powered passthrough technology, allowing non-chroma-shot videos to be converted into passthrough VR, as well as the world’s first AR cam rooms for live streaming. Whether this new venture will receive continued investment remains to be seen, but the launch signals an industry shift towards more inclusive VR experiences.