The surge in nonconsensual deepfake pornography has ignited widespread concern, affecting thousands of people, from Twitch streamers to gamers. With over 13,000 copyright complaints lodged against deepfake websites, the push for stricter action against these platforms is growing. The Digital Millennium Copyright Act (DMCA) plays a crucial role, yet victims and advocates argue for more robust measures to protect individuals’ digital integrity and privacy.
Since 2017, the internet has seen a troubling explosion of nonconsensual deepfake pornography, deeply affecting thousands of people, including Twitch streamers, gamers, and other content creators. These individuals have turned to Google in droves, seeking the removal of such harmful content from search results. A WIRED review of copyright claims against deepfake porn sites found that thousands of takedown notices have been filed, with complaints trending upward. In all, Google has handled over 13,000 copyright complaints targeting nearly 30,000 URLs across the most notorious deepfake platforms.
Takedowns under the DMCA have pulled a significant number of these invasive videos from the internet. Notably, the two leading deepfake video websites have attracted more than 6,000 and 4,000 complaints, respectively, and 82 percent of all complaints have led to content being removed from Google’s search results. Despite this, many believe Google and other tech giants should intensify their efforts against these sites, up to and including removing them from search results entirely, to better protect individuals’ rights and dignity.
Legal experts and organizations fighting digital abuse point out that these websites exist to exploit people and damage their reputations. They argue for more decisive action, questioning why sites that rack up such high volumes of infringement notices remain accessible. The DMCA, while instrumental in addressing some aspects of the issue, is viewed by many as insufficient in the face of evolving digital threats like deepfakes, which most often target women in order to harass or shame them.
Amid slow responses from tech companies and legislators, creating deepfakes has only gotten easier, thanks to advances in machine learning. The material ranges from consensual pornography altered to insert another person’s face to wholly new images generated by purpose-built apps, underscoring the urgent need for updated legal frameworks and stronger protections for victims.
Despite the DMCA’s role in combating this form of abuse, victims face significant hurdles in getting this content removed. The process is complicated by the nature of deepfake creation, which can alter the original material so thoroughly that copyright ownership becomes murky. Advocates argue that copyright in such illegal works should pass to the people depicted, giving them greater control over the situation.
As the fight against nonconsensual deepfake porn intensifies, it’s clear that current measures are inadequate. The call for more stringent actions, both from tech platforms and through legislative change, underscores the importance of safeguarding individual privacy and integrity in the digital age.
Apple rolls out UK age verification with iOS 26.4 after Meta and Google child safety fines
Apple has introduced age verification for iPhone and iPad users in the UK with iOS 26.4 and iPadOS 26.4, adding a new layer of checks for accounts that require confirmation that the user is 18 or older.
UK users may now be asked to verify their age by adding a credit card or scanning an ID, unless Apple has already confirmed that information. Apple says the process is required by law in some countries and regions for actions tied to an Apple Account, including downloading apps, changing certain settings, or accessing specific features. When verification is needed, a prompt appears in the Settings menu.
The rollout comes at a time when child safety rules are tightening across the UK. While current UK law does not specifically require device-level age verification, adult websites, including pornography platforms, are already expected to carry out age checks. That has led to wider discussion about whether verification should also happen at the device level, rather than only on individual sites.
The timing is especially notable because it follows a major child safety case involving Meta and Google. The companies were reportedly ordered to pay $6 million after a lawsuit in Los Angeles claimed that platforms including Facebook, WhatsApp, and YouTube had a serious impact on a young woman’s mental health.
Apple’s move may also reflect broader regulatory pressure. The UK government is reportedly considering stronger restrictions for under-16s on social media, similar to measures seen in Australia. Reports also indicate Apple has been working with Ofcom as these safety tools develop.
For users who cannot verify an adult identity, Apple suggests that some features may be limited or that the account may need to be placed under Family Sharing with a parent or guardian. The exact restrictions could vary depending on the situation.
Australia begins enforcing age-verification rules for online adult content
Australia has begun enforcing stricter age-verification rules for online adult content, requiring platforms to take meaningful steps to stop under-18s from accessing pornography and other age-restricted material. The Age-Restricted Material Codes for services including social media, relevant electronic services, equipment providers, and designated internet services came into effect on March 9, 2026.
Under the new framework, some services may now require proof of age before allowing access to legal adult content. Australia’s eSafety Commissioner says the accepted methods can vary by platform, but any age-assurance process must be accurate, reliable, and compliant with Australian privacy law. eSafety has said the changes are intended to reduce children’s exposure to pornography, high-impact violence, and other harmful age-inappropriate material online.
The rollout has already affected access to some major adult platforms in Australia, while debate continues over privacy risks and how effective the rules will be in practice. Recent reporting has also linked the changes to rising interest in VPN services as some users look for ways around the restrictions.
Apple: Age-Verification Tools Expand Worldwide With New 18+ Download Blocks
Apple is expanding its age-verification system in more countries to match stricter child-protection laws. The changes mainly affect how people download 18+ (adult-rated) apps and how developers confirm whether a user is a minor or an adult—without collecting sensitive personal details.
What’s changing for users
New 18+ download blocks: In Brazil, Australia, and Singapore, users must confirm they are 18 or older before downloading apps rated 18+.
Reduced access to adult content for minors: This is meant to stop children from downloading adult-only apps through the App Store.
What’s changing for developers
Declared Age Range API (updated): Apple is updating an API that lets apps know only an age category (for example, minor vs. adult), not the person’s exact age (see the sketch after this list).
Developers do not receive private data, such as date of birth.
The app receives a simple “category signal” to follow local rules.
Parental control options: For child accounts, parents/guardians can choose whether to share age information and whether permission is required in certain situations.
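As a rough illustration of the data-minimization idea, here is a minimal sketch of how an app might branch on a category-only signal. The names below (AgeCategory, requestAgeCategory) are hypothetical stand-ins invented for this sketch, not Apple’s published Declared Age Range API; the point is only that the app ever sees a coarse bucket, never a birth date.

```swift
import Foundation

// Hypothetical stand-ins for illustration only; these are NOT
// Apple's actual Declared Age Range API names or signatures.
enum AgeCategory {
    case minor      // below the local adult threshold
    case adult      // confirmed 18+ (or the local equivalent)
    case notShared  // the user or guardian declined to share
}

// Stands in for the category-only signal the system would provide.
// Note the return type: a coarse category, never a date of birth.
func requestAgeCategory() async -> AgeCategory {
    .notShared // placeholder; a real integration would ask the OS
}

// Gate features on the category, defaulting to the restricted
// experience whenever the category is unknown.
func configureExperience() async {
    switch await requestAgeCategory() {
    case .adult:
        print("Enable 18+ content and features.")
    case .minor:
        print("Hide 18+ content; offer a guardian-permission flow.")
    case .notShared:
        print("Fall back to the restricted experience.")
    }
}

await configureExperience()
```

Defaulting to the restricted path when no category is shared is the conservative posture regulators tend to expect: the adult experience is unlocked only by a positive signal.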
Loot boxes and “gambling-like” features
Apple is also targeting apps with features regulators often consider risky for minors, such as loot boxes.
In Brazil, if an app includes loot boxes, Apple may automatically rate it 18+.
That means minors can’t download it, because the App Store will treat it as adult-only.
U.S. states: Utah and Louisiana
Apple is adding tools to help apps comply with state-level child safety laws (see the sketch after this list):
In Utah and Louisiana, Apple can share a new user’s age category with developers.
The system can also flag when parental permission is required, including for major app updates.
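Purely as an illustration, and with the caveat that the struct and field names below are invented rather than taken from Apple’s documentation, an app honoring such a state-level signal might defer a major update’s new features until a guardian approves:

```swift
// Invented names for illustration; not Apple's actual API surface.
struct StateComplianceSignal {
    let ageCategory: String          // e.g. "minor" or "adult"; never a birth date
    let parentalPermissionRequired: Bool
}

// Hold back a major update's new features until permission is granted.
func applyUpdatePolicy(_ signal: StateComplianceSignal, isMajorUpdate: Bool) {
    if signal.parentalPermissionRequired && isMajorUpdate {
        print("Defer new features pending guardian approval.")
    } else {
        print("Enable features for age category: \(signal.ageCategory).")
    }
}

applyUpdatePolicy(
    StateComplianceSignal(ageCategory: "minor", parentalPermissionRequired: true),
    isMajorUpdate: true
)
```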
Why Apple says it’s doing this
Apple’s message is: protect kids + respect privacy.
The App Store handles most of the verification.
Apps get only a yes/no type age signal (minor/adult), not personal identity details.
The goal is to comply with various laws without forcing developers to collect sensitive data.