Law enforcement agencies worldwide, including the FBI and Interpol, are expressing concern over Facebook and Instagram parent Meta’s plans to roll out expanded end-to-end encryption, warning it could effectively “blindfold” the company to incidents of child sex abuse. The Virtual Global Taskforce, a coalition of 15 law enforcement organizations tasked with protecting children from such crimes, singled out Meta in a joint statement urging tech companies to consider safety protocols when implementing end-to-end encryption.
“The announced implementation of [end-to-end encryption] on Meta platforms Instagram and Facebook is an example of a purposeful design choice that degrades safety systems and weakens the ability to safeguard child users,” the Virtual Global Taskforce said in a policy statement. The officials argue end-to-end encryption, while a sought-after privacy feature for secure communications, could make it more difficult for companies like Meta to identify criminal behavior occurring on their platforms.
“The VGT calls for all industry partners to fully understand the impact of implementing system design decisions that result in blinding themselves to [child sex abuse] occurring on their platforms, or reduce their capacity to detect CSA and keep kids secure,” the agencies added.
“The abuse will not stop just because companies decide to cease looking,” the statement continued. Meta has signaled plans to roll out end-to-end encryption for messages on all of its platforms, with one company executive stating the feature would be enabled by default “sometime in 2023.” Meta-owned WhatsApp already offers the feature by default, and earlier this year Meta published a blog post detailing expanded end-to-end encryption on its Messenger platform.
The Financial Times was first to report on the Virtual Global Taskforce’s statement. The outlet noted that UK lawmakers are currently working on an online safety bill that has drawn criticism from tech giants who allege it will hurt user privacy. The proposed legislation would empower the UK’s telecom regulator, the Office of Communications, to require companies to monitor some messages for instances of child abuse.
An open letter signed by various tech bosses, including WhatsApp chief Will Cathcart, argued the bill would “give an unelected official the power to weaken the privacy of billions of people around the world” by scrutinizing encrypted messages. Meta defended its safety practices in a statement obtained by the FT.
“The vast majority of Brits already rely on apps that use encryption. We don’t think people want us reading their private messages, so we have developed safety measures that prevent, detect and allow us to take action against this appalling abuse, while preserving online privacy and security,” a Meta spokesperson said in the statement.
“As we continue to roll out our end-to-end encryption plans, we remain dedicated to working with law enforcement and child safety experts to ensure that our platforms are safe for young people.” The Post has reached out to Meta for further comment. Meta has faced intense criticism from US legislators over its safety practices, with detractors arguing the tech giant hasn’t gone far enough to protect its underage users from harmful content and abuse.
As The Post reported earlier this month, online safety experts penned an open letter urging Meta CEO Mark Zuckerberg to abandon the company’s plans to let children and teen users access its new metaverse service “Horizon Worlds,” citing concerns about potential abuse. The Virtual Global Taskforce is an alliance of 15 law enforcement agencies from around the world, including the FBI, US Immigration and Customs Enforcement (ICE), Interpol, Europol, and the United Kingdom’s National Crime Agency, which chairs the group. The task force’s website describes it as “an international coalition of 15 committed law enforcement agencies collaborating to address the global threat from child sexual abuse.”
Apple rolls out UK age verification with iOS 26.4 after Meta and Google child safety fines
Apple has introduced age verification for iPhone and iPad users in the UK with iOS 26.4 and iPadOS 26.4, adding a new layer of checks for accounts that require confirmation that the user is 18 or older.
UK users may now be asked to verify their age by adding a credit card or scanning an ID, unless Apple has already confirmed that information. Apple says the process is required by law in some countries and regions for actions tied to an Apple Account, including downloading apps, changing certain settings, or accessing specific features. When verification is needed, a prompt appears in the Settings menu.
The rollout comes at a time when child safety rules are tightening across the UK. While current UK law does not specifically require device-level age verification, adult websites, including pornography platforms, are already expected to carry out age checks. That has led to wider discussion about whether verification should also happen at the device level, rather than only on individual sites.
The timing is especially notable because it follows a major child safety case involving Meta and Google. The companies were reportedly ordered to pay $6 million after a lawsuit in Los Angeles claimed that platforms including Facebook, WhatsApp, and YouTube had a serious impact on a young woman’s mental health.
Apple’s move may also reflect broader regulatory pressure. The UK government is reportedly considering stronger restrictions for under-16s on social media, similar to measures seen in Australia. Reports also indicate Apple has been working with Ofcom as these safety tools develop.
For users who cannot verify an adult identity, Apple suggests that some features may be limited or that the account may need to be placed under Family Sharing with a parent or guardian. The exact restrictions could vary depending on the situation.
Australia, meanwhile, has already begun enforcing stricter age-verification rules for online adult content, requiring platforms to take meaningful steps to stop under-18s from accessing pornography and other age-restricted material. The Age-Restricted Material Codes, covering social media, relevant electronic services, equipment providers, and designated internet services, came into effect on March 9, 2026.
Under the new framework, some services may now require proof of age before allowing access to legal adult content. Australia’s eSafety Commissioner says the accepted methods can vary by platform, but any age-assurance process must be accurate, reliable, and compliant with Australian privacy law. eSafety has said the changes are intended to reduce children’s exposure to pornography, high-impact violence, and other harmful age-inappropriate material online.
The rollout has already affected access to some major adult platforms in Australia, while debate continues over privacy risks and how effective the rules will be in practice. Recent reporting has also linked the changes to rising interest in VPN services as some users look for ways around the restrictions.
Apple: Age-Verification Tools Expand Worldwide With New 18+ Download Blocks
Apple is expanding its age-verification system in more countries to match stricter child-protection laws. The changes mainly affect how people download 18+ (adult-rated) apps and how developers confirm whether a user is a minor or an adult—without collecting sensitive personal details.
What’s changing for users
New 18+ download blocks: In Brazil, Australia, and Singapore, users must confirm they are 18 or older before downloading apps rated 18+.
Reduced access to adult content for minors: The blocks are meant to stop children from downloading adult-only apps through the App Store.
What’s changing for developers
Declared Age Range API (updated): Apple is updating an API that lets apps learn only an age category (for example, minor vs. adult), never the person’s exact age. A sketch of how an app might call it follows this list.
Developers do not receive private data, such as date of birth.
The app receives a simple “category signal” it can use to follow local rules.
Parental control options: For child accounts, parents/guardians can choose whether to share age information and whether permission is required in certain situations.
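As a rough illustration of the flow those bullets describe, here is a minimal Swift sketch of an app asking the Declared Age Range API for a category rather than a birth date. The framework name, environment key, and case names follow Apple’s public WWDC 2025 presentation of the API, but treat them as assumptions; exact signatures may differ across SDK versions.

```swift
import SwiftUI
import DeclaredAgeRange  // framework name per Apple's WWDC 2025 session; treat as an assumption

struct AgeGatedView: View {
    // SwiftUI exposes the age-range request as an environment action.
    @Environment(\.requestAgeRange) private var requestAgeRange
    @State private var isAdult = false

    var body: some View {
        Text(isAdult ? "Full catalog" : "Restricted catalog")
            .task {
                do {
                    // Ask only where the user falls relative to the age gates;
                    // the app never receives a date of birth.
                    let response = try await requestAgeRange(ageGates: 13, 16, 18)
                    switch response {
                    case .sharing(let range):
                        // A declared lower bound of 18 means "18 or older".
                        isAdult = (range.lowerBound ?? 0) >= 18
                    case .declinedSharing:
                        // The user (or a guardian) declined to share;
                        // fall back to the restricted experience.
                        isAdult = false
                    @unknown default:
                        isAdult = false
                    }
                } catch {
                    // Treat any failure as "unknown age" and stay restricted.
                    isAdult = false
                }
            }
    }
}
```

Note that the branch logic only ever sees a coarse range; that is the “category signal” the list above refers to.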
Loot boxes and “gambling-like” features
Apple is also targeting apps with features regulators often consider risky for minors, such as loot boxes.
In Brazil, if an app includes loot boxes, Apple may automatically rate it 18+.
That means minors can’t download it, because the App Store will treat it as adult-only.
U.S. states: Utah and Louisiana
Apple is adding tools to help apps comply with state-level child safety laws:
In Utah and Louisiana, Apple can share a new user’s age category with developers.
The system can also flag when parental permission is required, including for major app updates; a hypothetical sketch of how an app might consume these signals follows this list.
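The article doesn’t describe the developer-facing shape of these state-level signals, so the following is a purely hypothetical Swift sketch of the pattern it implies: the app receives a coarse age category plus a consent flag, and never a birth date. Every type and property name here is invented for illustration and is not Apple API.

```swift
import Foundation

// Hypothetical signal shapes; these names are invented for illustration
// and do not correspond to any real Apple API.
enum AgeCategory {
    case minor  // under 18
    case adult  // 18 or older
}

struct StateComplianceSignal {
    let category: AgeCategory          // coarse category only, no date of birth
    let parentalConsentRequired: Bool  // e.g. a new minor account in Utah or Louisiana
}

/// Decide what experience to enable from the coarse signal alone.
func configureExperience(for signal: StateComplianceSignal) {
    if signal.category == .minor && signal.parentalConsentRequired {
        // Pause onboarding until a guardian approves, as the state laws require.
        print("Waiting for guardian approval before activating the account")
        return
    }
    // Gate 18+ features on the category, not on identity details.
    let adultFeaturesEnabled = (signal.category == .adult)
    print("Adult-only features enabled: \(adultFeaturesEnabled)")
}
```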
Why Apple says it’s doing this
Apple’s message is: protect kids + respect privacy.
The App Store handles most of the verification.
Apps get only a yes/no type age signal (minor/adult), not personal identity details.
The goal is to comply with various laws without forcing developers to collect sensitive data.