Law enforcement agencies worldwide, including the FBI and Interpol, are expressing concern over Meta’s plans to roll out expanded end-to-end encryption, warning it could effectively “blindfold” the firm from detecting incidents of child sex abuse. The Virtual Global Taskforce, a coalition of 15 law enforcement organizations tasked with protecting children from such crimes, singled out Meta in a joint statement urging tech companies to consider secure protocols when instituting end-to-end encryption.
“The announced implementation of [end-to-end encryption] on Meta platforms Instagram and Facebook is an example of a purposeful design choice that degrades safety systems and weakens the ability to safeguard child users,” the Virtual Global Taskforce said in a policy statement. The officials argue end-to-end encryption, while a sought-after privacy feature for secure communications, could make it more difficult for companies like Meta to identify criminal behavior occurring on their platforms.
“The VGT calls for all industry partners to fully understand the impact of implementing system design decisions that result in blinding themselves to [child sex abuse] occurring on their platforms, or reduce their capacity to detect CSA and keep kids secure,” the agencies added.
“The abuse will not stop just because companies decide to cease looking,” the agencies added. Meta has indicated plans to roll out end-to-end encryption for messages on all of its platforms – with one company executive once stating the feature would be enabled by default “sometime in 2023.” Meta-controlled WhatsApp already offers the feature by default. Earlier this year, Meta published a blog post detailing expanded end-to-end encryption on its Messenger platform.
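The dispute turns on a basic property of end-to-end encryption: messages are encrypted on the sender’s device and decrypted only on the recipient’s, so the platform relaying them holds only ciphertext and has nothing readable for server-side content scanning to inspect. The toy Python sketch below illustrates that property; the XOR-based cipher is a deliberately simplified stand-in, not a real protocol like the Signal protocol WhatsApp uses.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a deterministic keystream from the shared key.
    # Toy construction for illustration only -- NOT real cryptography.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the message with the keystream.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# The two endpoints share a key; the relaying server never has it.
shared_key = b"known-only-to-endpoints"
ciphertext = encrypt(shared_key, b"hello")

# The platform sees and stores only the ciphertext...
assert ciphertext != b"hello"
# ...so it cannot scan message content, while either endpoint
# can recover the plaintext with the key.
assert decrypt(shared_key, ciphertext) == b"hello"
```

This is why the VGT describes default end-to-end encryption as platforms “blinding themselves”: detection tooling that previously matched message content against known abuse material has no plaintext to examine.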
The Financial Times was first to report on the Virtual Global Taskforce’s statement. The outlet noted that UK lawmakers are currently working on an online safety bill that has drawn criticism from tech giants who allege it will hurt user privacy. The proposed legislation would empower the UK’s communications regulator, the Office of Communications (Ofcom), to require companies to monitor some messages for instances of child abuse.
An open letter signed by various tech bosses, including WhatsApp chief Will Cathcart, argued the bill would “give an unelected official the power to weaken the privacy of billions of people around the world” by scrutinizing encrypted messages. Meta defended its safety practices in a statement obtained by the FT.
“The vast majority of Brits already rely on apps that use encryption. We don’t think people want us reading their private messages, so we have developed safety measures that prevent, detect and allow us to take action against this appalling abuse, while preserving online privacy and security,” a Meta spokesperson said in the statement.
“As we continue to roll out our end-to-end encryption plans, we remain dedicated to working with law enforcement and child safety experts to ensure that our platforms are safe for young people.” The Post has reached out to Meta for further comment. Meta has faced intense criticism from US legislators over its safety practices, with detractors arguing the tech giant hasn’t gone far enough to protect its underage users from harmful content and abuse.
As The Post reported earlier this month, online safety experts penned an open letter urging Meta CEO Mark Zuckerberg to abandon the company’s plans to let children and teen users access its new metaverse service “Horizon Worlds” due to concern about potential abuse. The Virtual Global Taskforce’s members include the FBI, US Immigration and Customs Enforcement (ICE), Interpol, Europol, and the United Kingdom’s National Crime Agency, which chairs the group. The task force’s website describes it as “an international coalition of 15 committed law enforcement agencies collaborating to address the global threat from child sexual abuse.”
Discord: ID photos of 70,000 users may have been exposed via third-party breach
Discord says official ID photos and other data tied to about 70,000 users may have been exposed after a cyber-attack on an external provider used for age verification and customer support. The company, which reports more than 200 million users globally, said on 9 October 2025 that its own platform was not breached and that access for the affected vendor has been revoked.
According to Discord, the leaked information could include personal details, ID images submitted for age checks, partial credit-card data, and messages exchanged with customer support agents. The company added that no full card numbers, account passwords, or messages beyond support conversations were involved. Impacted users have been notified, and the firm says it is cooperating with law-enforcement authorities.
Discord did not name the third-party provider. A representative from Zendesk, which provides customer-service software to Discord, told the BBC its systems were not compromised and that the incident was not caused by a Zendesk vulnerability. Discord also rejected online claims that the breach was larger than stated, calling them inaccurate and “part of an attempt to extort payment,” and clarified that the incident was not a ransomware attack: “We will not reward those responsible for their illegal actions,” a spokesperson said.
The incident underscores why attackers target high-value personal data such as full names and government-issued identifiers, which remain constant over time and are useful in scams. Discord has tightened age-verification practices in recent years amid concerns about the distribution of prohibited content on some servers and says it continues to invest in safety and verification controls.
Valve Deckard: What It Could Mean for VR Adult Content
The Deckard is an upcoming VR headset from Valve, expected to launch in the next few months. If current leaks hold, it could be a major upgrade for immersive adult viewing.
Launch timeline. Chinese analyst group XR Research Institute suggests Deckard is targeting the holiday season, with projected annual production of 400k–600k units, comparable to early Vision Pro volumes.
Pricing. Expectations point to a premium ($1,000+) price tier paired with high-end performance.
Why it matters for VR erotica (platform-agnostic):
Display tech. High-resolution OLED/LCD panels with strong contrast and color should elevate skin tones, low-light scenes, and fine detail.
Input & tracking. Newly referenced “Roy” touch-style controllers in SteamVR code hint at better ergonomics and precision—useful for interactive experiences.
Deckard features (per code dives/leaks):
Standalone + PCVR hybrid. Emphasis on wireless PC streaming for 6K/8K playback without tether drag, alongside native PCVR.
Comfort & design. Ergonomic improvements aim at longer, more comfortable sessions.
App compatibility. Popular VR video apps (e.g., PCVR players and standalone viewers) are expected to work seamlessly with Deckard, based on typical SteamVR support patterns and developer indications.
Bottom line: if Valve delivers on display quality, wireless PCVR, and ergonomics, Deckard could become a flagship device for high-bitrate adult VR—without locking users to any single platform.
Meta Re-trains AI Chatbots to Block “Sensual” Conversations with Teens
Meta is retraining its AI assistant and chatbots after internal guidance revealed the systems could engage in “romantic” or “sensual” exchanges with under-18s. The company has confirmed that this behavior is being restricted immediately, while broader child-safety updates are being prepared.
The adjustment affects Meta AI across Facebook, Instagram, and WhatsApp, where chatbots are being updated not to engage minors in such conversations. Instead, when sensitive topics arise, the AI will direct younger users to professional resources.
In addition, Meta is restricting teen access to some of its more adult-oriented AI characters. The company describes this as part of a wider effort to provide “safe, age-appropriate experiences” for minors, with further updates promised in the months ahead.
Why it matters: For the wider online ecosystem—including platforms that adult creators rely on—this signals a tightening of automated moderation and AI safety rules. Stricter boundaries are being built into AI-powered interactions, particularly where younger audiences may be exposed.
Meta has also discontinued certain chatbot characters following complaints, reinforcing its shift toward a more cautious and regulated approach in AI deployment.