TeamSpeak, a voice chat service popular with gamers, is seeing a spike in sign-ups after Discord announced it will require age verification for some users, Forbes reported. The influx has strained TeamSpeak’s infrastructure across multiple regions, including the United States, as users seek alternatives following Discord’s recent update.
In a Feb. 16 update cited by Forbes, TeamSpeak said it added new hosting regions in Frankfurt and Toronto and is expanding capacity in response to an “incredible surge of new users.” TeamSpeak noted that the additional regions are designed to spread demand across more infrastructure rather than restrict where people can connect from, meaning U.S.-based users can still join communities hosted outside the country.
Founded in 2001, TeamSpeak has positioned itself as a privacy-focused alternative to Discord, emphasizing letting communities create and manage their own servers. That approach gives server owners more direct control over hosting choices, moderation policies and how data is handled.
TeamSpeak built its reputation well before Discord’s rise, becoming a staple in PC gaming communities where stable voice communication matters for coordination. The service has been widely used by groups playing multiplayer titles such as World of Warcraft and Overwatch, where clear, low-latency voice chat can be central to team play.
Online forums and gaming communities have reported renewed interest in TeamSpeak since Discord’s age-verification announcement, with some users urging friends and groups to migrate or set up fresh servers. Forbes said it contacted TeamSpeak for additional details on the jump in new users and was awaiting a response.
“Bombs and Porn” Debate Highlights Growing AI Concerns
AI data center projects across the United States are facing delays, cancellations, and growing public opposition as concerns rise over energy use, pollution, taxes, and the overall impact of artificial intelligence.
Nearly half of the data center capacity planned for 2026 has reportedly already been delayed or canceled. Local communities and lawmakers are increasingly pushing back against large AI infrastructure projects, with some states introducing restrictions on new developments.
Critics continue to question whether AI is truly improving daily life, pointing out that AI tools are frequently associated with fake content, misinformation, cheating in schools, and harmful online material. Concerns also grew after reports connected a suspect in the 2025 shooting at Florida State University to extensive conversations with an AI chatbot before the attack.
The debate also includes concerns about AI being used in military and surveillance operations. AI-powered systems are increasingly being adopted by defense and government agencies, raising ethical and privacy concerns.
Environmental impact remains another major issue. Massive AI data center projects are expected to increase electricity demand significantly, with some companies planning new gas-powered infrastructure to support future expansion. Critics warn this could lead to higher emissions and additional pressure on energy grids.
Public skepticism toward AI also remains high. Recent polls show that many Americans believe AI could negatively affect jobs, education, and everyday life, while large numbers oppose building AI data centers near their communities.
Critics argue that instead of focusing on futuristic promises about “superintelligence,” tech companies and politicians should better explain the practical benefits AI is currently providing to ordinary people.
Apple rolls out UK age verification with iOS 26.4 after Meta and Google child safety fines
Apple has introduced age verification for iPhone and iPad users in the UK with iOS 26.4 and iPadOS 26.4, adding a new layer of checks for accounts that require confirmation that the user is 18 or older.
According to reports, UK users may now be asked to verify their age by adding a credit card or scanning an ID, unless Apple has already confirmed that information. Apple says the process is required by law in some countries and regions for actions tied to an Apple Account, including downloading apps, changing certain settings, or accessing specific features. When verification is needed, a prompt appears in the Settings menu.
The rollout comes at a time when child safety rules are tightening across the UK. While current UK law does not specifically require device-level age verification, adult websites, including pornography platforms, are already expected to carry out age checks. That has led to wider discussion about whether verification should also happen at the device level, rather than only on individual sites.
The timing is especially notable because it follows a major child safety case involving Meta and Google. The companies were reportedly ordered to pay $6 million after a lawsuit in Los Angeles claimed that platforms including Facebook, WhatsApp, and YouTube had a serious impact on a young woman’s mental health.
Apple’s move may also reflect broader regulatory pressure. The UK government is reportedly considering stronger restrictions for under-16s on social media, similar to measures seen in Australia. Reports also indicate Apple has been working with Ofcom as these safety tools develop.
For users who cannot verify an adult identity, Apple suggests that some features may be limited or that the account may need to be placed under Family Sharing with a parent or guardian. The exact restrictions could vary depending on the situation.
Australia has begun enforcing stricter age-verification rules for online adult content, requiring platforms to take meaningful steps to stop under-18s from accessing pornography and other age-restricted material. The Age-Restricted Material Codes, covering services including social media platforms, relevant electronic services, equipment providers, and designated internet services, came into effect on March 9, 2026.
Under the new framework, some services may now require proof of age before allowing access to legal adult content. Australia’s eSafety Commissioner says the accepted methods can vary by platform, but any age-assurance process must be accurate, reliable, and compliant with Australian privacy law. eSafety has said the changes are intended to reduce children’s exposure to pornography, high-impact violence, and other harmful age-inappropriate material online.
The rollout has already affected access to some major adult platforms in Australia, while debate continues over privacy risks and how effective the rules will be in practice. Recent reporting has also linked the changes to rising interest in VPN services as some users look for ways around the restrictions.